ironic-15.0.0/LICENSE

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

      "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

      (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

      You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
ironic-15.0.0/reno.yaml
---
# Ignore the kilo-eol tag because that branch does not work with reno
# and contains no release notes.
closed_branch_tag_re: "(.+)(?

/management/indicators`` location.

ironic-15.0.0/releasenotes/notes/add-pxe-support-for-petitboot-50d1fe4e7da4bfba.yaml
---
features:
  - Adds the use of DHCP option 210 (tftp-path-prefix). This enables PXE
    for systems using petitboot, which cannot infer their tftp-path-prefix
    from the boot file location, as petitboot does not use a boot file.

ironic-15.0.0/releasenotes/notes/ipa-streams-raw-images-1010327b0dad763c.yaml
---
features:
  - The agent deploy driver now streams raw images directly to disk
    (instead of staging in memory) by default.
upgrade:
  - The agent deploy driver now streams raw images directly to disk
    (instead of staging in memory) by default; this can be turned off by
    setting the ``[agent]stream_raw_images`` configuration option to
    ``False``. Streaming may be undesirable if the disk the image is being
    written to is significantly slower than the network.
fixes:
  - Because the agent deploy driver now streams raw images directly to
    disk, images larger than the RAM available to the deploy ramdisk will
    no longer fail to deploy.

ironic-15.0.0/releasenotes/notes/dynamic-driver-list-show-apis-235e9fca26fc580d.yaml
---
features:
  - |
    Adds support for dynamic drivers. Using a dynamic driver in a node's
    ``driver`` field is now possible. Dynamic drivers are composed of a
    ``hardware type`` and a number of ``hardware interfaces``.
    NOTE: this feature is considered somewhat experimental, as not all
    classic drivers have a corresponding dynamic driver, and there is
    minimal CI for dynamic drivers at the time of this writing.

    Hardware types are enabled via the ``[DEFAULT]/enabled_hardware_types``
    configuration option, and hardware interfaces are enabled via the
    ``[DEFAULT]/enabled_*_interfaces`` configuration options. A default
    interface to use when creating or updating nodes can be specified with
    the ``[DEFAULT]/default_*_interface`` configuration options.

    The ironic-conductor process will now fail to start if:

    - a default interface implementation for any enabled hardware type
      cannot be found.
    - a dynamic driver and a classic driver with the same name are both
      enabled.
    - no classic driver and no dynamic driver is enabled (at least one
      driver of either kind must be enabled).

    Hardware types available in this release are:

    - ``ipmi`` for IPMI-compatible hardware. This type is enabled by
      default. Uses the ``ipmitool`` utility under the hood, similar to
      the existing classic drivers ``pxe_ipmitool`` and ``agent_ipmitool``.
      Supports both types of serial console: via ``shellinabox`` and via
      ``socat``; both are disabled by default.
    - ``irmc`` for FUJITSU PRIMERGY servers, disabled by default.

    This feature brings a number of REST API changes, all of which are
    available in API version 1.31:

    - Adds additional parameters and response fields for ``GET
      /v1/drivers`` and ``GET /v1/drivers/<driver_name>``.
    - Exposes the following fields on the node resource, to allow getting
      and setting interfaces for a dynamic driver:

      * boot_interface
      * console_interface
      * deploy_interface
      * inspect_interface
      * management_interface
      * power_interface
      * raid_interface
      * vendor_interface

    - Allows dynamic drivers to be used and returned in the following API
      calls, in all versions of the REST API:

      * GET /v1/drivers
      * GET /v1/drivers/<driver_name>
      * GET /v1/drivers/<driver_name>/properties
      * GET /v1/drivers/<driver_name>/vendor_passthru/methods
      * GET/POST /v1/drivers/<driver_name>/vendor_passthru
      * GET/POST /v1/nodes/<node_ident>/vendor_passthru

    For more details on the REST API changes, see the REST API Version
    History documentation.

    This also adds dynamic interface fields to node-related notifications:

    * boot_interface
    * console_interface
    * deploy_interface
    * inspect_interface
    * management_interface
    * power_interface
    * raid_interface
    * vendor_interface

    The affected notifications are:

    * baremetal.node.create.*, new payload version 1.1
    * baremetal.node.update.*, new payload version 1.1
    * baremetal.node.delete.*, new payload version 1.1
    * baremetal.node.maintenance.*, new payload version 1.3
    * baremetal.node.console.*, new payload version 1.3
    * baremetal.node.power_set.*, new payload version 1.3
    * baremetal.node.power_state_corrected.*, new payload version 1.3
    * baremetal.node.provision_set.*, new payload version 1.3

ironic-15.0.0/releasenotes/notes/node-traits-2d950b62eea24491.yaml
---
features:
  - |
    Adds a ``traits`` field to the node resource, which will be used by
    the Compute service to define which nodes may match a Compute flavor
    using qualitative attributes. The following new endpoints have been
    added to the Bare Metal REST API in version 1.37:

    * ``GET /v1/nodes/<node_ident>/traits`` lists the traits for a node.
    * ``PUT /v1/nodes/<node_ident>/traits`` sets all traits for a node.
    * ``PUT /v1/nodes/<node_ident>/traits/<trait>`` adds a trait to a
      node.
    * ``DELETE /v1/nodes/<node_ident>/traits`` removes all traits from a
      node.
    * ``DELETE /v1/nodes/<node_ident>/traits/<trait>`` removes a trait
      from a node.

    A node's traits are also included in the following node query and list
    responses:

    * ``GET /v1/nodes/<node_ident>``
    * ``GET /v1/nodes/detail``
    * ``GET /v1/nodes?fields=traits``

    Traits cannot be specified on node creation, nor can they be updated
    via a ``PATCH`` request on the node.

ironic-15.0.0/releasenotes/notes/fix-cpu-count-8904a4e1a24456f4.yaml
---
fixes:
  - |
    Fixes a bug where the number of CPU sockets was being returned by the
    ``idrac`` hardware type during introspection, instead of the number of
    virtual CPUs. See bug 2004155 for details.

ironic-15.0.0/releasenotes/notes/lookup-ignore-malformed-macs-09e7e909f3a134a3.yaml
---
fixes:
  - Fixes a problem where the deployment of a node would fail to continue
    if a malformed MAC address was passed to the lookup mechanism in the
    Ironic API. For example, if a node contains an InfiniBand card, the
    lookup used to fail because the agent ramdisk passes a MAC address (or
    GID) with 20 octets (instead of the expected 6 octets) as part of the
    lookup request. Invalid addresses are now ignored.

ironic-15.0.0/releasenotes/notes/error-resilient-enabled_drivers-4e9c864ed6eaddd1.yaml
---
fixes:
  - Fixes an issue where the ironic-conductor service would not run if a
    trailing comma or empty driver was specified in the
    ``[DEFAULT]enabled_drivers`` configuration option. The service now
    runs and logs a warning.

ironic-15.0.0/releasenotes/notes/build-configdrive-5b3b9095824faf4e.yaml
---
features:
  - |
    Adds support for building config drives.
    Starting with API version 1.56, the ``configdrive`` parameter of
    ``/v1/nodes/<node_ident>/states/provision`` can be a JSON object with
    optional keys ``meta_data`` (JSON object), ``network_data`` (JSON
    object) and ``user_data`` (JSON object, array or string). See story
    2005083 for more details.

ironic-15.0.0/releasenotes/notes/fix-swift-binary-upload-bf9471fca29290e1.yaml
---
fixes:
  - |
    Fixes binary file uploads to Swift. Prior to this fix, uploading a
    binary file to Swift could fail when its contents were interpreted as
    unicode characters.

ironic-15.0.0/releasenotes/notes/remove-neutron-client-workarounds-996c59623684929b.yaml
---
fixes:
  - Updates the logic for creating provisioning ports to fail only when no
    neutron ports could be created. If at least one neutron port was
    created, the deployment proceeds. This was already the default
    behaviour for the flat network scenario.

ironic-15.0.0/releasenotes/notes/no-coreos-f8717f9bb6a64627.yaml
---
upgrade:
  - |
    Explicit support for CoreOS Ironic Python Agent images has been
    removed. If you use a ramdisk based on CoreOS, you may want to re-add
    ``coreos.configdrive=0`` to your PXE templates; see story 1433812 for
    the background.

ironic-15.0.0/releasenotes/notes/story-2002637-4825d60b096e475b.yaml
---
fixes:
  - |
    Fixes an issue seen during node tear down where a port being deleted
    by the Bare Metal service could be deleted by the Compute service,
    leading to an unhandled error from the Networking service. See story
    2002637 for further details.
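As a sketch of the ``configdrive`` JSON object described in the build-configdrive note above (API version 1.56 or later), a provision-state request body might look like the following; the hostname, network and user data values are purely illustrative:

```json
{
    "target": "active",
    "configdrive": {
        "meta_data": {"hostname": "example-node"},
        "network_data": {"links": [], "networks": [], "services": []},
        "user_data": "#cloud-config\n{}"
    }
}
```

All three keys are optional; omitting ``configdrive`` entirely, or passing a pre-built base64 string as before, remains valid.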
ironic-15.0.0/releasenotes/notes/better-handle-skip-upgrade-3b6f06ac24937aa4.yaml
---
fixes:
  - |
    Better handles the case when an operator attempts to upgrade from a
    release older than Pike directly to a release newer than Pike,
    skipping one or more releases in between (a "skip version upgrade").
    Instead of crashing, the operator is informed that upgrading from a
    version older than the previous release is not supported, and that (as
    of Pike) all database migrations need to be performed using the
    previous releases for a fast-forward upgrade. [Bug 2002558]

ironic-15.0.0/releasenotes/notes/change-updated-at-object-field-a74466f7c4541072.yaml
---
fixes:
  - Now sets a node's ``updated_at`` field correctly after the node has
    been updated.

ironic-15.0.0/releasenotes/notes/remove-pxe-http-5a05c54f57747bfe.yaml
---
upgrade:
  - Removes the deprecated options ``[pxe]/http_url`` and
    ``[pxe]/http_root``. Configuration files should instead use
    ``[deploy]/http_url`` and ``[deploy]/http_root``.

ironic-15.0.0/releasenotes/notes/inspector-for-cisco-bffe1d1af7aec677.yaml
---
features:
  - Enables inspection via ironic-inspector for the CIMC and UCS drivers.

ironic-15.0.0/releasenotes/notes/deploy_steps-243b341cf742f7cc.yaml
---
features:
  - |
    The framework for deployment steps is in place. All in-tree drivers
    (DeployInterfaces) have one (big) deploy step; the conductor executes
    this step when deploying a node.
    Starting with the Bare Metal REST API version 1.44, the current deploy
    step (if any) being executed is available in a node's ``deploy_step``
    field in the responses for the following queries:

    * ``GET /v1/nodes/<node_ident>``
    * ``GET /v1/nodes/detail``
    * ``GET /v1/nodes?fields=deploy_step,...``
deprecations:
  - |
    All drivers must implement their deployment process using deploy
    steps. Out-of-tree drivers without deploy steps will be supported
    until the Stein release. For more details, see story 1753128.

ironic-15.0.0/releasenotes/notes/oneview-hardware-type-69bbb79da434871f.yaml
---
features:
  - |
    Adds a new hardware type ``oneview`` for HPE OneView supported
    servers. This hardware type supports the following driver interfaces:

    * boot: ``pxe``
    * console: ``no-console``
    * deploy: ``oneview-direct`` and ``oneview-iscsi`` (based on
      ``direct`` and ``iscsi`` respectively)
    * inspect: ``oneview`` and ``no-inspect``
    * management: ``oneview``
    * network: ``flat``, ``neutron`` and ``no-op``
    * power: ``oneview``
    * raid: ``no-raid`` and ``agent``

ironic-15.0.0/releasenotes/notes/restart-console-on-conductor-startup-5cff6128c325b18e.yaml
---
fixes:
  - Fixes an issue where a node's console could be marked as enabled while
    the corresponding console service was stopped when a conductor
    started. Consoles are now started on conductor startup so that the
    status stays consistent.

ironic-15.0.0/releasenotes/notes/add_retirement_support-23c5fed7ce8f97d4.yaml
---
features:
  - |
    Adds support for node retirement by adding a ``retired`` property to
    the node. If set, a node moves upon automatic cleaning to
    ``manageable`` (rather than ``available``). The new property also
    blocks the ``provide`` keyword, i.e. nodes cannot move from
    ``manageable`` to ``available``.
    Furthermore, there is an additional optional field
    ``retirement_reason`` to store the reason for the node's retirement.

ironic-15.0.0/releasenotes/notes/image-checksum-recalculation-sha256-fd3d5b4b0b757e86.yaml
---
upgrade:
  - |
    If ``[DEFAULT]force_raw_images`` is set to ``true``, then MD5 will not
    be utilized to recalculate the image checksum. This requires the
    ``ironic-python-agent`` ramdisk to be at least version 3.4.0.
security:
  - |
    Image checksums recalculated when images are forced to raw format are
    now computed using ``SHA3-256`` if MD5 was selected. This is now
    unconditional.

ironic-15.0.0/releasenotes/notes/node-storage-interface-api-1d6e217303bd53ff.yaml
---
features:
  - |
    Adds version 1.33 of the REST API, which exposes the
    ``storage_interface`` field of the node resource. This version also
    exposes the ``default_storage_interface`` and
    ``enabled_storage_interfaces`` fields of the driver resource.

    There are 2 available storage interfaces:

    * ``noop``: This interface provides nothing regarding storage.
    * ``cinder``: This interface enables a node to attach and detach
      volumes by leveraging the cinder API.

    A storage interface can be set when creating or updating a node.
    Enabled storage interfaces are defined via the
    ``[DEFAULT]/enabled_storage_interfaces`` configuration option. A
    default interface for a created node can be specified with the
    ``[DEFAULT]/default_storage_interface`` configuration option.

ironic-15.0.0/releasenotes/notes/neutron-port-timeout-cbd82e1d09c6a46c.yaml
---
other:
  - Adds a neutron ``port_setup_delay`` configuration option. This delay
    allows ironic to wait for neutron port operations until we have a
    mechanism for synchronizing events with neutron. Set to 0 by default.
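A minimal ironic.conf sketch combining the options from the storage-interface and neutron port delay notes above; the values are illustrative, not recommendations:

```ini
[DEFAULT]
# Storage interfaces that conductors may load (API version 1.33+).
enabled_storage_interfaces = cinder,noop
# Interface assigned to newly created nodes that do not specify one.
default_storage_interface = noop

[neutron]
# Seconds to wait after neutron port operations; 0 (the default)
# disables the delay.
port_setup_delay = 15
```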
ironic-15.0.0/releasenotes/notes/fifteen-0da3cca48dceab8b.yaml
---
prelude: >
    The Ironic developers are proud to announce the release of Ironic
    15.0! This release contains a number of changes that have been sought
    by operators and users of Ironic for some time, including support for
    UEFI booting a software RAID system, improved Ironic/Ironic Python
    Agent security, multi-tenancy constructs, a hardware retirement
    mechanism, stateful DHCPv6, and numerous fixes. We sincerely hope you
    enjoy!

ironic-15.0.0/releasenotes/notes/build_instance_info-c7e3f12426b48965.yaml
---
deprecations:
  - The function ``build_instance_info_for_deploy`` is deprecated in
    ``ironic.drivers.modules.agent`` and will be removed in the Pike
    cycle. Its new home is ``ironic.drivers.modules.deploy_utils``.
    Out-of-tree drivers that use this function should be updated
    accordingly.

ironic-15.0.0/releasenotes/notes/deprecate-hash-distribution-replicas-ef0626ccc592b70e.yaml
---
deprecations:
  - |
    The ``hash_distribution_replicas`` configuration option is now
    deprecated. If specified in the config file, a warning is logged.

ironic-15.0.0/releasenotes/notes/bug-1570283-6cdc62e4ef43cb02.yaml
---
fixes:
  - Fixes an issue where virtual media was not attached during cleaning
    operations for vmedia-based drivers.

ironic-15.0.0/releasenotes/notes/add_clean_step_clear_job_queue-7b774d8d0e36d1b2.yaml
---
features:
  - |
    Adds a ``clear_job_queue`` cleaning step to the ``idrac-wsman``
    management interface. The ``clear_job_queue`` cleaning step clears the
    Lifecycle Controller job queue, including any pending jobs.
fixes:
  - |
    Fixes an issue where, if there was a pending BIOS config job in the
    job queue, ironic would abandon an introspection attempt for the node,
    causing the overall introspection to fail.

ironic-15.0.0/releasenotes/notes/snmp-noop-mgmt-53e93ac3b6dd8517.yaml
---
upgrade:
  - |
    The ``snmp`` hardware type now uses the ``noop`` management interface
    instead of the ``fake`` one used previously. Support for ``fake`` is
    left for backward compatibility.
deprecations:
  - |
    Using the ``fake`` management interface with the ``snmp`` hardware
    type is now deprecated; please use ``noop`` instead.

ironic-15.0.0/releasenotes/notes/extends-install-bootloader-timeout-8fce9590bf405cdf.yaml
---
fixes:
  - |
    Fixes an agent command issue in the bootloader installation process
    that can present itself as a connection timeout under heavy IO load
    conditions. Installation commands now have an internal timeout which
    is double the conductor-wide ``[agent]command_timeout``. For more
    information, see bug 2007483.

ironic-15.0.0/releasenotes/notes/add-boot-from-volume-support-9f64208f083d0691.yaml
---
other:
  - Adds a configuration section ``cinder`` and a requirement on the
    cinder client (python-cinderclient).

ironic-15.0.0/releasenotes/notes/inspector-pxe-boot-9ab9fede5671097e.yaml
---
features:
  - |
    The ``pxe`` and ``ipxe`` boot interfaces, as well as all in-tree
    network interfaces, now support managing in-band inspection boot.
upgrade:
  - |
    For managed in-band inspection to work, make sure that the Bare Metal
    Introspection endpoint (either in the service catalog or in the
    ``[inspector]endpoint_override`` configuration option) is not set to
    localhost.
    Alternatively, set the ``[inspector]callback_endpoint_override``
    option to a value with a real IP address.

ironic-15.0.0/releasenotes/notes/node-lessee-4fb320a597192742.yaml
---
features:
  - |
    Adds a ``lessee`` field to nodes. This field is exposed to policy, so
    if a policy file permits, a lessee will have access to specified node
    APIs.

ironic-15.0.0/releasenotes/notes/queens-prelude-61fb897e96ed64c5.yaml
---
prelude: |
    The 10.1.0 (Queens) release includes many new features and bug fixes.
    Please review the "Upgrade Notes" sections (for 9.2.0, 10.0.0 and
    10.1.0), which describe the required actions to upgrade your
    installation from 9.1.x (Pike) to 10.1.0 (Queens).

    A few major changes since 9.1.x (Pike) are worth mentioning:

    - New traits API.
    - New ``ansible`` deploy interface that allows greater customization
      of the provisioning process.
    - Support for rescuing and unrescuing nodes.
    - Support for routed networks when using the ``flat`` network
      interface.
    - New ``xclarity`` hardware type for managing Lenovo server hardware.

    Finally, this release deprecates classic drivers in favor of hardware
    types. Please check the migration guide for information on which
    hardware types and interfaces to enable before upgrade and how to
    update the nodes. The ``ironic-dbsync online_data_migrations`` command
    will handle the migration, if all required hardware types and
    interfaces are enabled before the upgrade.
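For the managed in-band inspection upgrade note above, a hedged sketch of pointing the ramdisk callback at a reachable address instead of localhost; the IP and port are placeholders for your ironic-inspector deployment:

```ini
[inspector]
# Address the ramdisk reports back to; must not resolve to localhost
# from the node's perspective.
callback_endpoint_override = http://192.0.2.5:5050
```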
ironic-15.0.0/releasenotes/notes/remove-deprecated-ilo-clean-priority-erase-devices-bb3073da562ed41d.yaml
---
upgrade:
  - |
    The configuration option ``[ilo]/clean_priority_erase_devices`` was
    deprecated in the Newton cycle (6.1.0). It is no longer supported.
    Please use the option ``[deploy]/erase_devices_priority`` instead.

ironic-15.0.0/releasenotes/notes/server_profile_template_uri-c79e4f15cc20a1cf.yaml
---
upgrade:
  - When registering a OneView node in ironic, the operator should make
    sure the field ``server_profile_template_uri`` is set in
    properties/capabilities and no longer in driver_info. Otherwise the
    node will fail validation.

ironic-15.0.0/releasenotes/notes/ipmi-console-port-ec6348df4eee6746.yaml
---
fixes:
  - |
    Fixes the IPMI console implementation to respect all supported IPMI
    ``driver_info`` and configuration options, particularly ``ipmi_port``.

ironic-15.0.0/releasenotes/notes/add-redfish-sensors-4e2f7e3f8a7c6d5b.yaml
---
features:
  - |
    Adds a sensor data collector to the ``redfish`` management interface.
    Temperature, power, cooling and drive health metrics are collected.

ironic-15.0.0/releasenotes/notes/socat-address-conf-5cf043fabb10bd76.yaml
---
features:
  - |
    Adds a new configuration option ``[console]/socat_address`` to set the
    binding address for the socat-based console. The default is the value
    of the ``[DEFAULT]my_ip`` option of the conductor responsible for the
    node.
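As a sketch of the socat console binding option described in the note above, the addresses below are placeholders for a conductor's own IP:

```ini
[DEFAULT]
my_ip = 192.0.2.10

[console]
# Optional; when unset, socat binds to the conductor's [DEFAULT]my_ip.
socat_address = 192.0.2.10
```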
ironic-15.0.0/releasenotes/notes/send-sensor-data-for-all-nodes-a732d9df43e74318.yaml
---
features:
  - |
    Adds a ``[conductor]send_sensor_data_for_undeployed_nodes`` option to
    enable ironic to collect and transmit sensor data for all nodes for
    which sensor data collection is available. By default, this option is
    not enabled, which aligns with the prior behavior of sensor data
    collection and transmission, where such data was only collected if an
    ``instance_uuid`` was present to signify that the node has been or is
    being deployed. With this option set to ``True``, operators may be
    able to identify hardware in a faulty state through the sensor data
    and take action before an instance workload is deployed.
fixes:
  - |
    Fixes an issue where nodes in the process of deployment could have
    metrics data collected and transmitted during the deployment process,
    which could erroneously generate alarms depending on the operator's
    monitoring configuration. This was due to a database filter relying
    upon the presence of an ``instance_uuid`` as opposed to the state of
    a node.

ironic-15.0.0/releasenotes/notes/ipxe_timeout_parameter-03fc3c76c520fac2.yaml
---
features:
  - Adds the ability to adjust the iPXE timeout during image downloading;
    the default is still unlimited (0).

ironic-15.0.0/releasenotes/notes/add_conversion_flags_iscsi-d7f846803a647573.yaml
---
features:
  - |
    Adds a new configuration option ``[iscsi]conv_flags``, which specifies
    the conversion options to pass to the ``dd`` utility when copying an
    image. For example, passing ``sparse`` may result in less network
    traffic for large whole-disk images.
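A configuration sketch for two of the options introduced above — sensor data collection for undeployed nodes and the iSCSI ``dd`` conversion flags; whether either is appropriate depends on your monitoring setup and image sizes:

```ini
[conductor]
# Also collect and transmit sensor data for nodes that have no
# instance_uuid (i.e. are not deployed).
send_sensor_data_for_undeployed_nodes = true

[iscsi]
# Passed through to dd as conv=sparse when copying an image.
conv_flags = sparse
```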
ironic-15.0.0/releasenotes/notes/disk-label-fix-7580de913835ff44.yaml
---
fixes:
  - Fixes the bug where the user-specified disk_label is ignored by the
    agent drivers for partition images.
ironic-15.0.0/releasenotes/notes/deprecated-inspector-opts-b19a08339712cfd7.yaml
---
deprecations:
  - |
    The configuration option ``[inspector]/service_url`` is deprecated and
    will be ignored in the Rocky release. Instead, use the
    ``[inspector]/endpoint_override`` configuration option to set the
    specific ironic-inspector API endpoint when its automatic discovery
    from the keystone catalog is not desired. This new option has no
    default value (``None``) and must be set explicitly.
  - |
    Relying on the value of the ``[DEFAULT]/auth_strategy`` configuration
    option to configure usage of standalone mode for ironic-inspector is
    deprecated and will be impossible in the Rocky release. Instead, set
    the ``[inspector]/auth_type`` configuration option to ``none`` and
    provide the ironic-inspector API address as the
    ``[inspector]/endpoint_override`` configuration option.
ironic-15.0.0/releasenotes/notes/redfish-managed-inspection-936341ffa8e1f22a.yaml
---
features:
  - |
    The ``redfish-virtual-media`` boot interface now supports managing boot
    for in-band inspection. This enables using virtual media instead of PXE
    for in-band inspection.
ironic-15.0.0/releasenotes/notes/ilo-bios-settings-bc91524c459a4fd9.yaml
---
features:
  - |
    Implements the ``bios`` interface for the ``ilo`` hardware type. Adds
    the list of supported bios interfaces for the ``ilo`` hardware type.
    Adds manual cleaning steps ``apply_configuration`` and
    ``factory_reset`` which support managing the BIOS settings for iLO
    servers using the ``ilo`` hardware type.
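A manual cleaning request for the steps mentioned above could be sketched as the following clean-steps list. The BIOS setting name and value here are hypothetical placeholders; the settings actually available vary by hardware:

```json
[
    {
        "interface": "bios",
        "step": "apply_configuration",
        "args": {
            "settings": [
                {"name": "BootMode", "value": "Uefi"}
            ]
        }
    }
]
```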
ironic-15.0.0/releasenotes/notes/drac-missing-lookup-3ad98e918e1a852a.yaml
---
fixes:
  - Adds the missing "lookup" method to the pxe_drac driver vendor
    interface, enabling it to be deployed using the IPA ramdisk.
ironic-15.0.0/releasenotes/notes/cleanwait_timeout_fail-4323ba7d4d4da3e6.yaml
---
fixes:
  - |
    Fixes an issue with a baremetal node that times out during cleaning.
    The ironic-conductor was attempting to change the node's provision
    state to 'clean failed' twice, resulting in the node's ``last_error``
    being set incorrectly. This no longer happens. For more information,
    see `story 2004299 `_.
ironic-15.0.0/releasenotes/notes/redfish-add-root-prefix-03b5f31ec6bbd146.yaml
---
features:
  - |
    Adds a ``root_prefix`` parameter to the sushy context based on the path
    of ``redfish_address``. Defaults to the sushy ``root_prefix`` default
    (``/redfish/v1/``). This is needed if the Redfish API is not located at
    the default ``/redfish/v1/`` endpoint.
ironic-15.0.0/releasenotes/notes/zero-temp-url-c21e208f8933c6f6.yaml
---
fixes:
  - |
    No longer tries to create a temporary URL with zero lifetime if the
    ``deploy_callback_timeout`` option is set to zero. The default of 1800
    seconds is used in that case. Use the new
    ``configdrive_swift_temp_url_duration`` option to override.
ironic-15.0.0/releasenotes/notes/add-support-for-no-poweroff-on-failure-86e43b3e39043990.yaml
---
features:
  - Operators can now set deploy.power_off_after_deploy_failure to leave
    nodes powered on when a deployment fails. This is useful for
    troubleshooting deployment issues. As a note, Nova will still attempt
    to delete a node after a failed deployment, so
    deploy.power_off_after_deploy_failure may not be very effective in
    non-standalone deployments until a similar patch to ironic's driver in
    nova is proposed.
ironic-15.0.0/releasenotes/notes/ilo-managed-inspection-8b549c003224e011.yaml
---
features:
  - |
    The ``ilo-virtual-media`` boot interface now supports managing boot for
    in-band inspection. This enables using virtual media instead of PXE for
    in-band inspection.
ironic-15.0.0/releasenotes/notes/snmp-reboot-delay-d18ee3f6c6fc0998.yaml
---
upgrade:
  - Adds a configuration option for the Iboot driver, [iboot]reboot_delay,
    to allow adding a pause between power off and power on.
fixes:
  - Fixes an issue where some SNMP power controllers will not power back on
    after a deploy.
ironic-15.0.0/releasenotes/notes/orphan-nodes-389cb6d90c2917ec.yaml
---
fixes:
  - |
    Fixes an issue where only nodes in ``DEPLOYING`` state would have locks
    cleared for the nodes. Now upon node take over, any locks that are left
    from the old conductor are cleared by the new one.
other:
  - |
    On taking over nodes in ``CLEANING`` state, the new conductor moves
    them to the ``CLEAN FAIL`` state and sets maintenance.
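The reboot pause from the snmp-reboot-delay note above could be configured as follows; the delay value is illustrative, not a recommendation:

```ini
[iboot]
# Seconds to pause between power off and power on.
reboot_delay = 5
```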
ironic-15.0.0/releasenotes/notes/correct-api-version-check-conditional-for-nodename-439bebc02fb5493d.yaml
---
fixes:
  - Corrects the API version check conditional for node.name to address an
    issue where the node name could be set to '' using an API version lower
    than 1.5, where node names were introduced.
ironic-15.0.0/releasenotes/notes/deprecate-elilo-2beca4800f475426.yaml
---
deprecations:
  - |
    Support for the ``elilo`` boot loader has been deprecated and will be
    removed in the Queens release cycle. The elilo boot loader has been
    orphaned as a project and dropped from the majority of Linux
    distributions. Please switch to the ``grub2`` boot loader.
ironic-15.0.0/releasenotes/notes/add-redfish-auth-type-5fe78071b528e53b.yaml
---
features:
  - |
    Adds the ``[redfish]auth_type`` ironic configuration option for the
    ``redfish`` hardware type that is used to choose one of the following
    authentication methods: ``basic``, ``session`` and ``auto``. The
    ``auto`` setting first tries the ``session`` method and falls back to
    ``basic`` if session authentication is not supported by the Redfish
    BMC. The default is ``auto``. This configuration option can be
    overridden on a per-node basis by the ``driver_info/redfish_auth_type``
    option.
ironic-15.0.0/releasenotes/notes/add-oneview-driver-96088bf470b16c34.yaml
---
features:
  - Adds `agent_pxe_oneview` and `iscsi_pxe_oneview` drivers for
    integration with the HP OneView Management System.
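The authentication choice described above can be sketched in ``ironic.conf`` like so:

```ini
[redfish]
# Try session authentication first, falling back to basic if the BMC
# does not support sessions. This is also the default.
auth_type = auto
```

A per-node override, when needed, goes into that node's ``driver_info/redfish_auth_type`` field instead.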
ironic-15.0.0/releasenotes/notes/fix-ipxe-macro-4ae8bc4fe82e8f19.yaml
---
fixes:
  - Fixes an issue where iPXE may try to boot from the wrong MAC address,
    resulting in deploy failures.
ironic-15.0.0/releasenotes/notes/pxe-snmp-driver-supported-9c559c6182c6ec4b.yaml
---
features:
  - The pxe_snmp and fake_snmp drivers are now supported and tested.
ironic-15.0.0/releasenotes/notes/poweroff-after-10-tries-c592506f02c167c0.yaml
---
fixes:
  - |
    Changes the iPXE behavior to retry a total of 10 times with an
    increasing backoff time between each retry in order to not create a
    Denial of Service situation with the iPXE HTTP server. Should the
    retries fail, the node will be powered off after a warning is displayed
    on the console for 30 seconds. For more information, see
    `story `_.
ironic-15.0.0/releasenotes/notes/fix-mac-address-update-with-contrail-b1e1b725cc0829c2.yaml
---
fixes:
  - Fixes `an issue `_ where the update of a MAC address failed for ports
    that were bound (for example, when using the 'contrail' neutron
    backend).
ironic-15.0.0/releasenotes/notes/ibmc-driver-45fcf9f50ebf0193.yaml
---
features:
  - |
    Adds a new hardware type ``ibmc`` for HUAWEI 2288H V5, CH121 V5 series
    servers. This hardware type supports PXE based boot using HUAWEI iBMC
    RESTful APIs. The following driver interfaces are supported:

    * management: ``ibmc``
    * power: ``ibmc``
    * vendor: ``ibmc``
ironic-15.0.0/releasenotes/notes/fix-delete_configuration-with-multiple-controllers-06fc3fca94ba870f.yaml
---
fixes:
  - |
    Fixes a bug in the ``idrac`` hardware type where a race condition can
    occur on a host that has a mix of controllers where some support
    realtime mode and some do not. The approach is to use realtime mode
    only if all controllers support it. This removes the race condition.
    See bug `2006502 `_ for details.
ironic-15.0.0/releasenotes/notes/name-root-device-hints-a1484ea01e399065.yaml
---
features:
  - Root device hints extended to support the device name.
ironic-15.0.0/releasenotes/notes/cleaning-bios-d74a4947d2525b80.yaml
---
fixes:
  - |
    Fixes a traceback on cleaning of nodes with the ``redfish`` hardware
    type if their BMC does not support BIOS settings.
ironic-15.0.0/releasenotes/notes/ramdisk-boot-fails-4e8286e6a4e0dfb6.yaml
---
fixes:
  - |
    Fixes an issue wherein provisioning fails if an ironic node is
    configured with the ``ramdisk`` deploy interface. See `bug 2003532 `_
    for more details.
ironic-15.0.0/releasenotes/notes/node-owner-provision-fix-ee2348b5922f7648.yaml
---
fixes:
  - |
    Fixes an issue where a provisioned or allocated node could have its
    owner changed. For backwards compatibility, we preserve the ability to
    do so for a provisioned node through the use of the
    ``baremetal:node:update_owner_provisioned`` policy rule. We always
    prevent the update if the node is associated with an allocation that
    specifies an owner.
ironic-15.0.0/releasenotes/notes/hexraw-support-removed-8e8fa07595a629f4.yaml
---
upgrade:
  - Removes support for the "hexraw" type in the iPXE script (boot.ipxe)
    since "hexraw" is not supported in older versions of iPXE. "hexhyp"
    replaced "hexraw" and has been used since Kilo.
ironic-15.0.0/releasenotes/notes/redfish-bios-interface-a1acd8122c896a38.yaml
---
features:
  - Adds ``bios`` interface to the ``redfish`` hardware type.
ironic-15.0.0/releasenotes/notes/change-ramdisk-log-filename-142b10d0b02a5ca6.yaml
---
upgrade:
  - |
    Changes the timestamp part of the ramdisk log filename by replacing the
    colon with a dash. The ``tar`` command does not handle colons properly,
    and untarring a file with a colon in its filename will fail.
fixes:
  - |
    Changes the timestamp part of the ramdisk log filename by replacing the
    colon with a dash. The ``tar`` command does not handle colons properly,
    and untarring a file with a colon in its filename will fail.
ironic-15.0.0/releasenotes/notes/persist-redfish-sessions-d521a0846fa45c40.yaml
---
fixes:
  - |
    Fixes the ``redfish`` hardware type to reuse HTTP session tokens when
    talking to the BMC using session authentication. Prior to this fix, the
    ``redfish`` hardware type never tried to reuse the session token given
    out by the BMC during a previous connection, which could sometimes lead
    to session pool exhaustion with some BMC implementations.
ironic-15.0.0/releasenotes/notes/irmc-dealing-with-ipxe-boot-interface-incompatibility-7d0b2bdb8f9deb46.yaml
---
upgrade:
  - |
    Users of the ``irmc`` hardware type with iPXE should switch to the
    ``ipxe`` boot interface from the deprecated ``[pxe]ipxe_enabled``
    option.
fixes:
  - |
    Adds the missing ``ipxe`` boot interface to the ``irmc`` hardware type.
    It is supposed to be used instead of the deprecated
    ``[pxe]ipxe_enabled`` configuration option.
ironic-15.0.0/releasenotes/notes/irmc-support-ipmitool-power-a3480a70753948e5.yaml
---
features:
  - |
    Adds support for the ``ipmitool`` power interface to the ``irmc``
    hardware type.
ironic-15.0.0/releasenotes/notes/rescue-interface-for-ilo-hardware-type-2392989d0fef8849.yaml
---
features:
  - |
    Adds support for the rescue interface ``agent`` for the ``ilo``
    hardware type when the corresponding boot interface being used is
    ``ilo-virtual-media``. The supported values of the rescue interface for
    the ``ilo`` hardware type are ``agent`` and ``no-rescue``. The default
    value is ``no-rescue``.
ironic-15.0.0/releasenotes/notes/broken-driver-update-fc5303340080ef04.yaml
---
fixes:
  - |
    A bug has been fixed in the node update code that could cause nodes to
    become impossible to update if their driver is no longer available.
ironic-15.0.0/releasenotes/notes/add-cisco-ucs-hardware-types-ee597ff0416f158f.yaml
---
features:
  - |
    Adds two new hardware types to support Cisco UCS Servers,
    ``cisco-ucs-standalone`` and ``cisco-ucs-managed``.
    ``cisco-ucs-standalone`` supports driver interfaces for controlling UCS
    servers in standalone mode via either CIMC APIs or via IPMI.
    ``cisco-ucs-managed`` is a superset of ``cisco-ucs-standalone`` and
    supports additional driver interfaces for controlling the UCS server
    via UCSM. To support these hardware types the following Ironic driver
    interfaces were made available to be configured on a node:

    * ``node.power_interface`` can be set to:

      * ``cimc`` for CIMC API power control (power on/off, reboot, etc.)
      * ``ucsm`` for UCSM API power control (power on/off, reboot, etc.)

    * ``node.management_interface`` can be set to:

      * ``cimc`` for CIMC API management control (setting the boot device,
        etc.)
      * ``ucsm`` for UCSM API management control (setting the boot device,
        etc.)
ironic-15.0.0/releasenotes/notes/remove-ipminative-driver-3367d25bbcc41fdc.yaml
---
upgrade:
  - |
    The agent_pyghmi, pxe_ipminative, and fake_ipminative drivers have all
    been removed from ironic due to lack of testing. Nodes using these
    drivers should be changed to the agent_ipmitool or pxe_ipmitool driver.
ironic-15.0.0/releasenotes/notes/catch-third-party-driver-validate-exceptions-94ed2a91c50d2d8e.yaml
---
fixes:
  - Catch unknown exceptions with traceback when validating driver
    interfaces.
ironic-15.0.0/releasenotes/notes/context-domain-id-name-deprecation-ae6e40718273be8d.yaml
---
deprecations:
  - |
    Usage of the following values was deprecated in the policy files:

    - ``domain_id`` and ``domain_name`` - ``user_domain_id`` should be used
      instead of those (note - ``user_domain_id`` is an ID of the domain,
      not its name).
    - ``tenant`` - ``project_name`` should be used instead.
    - ``user`` - ``user_id`` should be used instead.
ironic-15.0.0/releasenotes/notes/ipxe-with-dhcpv6-2bc7bd7f53a70f51.yaml
---
fixes:
  - |
    Fixes an issue where hosts executing ``iPXE`` to boot would error,
    indicating that no configuration was found, on networks where IPv6 is
    in use. This has been remedied through a minor addition to the
    Networking service in the Stein development cycle. For more information
    please see `story 2004502 `_.
ironic-15.0.0/releasenotes/notes/bug-2006275-a5ca234683ca4c32.yaml
---
fixes:
  - |
    Fixes an issue with using the serial number as a root device hint with
    the ``ansible`` deploy interface.
ironic-15.0.0/releasenotes/notes/ipxe-use-swift-5ccf490daab809cc.yaml
---
features:
  - By default, the ironic-conductor service caches the node's deploy
    ramdisk and kernel images locally and serves them via a separate HTTP
    server. A new ``[pxe]/ipxe_use_swift`` configuration option (disabled
    by default) allows images to be accessed directly from the object store
    via Swift temporary URLs. This is only applicable if iPXE is enabled
    (via the ``[pxe]/ipxe_enabled`` configuration option) and the image
    store is in Glance/Swift. For user images that are partition images
    requiring non-local boot, the default behavior with local caching and
    an HTTP server will still apply for the user image kernel and ramdisk.
ironic-15.0.0/releasenotes/notes/ilo-firmware-update-manual-clean-step-e6763dc6dc0d441b.yaml
---
features:
  - iLO drivers now provide out-of-band firmware update as a manual
    cleaning step, for supported hardware components.
ironic-15.0.0/releasenotes/notes/no-classic-idrac-4fbf1ba66c35fb4a.yaml
---
upgrade:
  - |
    The deprecated iDRAC classic drivers ``pxe_drac`` and
    ``pxe_drac_inspector`` have been removed. Please use the ``idrac``
    hardware type.
ironic-15.0.0/releasenotes/notes/bug-1518374-decd73fd82c2eb94.yaml
---
critical:
  - Fixes an issue where the next cleaning for a node would hang if the
    previous cleaning was aborted.
ironic-15.0.0/releasenotes/notes/improve-redfish-set-boot-device-e38e9e9442ab5750.yaml
---
features:
  - |
    Makes the management interface of the ``redfish`` hardware type not
    change the current boot frequency if the current setting is the same as
    the desired one. The goal is to avoid touching a potentially faulty BMC
    option whenever possible.
ironic-15.0.0/releasenotes/notes/adds-ilo-ipxe-boot-interface-4fc75292122db80d.yaml
---
features:
  - |
    Adds an ``ilo-ipxe`` boot interface to the ``ilo`` hardware type which
    allows for instance level iPXE enablement as opposed to conductor-wide
    enablement of iPXE. To perform iPXE boot with the ``ilo-ipxe`` boot
    interface:

    * Add ``ilo-ipxe`` to ``enabled_boot_interfaces`` in ``ironic.conf``
    * Set up a TFTP & HTTP server using the `Ironic document on iPXE boot
      configuration `_
    * Create/Set the baremetal node with ``--boot-interface ilo-ipxe``
ironic-15.0.0/releasenotes/notes/add-error-check-ipmitool-reboot-ca7823202c5ab71d.yaml
---
fixes:
  - Adds a missing error check into the ``ipmitool`` power driver's reboot
    method so that the reboot can fail properly if power off failed.
ironic-15.0.0/releasenotes/notes/inspector-enabled-f8a643f03e1e0360.yaml
---
upgrade:
  - |
    The ``[inspector]/enabled`` configuration option no longer has effect
    on the ``fake_inspector`` driver. It will also not have effect on
    new-style dynamic drivers based on hardware types; it will be necessary
    to use ``[DEFAULT]/enabled_inspect_interfaces`` instead.
ironic-15.0.0/releasenotes/notes/fix-mac-address-48060f9e2847a38c.yaml
---
fixes:
  - This fixes an InvalidMAC exception during iRMC out-of-band inspection.
ironic-15.0.0/releasenotes/notes/node-stuck-when-conductor-down-3aa41a3abed9daf5.yaml
---
fixes:
  - If a node gets stuck in one of the states ``deploying``, ``cleaning``,
    ``verifying``, ``inspecting``, ``adopting``, ``rescuing``,
    ``unrescuing`` for some reason (e.g. the conductor goes down when
    executing a task), it will be moved to an appropriate failure state the
    next time the conductor starts.
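The first enablement step for the ``ilo-ipxe`` boot interface could be sketched in ``ironic.conf`` as below; the exact interface list is deployment-specific:

```ini
[DEFAULT]
# Enable the per-node iPXE boot interface alongside the others in use.
enabled_boot_interfaces = ilo-ipxe,ilo-virtual-media
```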
ironic-15.0.0/releasenotes/notes/validate-node-properties-73509ee40f409ca2.yaml
---
fixes:
  - The `cpus`, `local_gb`, and `memory_mb` properties of a node are now
    validated at input time to ensure they are non-negative numbers.
ironic-15.0.0/releasenotes/notes/adoption-feature-update-d2160954a2c36b0a.yaml
---
fixes:
  - Adoption feature logic was updated to prevent ramdisk creation and
    default to instance creation where appropriate based on the driver.
  - Adoption documentation has been updated to note that the boot_option
    should likely be defined for nodes by a user leveraging the feature.
  - Adoption documentation has been updated to note that a user may wish to
    utilize the ``noop`` network interface that arrived with API version
    1.20.
ironic-15.0.0/releasenotes/notes/ansible-deploy-15da234580ca0c30.yaml
---
features:
  - |
    Adds a new ``ansible`` deploy interface. It targets mostly the
    undercloud use case by allowing greater customization of the
    provisioning process. This new deploy interface is usable only with
    hardware types. It is set as supported for the ``generic`` hardware
    type and all its subclasses, but must be explicitly enabled in the
    ``[DEFAULT]enabled_deploy_interfaces`` configuration file option to
    actually allow setting nodes to use it.

    For migration from the ``staging-ansible`` interface from the
    ``ironic-staging-drivers`` project to this ``ansible`` interface,
    operators have to consider the following differences:

    - Callback-less operation is not supported.
    - The node's ``driver_info`` fields ``ansible_deploy_username`` and
      ``ansible_deploy_key_file`` are deprecated and will be removed in the
      Rocky release. Instead, please use ``ansible_username`` and
      ``ansible_key_file`` respectively.
    - The base path for playbooks can be defined in the node's
      ``driver_info['ansible_playbooks_path']`` field. The default is the
      value of the ``[ansible]/playbooks_path`` option from the ironic
      configuration file.
    - Default playbooks for actions and the cleaning steps file can be set
      in the ironic configuration file as various ``[ansible]/default_*``
      options.
ironic-15.0.0/releasenotes/notes/remove-inspecting-state-support-10325bdcdd182079.yaml
---
upgrade:
  - The deprecated configuration option ``[conductor]inspect_timeout`` was
    removed; please use ``[conductor]inspect_wait_timeout`` instead.
other:
  - The support for returning the ``INSPECTING`` state from
    ``InspectInterface.inspect_hardware`` was removed. For asynchronous
    inspection, please return ``INSPECTWAIT`` instead of ``INSPECTING``,
    otherwise the node will be moved to the ``inspect failed`` state.
ironic-15.0.0/releasenotes/notes/disable_periodic_tasks-0ea39fa7a8a108c6.yaml
---
features:
  - |
    Setting these configuration options to 0 will disable the periodic
    tasks:

    * [conductor]sync_power_state_interval: sync power states for the nodes
    * [conductor]check_provision_state_interval:

      * check deployments and time out if the deployment takes too long
      * check the status of cleaning a node and time out if it takes too
        long
      * check the status of inspecting a node and time out if it takes too
        long
      * check for and handle nodes that are taken over by new conductors
        (if an old conductor disappeared)

    * [conductor]send_sensor_data_interval: send sensor data to ceilometer
    * [conductor]sync_local_state_interval: refresh a conductor's copy of
      the consistent hash ring. If any mappings have changed, determines
      which, if any, nodes need to be "taken over". The ensuing actions
      could include preparing a PXE environment, updating the DHCP server,
      and so on.
    * [oneview]periodic_check_interval:

      * check for nodes taken over by OneView users
      * check for nodes freed by OneView users
fixes:
  - |
    Fixes an issue where setting the configuration options listed above to
    0 caused a ValueError exception to be raised. You can now set them to 0
    to disable the associated periodic tasks. (For more information, see
    `story 2002059 `_.)
ironic-15.0.0/releasenotes/notes/update-proliantutils-version-54c0cd5c5d3c01dc.yaml
---
upgrade:
  - Updates the required proliantutils version for iLO drivers to 2.2.0.
    This version has support for sanitize disk erase using the SSA utility.
ironic-15.0.0/releasenotes/notes/add-redfish-inspect-interface-1577e70167f24ae4.yaml
---
features:
  - |
    Adds out-of-band inspection support to the ``redfish`` hardware type.
    Successful inspection populates the mandatory properties "cpus",
    "local_gb", "cpu_arch", "memory_mb" and creates ironic ports for
    inspected nodes.
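Disabling a periodic task, as described in the note above, is then a matter of zeroing the corresponding interval; which tasks to disable depends entirely on the deployment:

```ini
[conductor]
# A value of 0 disables the corresponding periodic task.
send_sensor_data_interval = 0
sync_local_state_interval = 0
```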
ironic-15.0.0/releasenotes/notes/node-update-instance-info-extra-policies-862b2a70b941cf39.yaml
---
features:
  - |
    Adds ``baremetal:node:update_extra`` and
    ``baremetal:node:instance_info`` policies to allow finer-grained policy
    control over node updates. In order to use standalone Ironic to
    provision a node, a user must be able to update ``instance_info`` (and
    ``extra`` if using metalsmith), and a lessee should not be able to
    update all node attributes.
ironic-15.0.0/releasenotes/notes/story-2004266-4725d327900850bf.yaml
---
fixes:
  - |
    The IPMI hardware type unconditionally instructed the BMC to not
    automatically clear the boot flag valid bit if a Chassis Control
    command is not received within a 60-second timeout (the countdown
    restarts when a Chassis Control command is received). Some BMCs do not
    support setting this; if sent, it causes the boot to be aborted
    instead. For the IPMI hardware type, a new driver option
    ``node['driver_info']['ipmi_disable_boot_timeout']`` can be specified.
    It is ``True`` by default; set it to ``False`` to bypass sending this
    command. See `story 2004266 `_ for additional information.
ironic-15.0.0/releasenotes/notes/add-agent-proxy-support-790e629634ca2eb7.yaml
---
features:
  - Pass proxy information from the agent driver to the IPA ramdisk, so
    that images can be cached on the proxy server.
issues:
  - When using a caching proxy with ``agent_*`` drivers, caching the image
    on the proxy server might involve increasing the
    [glance]swift_temp_url_duration config option value. This way, the
    cached entry will be valid for a period of time long enough to see the
    benefits of caching. A large temporary URL duration might become a
    security issue in some cases.
upgrade:
  - Adds a [glance]swift_temp_url_cache_enabled configuration option to
    enable Swift temporary URL caching. It is only useful if the caching
    proxy is used. Also adds
    [glance]swift_temp_url_expected_download_start_delay, which is used to
    check if the Swift temporary URL duration is long enough to let the
    image download start, and, if temporary URL caching is enabled, to
    determine if a cached entry will be still valid when the download
    starts. The value of
    [glance]swift_temp_url_expected_download_start_delay must be less than
    the value for the [glance]swift_temp_url_duration configuration option.
ironic-15.0.0/releasenotes/notes/protected-650acb2c8a387e17.yaml
---
features:
  - |
    It is now possible to protect a provisioned node from being undeployed,
    rebuilt or deleted by setting the new ``protected`` field to ``True``.
    The new ``protected_reason`` field can be used to document the reason
    the node was made protected.
ironic-15.0.0/releasenotes/notes/story-2004444-f540d9bbc3532ad0.yaml
---
fixes:
  - |
    Fixes an issue with the ``ipmi`` hardware type where
    ``node['driver_info']['ipmi_force_boot_device']`` could be interpreted
    as ``True`` when set to values such as "False".
ironic-15.0.0/releasenotes/notes/add-validate-rescue-2202e8ce9a174ece.yaml
---
other:
  - |
    Adds a new method ``validate_rescue()`` to ``NetworkInterface`` to
    validate the rescuing network. This method is called during validation
    of the rescue interface.
ironic-15.0.0/releasenotes/notes/cisco-drivers-deleted-5a42a8c508704c64.yaml
---
upgrade:
  - |
    The Cisco ``cisco-ucs-managed`` and ``cisco-ucs-standalone`` hardware
    types and ``cimc`` and ``ucsm`` hardware interfaces, which were
    deprecated in the 12.1.0 release, have now been removed. After
    upgrading, if any of these hardware types or interfaces are specified
    in ironic's configuration options, the ironic-conductor service will
    fail to start. Any existing ironic nodes with these hardware types or
    interfaces will become inoperable via ironic after the upgrade. If
    these hardware types or interfaces are being used, the affected nodes
    should be changed to use other hardware types or interfaces; or install
    these hardware types (and interfaces) from elsewhere separately. For
    more information, see `story 2005033 `_.
ironic-15.0.0/releasenotes/notes/default-swift_account-b008d08e85bdf154.yaml
---
features:
  - |
    If the ``[glance]swift_account`` option is not set, the default value
    is now calculated based on the ID of the project used to access the
    object store. Previously this option was required. This change does not
    affect using RadosGW as an object store backend.
  - |
    If the ``[glance]swift_temp_url_key`` option is not set, ironic now
    tries to fetch the key from the project used to access swift (often
    called ``service``). This change does not affect using RadosGW as an
    object store backend.
  - |
    If the ``[glance]swift_endpoint_url`` option is not set, ironic now
    tries to fetch the Object Store service URL from the service catalog.
    The ``/v1/AUTH_*`` suffix is stripped, if present.
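Operators who still want explicit values rather than the new automatic defaults can keep setting the options; the values below are placeholders, not working endpoints:

```ini
[glance]
# Explicit overrides; when unset, ironic now derives these automatically
# from the service catalog and the project used to access the object store.
swift_endpoint_url = http://192.0.2.20:8080
swift_account = AUTH_<service-project-id>
```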
ironic-15.0.0/releasenotes/notes/next-link-for-instance-uuid-f46eafe5b575f3de.yaml
---
fixes:
  - |
    Fixes a `bug `_ with the response for a ``GET /nodes?limit=1&instance_uuid=`` request. If a node matched, a ``next`` link was returned, even though there are no more nodes that will match. That link is no longer returned.

ironic-15.0.0/releasenotes/notes/bug-2005377-5c63357681a465ec.yaml
---
fixes:
  - |
    Fixes overflowing of the node fields ``last_error`` and ``maintenance_reason``, which would prevent the object from being correctly committed to the database. The maximum message length can be customized through a new configuration parameter, ``[DEFAULT]/log_in_db_max_size`` (default: 4096 characters).

ironic-15.0.0/releasenotes/notes/prevent-callback-url-from-being-updated-41d50b20fb236e82.yaml
---
security:
  - |
    Prevents additional updates of an agent ``callback_url`` through the agent heartbeat ``/v1/heartbeat/`` endpoint, as the ``callback_url`` should remain stable through the cleaning, provisioning, or rescue processes. Should anything such as an unexpected agent reboot cause the ``callback_url`` to change, heartbeat operations will now be ignored. More information can be found at `story 2006773 `_.

ironic-15.0.0/releasenotes/notes/add-node-description-790097704f45af91.yaml
---
features:
  - |
    Adds a ``description`` field to the node object to enable operators to store any information related to the node. The field is up to 4096 UTF-8 characters.
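For illustration, the truncation limit introduced in the ``bug-2005377`` note above could be tuned in ``ironic.conf`` as follows (the value shown is simply the documented default):

```ini
[DEFAULT]
# Maximum number of characters stored in the last_error and
# maintenance_reason node fields before the message is truncated.
log_in_db_max_size = 4096
```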
ironic-15.0.0/releasenotes/notes/fix-boot-from-volume-for-iscsi-deploy-71c1f2905498c50d.yaml
---
fixes:
  - |
    Fixes an issue in boot from volume for a node with the ``iscsi`` deploy interface. It would fail if no ``image_source`` was provided in the node's ``instance_info`` field, because it would try to validate the ``image_source``, which didn't exist. There is no need to specify the ``image_source``, and the validation is no longer being attempted. See `bug 1714147 `_ for details.

ironic-15.0.0/releasenotes/notes/security_groups-b57a5d6c30c2fae4.yaml
---
features:
  - |
    Adds support for security groups for the provisioning and cleaning network. These are optionally specified by the configuration options ``[neutron]/provisioning_network_security_groups`` and ``[neutron]/cleaning_network_security_groups``, respectively. If not specified, the default security group for the network is used. These options are only applicable for nodes using the "neutron" network interface. These options are ignored for nodes using the "flat" and "noop" network interfaces.

ironic-15.0.0/releasenotes/notes/list-nodes-by-driver-a1ab9f2b73f652f8.yaml
---
features:
  - Adds support for filtering nodes using the same driver via the API.

ironic-15.0.0/releasenotes/notes/rename-iso-builder-func-46694ed6ded84f4a.yaml
---
fixes:
  - |
    Renames the misleadingly named ``images.create_isolinux_image_for_uefi`` function to ``images.create_esp_image_for_uefi``. The new name reflects what's actually going on under the hood.
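A minimal sketch of the security-group options described above — the UUIDs are placeholders and must be replaced with security groups that actually exist in your deployment:

```ini
[neutron]
# Security groups applied to ports on the provisioning and cleaning
# networks. Only honoured by the "neutron" network interface; ignored
# by "flat" and "noop". UUIDs below are placeholders.
provisioning_network_security_groups = 11111111-1111-1111-1111-111111111111
cleaning_network_security_groups = 22222222-2222-2222-2222-222222222222
```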
ironic-15.0.0/releasenotes/notes/no-root-device-as-kernel-param-5e5326acae7b77a4.yaml
---
upgrade:
  - |
    Ironic no longer passes ``root_device`` as a kernel parameter via boot config files. Passing root device hints to Ironic Python Agent (IPA) as kernel parameters was deprecated in the Newton release. As a consequence, using root device hints with ironic as of the Ocata release will not be possible when deploying nodes with ramdisks based on IPA as of the Mitaka release. Operators relying on root device hints functionality are advised to update their IPA-based ironic deploy images.

ironic-15.0.0/releasenotes/notes/intel-ipmi-hardware-30aaa65cdbcb779a.yaml
---
features:
  - |
    Adds support for Intel IPMI hardware with a new hardware type, ``intel-ipmitool``. This hardware type is the same as the ``ipmi`` hardware type, with additional support for `Intel Speed Select Performance Profile Technology `_. It uses the ``intel-ipmitool`` management interface, which supports setting the desired configuration level for Intel SST-PP.

ironic-15.0.0/releasenotes/notes/wwn-extension-root-device-hints-de40ca1444ba4888.yaml
---
features:
  - Adds root device hints for `wwn_with_extension` and `wwn_vendor_extension`.

ironic-15.0.0/releasenotes/notes/bug-2001832-62e244dc48c1f79e.yaml
---
fixes:
  - |
    Fixes a bug where a node's hardware type cannot be changed to another hardware type which doesn't support any hardware interface currently used. See `bug 2001832 `_ for details.
ironic-15.0.0/releasenotes/notes/socat-respawn-de9e8805c820a7ac.yaml
---
fixes:
  - |
    Fixes an issue where the socat process would exit on client disconnect, which would (a) leave a zombie socat process in the process table and (b) disable any subsequent serial console connections. This issue was addressed by updating ironic to call socat with the ``fork,max-children=1`` options, which makes socat persist and accept multiple connections (but only one at a time). Please see story `2005024 `_ for additional information.

ironic-15.0.0/releasenotes/notes/ipmitool-bootdev-persistent-uefi-b1181a3c82343c8f.yaml
---
fixes:
  - |
    Fixes a problem where the boot mode (UEFI or BIOS) wasn't being considered when setting the boot device of a node using the "ipmitool" management interface. It would incorrectly switch from UEFI to Legacy BIOS mode on some hardware models.

ironic-15.0.0/releasenotes/notes/no-last-error-overwrite-b90aac3303eb992e.yaml
---
fixes:
  - |
    Fixes empty ``last_error`` field on cleaning failures.

ironic-15.0.0/releasenotes/notes/removed-glance-host-port-protocol-dc6e682097ba398f.yaml
---
upgrade:
  - |
    The deprecated options ``glance_host``, ``glance_port`` and ``glance_protocol`` from the ``[glance]`` section of the ironic configuration file were removed and will be ignored. Please use the ``[glance]/glance_api_servers`` option to provide specific addresses for the Image service endpoint when its discovery from the keystone service catalog is not desired.
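A sketch of the replacement option mentioned in the upgrade note above — the hostnames are placeholders, and the option is only needed when catalog discovery is not desired:

```ini
[glance]
# Comma-separated list of Image service endpoints, replacing the removed
# glance_host/glance_port/glance_protocol options. Hosts are illustrative.
glance_api_servers = https://glance1.example.com:9292,https://glance2.example.com:9292
```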
ironic-15.0.0/releasenotes/notes/force-out-hung-ipmitool-process-519c7567bcbaa882.yaml
---
fixes:
  - |
    Kills the ``ipmitool`` process invoked by ironic to read a node's power state if the process does not exit after the configured timeout expires. It appears pretty common for ``ipmitool`` to run for five minutes (with current ironic defaults) once it hits a non-responsive bare metal node. This could slow down the management of other nodes due to exhaustion of periodic task slots. The new behaviour is enabled by default, but can be disabled via the ``[ipmi]kill_on_timeout`` ironic configuration option.

ironic-15.0.0/releasenotes/notes/add-chassis_uuid-removal-possibility-8b06341a91f7c676.yaml
---
features:
  - Adds support for removing the chassis UUID associated with a node (via ``PATCH /v1/nodes/``). This is available starting with API version 1.25.

ironic-15.0.0/releasenotes/notes/amt-driver-wake-up-0880ed85476968be.yaml
---
upgrade:
  - Adds a configuration option ``[amt]awake_interval`` for the interval to wake up the AMT interface for a node. This should correspond to the IdleTimeout config option on the AMT interface. Setting it to 0 will disable waking the AMT interface, just like setting IdleTimeout=0 on the AMT interface will disable the AMT interface from sleeping when idle.
fixes:
  - Fixes an issue with talking to a sleeping AMT interface by waking up the interface before sending commands, if needed. This is configured with the ``[amt]awake_interval`` config option.
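The two options introduced in the notes above could be set together as sketched below; the ``awake_interval`` value is illustrative and should match the IdleTimeout configured on the AMT interface:

```ini
[ipmi]
# Kill hung ipmitool power-state reads once the timeout expires
# (this is the default behaviour; set to False to disable).
kill_on_timeout = True

[amt]
# Seconds between wake-ups of the AMT interface; 0 disables waking.
# Value shown is a placeholder, not a recommendation.
awake_interval = 60
```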
ironic-15.0.0/releasenotes/notes/remove-ipxe-enabled-opt-61d106f01c46acab.yaml
---
upgrade:
  - |
    The configuration option ``[pxe]ipxe_enabled`` was deprecated and has now been removed; thus, iPXE support was removed from the ``pxe`` interface. To use iPXE, the boot interface should be migrated to ``ipxe`` or other boot interfaces capable of booting from iPXE.

ironic-15.0.0/releasenotes/notes/fix-multi-attached-volumes-092ffedbdcf0feac.yaml
---
fixes:
  - |
    Fixes an issue with iPXE where the incorrect iscsi volume authentication data was being used with boot from volume when multi-attach volumes were present.

ironic-15.0.0/releasenotes/notes/hash-ring-race-da0d584de1f46788.yaml
---
fixes:
  - |
    Fixes a race condition in the hash ring implementation that could cause an internal server error on any request. See `story 2003966 `_ for details.
upgrade:
  - |
    The ``hash_ring_reset_interval`` configuration option was changed from 180 to 15 seconds. Previously, this option was essentially ignored on the API side, because the hash ring was reset on each API access. The lower value minimizes the probability of a request being routed to a wrong conductor when the ring needs rebalancing.

ironic-15.0.0/releasenotes/notes/heartbeat-locked-6e53b68337d5a258.yaml
---
fixes:
  - |
    Fixes an issue where node ramdisk heartbeat operations would collide with conductor locks and erroneously record an error in the node's ``last_error`` field.

ironic-15.0.0/releasenotes/notes/remove-radosgw-config-b664f3023dc8403c.yaml
---
upgrade:
  - |
    The ``swift/endpoint_type`` configuration option is now removed.
    python-swiftclient 3.2.0 (Ocata) and above removed support for the native URL type used by radosgw. Since using a ``swift/endpoint_type`` value of ``radosgw`` would fail anyway, it is removed. Deployers must now configure ceph with ``rgw swift account in url = True``. This must be set before upgrading to this release.

ironic-15.0.0/releasenotes/notes/ilo-inject-nmi-f487db8c3bfd08ea.yaml
---
features:
  - |
    Adds support for the injection of Non-Masking Interrupts (NMI) to the ``ilo`` management interface. This is supported on HPE ProLiant Gen9 and Gen10 servers.

ironic-15.0.0/releasenotes/notes/remove-messaging-aliases-0a6ba1ed392b1fed.yaml
---
upgrade:
  - |
    Removes old messaging transport aliases. These are listed below with the new value that should be used.

    * ``ironic.rpc.impl_kombu`` -> ``rabbit``
    * ``ironic.rpc.impl_qpid`` -> ``qpid``
    * ``ironic.rpc.impl_zmq`` -> ``zmq``

ironic-15.0.0/releasenotes/notes/node-save-internal-info-c5cc8f56f1d0dab0.yaml
---
fixes:
  - |
    Fixes an issue where some internal information for a node was not being saved to the database. See `bug 1679297 `_ for details.

ironic-15.0.0/releasenotes/notes/ilo-async-bios-clean-steps-15e49545ba818997.yaml
---
fixes:
  - |
    Makes all ``ilo`` driver BIOS interface clean steps asynchronous. This is required to ensure the settings on the baremetal node are consistent with the settings stored in the database, irrespective of the node clean step status. Refer to bug `2004066 `_ for details.
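The radosgw upgrade note above requires a ceph-side setting; a minimal sketch in ``ceph.conf`` could look like this (the radosgw client section name is deployment-specific and shown only as an example):

```ini
# ceph.conf — section name below is a placeholder for your radosgw instance.
[client.rgw.gateway]
# Required before upgrading, per the note above.
rgw swift account in url = true
```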
ironic-15.0.0/releasenotes/notes/migrate-to-pysnmp-hlapi-477075b5e69cc5bc.yaml
---
upgrade:
  - The minimum required version of pysnmp has been bumped to 4.3. This pysnmp version introduces a simpler, faster and more functional high-level SNMP API, to which the ironic `snmp` driver has been migrated.

ironic-15.0.0/releasenotes/notes/bug-1611555-de1ec64ba46982ec.yaml
---
features:
  - Adds timing metrics to DRAC drivers.

ironic-15.0.0/releasenotes/notes/ipmi-disable-timeout-option-e730362007f9bedd.yaml
---
features:
  - |
    Adds a configuration option ``[ipmi]disable_boot_timeout`` which is used to set the default behavior of whether ironic should send a raw IPMI command to disable timeout. This configuration option can be overridden by the per-node option ``ipmi_disable_boot_timeout`` in the node's ``driver_info`` field. See `story 2004266 `_ and `story 2002977 `_ for additional information.

ironic-15.0.0/releasenotes/notes/Add-port-option-support-to-ipmitool-e125d07fe13c53e7.yaml
---
features:
  - Adds support for ipmitool's port (-p) option. This allows ipmitool support for operators that do not use the default port (623) as their IPMI port.

ironic-15.0.0/releasenotes/notes/idrac-drives-conversion-raid-to-jbod-de10755d1ec094ea.yaml
---
fixes:
  - |
    The ``idrac`` hardware type converts physical drives from ``RAID`` to ``JBOD`` mode after the RAID ``delete_configuration`` cleaning step through the raid interface.
    This ensures that the individual disks freed by deleting the virtual disks are visible to the OS.

ironic-15.0.0/releasenotes/notes/conf-deploy-image-5adb6c1963b149ae.yaml
---
features:
  - |
    The deploy and/or rescue kernel and ramdisk can now be configured via the new configuration options ``deploy_kernel``, ``deploy_ramdisk``, ``rescue_kernel`` and ``rescue_ramdisk``, respectively.

ironic-15.0.0/releasenotes/notes/osprofiler-61a330800abe4ee6.yaml
---
upgrade:
  - |
    The minimum required version of the ``osprofiler`` library is now 1.5.0. This is now a new dependency; ironic has not been able to start with 1.4.0 since the Pike release, when this dependency was introduced.

ironic-15.0.0/releasenotes/notes/remove-deprecated-option-names-6d5d53cc70dd2d49.yaml
---
upgrade:
  - |
    In the config section ``[agent]``, two config options were deprecated in the Liberty cycle and they have been removed. The options were named:

    * ``[agent]/agent_erase_devices_priority``
    * ``[agent]/agent_erase_devices_iterations``

ironic-15.0.0/releasenotes/notes/fix-esp-grub-path-9e5532993dccc07a.yaml
---
fixes:
  - |
    Fixes the GRUB configuration file generation procedure when building bootable ISO images that include a user EFI boot loader image. Prior to this fix, no bootable ISO image could be generated unless the EFI boot loader was extracted from the deploy ISO image.

ironic-15.0.0/releasenotes/notes/build-uefi-only-iso-ce6bcb0da578d1d6.yaml
---
other:
  - |
    The Bare Metal service now builds a UEFI-only bootable ISO image (when being asked to build a UEFI-bootable image) rather than building a hybrid BIOS/UEFI-bootable ISO.
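The ``[ipmi]disable_boot_timeout`` option noted earlier could be set globally as sketched below; it is assumed here that the per-node ``driver_info`` value, when present, takes precedence over this default:

```ini
[ipmi]
# Default behaviour for sending the raw IPMI command that disables the
# boot timeout; can be overridden per node via the
# ipmi_disable_boot_timeout key in the node's driver_info.
disable_boot_timeout = False
```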
ironic-15.0.0/releasenotes/notes/fake-noop-bebc43983eb801d1.yaml
---
fixes:
  - |
    Adds missed noop implementations (e.g. ``no-inspect``) to the ``fake-hardware`` hardware type. This fixes enabling this hardware type without enabling all (even optional) ``fake`` interfaces.

ironic-15.0.0/releasenotes/notes/fix-create-configuration-0e000392d9d7f23b.yaml
fixes:
  - |
    Fixes a bug in the ``idrac`` hardware type where, when creating one or more virtual disks on a RAID controller that supports passthru mode (PERC H730P), the cleaning step would finish before the job to create the virtual disks actually completed. This could result in the client attempting to perform another action against the iDRAC that creates a configuration job, and that action would fail since the job to create the virtual disk would still be executing. This patch fixes the issue by only allowing the cleaning step to finish after the job to create the virtual disk completes. See `bug 2007285 `_ for more details.

ironic-15.0.0/releasenotes/notes/add-node-boot-mode-control-9761d4bcbd8c3a0d.yaml
---
other:
  - Adds ``get_boot_mode``, ``set_boot_mode`` and ``get_supported_boot_modes`` methods to the driver management interface. Drivers can override these methods, implementing boot mode management calls to the BMC of the baremetal nodes being managed.
features:
  - The new ironic configuration setting ``[deploy]/default_boot_mode`` allows the operator to set the default boot mode when ironic can't pick the boot mode automatically based on node configuration, hardware capabilities, or bare-metal machine configuration.
fixes:
  - If the bare metal machine's boot mode differs from the requested one, ironic will now attempt to set the requested boot mode on the bare metal machine and fail explicitly if the driver does not support setting the boot mode on the node.

ironic-15.0.0/releasenotes/notes/add_infiniband_support-f497767f77277a1a.yaml
---
features:
  - Adds support for InfiniBand networking to allow hardware inspection and PXE boot over InfiniBand.

ironic-15.0.0/releasenotes/notes/root-device-hints-rotational-c21f02130394e1d4.yaml
---
features:
  - Extends the root device hints to identify whether a disk is rotational or not.

ironic-15.0.0/releasenotes/notes/conductor_early_import-fd29fa8b89089977.yaml
---
fixes:
  - Fixes a bug that prevented the ironic-conductor service from using the interval values from the configuration options for periodic tasks. Instead, the default values had been used.

ironic-15.0.0/releasenotes/notes/irmc-additional-capabilities-4fd72ba50d05676c.yaml
---
features:
  - |
    Adds new capabilities (``server_model``, ``rom_firmware_version``, ``pci_gpu_devices``, ``trusted_boot`` and ``irmc_firmware_version``) to the iRMC out-of-band hardware inspection for FUJITSU PRIMERGY bare metal nodes with firmware iRMC S4 and newer.
other:
  - |
    During the out-of-band inspection for nodes using the ``irmc`` hardware type, nodes will be powered on. The original power state will be restored after inspection is finished.
upgrade:
  - |
    *python-scciclient* of version 0.6.0 or newer is required by the ``irmc`` hardware type to support the new out-of-band inspection capabilities. If an older version is used, the new capabilities will not be discovered.
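The ``[deploy]/default_boot_mode`` fallback described in the boot-mode note above could be set as sketched below (assuming ``uefi`` and ``bios`` are the accepted values):

```ini
[deploy]
# Used only when ironic cannot pick the boot mode automatically from
# node configuration, hardware capabilities, or machine configuration.
default_boot_mode = uefi
```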
ironic-15.0.0/releasenotes/notes/agent-wol-driver-4116f64907d0db9c.yaml
---
features:
  - Adds an `agent_wol` driver that combines the Agent deploy interface with the Wake-On-LAN power driver.

ironic-15.0.0/releasenotes/notes/fix-clean-steps-not-running-0d065cb022bc0419.yaml
---
prelude: >
  A major bug was fixed where clean steps do not run.
critical:
  - This fixes a bug where Ironic skipped all clean steps, which may leave the previous tenant's data on disk available to new users.
security:
  - This fixes a bug where Ironic skipped all clean steps, which may leave the previous tenant's data on disk available to new users.

ironic-15.0.0/releasenotes/notes/glance-keystone-dd30b884f07f83fb.yaml
---
upgrade:
  - |
    The configuration option ``[glance]glance_host`` is now empty by default. If neither it nor ``[glance]glance_api_servers`` are provided, ironic will now try fetching the Image service endpoint from the service catalog.
deprecations:
  - |
    The configuration options ``[glance]glance_host``, ``[glance]glance_port`` and ``[glance]glance_protocol`` are deprecated in favor of either using ``[glance]glance_api_servers`` or using the service catalog.

ironic-15.0.0/releasenotes/notes/reusing-oneview-client-6a3936fb8f113c10.yaml
---
fixes:
  - Fixes a bug where OneView drivers create a new instance of the OneView client for each request made.

ironic-15.0.0/releasenotes/notes/fix-not-exist-deploy-image-for-irmc-cb82c6e0b52b8a9a.yaml
---
fixes:
  - |
    Fixes an issue with the ``irmc`` hardware type failing to create a boot image via the ``irmc-virtual-media`` boot interface.
    See `story 2003338 `_ for more information.

ironic-15.0.0/releasenotes/notes/deprecate-xclarity-config-af9b753f96779f42.yaml
---
features:
  - |
    Adds new parameter fields to ``driver_info``, which will become mandatory in the Stein release:

    * ``xclarity_manager_ip``: IP address of the XClarity Controller.
    * ``xclarity_username``: Username for the XClarity Controller.
    * ``xclarity_password``: Password for the XClarity Controller username.
    * ``xclarity_port``: Port to be used for the XClarity Controller connection.
deprecations:
  - |
    Configuration options ``[xclarity]/manager_ip``, ``[xclarity]/username``, and ``[xclarity]/password`` are deprecated and will be removed in the Stein release.
fixes:
  - |
    Fixes an issue where parameters required in ``driver_info`` and descriptions in documentation are different.

ironic-15.0.0/releasenotes/notes/reboot-do-not-power-off-if-already-1452256167d40009.yaml
---
fixes:
  - |
    Fixes a problem where rebooting a node using the ``ipmitool`` power interface could cause a deploy to fail. Now it no longer tries to power off nodes that are already off, because some BMCs will error in these cases. See `bug 1718794 `_ for details.

ironic-15.0.0/releasenotes/notes/pass-metrics-config-to-agent-on-lookup-6db9ae187c4e8151.yaml
---
features:
  - Adds the ability for the ironic conductor to pass configurations for agent metrics on lookup. When paired with a sufficiently new ironic python agent, this will configure the metrics backends.

ironic-15.0.0/releasenotes/notes/heartbeat_agent_version-70f4e64b19b51d87.yaml
---
other:
  - The agent heartbeat API (POST ``/v1/heartbeat/``) can now receive a new ``agent_version`` parameter.
    If received, this will be stored in the node's ``driver_internal_info['agent_version']`` field. This information will be used by the Bare Metal service to gracefully degrade support for agent features that are requested by the Bare Metal service, ensuring that we don't request a feature that an older ramdisk doesn't support.

ironic-15.0.0/releasenotes/notes/xclarity-driver-622800d17459e3f9.yaml
---
features:
  - |
    Adds the new ``xclarity`` hardware type for managing the Lenovo IMM2 and IMM3 family of server hardware with the following interfaces:

    * management: ``xclarity``
    * power: ``xclarity``

ironic-15.0.0/releasenotes/notes/configdrive-support-using-ceph-radosgw-8c6f7b8bede2077c.yaml
---
features:
  - |
    Adds support for storing the configdrive in `Ceph Object Gateway `_ (radosgw) instead of the OpenStack Object service (swift) using the compatible API.
  - |
    Adds support to use the radosgw authentication mechanism that relies on a user name and a password instead of an authentication token. The following options must be specified in the ironic configuration file:

    * ``[swift]/auth_url``
    * ``[swift]/username``
    * ``[swift]/password``
deprecations:
  - The ``[conductor]/configdrive_use_swift`` and ``[glance]/temp_url_endpoint_type`` options are deprecated and will be removed in the Queens release. Use ``[deploy]/configdrive_use_object_store`` and ``[deploy]/object_store_endpoint_type`` respectively instead.
upgrade:
  - Adds the ``[deploy]/object_store_endpoint_type`` option to specify the type of endpoint to use for instance images and configdrive storage. Allowed values are ``swift`` or ``radosgw``. The default is ``swift``.
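Combining the radosgw configdrive options above, a configuration could be sketched as follows — the endpoint URL format and credentials are placeholders, since they depend on the radosgw deployment:

```ini
[deploy]
configdrive_use_object_store = True
# Allowed values: swift or radosgw (default: swift).
object_store_endpoint_type = radosgw

[swift]
# radosgw username/password authentication; all values are placeholders.
auth_url = http://radosgw.example.com:8000/auth/v1
username = ironic
password = CHANGE_ME
```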
ironic-15.0.0/releasenotes/notes/ipxe-command-line-ip-argument-4e92cf8bb912f62d.yaml
---
fixes:
  - |
    Fixes a compatibility issue where the iPXE kernel command line was no longer compatible with dracut. The ``ip`` parameter has been removed, as it is incompatible with the ``BOOTIF`` and missing ``autoconf`` parameters when dracut is used. Further details can be found in `storyboard `_.

ironic-15.0.0/releasenotes/notes/oneview-inspection-interface-c2d6902bbeca0501.yaml
---
features:
  - Adds an in-band inspection interface usable by OneView drivers.

ironic-15.0.0/releasenotes/notes/add-timeout-parameter-to-power-methods-5f632c936497685e.yaml
---
other:
  - |
    The ironic-conductor expects that all PowerInterface's set_power_state() and reboot() methods accept a ``timeout`` parameter. Any out-of-tree implementations that don't will cause TypeError exceptions to be raised.

ironic-15.0.0/releasenotes/notes/add-owner-information-52e153faf570747e.yaml
---
features:
  - |
    Adds API version 1.50, which allows for the storage of an ``owner`` field on node objects. This is intended for either storage of human-parsable information or the storage of a tenant UUID which could be leveraged in a future version of the Bare Metal as a Service API.

ironic-15.0.0/releasenotes/notes/fix-ipmitool-console-empty-password-a8edc5e2a1a7daf6.yaml
---
fixes:
  - Fixes an issue where ipmitool console did not work with an empty IPMI password.
ironic-15.0.0/releasenotes/notes/remove-agent-passthru-432b18e6c430cee6.yaml
---
features:
  - |
    Agent lookup/heartbeat as vendor passthru is removed from most in-tree ironic drivers. Affected drivers are:

    * agent_ipmitool
    * agent_ipmitool_socat
    * agent_ipminative
    * agent_irmc
    * agent_ssh
    * agent_vbox
    * agent_ucs
    * pxe_agent_cimc
    * pxe_ipmitool
    * pxe_ipmitool_socat
    * pxe_ssh
    * pxe_ipminative
    * pxe_seamicro
    * pxe_snmp
    * pxe_irmc
    * pxe_vbox
    * pxe_msftocs
    * pxe_ucs
    * pxe_iscsi_cimc
    * pxe_drac
    * pxe_drac_inspector
    * iscsi_irmc
    * agent_ilo
    * iscsi_ilo
    * pxe_ilo
    * agent_pxe_oneview
    * iscsi_pxe_oneview

    All the other vendor passthru methods are left in place if the driver had them.

ironic-15.0.0/releasenotes/notes/make-versioned-notifications-topics-configurable-18d70d573c27809e.yaml
---
features:
  - |
    Adds a ``[DEFAULT]/versioned_notifications_topics`` configuration option. This enables operators to configure the topics used for versioned notifications.

ironic-15.0.0/releasenotes/notes/deny-too-long-chassis-description-0690d6f67ed002d5.yaml
fixes:
  - The API now returns an appropriate error message when a chassis description over 255 characters is specified.

ironic-15.0.0/releasenotes/notes/bug-2005764-15f45e11b9f9c96d.yaml
---
fixes:
  - |
    Fixes an issue encountered during deployment, more precisely during the configdrive partition creation step.
    On some specific devices like NVMe drives, the created configdrive partition could not be correctly identified (required to dump data onto it afterward). See `story 2005764 `_.

ironic-15.0.0/releasenotes/notes/ilo-vendor-e8d299ae13388184.yaml
---
features:
  - |
    Adds the missing ``ilo`` vendor interface to the ``ilo`` hardware type.

ironic-15.0.0/releasenotes/notes/ironic-cfg-defaults-4708eed8adeee609.yaml
---
other:
  - |
    The default rootwrap configuration files are now included when building the ironic python package. The files are included in the path ``etc/ironic`` relative to the root of where ironic is installed.

ironic-15.0.0/releasenotes/notes/html-errors-27579342e7e8183b.yaml
---
fixes:
  - |
    The bare metal API no longer returns HTML as part of the ``error_message`` field in error responses when no ``Accept`` header is provided.
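The ``versioned_notifications_topics`` option introduced a few notes above could be set as sketched below; the additional topic name is purely illustrative:

```ini
[DEFAULT]
# Comma-separated list of topics for versioned notifications.
# The first value is the conventional default; the second is a placeholder
# for an extra consumer-specific topic.
versioned_notifications_topics = ironic_versioned_notifications,custom_notifications
```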
ironic-15.0.0/releasenotes/notes/notify-node-storage-interface-7fd07ee7ee71cd22.yaml
---
features:
  - |
    Adds a ``storage_interface`` field to the node-related notifications:

    * ``baremetal.node.create.*`` (new payload version 1.2)
    * ``baremetal.node.update.*`` (new payload version 1.2)
    * ``baremetal.node.delete.*`` (new payload version 1.2)
    * ``baremetal.node.maintenance.*`` (new payload version 1.4)
    * ``baremetal.node.console.*`` (new payload version 1.4)
    * ``baremetal.node.power_set.*`` (new payload version 1.4)
    * ``baremetal.node.power_state_corrected.*`` (new payload version 1.4)
    * ``baremetal.node.provision_set.*`` (new payload version 1.4)

ironic-15.0.0/releasenotes/notes/deprecate-glance-url-scheme-ceff3008cf9cf590.yaml
---
other:
  - |
    Support for parsing the glance API endpoint from the full REST path to a glance image was removed, as it was not working anyway. The image service API is now always resolved from the keystone catalog or via the options in the ``[glance]`` section in the ironic configuration file.

ironic-15.0.0/releasenotes/notes/remove-most-unsupported-049f3401c2554a3c.yaml
---
upgrade:
  - |
    A number of drivers that were declared as unsupported in the Newton release have been removed from the ironic tree. This includes drivers with power and/or management driver interfaces based on:

    - MSFT OCS
    - SeaMicro client
    - Virtualbox over pyremotevbox client

    As a result, the following ironic drivers will no longer be available:

    - agent_vbox
    - fake_msftocs
    - fake_seamicro
    - fake_vbox
    - pxe_msftocs
    - pxe_seamicro
    - pxe_vbox

    After upgrading, if one or more of these drivers are in the 'enabled_drivers' configuration option, the ironic-conductor service will fail to start.
    Any existing ironic nodes with these drivers assigned will become
    inoperable via ironic after the upgrade, as it will not be possible to
    change any node state/properties except changing the node driver.
    Operators who have one of the drivers listed above enabled are required
    to either disable those drivers and assign another existing driver to
    the affected nodes as appropriate, or install these drivers separately
    from elsewhere.

ironic-15.0.0/releasenotes/notes/add-tooz-dep-85c56c74733a222d.yaml
---
upgrade:
  - |
    Adds a new dependency on the `tooz library `_, as the consistent hash
    ring code was moved out of ironic and into tooz.

ironic-15.0.0/releasenotes/notes/add-parallel-power-syncs-b099d66e80aab616.yaml
---
features:
  - |
    Parallelizes periodic power sync calls by running up to
    ``[conductor]/sync_power_state_workers`` workers simultaneously. The
    default is to run up to ``8`` workers. This change should let
    larger-scale setups run power syncs more frequently and make the whole
    power sync procedure more resilient to slow or dead BMCs.

ironic-15.0.0/releasenotes/notes/add-deploy-steps-redfish-bios-interface-f5e5415108f87598.yaml
---
features:
  - |
    Adds support for deploy steps to the ``bios`` interface of the
    ``redfish`` hardware type. The methods ``factory_reset`` and
    ``apply_configuration`` can be used as deploy steps.

ironic-15.0.0/releasenotes/notes/idrac-wsman-bios-interface-b39a51828f61eff6.yaml
---
features:
  - |
    Implemented the ``BIOS interface`` for the ``idrac`` hardware type.
    Primarily, implemented the ``factory_reset`` and ``apply_configuration``
    clean and deploy steps, as asynchronous operations.
    For more details, see story `2007400 `_.

ironic-15.0.0/releasenotes/notes/fix-get-deploy-info-port.yaml
---
fixes:
  - |
    Fixed the default value of ``port`` in ``iscsi_deploy.get_deploy_info``
    to be set to the ``[iscsi]/portal_port`` option value, instead of
    hardcoding it to ``3260``.

ironic-15.0.0/releasenotes/notes/messaging-log-level-5f870ea69db53d26.yaml
---
fixes:
  - |
    ``DEBUG``-level logging from the ``oslo.messaging`` library is no
    longer displayed by default.

ironic-15.0.0/releasenotes/notes/idrac-remove-commit_required-d9ea849e8f5e78e2.yaml
---
upgrade:
  - |
    Removes ``commit_required`` from the dictionary returned by the
    ``set_bios_config`` vendor passthru call in the ``idrac`` hardware
    type. ``commit_required`` was split into two keys:
    ``is_commit_required`` and ``is_reboot_required``, which indicate the
    actions necessary to complete setting the BIOS settings.
    ``commit_required`` was removed in ``python-dracclient`` version 3.0.0.

ironic-15.0.0/releasenotes/notes/prelude-to-the-stein-f25b6073b6d1c598.yaml
---
prelude: |
    The Bare Metal as a Service team joyfully announces our OpenStack Stein
    release of ironic 12.1.0.

    While no steins nor speakers were harmed during the development of this
    release, we might have suffered some hearing damage after we learned
    that we could increase the volume well past eleven!

    Notable items include:

    * Increased parallelism of power synchronization to improve overall
      conductor efficiency.
    * API fields to support node ``description`` and ``owner`` values.
    * HPE iLO ``ilo5`` and Huawei ``ibmc`` hardware types.
    * Allocations API interface to enable operators to find and select bare
      metal nodes for deployment.
    * JSON-RPC can now be used for ``ironic-api`` to ``ironic-conductor``
      communication, as opposed to using an AMQP messaging provider.
    * Support for customizable PXE templates and streamlined deployment
      sequences.
    * Initial support for the definition of "deployment templates" to
      enable operators to define and match customized deployment sequences.
    * Initial work for supporting SmartNIC configuration is included;
      however, the Networking Service changes required are not anticipated
      until sometime during the Train development cycle.
    * And numerous bug fixes, including ones for IPv6 and IPMI.

    This release includes the changes in ironic's ``12.0.0`` release, which
    was also released during the Stein development cycle and includes a
    number of improvements for Bare Metal infrastructure operators. More
    about our earlier Stein release can be found in our `release notes `_.

ironic-15.0.0/releasenotes/notes/fix-cleaning-spawn-error-60b60281f3be51c2.yaml
---
fixes:
  - |
    Fixes issues with error handling when spawning a new thread to continue
    cleaning. See https://bugs.launchpad.net/ironic/+bug/1539118.

ironic-15.0.0/releasenotes/notes/remove-agent-passthru-complete-a6b2df65b95889d5.yaml
---
upgrade:
  - |
    Ironic no longer supports agent lookup/heartbeats as vendor passthru
    methods. All out-of-tree drivers must be updated to use
    ``AgentDeployMixin`` classes directly, without relying on the
    ``BaseAgentVendor`` class and other classes that were inheriting from
    it (e.g. ``agent.AgentVendorInterface`` and
    ``iscsi_deploy.VendorPassthru``). This means that ironic is
    incompatible with deploy ramdisks based on Ironic Python Agent (IPA)
    < 1.5.0. Operators must update their IPA-based deploy ramdisks in this
    case.
    Operators using non-IPA based deploy ramdisks which use the ironic
    lookup/heartbeat functionality must update their ramdisks to use the
    top-level ironic lookup/heartbeat REST API, available since ironic API
    v1.22.

ironic-15.0.0/releasenotes/notes/train-release-59ff1643ec92c10a.yaml
---
prelude: >
    "Choooooo! Choooooo!" The Train is now departing the station. The
    OpenStack Bare Metal as a service team is proud to announce the release
    of ironic 13.0.0. This release brings the long-desired feature of
    software RAID configuration, Redfish virtual media boot support, sensor
    data improvements, and numerous bug fixes. We hope you enjoy your ride
    on the OpenStack Ironic Train.

ironic-15.0.0/releasenotes/notes/bug-30315-e46eafe5b575f3da.yaml
---
fixes:
  - |
    Fixes an issue regarding the ``ansible`` deploy interface cleaning
    workflow. Handling the error in the driver and returning nothing caused
    the manager to consider the step done and go to the next one, instead
    of interrupting the cleaning workflow.

ironic-15.0.0/releasenotes/notes/bug-1672457-563d5354b41b060e.yaml
---
fixes:
  - |
    Fixed a bug that was causing an increase in CPU usage over time.

ironic-15.0.0/releasenotes/notes/fix-shellinabox-console-subprocess-timeout-d3eccfe0440013d7.yaml
---
fixes:
  - |
    Fixed the `issue `_ with the node being locked for longer than
    ``[console]subprocess_timeout`` seconds when the shellinabox process
    fails to start before the specified timeout elapses.
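As a hedged illustration of the top-level lookup/heartbeat REST API mentioned in the ``remove-agent-passthru-complete`` note above, the sketch below builds a heartbeat request. The endpoint shape (``POST /v1/heartbeat/{node}``) follows that API; the host, UUID and body fields used here are placeholders, not values from the release notes.

```python
import json

def build_heartbeat_request(api_root, node_uuid, callback_url):
    """Build the URL and JSON body for an agent heartbeat (illustrative)."""
    url = "%s/v1/heartbeat/%s" % (api_root.rstrip("/"), node_uuid)
    body = json.dumps({"callback_url": callback_url})
    return url, body

# Placeholder endpoint and node UUID for demonstration only.
url, body = build_heartbeat_request(
    "http://ironic.example.com:6385",
    "1be26c0b-03f2-4d2e-ae87-c02d7f33c123",
    "https://192.0.2.5:9999")
assert url.endswith("/v1/heartbeat/1be26c0b-03f2-4d2e-ae87-c02d7f33c123")
assert json.loads(body)["callback_url"] == "https://192.0.2.5:9999"
```

An updated ramdisk would send such a request periodically instead of calling the removed vendor passthru methods.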
ironic-15.0.0/releasenotes/notes/idrac-hardware-type-54383960af3459d0.yaml
---
features:
  - |
    Adds a new hardware type, ``idrac``, for Dell EMC integrated Dell
    Remote Access Controllers (iDRAC). The ``idrac`` hardware type supports
    PXE-based provisioning using an iDRAC. It supports the following driver
    interfaces:

    * boot: ``pxe``
    * console: ``no-console``
    * deploy: ``iscsi`` and ``direct``
    * inspect: ``idrac``, ``inspector``, and ``no-inspect``
    * management: ``idrac``
    * network: ``flat``, ``neutron``, and ``noop``
    * power: ``idrac``
    * raid: ``idrac`` and ``no-raid``
    * storage: ``noop`` and ``cinder``
    * vendor: ``idrac``

ironic-15.0.0/releasenotes/notes/newton-driver-deprecations-e40369be37203057.yaml
---
deprecations:
  - |
    The following drivers are marked as unsupported and therefore
    deprecated. Some or all of these drivers may be removed in the Ocata
    cycle or later.

    * ``agent_amt``
    * ``agent_iboot``
    * ``agent_pyghmi``
    * ``agent_ssh``
    * ``agent_vbox``
    * ``agent_wol``
    * ``fake_ipminative``
    * ``fake_ssh``
    * ``fake_seamicro``
    * ``fake_iboot``
    * ``fake_snmp``
    * ``fake_vbox``
    * ``fake_amt``
    * ``fake_msftocs``
    * ``fake_wol``
    * ``pxe_ipminative``
    * ``pxe_ssh``
    * ``pxe_vbox``
    * ``pxe_seamicro``
    * ``pxe_iboot``
    * ``pxe_snmp``
    * ``pxe_amt``
    * ``pxe_msftocs``
    * ``pxe_wol``

ironic-15.0.0/releasenotes/notes/fix-fast-track-entry-path-467c20f97aeb2f4b.yaml
---
fixes:
  - |
    Corrects logic in the entry path of the node cleaning and deployment
    processes to prohibit ``agent_url`` from being preemptively removed if
    ``fast_track`` is enabled and in use. This allows fast-track cleaning
    and deployment operations to succeed.
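A hardware type such as ``idrac`` above is composed from per-interface options in ``ironic.conf``. The fragment below is a hedged sketch of what enabling it might look like; the chosen interface combinations are assumptions for illustration, not part of the release note.

```ini
[DEFAULT]
# Assumed example: enable the idrac hardware type and a subset of its
# supported interfaces (values here are illustrative).
enabled_hardware_types = idrac
enabled_management_interfaces = idrac
enabled_power_interfaces = idrac
enabled_raid_interfaces = idrac,no-raid
enabled_vendor_interfaces = idrac
```

Each ``enabled_*_interfaces`` option must list every interface implementation that nodes in the deployment will use.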
ironic-15.0.0/releasenotes/notes/optional-redfish-system-id-3f6e8b0ac989cb9b.yaml
---
features:
  - |
    The ``redfish_system_id`` property of the ``redfish`` hardware type has
    been made optional. If it is not specified in ``driver_info`` and the
    target BMC manages a single ComputerSystem, ironic will assume that
    system. Otherwise, ironic will fail, requiring an explicit
    ``redfish_system_id`` specification in ``driver_info``.

ironic-15.0.0/releasenotes/notes/add-port-is-smartnic-4ce6974c8fe2732d.yaml
---
features:
  - |
    Adds an ``is_smartnic`` field to the port object in REST API version
    1.53. The ``is_smartnic`` field indicates whether this port is a Smart
    NIC port; it is ``False`` by default. This field may be set by the
    operator to use bare metal nodes with Smart NICs as ironic nodes. The
    REST API endpoints related to ports provide support for the
    ``is_smartnic`` field. The `ironic admin documentation `_ provides
    information on how to configure and use Smart NIC ports.
upgrade:
  - |
    Adds an ``is_smartnic`` field to the port object in REST API version
    1.53. Upgrading to this release will set ``is_smartnic`` to ``False``
    for all ports.

ironic-15.0.0/releasenotes/notes/ilo-automated-cleaning-fails-14ee438de3dd8690.yaml
---
fixes:
  - |
    Fixes an issue where automated cleaning fails for the iLO drivers.
    Automated cleaning fails for an iLO driver if iLO is in the System POST
    state, because iLO does not allow setting the boot device while it is
    in that state.

ironic-15.0.0/releasenotes/notes/remove-clean-nodes-38cfa633ca518f99.yaml
---
upgrade:
  - |
    Removes the deprecated ``[conductor]/clean_nodes`` option.
    Configuration files should instead use the
    ``[conductor]/automated_clean`` option.
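The ``is_smartnic`` port field above is set when creating or updating a port. The sketch below builds a hypothetical ``POST /v1/ports`` request body; all field values, including the ``local_link_connection`` contents, are placeholders for illustration.

```python
import json

def smartnic_port_body(node_uuid, address, hostname, port_id):
    """Assemble an illustrative Smart NIC port creation body (API >= 1.53)."""
    return {
        "node_uuid": node_uuid,
        "address": address,
        "is_smartnic": True,
        # For Smart NIC ports the wiring details go in local_link_connection.
        "local_link_connection": {"hostname": hostname, "port_id": port_id},
    }

body = smartnic_port_body(
    "1be26c0b-03f2-4d2e-ae87-c02d7f33c123",  # placeholder node UUID
    "52:54:00:12:34:56", "smartnic-host", "rep0-0")
assert body["is_smartnic"] is True
assert json.loads(json.dumps(body)) == body  # body is JSON-serializable
```

On upgrade, existing ports simply receive ``is_smartnic = False``, so only newly created Smart NIC ports need a body like this.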
ironic-15.0.0/releasenotes/notes/drac-fix-oob-cleaning-b4b717895e243c9b.yaml
---
fixes:
  - |
    Fixes `bug 1691808 `_, which caused RAID creation/deletion to
    frequently fail when using the iDRAC driver due to an
    *Export Configuration* job running. The fix requires the
    ``python-dracclient`` library of version 1.3.0 or higher.

ironic-15.0.0/releasenotes/notes/dell-boss-raid1-ec33e5b9c59d4021.yaml
---
issues:
  - |
    Building RAID1 is known to not work with Dell BOSS cards when using
    **python-dracclient** 1.4.0 or earlier. Upgrade to
    **python-dracclient** 1.5.0 to use this feature.

ironic-15.0.0/releasenotes/notes/provide_mountpoint-58cfd25b6dd4cfde.yaml
---
fixes:
  - |
    Fixes a bug where cinder block storage service volumes fail to attach,
    expecting the mountpoint to be a valid string. See `story 2004864 `_
    for additional information.

ironic-15.0.0/releasenotes/notes/deprecated-cinder-opts-e10c153768285cab.yaml
---
deprecations:
  - |
    The configuration option ``[cinder]/url`` is deprecated and will be
    ignored in the Rocky release. Instead, use the
    ``[cinder]/endpoint_override`` configuration option to set a specific
    cinder API address when automatic discovery of the cinder API endpoint
    from the keystone catalog is not desired.

ironic-15.0.0/releasenotes/notes/allow-pxelinux-config-folder-to-be-defined-da0ddd397d58dcc8.yaml
---
features:
  - |
    Adds a new configuration option, ``[pxe]pxe_config_subdir``, to allow
    operators to define the specific directory that may be used inside of
    ``/tftpboot`` or ``/httpboot`` for a boot loader to locate the
    configuration file for the node.
    This option defaults to ``pxelinux.cfg``, which is the directory that
    the Syslinux ``pxelinux.0`` bootloader utilizes. Operators may wish to
    change the directory name if they are using other boot loaders, such as
    GRUB or iPXE.

ironic-15.0.0/releasenotes/notes/inject-nmi-dacd692b1f259a30.yaml
---
features:
  - |
    Adds support for the injection of Non-Maskable Interrupts (NMI) for a
    node in REST API version 1.29. This feature can be used for hardware
    diagnostics, and actual support depends on the driver. In 7.0.0, this
    is available in the ipmitool and iRMC drivers.

ironic-15.0.0/releasenotes/notes/v1-discovery-4311398040581fe8.yaml
---
fixes:
  - |
    Adds the version discovery information to the versioned API endpoint
    (``/v1``). This allows *keystoneauth* version discovery to work on this
    endpoint.

ironic-15.0.0/releasenotes/notes/added-redfish-driver-00ff5e3f7e9d6ee8.yaml
---
features:
  - |
    Adds support for the `Redfish `_ standard via a new ``redfish``
    hardware type. (There is no equivalent "classic" driver for this.) It
    uses two new interfaces:

    * the ``redfish`` power interface supports all hard and soft power
      operations
    * the ``redfish`` management interface supports:

      - getting and setting the boot device (PXE, disk, CD-ROM or BIOS)
      - making the configured boot device persistent or not
      - injecting NMI

ironic-15.0.0/releasenotes/notes/add-redfish-boot-mode-support-2f1a2568e71c65d0.yaml
---
features:
  - |
    Adds support to the ``redfish`` management interface for reading and
    setting the bare metal node's boot mode.
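The effect of a ``[pxe]pxe_config_subdir`` value like the one described above can be sketched as simple path construction. The layout below (``<root>/<subdir>/<node>/config``) is an illustrative assumption, not the exact directory scheme ironic uses.

```python
import os

def pxe_config_path(tftp_root, pxe_config_subdir, node_uuid):
    """Join the TFTP/HTTP root, configurable subdir and node id (sketch)."""
    return os.path.join(tftp_root, pxe_config_subdir, node_uuid, "config")

# Default subdir suits pxelinux.0; a GRUB setup might pick another name.
path = pxe_config_path("/tftpboot", "pxelinux.cfg",
                       "1be26c0b-03f2-4d2e-ae87-c02d7f33c123")
assert path == "/tftpboot/pxelinux.cfg/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config"
```

Changing only the middle component is what lets different boot loaders find their configuration without restructuring the whole tree.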
ironic-15.0.0/releasenotes/notes/ironic-12.0-prelude-9dd8e80a1a3e8f60.yaml
---
prelude: |
    The OpenStack Bare Metal as a Service team announces the release of
    ironic version 12.0, which introduces a number of new features as part
    of the Stein development cycle:

    * Per-node automated cleaning control
    * Redfish out-of-band introspection
    * Redfish BIOS configuration management
    * Support for direct image downloads from the conductor host
    * Support for validating the enhanced image checksums introduced in the
      Image service in Rocky
    * A dedicated ``ipxe`` boot interface, enabling better co-existence of
      different hardware types
    * A configurable ``disk_erasure_concurrency`` to speed up disk cleaning
    * Configurable "protected" nodes to help prevent accidental actions
      upon critical nodes

    And many, many bug fixes. Enjoy!

ironic-15.0.0/releasenotes/notes/update-proliantutils-version-20ebcc22dc2df527.yaml
---
upgrade:
  - |
    Updates the required proliantutils version for the iLO drivers to
    2.2.1. This version has support for HPSUM firmware updates and matches
    the requirements to meet global-requirements.

ironic-15.0.0/releasenotes/notes/shred-final-overwrite-with-zeros-50b5ba5b19c0da27.yaml
---
features:
  - |
    A new configuration option, ``shred_final_overwrite_with_zeros``, is
    now available. This option controls the final overwrite with zeros done
    on all block devices for a node under cleaning. This feature was
    previously always enabled and not configurable. This option is only
    used when a block device could not be ATA Secure Erased.
deprecations:
  - |
    The ``[deploy]/erase_devices_iterations`` config is deprecated and will
    be removed in the Ocata cycle. It has been replaced by the
    ``[deploy]/shred_random_overwrite_iterations`` config.
    This configuration option controls the number of times block devices
    are overwritten with random data. This option is only used when a block
    device could not be ATA Secure Erased.

ironic-15.0.0/releasenotes/notes/cleanup-provision-ports-before-retry-ec3c89c193766d70.yaml
---
fixes:
  - |
    Fixes an issue with the ``neutron`` network interface that could lead
    to an inability to retry the deployment in case of a failure on the
    boot interface's ``prepare_ramdisk`` stage.

ironic-15.0.0/releasenotes/notes/drac-raid-interface-f4c02b1c4fb37e2d.yaml
---
features:
  - |
    Adds out-of-band RAID management to the DRAC driver using the generic
    RAID interface, which makes the functionality available via manual
    cleaning steps.
  - |
    A new configuration option,
    ``[drac]/query_raid_config_job_status_interval``, was added. After
    ironic has created the RAID config job on the DRAC card, it continues
    to check for status updates on the config job to determine whether the
    RAID configuration was successfully finished within this interval. The
    default is 120 seconds.

ironic-15.0.0/releasenotes/notes/keystone-auth-3155762c524e44df.yaml
---
upgrade:
  - |
    Changes the way to configure access credentials for OpenStack service
    clients. For each service, both keystone session options (timeout,
    SSL-related ones) and keystone auth_plugin options (auth_url, auth_type
    and the corresponding auth_plugin options) should be specified in the
    configuration section for this service.
    The configuration sections affected are:

    * ``[neutron]`` for the Neutron service user
    * ``[glance]`` for the Glance service user
    * ``[swift]`` for the Swift service user
    * ``[inspector]`` for the Ironic Inspector service user
    * ``[service_catalog]`` - a *new section* for the Ironic service user,
      used to discover the Ironic endpoint from the Keystone catalog

    This enables fine tuning of authentication for each service.
    Backward-compatible options handling is provided using values from the
    ``[keystone_authtoken]`` config section, but operators are advised to
    switch to the new config options, as the old options are deprecated.
    The old options will be removed during the Ocata cycle. For more
    information on sessions, auth plugins and their settings, please refer
    to http://docs.openstack.org/developer/keystoneauth/.
  - |
    Small change in the semantics of the default for the ``[neutron]/url``
    option:

    * the default is changed to None;
    * when ``[neutron]/auth_strategy`` is ``noauth``, the default means use
      ``http://$my_ip:9696``;
    * when ``[neutron]/auth_strategy`` is ``keystone``, the default means
      to resolve the endpoint from the Keystone catalog.
  - |
    New config section ``[service_catalog]`` for the access credentials
    used to discover the Ironic API URL from the Keystone catalog.
    Previously, credentials from the ``[keystone_authtoken]`` section were
    used, which is now deprecated for that purpose.
deprecations:
  - |
    The ``[keystone_authtoken]`` configuration section is deprecated for
    configuring clients for other services (but is still used for
    configuring API token authentication), in favor of the
    ``[service_catalog]`` section. The ability to configure clients for
    other services via the ``[keystone_authtoken]`` section will be removed
    during the Ocata cycle.
fixes:
  - |
    Do not rely on keystonemiddleware config options for instantiating
    clients for other OpenStack services. This allows changing
    keystonemiddleware options from the legacy ones and thus supports
    Keystone V3 for token validation.
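As a hedged sketch of the per-service credential sections described in the ``keystone-auth`` note above, a ``[glance]`` section might look like the fragment below. All values are placeholders; real deployments will differ, and the exact set of options depends on the chosen auth plugin.

```ini
[glance]
# Assumed example values for a password auth plugin; not from the notes.
auth_type = password
auth_url = http://keystone.example.com:5000/v3
username = ironic
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
# Keystone session options live in the same section.
timeout = 60
```

The same pattern applies to ``[neutron]``, ``[swift]``, ``[inspector]`` and the new ``[service_catalog]`` section.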
ironic-15.0.0/releasenotes/notes/fix-cleaning-with-traits-3a54faa70d594fd0.yaml
---
fixes:
  - |
    Fixes an issue seen during cleaning when the node being cleaned has one
    or more traits assigned. This issue caused cleaning to fail, and the
    node to enter the ``clean failed`` state. See `bug 1750027 `_ for
    details.

ironic-15.0.0/releasenotes/notes/add-agent-erase-fallback-b07613a7042fe236.yaml
---
features:
  - |
    A new configuration option,
    ``[deploy]continue_if_disk_secure_erase_fails``, which has a default
    value of ``False``, has been added. If set to ``True``, the Ironic
    Python Agent will revert to a disk shred operation if an ATA secure
    erase operation fails. Under normal circumstances, the failure of an
    ATA secure erase operation results in the node being put in the
    ``clean failed`` state.
upgrade:
  - |
    A new configuration option,
    ``[deploy]continue_if_disk_secure_erase_fails``, which has a default
    value of ``False``, has been added. The default setting represents the
    standard behavior of the Ironic Python Agent during a cleaning failure.

ironic-15.0.0/releasenotes/notes/transmit-all-ports-b570009d1a008067.yaml
---
fixes:
  - |
    Provides an opt-in fix to change the default port attachment behavior
    for deployment and cleaning operations through a new configuration
    option, ``[neutron]add_all_ports``. This option causes ironic to
    transmit all port information to neutron, as opposed to only a single
    physical network port. This enables operators to successfully operate
    static port group configurations with Neutron ML2 drivers, where
    previously the configuration of networking would fail. When these ports
    are configured with ``pxe_enabled`` set to ``False``, neutron will be
    requested not to assign an IP address to the port.
    This is to prevent additional issues that may occur, depending on the
    physical switch configuration, with static port group configurations.
  - |
    Fixes an issue during provisioning network attachment where neutron
    ports were being created with the same data structure being re-used.

ironic-15.0.0/releasenotes/notes/deploy-templates-5df3368df862631c.yaml
---
features:
  - |
    Adds the deploy templates API. Deploy templates can be used to
    customise the node deployment process, with each template specifying a
    list of deploy steps to execute with configurable priority and
    arguments.

    Introduces the following new API endpoints, available from Bare Metal
    API version 1.55:

    * ``GET /v1/deploy_templates``
    * ``GET /v1/deploy_templates/``
    * ``POST /v1/deploy_templates``
    * ``PATCH /v1/deploy_templates/``
    * ``DELETE /v1/deploy_templates/``

ironic-15.0.0/releasenotes/notes/remove-python-oneviewclient-b1d345ef861e156e.yaml
---
issues:
  - |
    The ``python-ilorest-library`` is a fork of the
    ``python-redfish-library`` and is imported under the same name, hence
    the two conflict when installed together. ``python-redfish-library``
    cannot be used when the ``oneview`` hardware type is in use.
upgrade:
  - |
    The ``oneview`` hardware type now uses the ``hpOneView`` and
    ``python-ilorest-library`` libraries to communicate with OneView
    appliances. The ``python-oneviewclient`` library is no longer used.
  - |
    The configuration option ``[oneview]max_polling_attempts`` is removed,
    since the ``hpOneView`` library doesn't support it.

ironic-15.0.0/releasenotes/notes/agent-token-support-0a5b5aa1585dfbb5.yaml
---
features:
  - |
    Adds support for an "agent token", which serves as a mechanism to
    secure the normally unauthenticated API endpoints in ironic which are
    used in the mechanics of bare metal provisioning.
    This feature is optional; however, operators may require this feature
    by changing the ``[DEFAULT]require_agent_token`` setting to ``True``.
upgrade:
  - |
    In order to use the new agent token support, all ramdisk settings
    should be updated for all nodes in ironic. If token use is required by
    ironic's configuration and the ramdisks have not been updated, then all
    deployment, cleaning, and rescue operations will fail until the version
    of the ironic-python-agent ramdisk has been updated.
issues:
  - |
    The ``ansible`` deployment interface does not support the use of an
    agent token at this time.

ironic-15.0.0/releasenotes/notes/agent-takeover-60f27cef21ebfb48.yaml
---
fixes:
  - |
    Drivers using the ``AgentDeploy`` interface now correctly support
    take-over for ``ACTIVE`` netboot-ed nodes.

ironic-15.0.0/releasenotes/notes/use-ironic-lib-exception-4bff237c9667bf46.yaml
---
deprecations:
  - |
    The configuration option ``[DEFAULT]/fatal_exception_format_errors`` is
    now deprecated. Please use the configuration option
    ``[ironic_lib]/fatal_exception_format_errors`` instead.
upgrade:
  - |
    Updates the minimum required version of ``ironic-lib`` to ``2.17.1``.

ironic-15.0.0/releasenotes/notes/ocata-summary-a70f995cb3b18e18.yaml
---
prelude: |
    The 7.0.0 release includes many new features and bug fixes. Please
    review the upgrade section, which describes the required actions to
    upgrade your ironic installation from 6.2.2 (Newton) to 7.0.0 (Ocata).

    A few major changes are worth mentioning. This is not an exhaustive
    list:

    - "Port group" support allows users to take advantage of bonded network
      interfaces.
    - State change and CRUD notifications can now be emitted.
    - Soft power off, soft reboot, and sending non-maskable interrupts
      (NMI) are now supported in the REST API.
    - The AMT, iBoot, msftocs, seamicro, VirtualBox, and Wake-On-Lan
      drivers have been removed from ironic. Please see the upgrade notes
      for additional details and options.
    - "Dynamic drivers" is a revamp of how drivers are composed. Rather
      than a huge matrix of hardware drivers supporting different things,
      now users select a "hardware type" for a machine, and can
      independently change the deploy method, console manager, RAID
      management, power control interface, etc. This is experimental, as
      not all "classic" drivers have a dynamic equivalent yet, but we
      encourage users to try this feature out and submit feedback.

ironic-15.0.0/releasenotes/notes/logging-keystoneauth-9db7e56c54c2473d.yaml
---
other:
  - |
    Do not show DEBUG logging from keystoneauth and keystonemiddleware by
    default.
  - |
    Log eventlet.wsgi.server events with a proper logger name and ignore
    DEBUG logging by default.

ironic-15.0.0/releasenotes/notes/fix-reboot-log-collection-c3e22fc166135e61.yaml
---
fixes:
  - |
    Fixes an issue where, if a failure occurred during deployment, the Bare
    Metal service could attempt to collect logs from a node that had been
    powered off. This would result in a number of failed attempts to
    collect the logs before failing the deployment. See `bug 1732939 `_
    for details.

ironic-15.0.0/releasenotes/notes/deprecate-ibmc-9106cc3a81171738.yaml
---
deprecations:
  - |
    The ``ibmc`` hardware type has been deprecated. While the Huawei team
    set up third-party CI for the driver's inclusion into ironic, the CI
    unfortunately went down around the time the United States of America
    announced commerce restrictions against Huawei.
    Unfortunately, without third-party CI and with no contacts to maintain
    the driver, the ironic community is left with little choice but to
    deprecate and ultimately remove the driver.

ironic-15.0.0/releasenotes/notes/fix-updating-node-driver-to-classic-16b0d5ba47e74d10.yaml
---
fixes:
  - |
    Fixes a failure to update a node's driver from a hardware type to a
    classic driver.

ironic-15.0.0/releasenotes/notes/add-more-retryable-ipmitool-errors-1c9351a89ff0ec1a.yaml
---
fixes:
  - |
    Adds more ``ipmitool`` error messages to be treated as retryable by the
    ipmitool interfaces (such as the power and management hardware
    interfaces). Specifically, the ``Node busy``, ``Timeout``,
    ``Out of space``, and ``BMC initialization in progress`` errors emitted
    by ``ipmitool`` will cause ironic to retry the IPMI command. This
    change should improve the reliability of IPMI-based communication with
    the BMC.

ironic-15.0.0/releasenotes/notes/fix-dir-permissions-bc56e83a651bbdb0.yaml
---
fixes:
  - |
    Adds the capability for an operator to explicitly define the permission
    for created tftpboot folders. This provides the ability for ironic to
    be utilized with a restrictive umask, where the TFTP server may not
    otherwise be able to read the files. Introduces a new configuration
    option, ``[pxe]/dir_permission``, to specify the permission that the
    tftpboot directories are created with.

ironic-15.0.0/releasenotes/notes/ilo-boot-from-iscsi-volume-41e8d510979c5037.yaml
---
features:
  - |
    The ``ilo-pxe`` and ``ilo-virtual-media`` boot interfaces now support
    firmware-based booting from an iSCSI volume in UEFI boot mode. Requires
    **proliantutils** library version 2.5.0 or newer.
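The effect of a directory-permission option such as ``[pxe]/dir_permission`` above can be sketched as follows. The octal-string form of the value and the helper name are assumptions for illustration; the snippet uses a temporary directory rather than a real tftpboot tree.

```python
import os
import stat
import tempfile

# Assumed example: the option value as an octal string, parsed with base 0.
dir_permission = "0o755"
mode = int(dir_permission, 0)

root = tempfile.mkdtemp()
path = os.path.join(root, "pxelinux.cfg")
os.makedirs(path)
# Apply the configured mode explicitly so a restrictive umask cannot make
# the directory unreadable to the TFTP server.
os.chmod(path, mode)
assert stat.S_IMODE(os.stat(path).st_mode) == 0o755
```

Applying ``chmod`` after ``makedirs`` is what makes the result independent of the process umask.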
ironic-15.0.0/releasenotes/notes/get-commands-status-timeout-ecbac91ea149e755.yaml
---
fixes:
  - |
    Adds a timeout when querying the agent for command statuses. Without
    it, a node could lock up for quite a long time, and ironic would not
    allow any operations to be performed on it.

ironic-15.0.0/releasenotes/notes/bug-1694645-57289200e35bd883.yaml
---
fixes:
  - |
    Fixes netboot with virtual media boot in an environment using syslinux
    5.00 or later, such as Ubuntu 16.04. It was broken by a change in the
    location of the ``ldlinux.c32`` file.
features:
  - |
    The new configuration option ``[DEFAULT]/ldlinux_c32`` can be used to
    set the location of the ``ldlinux.c32`` file (from the syslinux
    package). The default behavior is to look for it in the following
    locations:

    * ``/usr/lib/syslinux/modules/bios/ldlinux.c32``
    * ``/usr/share/syslinux/ldlinux.c32``

ironic-15.0.0/releasenotes/notes/adds-secure-erase-switch-23f449c86b3648a4.yaml
---
features:
  - |
    Adds the ``[deploy]enable_ata_secure_erase`` option, which allows an
    operator to disable ATA Secure Erase for all nodes being managed by the
    conductor. This setting defaults to ``True``, which aligns with the
    prior behavior of the Bare Metal service.

ironic-15.0.0/releasenotes/notes/story-2006217-redfish-bios-cleaning-fails-fee32f04dd97cbd2.yaml
---
fixes:
  - |
    Fixes an issue where the clean steps of the ``redfish`` BIOS interface
    do not boot up the IPA ramdisk after the cleaning reboot. See `story
    2006217 `__ for details.
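The default "search a list of known locations" behavior described for ``ldlinux.c32`` above amounts to a first-existing-path lookup, sketched below. Temporary stand-in files are used instead of the real syslinux paths, and the helper name is illustrative.

```python
import os
import tempfile

def find_first_existing(candidates):
    """Return the first path in candidates that exists as a file (sketch)."""
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None

# Demonstrate with stand-in files rather than real syslinux installs.
with tempfile.TemporaryDirectory() as root:
    missing = os.path.join(root, "modules", "bios", "ldlinux.c32")
    present = os.path.join(root, "ldlinux.c32")
    open(present, "w").close()
    assert find_first_existing([missing, present]) == present
```

Setting the configuration option explicitly would simply bypass this search with a single known path.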
ironic-15.0.0/releasenotes/notes/remove-ansible_deploy-driver-options-a28dc2f36110a67a.yaml0000664000175000017500000000046113652514273031267 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated options ``ansible_deploy_username`` and ``ansible_deploy_key_file`` in node driver_info for the ``ansible`` deploy interface were removed and will be ignored. Use the ``ansible_username`` and ``ansible_key_file`` options in the node driver_info respectively. ironic-15.0.0/releasenotes/notes/boot-from-url-98d21670e726c518.yaml0000664000175000017500000000063113652514273024332 0ustar zuulzuul00000000000000--- fixes: - | Fixes a misunderstanding in how DHCPv6 booting of machines operates: in that case only a URL to the boot loader is expected, as opposed to traditional TFTP parameters. Now a URL is sent to the client in the form of ``tftp:////``. See `story 1744620 `_ for more information. ironic-15.0.0/releasenotes/notes/fixes-deployment-failure-with-fasttrack-f1fe05598fbdbe4a.yaml0000664000175000017500000000067613652514273032144 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue with fast track where a recent security-related change, made to prevent the ``agent_url`` field of a node from being updated, functionally prevented fast track from succeeding: the node would fail with an exception indicating the ``agent_url`` could not be found. The required ``agent_url`` value is now preserved when the fast track feature is enabled, as the running ramdisk is not shut down. ironic-15.0.0/releasenotes/notes/ipxe-uefi-f5be11c7b0606a84.yaml0000664000175000017500000000025413652514273023721 0ustar zuulzuul00000000000000--- fixes: - Fixes a bug where ironic rebooted the node with the deploy image instead of the user image during the second reboot in UEFI boot mode when iPXE is enabled. 
ironic-15.0.0/releasenotes/notes/new_capabilities-5241619c4b46a460.yaml0000664000175000017500000000137713652514273025120 0ustar zuulzuul00000000000000--- features: - | Adds support for the following Boolean capabilities keys to the ``ilo`` inspect interface: * ``sriov_enabled`` * ``has_ssd`` * ``has_rotational`` * ``rotational_drive_4800_rpm`` * ``rotational_drive_5400_rpm`` * ``rotational_drive_7200_rpm`` * ``rotational_drive_10000_rpm`` * ``rotational_drive_15000_rpm`` * ``logical_raid_level_0`` * ``logical_raid_level_1`` * ``logical_raid_level_2`` * ``logical_raid_level_10`` * ``logical_raid_level_5`` * ``logical_raid_level_6`` * ``logical_raid_level_50`` * ``logical_raid_level_60`` * ``cpu_vt`` * ``hardware_supports_raid`` * ``has_nvme_ssd`` * ``nvdimm_n`` * ``logical_nvdimm_n`` * ``persistent_memory`` ironic-15.0.0/releasenotes/notes/bug-2002062-959b865ced05b746.yaml0000664000175000017500000000035413652514273023364 0ustar zuulzuul00000000000000--- fixes: - | Fixes a bug that exposes an internal node ID in an error message when requested to delete a trait which doesn't exist. See `bug 2002062 `_ for details. ironic-15.0.0/releasenotes/notes/fix-api-node-name-updates-f3813295472795be.yaml0000664000175000017500000000045013652514273026472 0ustar zuulzuul00000000000000--- fixes: - Remove the possibility to set incorrect node name by specifying multiple add/replace operations in patch request. Since this version, all the values specified in the patch for name are checked, in order to conform to JSON PATCH RFC https://tools.ietf.org/html/rfc6902. ironic-15.0.0/releasenotes/notes/image-no-data-c281f638d3dedfb2.yaml0000664000175000017500000000036513652514273024606 0ustar zuulzuul00000000000000--- fixes: - | Fails deployment with the correct error message in a node's ``last_error`` field if an image from the Image service doesn't contain any data. See `bug 1741223 `_ for details. 
ironic-15.0.0/releasenotes/notes/bug-2004265-cd9056868295f374.yaml0000664000175000017500000000042013652514273023232 0ustar zuulzuul00000000000000--- fixes: - | Fixes 'Invalid parameter value for SpanLength' when configuring RAID using Python 3. The code passed an incorrect data type to the iDRAC, e.g. `2.0` instead of `2`. See `story 2004265 `_. ironic-15.0.0/releasenotes/notes/irmc-boot-interface-8c2e26affd1ebfc4.yaml0000664000175000017500000000010413652514273026160 0ustar zuulzuul00000000000000--- other: - iRMC drivers are now based on the new BootInterface. ironic-15.0.0/releasenotes/notes/conductor-version-backfill-9d06f2ad81aebec3.yaml0000664000175000017500000000027513652514273027504 0ustar zuulzuul00000000000000--- upgrade: - | The ``conductors`` database table's ``version`` column is populated as part of the data migration (via the command ``ironic-dbsync online_data_migrations``). ironic-15.0.0/releasenotes/notes/root-api-version-info-9dd6cadd3d3d4bbe.yaml0000664000175000017500000000045013652514273026557 0ustar zuulzuul00000000000000--- features: - | The API root endpoint (GET /) now returns version information for the server; specifically: * min_version - minimum API version supported by the server; * version - maximum API version supported by the server; * status - version status, "CURRENT" for v1. ironic-15.0.0/releasenotes/notes/add-port-advanced-net-fields-55465091f019d962.yaml0000664000175000017500000000057613652514273027061 0ustar zuulzuul00000000000000--- features: - | Exposes the ``local_link_connection`` and ``pxe_enabled`` properties of the Port resource to the REST API, raising the API maximum version to 1.19. * The ``pxe_enabled`` field indicates whether this Port should be used when PXE booting this Node. * The ``local_link_connection`` field may be used to supply the port binding profile. 
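The version document returned by the API root endpoint (``GET /``, described above) lets a client check whether a microversion it wants falls within the server's supported range. A minimal sketch, with a hypothetical payload; ``supports`` is an illustrative helper, not part of ironic:

```python
def supports(version_doc, requested):
    """Return True if the requested microversion is within the server's range."""
    def parse(version):  # "1.58" -> (1, 58)
        major, minor = version.split(".")
        return int(major), int(minor)

    low = parse(version_doc["min_version"])
    high = parse(version_doc["version"])
    return low <= parse(requested) <= high


# Hypothetical response shaped like the fields listed in the note above.
doc = {"min_version": "1.1", "version": "1.58", "status": "CURRENT"}
```

Parsing into integer tuples avoids the classic string-comparison trap where "1.9" would sort after "1.58".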
ironic-15.0.0/releasenotes/notes/deprecate-xclarity-d687571fb65ad099.yaml0000664000175000017500000000127513652514273025572 0ustar zuulzuul00000000000000--- deprecations: - | The ``xclarity`` hardware type, as well as the supporting driver interfaces, has been deprecated and is scheduled to be removed from ironic in the Stein development cycle. This is due to the lack of operational Third Party testing to help ensure that the support for Lenovo XClarity is functional. The ``xclarity`` hardware type was introduced at the end of the Queens development cycle. During implementation of Third Party CI, the Lenovo team encountered some unforeseen delays. Lenovo is continuing to work towards Third Party CI, and upon establishment and verification of functional Third Party CI, this deprecation will be rescinded. ironic-15.0.0/releasenotes/notes/fix-shellinabox-pipe-not-ready-f860c4b7a1ef71a8.yaml0000664000175000017500000000035213652514273030045 0ustar zuulzuul00000000000000--- fixes: - | Fixes a possible `console lockup issue `_ in case the PID file has not yet been created even though the daemon start call has already returned a success code. ironic-15.0.0/releasenotes/notes/deprecate-cisco-drivers-3ae79a24b76ff963.yaml0000664000175000017500000000111313652514273026553 0ustar zuulzuul00000000000000--- deprecations: - | The Cisco ``cisco-ucs-managed`` and ``cisco-ucs-standalone`` drivers have been deprecated due to the lack of a reporting third-party CI and of vendor maintenance of the driver code. In their present state, these drivers would have been removed as part of the eventual removal of support for Python2. These drivers are anticipated to be removed prior to the final Train release of the Bare Metal service. More information can be found `here `_. 
ironic-15.0.0/releasenotes/notes/ilo-hardware-type-48fd1c8bccd70659.yaml0000664000175000017500000000111313652514273025456 0ustar zuulzuul00000000000000--- features: - | Adds a new hardware type ``ilo`` for iLO 4 based Proliant Gen 8 and Gen 9 servers. This hardware type supports virtual media and PXE based boot using the HPE iLO 4 management engine. The following driver interfaces are supported: * boot: ``ilo-virtual-media`` and ``ilo-pxe`` * console: ``ilo`` and ``no-console`` * deploy: ``iscsi`` and ``direct`` * inspect: ``ilo``, ``inspector`` and ``no-inspect`` * management: ``ilo`` * network: ``flat``, ``noop`` and ``neutron`` * power: ``ilo`` * raid: ``no-raid`` and ``agent`` ironic-15.0.0/releasenotes/notes/clear-target-stable-states-4545602d7aed9898.yaml0000664000175000017500000000023713652514273027042 0ustar zuulzuul00000000000000--- fixes: - Ensures a node's ``target_provision_state`` is cleared when the node is moved to a stable state, indicating that the state transition is done. ironic-15.0.0/releasenotes/notes/add_standalone_ports_supported_field-4c59702a052acf38.yaml0000664000175000017500000000034513652514273031410 0ustar zuulzuul00000000000000--- features: - Adds the field `standalone_ports_supported` to the port group object. This field indicates whether ports that are members of this port group can be used as stand-alone ports. The default is ``True``. ironic-15.0.0/releasenotes/notes/add-protection-for-available-nodes-25f163d69782ef63.yaml0000664000175000017500000000140713652514273030441 0ustar zuulzuul00000000000000--- features: - Adds option ``allow_deleting_available_nodes`` to control whether nodes in state ``available`` should be deletable (which is, and remains, the default). Setting this option to ``False`` will remove ``available`` from the list of states in which nodes can be deleted from ironic. 
It hence provides protection against accidental removal of nodes which are ready for allocation (and is meant as a safeguard for the operational effort to bring nodes into this state). For backwards compatibility reasons, the default value for this option is ``True``. The other states in which nodes can be deleted from ironic (``manageable``, ``enroll``, and ``adoptfail``) remain unchanged. This option can be changed without service restart. ironic-15.0.0/releasenotes/notes/add-inspection-abort-a187e6e5c1f6311d.yaml0000664000175000017500000000064013652514273026041 0ustar zuulzuul00000000000000--- features: - | Adds support to abort the inspection of a node in the ``inspect wait`` state, as long as this operation is supported by the inspect interface in use. A node in the ``inspect wait`` state accepts the ``abort`` provisioning verb to initiate the abort process. This feature is supported by the ``inspector`` inspect interface and is available starting with API version 1.41. ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000ironic-15.0.0/releasenotes/notes/fix-do-not-tear-down-nodes-upon-cleaning-failure-a9cda6ae71ed2540.yamlironic-15.0.0/releasenotes/notes/fix-do-not-tear-down-nodes-upon-cleaning-failure-a9cda6ae71ed2540.y0000664000175000017500000000066213652514273032652 0ustar zuulzuul00000000000000--- fixes: - | Fixes a bug where ironic would shut a node down upon cleaning failure. Now, the node stays powered on (as documented and intended). upgrade: - | When a failure occurs during cleaning, nodes will no longer be shut down. The behaviour was changed to prevent harm and allow for an admin intervention when sensitive operations, such as firmware upgrades, are performed and fail during cleaning. 
ironic-15.0.0/releasenotes/notes/stop-console-during-unprovision-a29d8facb3f03be5.yaml0000664000175000017500000000041713652514273030556 0ustar zuulzuul00000000000000--- security: - | Fixes an issue where an enabled console could be left running after a node was unprovisioned. This allowed a user to view the console even after the instance was gone. Ironic now stops the console during unprovisioning to block this. ironic-15.0.0/releasenotes/notes/ipv6-provision-67bd9c1dbcc48c97.yaml0000664000175000017500000000010113652514273025111 0ustar zuulzuul00000000000000--- fixes: - Adds support for deploying to IPv6 iSCSI portals. ironic-15.0.0/releasenotes/notes/support-root-device-hints-with-operators-96cf34fa37b5b2e8.yaml0000664000175000017500000000063313652514273032170 0ustar zuulzuul00000000000000--- features: - Adds support for using operators with the root device hints mechanism. The supported operators are ``=``, ``==``, ``!=``, ``>=``, ``<=``, ``>``, ``<``, ``s==``, ``s!=``, ``s>=``, ``s>``, ``s<=``, ``s<``, ``<in>``, ``<or>`` and ``<and>``. See http://docs.openstack.org/project-install-guide/baremetal/draft/advanced.html#specifying-the-disk-for-deployment-root-device-hints ironic-15.0.0/releasenotes/notes/fix-url-collisions-43abfc8364ca34e7.yaml0000664000175000017500000000021213652514273025650 0ustar zuulzuul00000000000000--- fixes: - Removed invalid API URL ``/v1/nodes/ports``. For more information, see https://bugs.launchpad.net/ironic/+bug/1580997. ironic-15.0.0/releasenotes/notes/add-redfish-boot-interface-e7e05bdd2c894d80.yaml0000664000175000017500000000137713652514273027203 0ustar zuulzuul00000000000000--- features: - | Adds a virtual media boot interface to the ``redfish`` hardware type supporting virtual media boot. The ``redfish-virtual-media`` boot interface operates on the same kernel/ramdisk as, for example, the PXE boot interface does; however, the ``redfish-virtual-media`` boot interface can additionally require an EFI system partition (ESP) image when performing UEFI boot. 
Either the ``[conductor]bootloader`` configuration option or the ``[driver_info]/bootloader`` node attribute can be used to convey the ESP location to ironic. Bootable ISO images can be served to BMCs either from Swift or from an HTTP server running on an ironic conductor machine. This is controlled by the ``[redfish]use_swift`` ironic configuration option. ironic-15.0.0/releasenotes/notes/agent-can-request-reboot-6238e13e2e898f68.yaml0000664000175000017500000000053213652514273026534 0ustar zuulzuul00000000000000--- features: - This adds the ``reboot_requested`` option for in-band cleaning. If set to true, ironic will reboot the node after that step has completed and before continuing with the next step. This option is useful when some action, such as a BIOS upgrade or setting change, requires a reboot to take effect. ironic-15.0.0/releasenotes/notes/fix-fields-missing-from-next-url-fd9fddf8e70b65ea.yaml0000664000175000017500000000056313652514273030574 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where the resource list API returned results with the requested fields only up to the API ``MAX_LIMIT``; once the API ``MAX_LIMIT`` was reached, the API ignored the user-requested fields. This fix makes sure that the next URL generated by the pagination code includes the user-requested fields as a query parameter. ironic-15.0.0/releasenotes/notes/fix-commit-to-controller-d26f083ac388a65e.yaml0000664000175000017500000000053213652514273026716 0ustar zuulzuul00000000000000--- fixes: - | Fixes a bug in the ``idrac`` hardware type where a configuration job for the RAID ``delete_configuration`` cleaning step was created even when there were no virtual disks or hot spares/dedicated hot spares present on any controller. See bug `2006562 `_ for details. 
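The operator syntax for root device hints (see the note above) pairs an operator with a value, for example ``">= 100"`` against a numeric property such as ``size``. The toy evaluator below covers only the numeric operators; the string (``s==`` etc.) and collection (``<in>``, ``<or>``, ``<and>``) operators are omitted, and the real matching logic lives in ironic-lib, so ``match_hint`` here is purely illustrative:

```python
import operator

_NUMERIC_OPS = {
    "==": operator.eq, "!=": operator.ne, ">=": operator.ge,
    "<=": operator.le, "=": operator.eq, ">": operator.gt, "<": operator.lt,
}


def match_hint(hint, actual):
    """Evaluate a hint such as '>= 100' against a device property value."""
    # Try two-character operators before single-character ones, so that
    # '>= 100' is not mis-parsed as '>' applied to '= 100'.
    for op in sorted(_NUMERIC_OPS, key=len, reverse=True):
        if hint.startswith(op + " "):
            return _NUMERIC_OPS[op](float(actual), float(hint[len(op):]))
    # A bare value means equality.
    return float(actual) == float(hint)
```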
ironic-15.0.0/releasenotes/notes/remove-glance-num-retries-24898fc9230d9497.yaml0000664000175000017500000000023713652514273026641 0ustar zuulzuul00000000000000--- upgrade: - | The configuration option ``[glance]glance_num_retries`` was deprecated and now removed, please use ``[glance]num_retries`` instead. ironic-15.0.0/releasenotes/notes/iscsi-verify-attempts-28b1d00b13ba365a.yaml0000664000175000017500000000056513652514273026266 0ustar zuulzuul00000000000000--- deprecations: - | The ironic-lib configuration option ``[disk_utils]iscsi_verify_attempts`` has been deprecated in favor of: * ``[iscsi]verify_attempts`` to specify the number of attempts to establish an iSCSI connection. * ``[disk_utils]partition_detection_attempts`` to specify the number of attempts to find a newly created partition. ironic-15.0.0/releasenotes/notes/smartnic-logic-has-merged-in-neutron-79078280d40f042c.yaml0000664000175000017500000000043313652514273030635 0ustar zuulzuul00000000000000--- features: - | The Smart-Nic functionality that was added to the Bare Metal Service during the Stein cycle can now be used with a Train version of the Networking Service (neutron) as Smart-Nic support merged into that project during the Train development cycle. ironic-15.0.0/releasenotes/notes/mask-configdrive-contents-77fc557d6bc63b2b.yaml0000664000175000017500000000201013652514273027176 0ustar zuulzuul00000000000000--- features: - Adds a new policy rule that may be used to mask instance-specific secrets, such as configdrive contents or the temp URL used to store a configdrive or instance image. This is similar to how passwords are already masked. upgrade: - Instance secrets will now, by default, be masked in API responses. Operators wishing to expose the configdrive or instance image to specific users will need to update their policy.json file and grant the relevant keystone roles. security: - Configdrives often contain sensitive information. 
Users may upload their own images, which could also contain sensitive information. The Agent drivers may store this information in a Swift temp URL to allow access from the Agent ramdisk. These URLs are considered sensitive because they grant unauthenticated access to that information. Now, this information is only selectively exposed to privileged users, whereas previously it was exposed to all authenticated users. ironic-15.0.0/releasenotes/notes/irmc-manual-clean-create-raid-configuration-bccef8496520bf8c.yaml0000664000175000017500000000043113652514273032512 0ustar zuulzuul00000000000000--- features: - Adds an out-of-band RAID configuration solution for the iRMC driver, which makes the functionality available via manual cleaning. See `iRMC hardware type documentation `_ for more details. ironic-15.0.0/releasenotes/notes/grub-default-change-to-mac-1e301a96c49acec4.yaml0000664000175000017500000000207513652514273027074 0ustar zuulzuul00000000000000--- upgrade: - | Operators utilizing ``grub`` for PXE booting, typically with UEFI, should update the master PXE configuration file provided to nodes that PXE boot using grub. Ironic 11.1 now writes both MAC address and IP address based PXE configuration links for network booting via ``grub``. The grub variable should be changed from ``$net_default_ip`` to ``$net_default_mac``. IP address support is deprecated and will be removed in the Stein release. deprecations: - | Support for ironic to link PXE boot configuration files via the assigned interface IP address has been deprecated. This behavior only applied when ``[pxe]ipxe_enabled`` was set to ``false`` and the node was being deployed using UEFI. fixes: - | Fixes support for ``grub`` based UEFI PXE booting by enabling links to the PXE configuration files to be written using the MAC address of the node in addition to the interface IP address. If the ``[dhcp]dhcp_provider`` option is set to ``none``, only the MAC based links will be created. 
ironic-15.0.0/releasenotes/notes/add_cpu_fpga_trait_for_irmc_inspection-2b63941b064f7936.yaml0000664000175000017500000000030013652514273031525 0ustar zuulzuul00000000000000--- features: - | The iRMC driver can now automatically update the node.traits field with CUSTOM_CPU_FPGA value based on information provided by the node during node inspection. ironic-15.0.0/releasenotes/notes/allocation-backfill-c31e84c5fcf24216.yaml0000664000175000017500000000023313652514273025722 0ustar zuulzuul00000000000000--- features: - | API version 1.58 allows backfilling allocations for existing deployed nodes by providing ``node`` to ``POST /v1/allocations``. ironic-15.0.0/releasenotes/notes/iscsi-optional-cpu-arch-ebf6a90dde34172c.yaml0000664000175000017500000000020513652514273026624 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where iSCSI based deployments fail if the ``cpu_arch`` property is not specified on a node. ironic-15.0.0/releasenotes/notes/.placeholder0000664000175000017500000000000013652514273021204 0ustar zuulzuul00000000000000ironic-15.0.0/releasenotes/notes/debug-no-api-tracebacks-a8a0caddc9676b06.yaml0000664000175000017500000000034413652514273026541 0ustar zuulzuul00000000000000--- upgrade: - Adds a config option 'debug_tracebacks_in_api' to allow the API service to return tracebacks in API responses in an error condition. fixes: - No longer returns tracebacks for API errors in debug mode. ironic-15.0.0/releasenotes/notes/snmp-driver-udp-transport-settings-67419be988fcff40.yaml0000664000175000017500000000055613652514273031012 0ustar zuulzuul00000000000000--- features: - Adds SNMP request timeout and retries settings for the SNMP UDP transport. Some SNMP devices take longer than others to respond. The new Ironic configuration settings ``[snmp]/udp_transport_retries`` and ``[snmp]/udp_transport_timeout`` allow to change the number of retries and the timeout values respectively for the SNMP driver. 
ironic-15.0.0/releasenotes/notes/fix-sendfile-size-cap-d9966a96e2d7db51.yaml0000664000175000017500000000024013652514273026136 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where, when the image source is a local file, the image was truncated to 2 GiB and deployment failed due to image corruption. ironic-15.0.0/releasenotes/notes/fips-hashlib-bca9beacc2b48fe7.yaml0000664000175000017500000000015613652514273024766 0ustar zuulzuul00000000000000--- fixes: - | Use SHA256 for comparing file contents instead of MD5. This improves FIPS compatibility. ironic-15.0.0/releasenotes/notes/add-socat-console-ipmitool-ab4402ec976c5c96.yaml0000664000175000017500000000027713652514273027200 0ustar zuulzuul00000000000000--- features: - Adds support for socat-based serial console to ipmitool-based drivers. These are available by using the ``agent_ipmitool_socat`` and ``pxe_ipmitool_socat`` drivers. ironic-15.0.0/releasenotes/notes/fix-swift-ssl-options-d93d653dcd404960.yaml0000664000175000017500000000034713652514273026174 0ustar zuulzuul00000000000000--- fixes: - | Fixes a bug where SSL-related options in the ``[swift]`` section of the ironic configuration file were ignored when performing API requests to Swift. See https://launchpad.net/bugs/1736158 for more information. ironic-15.0.0/releasenotes/notes/expose-conductor-d13c9c4ef9d9de86.yaml0000664000175000017500000000114513652514273025521 0ustar zuulzuul00000000000000--- features: - | Adds support to retrieve the information of conductors known by ironic: * a new endpoint ``GET /v1/conductors`` for listing conductor resources. * a new endpoint ``GET /v1/conductors/{hostname}`` for showing a conductor resource. 
Adds a read-only ``conductor`` field to the Node, which represents the conductor currently servicing a node, and can be retrieved from following node endpoints: * ``GET /v1/nodes?detail=true`` or ``GET /v1/nodes/detail`` * ``GET /v1/nodes/`` * ``POST /v1/nodes`` * ``PATCH /v1/nodes/`` ironic-15.0.0/releasenotes/notes/refactor-ironic-lib-22939896d8d46a77.yaml0000664000175000017500000000127713652514273025506 0ustar zuulzuul00000000000000--- upgrade: - | Adds new configuration [ironic_lib]root_helper, to specify the command that is prefixed to commands that are run as root. Defaults to using the rootwrap config file at /etc/ironic/rootwrap.conf. - | Moves these configuration options from [deploy] group to the new [disk_utils] group: efi_system_partition_size, dd_block_size and iscsi_verify_attempts. deprecations: - | The following configuration options have been moved to the [disk_utils] group; they are deprecated from the [deploy] group: efi_system_partition_size, dd_block_size and iscsi_verify_attempts. other: - Code related to disk partitioning was moved to ironic-lib. ironic-15.0.0/releasenotes/notes/ilo-update-proliantutils-version-fd41a7c2a27be735.yaml0000664000175000017500000000073413652514273030551 0ustar zuulzuul00000000000000--- upgrade: - | Updates required proliantutils version for iLO drivers to 2.4.0. This version of the library comes with quite a few features: * Adds support for Gen10 servers using `Redfish `_ protocol. * Provides support for one-pass disk erase using HPE SSA CLI through Proliant hardware manager in IPA. * ``local_gb`` defaults to 0 (zero) when no disk could be discovered during inspection. ironic-15.0.0/releasenotes/notes/oneview-agent-mixin-removal-b7277e8f20df5ef2.yaml0000664000175000017500000000031013652514273027455 0ustar zuulzuul00000000000000--- features: - | The OneView drivers now retain the next boot device in node's internal info when setting a boot device is requested. It is applied on the node when it is power cycled. 
ironic-15.0.0/releasenotes/notes/deprecated-neutron-opts-2e1d9e65f00301d3.yaml0000664000175000017500000000431513652514273026520 0ustar zuulzuul00000000000000--- deprecations: - | Configuration option ``[neutron]/url`` is deprecated and will be ignored in the Rocky release. Instead, use ``[neutron]/endpoint_override`` configuration option to set specific neutron API address when automatic discovery of neutron API endpoint from keystone catalog is not desired. This option has no default value, and must be set explicitly for a stand alone deployment of ironic and neutron (when ``[neutron]/auth_type`` is set to ``none``), since the service catalog is not available in this case. Otherwise it is generally recommended to rely on keystone service catalog for service endpoint discovery. - | Configuration option ``[neutron]/url_timeout`` is deprecated and will be ignored in the Rocky release. Instead, use ``[neutron]/timeout`` configuration option. This new option has no default value and must be set explicitly to ``30`` to keep previous default behavior. - | Configuration option ``[neutron]/auth_strategy`` is deprecated and will be ignored in the Rocky release. Instead, set ``[neutron]/auth_type`` configuration option to ``none``, and provide neutron API address as ``[neutron]/endpoint_override`` configuration option. other: - | Signatures of several networking-related functions/methods have been changed to include request context as an optional keyword argument. 
The functions/methods in question are: - ``ironic.common.neutron.get_client`` - ``ironic.common.neutron.unbind_neutron_port`` - ``ironic.common.neutron.update_port_address`` - ``ironic.common.neutron.validate_network`` - ``ironic.common.neutron.NeutronNetworkInterfaceMixin.get_cleaning_network`` - ``ironic.common.neutron.NeutronNetworkInterfaceMixin.get_provisioning_network`` - ``ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts`` - ``ironic.dhcp.none.NeutronDHCPApi.update_port_dhcp_opts`` If you are using any of the above functions/methods in your out-of-tree ironic driver or driver interface code, you should update the code to pass an instance of ``ironic.common.context.RequestContext`` class as a ``context`` keyword argument to those functions/methods. ironic-15.0.0/releasenotes/notes/sighup-service-reloads-configs-0e2462e3f064a2ff.yaml0000664000175000017500000000123613652514273030042 0ustar zuulzuul00000000000000--- features: - | Issuing a SIGHUP (e.g. ``pkill -HUP ironic``) to an ironic-api or ironic-conductor service will cause the service to reload and use any changed values for *mutable* configuration options. The mutable configuration options are: * [DEFAULT]/debug * [DEFAULT]/log_config_append * [DEFAULT]/pin_release_version Mutable configuration options are indicated as such in the `sample configuration file `_ by ``Note: This option can be changed without restarting``. A warning is logged for any changes to immutable configuration options. ironic-15.0.0/releasenotes/notes/drac-fix-raid10-greater-than-16-drives-a4cb107e34371a51.yaml0000664000175000017500000000034213652514273030603 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where RAID 10 creation fails with greater than 16 drives when using the ``idrac`` hardware type. See bug `2002771 `_ for details. 
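The SIGHUP-driven reload of mutable options described above can be illustrated with a toy handler. ``CONF`` and ``_load_config`` are stand-ins invented for this sketch, not ironic's oslo.config machinery:

```python
import signal

CONF = {"debug": False}  # stand-in for the service's mutable configuration


def _load_config():
    # A real service would re-read its configuration files here and apply
    # only the options flagged as mutable.
    return {"debug": True}


def _handle_sighup(signum, frame):
    CONF.update(_load_config())


# Re-read mutable options whenever the process receives SIGHUP,
# e.g. from ``pkill -HUP ironic``.
signal.signal(signal.SIGHUP, _handle_sighup)
```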
ironic-15.0.0/releasenotes/notes/keystoneauth-config-1baa45a0a2dd93b4.yaml0000664000175000017500000000034313652514273026136 0ustar zuulzuul00000000000000--- features: - | Adds the ability to set keystoneauth settings for automatic service discovery in the following configuration sections: ``[glance]``, ``[cinder]``, ``[inspector]``, ``[swift]`` and ``[neutron]``. ironic-15.0.0/releasenotes/notes/fix-net-ifaces-rebuild-1cc03df5d37f38dd.yaml0000664000175000017500000000023013652514273026415 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue with node rebuild, when tenant network ports were not unbound prior to moving the node to provisioning network. ironic-15.0.0/releasenotes/notes/kill-old-ramdisk-6fa7a16269ff11b0.yaml0000664000175000017500000000040613652514273025170 0ustar zuulzuul00000000000000--- prelude: > Starting with this release IPA is the only deployment and inspection ramdisk supported by Ironic. upgrade: - Support for the old ramdisk ("deploy-ironic" diskimage-builder element) was removed. Please switch to IPA before upgrading. ironic-15.0.0/releasenotes/notes/multitenant-networking-0a13c4aba252573e.yaml0000664000175000017500000001000213652514273026533 0ustar zuulzuul00000000000000--- features: - | Adds multitenant networking support. Ironic now has the concept of "network interfaces" for a node, which represent a networking driver. There are three network interfaces available: * ``flat``: this replicates the old flat network behavior and is the default when using neutron for DHCP. * ``noop``: this replicates the old flat behavior when not using neutron for DHCP, and is the default when the configuration option ``[DHCP]/dhcp_provider`` is set to "none". * ``neutron``: this allows for separating the provisioning and cleaning networks from the tenant networks, and provides isolation from tenant network to tenant network, and tenant network to control plane. 
The following configuration options must be set if the neutron interface is enabled, or ironic-conductor will fail to start: * ``[neutron]/provisioning_network_uuid`` * ``[neutron]/cleaning_network_uuid`` A ``[DEFAULT]/enabled_network_interfaces`` option (which must be set for both ironic-api and ironic-conductor services) controls which network interfaces are available for use. A network interface is set for a node by setting the ``network_interface`` field for the node via the REST API. This field is available in API version 1.20 and above. Changing the network interface may only be done in the ``enroll``, ``inspecting``, and ``manageable`` states. The configuration option ``[DEFAULT]/default_network_interface`` may be used to specify which network interface is defined when a node is created. **WARNING: don't set the option ``[DEFAULT]/default_network_interface`` before upgrading to this release without reading the upgrade notes about it, due to data migrations depending on the value.** deprecations: - | ``create_cleaning_ports`` and ``delete_cleaning_ports`` methods in DHCP providers are deprecated and will be removed completely in the Ocata release. The logic they are implementing should be moved to a custom network interface's ``add_cleaning_network`` and ``remove_cleaning_network`` methods respectively. After that, the methods themselves should be removed from DHCP provider so that the custom network interface is used instead. ``flat`` network interface does not require ``[neutron]/cleaning_network_uuid`` for now so as not to break standalone deployments upon upgrade, but it will be required in the Ocata release if the ``flat`` network interface is enabled. upgrade: - | ``[DEFAULT]/default_network_interface`` configuration option is introduced, with empty default value. If set, the specified interface will be used as the network interface for nodes that don't have ``network_interface`` field set. 
If it is not set, the network interface is determined by looking at the ``[dhcp]/dhcp_provider`` value. If it is ``neutron``, the ``flat`` network interface is the default; otherwise ``noop``. A database migration will set the network interface for all nodes that do not already have ``network_interface`` set, following the logic above. When running database migrations for an existing deployment, it's important to check the above configuration options to ensure the existing nodes will have the expected network_interface. If ``[DEFAULT]/default_network_interface`` is not set, everything should go as expected. If it is set, ensure that it is set to the value that you wish existing nodes to use. - Note that if the configuration option ``[DEFAULT]/default_network_interface`` is set, it must be set in the configuration file for both the API and conductor hosts. - If the ``neutron`` network interface is specified for the configuration option ``[DEFAULT]/enabled_network_interfaces``, then the ``[neutron]/provisioning_network_uuid`` and ``[neutron]/cleaning_network_uuid`` configuration options are required. If either of them is not specified, the ironic-conductor service will fail to start. ironic-15.0.0/releasenotes/notes/async-deprecate-b3d81d7968ea47e5.yaml0000664000175000017500000000072613652514273025127 0ustar zuulzuul00000000000000--- other: For out-of-tree drivers that have `vendor passthru methods `_, the ``async`` parameter of the ``passthru`` and ``driver_passthru`` decorators is deprecated and will be removed in the Stein cycle. Please use its replacement instead, the ``async_call`` parameter. For more information, see `bug 1751306 `_. ironic-15.0.0/releasenotes/notes/bug-30316-8c53358681e464eb.yaml0000664000175000017500000000021213652514273023132 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue with the ``ansible`` deploy interface where raw images could not be streamed correctly to the host. 
ironic-15.0.0/releasenotes/notes/multiple-workers-for-send-sensor-data-89d29c12da30ec54.yaml0000664000175000017500000000073313652514273031314 0ustar zuulzuul00000000000000--- features: - Adds a new configuration option ``[conductor]/send_sensor_data_workers`` to allow concurrent sending of sensor data using the specified number of green threads. The ``[conductor]/wait_timeout_for_send_sensor_data`` configuration option is the time to wait for all spawned green threads before running the periodic task again. upgrade: - Increases the default number of workers for the ``send_sensor_data`` periodic task from 1 to 4. ironic-15.0.0/releasenotes/notes/pass-region-to-swiftclient-c8c8bf1020f62ebc.yaml0000664000175000017500000000034413652514273027366 0ustar zuulzuul00000000000000--- fixes: - Fixes a bug where the keystone_authtoken/region_name wasn't passed to Swift when instantiating its client. In a multi-region environment this is needed so the client can choose the correct Swift endpoint. ironic-15.0.0/releasenotes/notes/fix-ipv6-option6-tag-549093681dcf940c.yaml0000664000175000017500000000142213652514273025522 0ustar zuulzuul00000000000000--- issues: - | Support for IPv6 and iPXE is restricted and is unlikely to work in default scenarios and configurations without external intervention. This is due to the way DHCPv6 and dnsmasq operate. At present this issue is being tracked in story `2005402 `_. fixes: - | Fixes an issue introduced during the Stein development cycle in an attempt to fix IPv6 support, where the networking service was also prepending the DHCP option indicator to the number. A fix has been submitted to the Networking service to address this issue, and the prepending code has been removed from ironic. See story `2004501 `_ for more information. 
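The sensor-data options from the ``send_sensor_data_workers`` note above live in the ``[conductor]`` section; a minimal sketch (the timeout value is illustrative, not a documented default):

```ini
[conductor]
# Number of green threads used to send sensor data concurrently;
# the default was raised from 1 to 4.
send_sensor_data_workers = 4
# Seconds to wait for all spawned green threads before running the
# periodic task again (illustrative value).
wait_timeout_for_send_sensor_data = 300
```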
ironic-15.0.0/releasenotes/notes/setting_provisioning_cleaning_network-fb60caa1cf59cdcf.yaml0000664000175000017500000000136113652514273032304 0ustar zuulzuul00000000000000--- features: - | Allows specifying the provisioning and cleaning networks on a node as ``driver_info['cleaning_network']`` and ``driver_info['provisioning_network']`` respectively. If these values are defined in the node's driver_info at the time of provisioning or cleaning the baremetal node, they will be used. Otherwise, the configuration options ``cleaning_network`` and ``provisioning_network`` are used as before. fixes: - | A network UUID for provisioning and cleaning network is no longer cached locally if the requested network (either via node's ``driver_info`` or via configuration options) is specified as a network name. Fixes the situation when a network is re-created with the same name. ironic-15.0.0/releasenotes/notes/add-storage-interface-d4e64224804207fc.yaml0000664000175000017500000000034513652514273026023 0ustar zuulzuul00000000000000--- features: - | Adds the initial substrate to allow for the creation of storage interfaces. The default storage interface for nodes is ``noop``, which routes to a no-op driver that is included with the substrate. ironic-15.0.0/releasenotes/notes/active-node-creation-a41c9869c966c82b.yaml0000664000175000017500000000145313652514273025777 0ustar zuulzuul00000000000000--- features: - Addition of the provision state target verb of ``adopt`` which allows an operator to move a node into an ``active`` state from ``manageable`` state, without performing a deployment operation on the node. This can be used to represent nodes that have been previously deployed by other means that will now be managed by ironic and be later released to the available hardware pool. other: - When a node is enrolled into ironic, upon transition to the ``manageable`` state, the current power state of the node is recorded. 
Once the node is adopted and in an ``active`` state, that recorded power state will be enforced by ironic unless an operator changes the power state in ironic. This was the default behavior of ironic prior to the adoption feature. ironic-15.0.0/releasenotes/notes/story-2006321-ilo5-raid-create-fails-1bb1e648da0db0f1.yaml0000664000175000017500000000033413652514273030266 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue in RAID creation for the ``ilo5`` RAID interface wherein RAID creation fails the second time. See `story 2006321 `__ for details. ironic-15.0.0/releasenotes/notes/automated_clean_config-0170c95ae210f953.yaml0000664000175000017500000000027013652514273026336 0ustar zuulzuul00000000000000--- deprecations: - The [conductor]/clean_nodes config is deprecated and will be removed in the Newton cycle. It has been replaced by the [conductor]/automated_clean config. ironic-15.0.0/releasenotes/notes/bug-1749860-457292cf62e18a0e.yaml0000664000175000017500000000027113652514273023373 0ustar zuulzuul00000000000000--- fixes: - | Fixes a rescue timeout due to an incorrect kernel parameter in the iPXE script. See `bug 1749860 `_ for details. ironic-15.0.0/releasenotes/notes/add-prep-partition-support-d808849795906e64.yaml0000664000175000017500000000052713652514273027011 0ustar zuulzuul00000000000000--- features: - Added support for local booting a partition image for ppc64* hardware. If a PReP partition is detected when deploying to a ppc64* machine, the partition will be specified to IPA, causing the bootloader to be installed there directly. This feature requires an ironic-python-agent ramdisk with ironic-lib >=2.14. ironic-15.0.0/releasenotes/notes/streaming-partition-images-d58fe619658b066e.yaml0000664000175000017500000000030113652514273027233 0ustar zuulzuul00000000000000--- features: - | Allows streaming raw partition images to the ramdisk when using the ``direct`` deploy interface. Requires **ironic-python-agent** from the Stein release series. 
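The per-node provisioning and cleaning networks described in the ``setting_provisioning_cleaning_network`` note above are plain ``driver_info`` entries; a hedged sketch of the relevant fragment of a node record follows (the names are placeholders, and either a network name or a UUID is accepted):

```json
{
  "driver_info": {
    "provisioning_network": "provisioning-net",
    "cleaning_network": "cleaning-net"
  }
}
```

When these keys are absent, the ``provisioning_network`` and ``cleaning_network`` configuration options are used as before.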
ironic-15.0.0/releasenotes/notes/adding-audit-middleware-b95f2a00baed9750.yaml0000664000175000017500000000071113652514273026552 0ustar zuulzuul00000000000000--- features: - | The ironic-api service now supports logging audit messages of API calls. The following configuration parameters have been added. By default, audit logging for the ironic-api service is turned off. * ``[audit]/enabled`` * ``[audit]/ignore_req_list`` * ``[audit]/audit_map_file`` Further documentation for this feature is available at http://docs.openstack.org/developer/ironic/deploy/api-audit-support.html . ironic-15.0.0/releasenotes/notes/resume-cleaning-post-oob-reboot-b76c23f98219a8d2.yaml0000664000175000017500000000071513652514273030076 0ustar zuulzuul00000000000000--- fixes: - | The cleaning operation may fail if an in-band clean step executes after the completion of an out-of-band clean step that reboots the node. The failure is caused by a race condition wherein cleaning is resumed before the Ironic Python Agent (IPA) is ready to execute clean steps. This has been fixed. For more information, see `bug 2002731 `_. ironic-15.0.0/releasenotes/notes/make-terminal-session-timeout-configurable-b2365b7699b0f98b.yaml0000664000175000017500000000027413652514273032326 0ustar zuulzuul00000000000000--- features: - | Adds the configuration option ``[console]terminal_timeout`` to allow setting the time (in seconds) of inactivity after which a socat-based console terminates. ironic-15.0.0/releasenotes/notes/removed-keystone-section-1ec46442fb332c29.yaml0000664000175000017500000000073013652514273026711 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated option ``[keystone]/region_name`` was removed and will be ignored. Instead, use the ``region_name`` option in other sections related to contacting other services (``[service_catalog]``, ``[cinder]``, ``[glance]``, ``[neutron]``, ``[swift]`` and ``[inspector]``). 
As the option ``[keystone]/region_name`` was the only option in the ``[keystone]`` section of the ironic configuration file, this section was removed as well. ironic-15.0.0/releasenotes/notes/multi-arch-deploy-bcf840107fc94bef.yaml0000664000175000017500000000154513652514273025542 0ustar zuulzuul00000000000000--- features: - | Adds support to deploy to nodes with different CPU architectures from a single conductor. This depends on two new configuration options, ``[pxe]/pxe_config_template_by_arch`` and ``[pxe]/pxe_bootfile_name_by_arch``. Each is a dictionary mapping CPU architecture to PXE config template or PXE boot file name, respectively. As an example, the syntax might look like:: pxe_config_template_by_arch=aarch64:pxe_grubaa64_config.template,ppc64:pxe_ppc64_config.template Ironic attempts to map the CPU architecture in this mapping to the ``properties/cpu_arch`` field for a node. If the node's CPU architecture is not found in the mapping, ironic will fall back to the standard options ``pxe_config_template``, ``pxe_bootfile_name``, ``uefi_pxe_config_template``, and ``uefi_pxe_bootfile_name``. ironic-15.0.0/releasenotes/notes/clear-node-target-power-state-de1f25be46d3e6d7.yaml0000664000175000017500000000014113652514273027742 0ustar zuulzuul00000000000000--- fixes: - Clears the target_power_state of nodes locked by the conductor on its startup. ironic-15.0.0/releasenotes/notes/ironic-status-upgrade-check-framework-9cd216ddf3afb271.yaml0000664000175000017500000000062413652514273031472 0ustar zuulzuul00000000000000--- features: - | A new framework for the ``ironic-status upgrade check`` command is added. This framework allows adding various checks which can be run before an Ironic upgrade to determine whether the upgrade can be performed safely. upgrade: - | Operators can now use the new CLI tool ``ironic-status upgrade check`` to check whether an Ironic deployment can be safely upgraded from the N-1 to the N release. 
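The ``[audit]`` options listed in the audit-middleware note above could be combined as follows; this is a sketch, and the map-file path is an assumption for illustration, not a shipped default:

```ini
[audit]
enabled = True
# Path to the CADF audit map file (illustrative location).
audit_map_file = /etc/ironic/api_audit_map.conf
# Comma-separated list of HTTP methods to skip when auditing.
ignore_req_list = GET,HEAD
```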
ironic-15.0.0/releasenotes/notes/idrac-advance-python-dracclient-version-01c6ef671670ffb3.yaml0000664000175000017500000000035213652514273031626 0ustar zuulzuul00000000000000--- fixes: - | Advances required ``python-dracclient`` version to 1.5.0 and later. That version is required by the fix to the ``idrac`` hardware type's `bug 2004340 `_. ironic-15.0.0/releasenotes/notes/ilo-fix-uefi-iscsi-boot-702ced18e28c5c61.yaml0000664000175000017500000000043313652514273026401 0ustar zuulzuul00000000000000--- fixes: - Fixes a bug in iLO UEFI iSCSI Boot, where it fails if a server has multiple NIC adapters, since Proliant Servers have a limitation of creating only four iSCSI NIC sources and the existing implementation would try to create for more and failed accordingly. ironic-15.0.0/releasenotes/notes/reactive-ibmc-driver-d2149ca81a198090.yaml0000664000175000017500000000016613652514273025677 0ustar zuulzuul00000000000000--- fixes: - | Now that HUAWEI ironic 3rd party CI is back, the ``ibmc`` hardware type driver is supported. ironic-15.0.0/releasenotes/notes/oneview-node-free-for-ironic-61b05fee827664cb.yaml0000664000175000017500000000017113652514273027422 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue with ironic being able to change the power state of nodes currently in use by OneView. ironic-15.0.0/releasenotes/notes/node-in-maintenance-fail-afd0eace24fa28be.yaml0000664000175000017500000000052113652514273027123 0ustar zuulzuul00000000000000--- fixes: - | If a node is in mid-deployment or cleaning and its conductor dies, ironic will move that node into a failed state. However, this wasn't being done if those nodes were also in maintenance. This has been fixed. See `story 2007098 `_ for more details. 
ironic-15.0.0/releasenotes/notes/get-supported-boot-devices-manadatory-task-0462fc072d6ea517.yaml0000664000175000017500000000032213652514273032224 0ustar zuulzuul00000000000000--- upgrade: - The `task` parameter to `ManagementInterface.get_supported_boot_devices` was previously deprecated as optional, and is now mandatory for all implementations of ManagementInterface. ironic-15.0.0/releasenotes/notes/idrac-no-vendor-911904dd69457826.yaml0000664000175000017500000000020213652514273024534 0ustar zuulzuul00000000000000--- fixes: - | Adds missing ``no-vendor`` implementation to supported vendor interfaces of the ``idrac`` hardware type. ironic-15.0.0/releasenotes/notes/pass_portgroup_settings_to_neutron-a6aec830a82c38a3.yaml0000664000175000017500000000034013652514273031364 0ustar zuulzuul00000000000000--- features: - Port group information (``mode`` and ``properties`` fields) is now passed to Neutron via the port's ``binding:profile`` field. This allows an ML2 driver to configure the port bonding automatically. ironic-15.0.0/releasenotes/notes/noop-mgmt-a4b1a248492c7638.yaml0000664000175000017500000000070313652514273023612 0ustar zuulzuul00000000000000--- deprecations: - | Using the ``fake`` management interface with the ``manual-management`` hardware type is deprecated, please use ``noop`` instead. Existing nodes will have to be updated after the upgrade. fixes: - | The ``manual-management`` hardware type now defaults to the ``noop`` management interface. Unlike the ``fake`` management interface, it does not fail on attempt to set the boot device to the local disk. ironic-15.0.0/releasenotes/notes/no-classic-oneview-e46ee2838d2b1d37.yaml0000664000175000017500000000025313652514273025546 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated classic drivers ``iscsi_pxe_oneview`` and ``agent_pxe_oneview`` have been removed. Please use the ``oneview`` hardware type. 
ironic-15.0.0/releasenotes/notes/mask-ssh-creds-54ab7b2656578d2e.yaml0000664000175000017500000000016613652514273024612 0ustar zuulzuul00000000000000--- security: - Private SSH keys are now masked when using the SSH power driver and node details are requested. ironic-15.0.0/releasenotes/notes/instance-info-root-device-0a5190240fcc8fd8.yaml0000664000175000017500000000035213652514273027003 0ustar zuulzuul00000000000000--- features: - | Allows reading the ``root_device`` from ``instance_info``, overriding the value in ``properties``. This enables per-instance root device settings and requires the Ussuri release of ironic-python-agent. ironic-15.0.0/releasenotes/notes/adds-external-storage-interface-9b7c0a0a2afd3176.yaml0000664000175000017500000000104313652514273030232 0ustar zuulzuul00000000000000--- features: - | Adds an ``external`` storage interface, which is short for "externally managed". This adds logic to allow the Bare Metal service to identify when a BFV scenario is being requested based upon the configuration set for ``volume targets``. The user must create the entry, and no synchronization with a Block Storage service will occur. `Documentation `_ has been updated to reflect how to use this interface. ironic-15.0.0/releasenotes/notes/caseless-conductor-restart-check-f70005fbf65f6bb6.yaml0000664000175000017500000000024313652514273030451 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where a node may be locked from changes if a conductor's hostname case is changed before restarting the conductor service. ironic-15.0.0/releasenotes/notes/fixes-get-boot-option-for-software-raid-baa2cffd95e1f624.yaml0000664000175000017500000000036213652514273031747 0ustar zuulzuul00000000000000--- fixes: - | Fixes a minor issue with ``get_boot_option`` logic that did not account for Software RAID. This could erroneously cause the deployment to take the incorrect deployment path and attempt to install a boot loader. 
ironic-15.0.0/releasenotes/notes/fix-ilo-drivers-log-message-c3c64c1ca0a0bca8.yaml0000664000175000017500000000034613652514273027455 0ustar zuulzuul00000000000000--- fixes: - | When the deletion of a swift temporary object fails because the object is no longer available in swift, a message is logged. The log level of this message was changed from ``WARNING`` to ``INFO``. ironic-15.0.0/releasenotes/notes/partprobe-retries-e69e9d20f3a3c2d3.yaml0000664000175000017500000000127613652514273025574 0ustar zuulzuul00000000000000--- upgrade: - | Adds a new configuration option ``[disk_utils]partprobe_attempts`` which defaults to 10. This is the maximum number of times to try to read a partition (if creating a config drive) via a ``partprobe`` command. Set it to 1 if you want the previous behavior, where no retries were done. fixes: - | Adds a new configuration option ``[disk_utils]partprobe_attempts`` which defaults to 10. This is the maximum number of times to try to read a partition (if creating a config drive) via a ``partprobe`` command. Previously, no retries were done which caused failures. This addresses `bug 1756760 `_. ironic-15.0.0/releasenotes/notes/bug-2006334-0cd8f59073f56241.yaml0000664000175000017500000000045013652514273023274 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue regarding the ``ansible`` deploy interface, where the configdrive partition could not be correctly built if the node root device was set to some logical device (like an md array, /dev/md0). https://storyboard.openstack.org/#!/story/2006334 ironic-15.0.0/releasenotes/notes/reserved-node-names-67a08012ed1131ae.yaml0000664000175000017500000000043113652514273025573 0ustar zuulzuul00000000000000--- fixes: - Fixes a problem which allowed nodes to be named with some reserved words that are implicitly not allowed due the way the Ironic API works. The reserved words are "maintenance", "management", "ports", "states", "vendor_passthru", "validate" and "detail". 
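The ``[disk_utils]partprobe_attempts`` option described above can be tuned in ``ironic.conf``; for example, to restore the previous no-retry behavior:

```ini
[disk_utils]
# Maximum number of tries to read a partition via partprobe when
# creating a config drive; the default is 10, and 1 restores the
# old behavior of not retrying.
partprobe_attempts = 1
```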
ironic-15.0.0/releasenotes/notes/futurist-e9c55699f479f97a.yaml0000664000175000017500000000132313652514273023704 0ustar zuulzuul00000000000000--- prelude: > This release features a switch to the Oslo Futurist library for asynchronous thread execution and periodic tasks. The main benefit is that periodic tasks are now executed truly in parallel, and not sequentially in one green thread. upgrade: - Configuration option "workers_pool_size" can no longer be less than or equal to 2. Please set it to a greater value (the default is 100) before updating. deprecations: - Configuration option "periodic_interval" is deprecated. - Using the "driver_periodic_task" decorator is deprecated. Please update your out-of-tree drivers to use the "periodics.periodic" decorator from the Futurist library. fixes: - Periodic tasks are no longer all executed in one thread. ironic-15.0.0/releasenotes/notes/ipxe_retry_on_failure-e71fc6b3e9a5be3b.yaml0000664000175000017500000000036213652514273026653 0ustar zuulzuul00000000000000--- features: - iPXE will now retry downloading the kernel or the initrd in case of failure. The previous behavior was to give up and continue the boot on the next boot device. See https://bugs.launchpad.net/ironic/+bug/1326656 ironic-15.0.0/releasenotes/notes/add-iscsi-portal-port-option-bde3b386f44f2a90.yaml0000664000175000017500000000045013652514273027543 0ustar zuulzuul00000000000000--- features: - IPA already supported iSCSI portal port customization. With this patch, a new portal_port argument was added to the agent_client.start_iscsi_target() method to pass the iSCSI portal port to the IPA side, and a new configuration option was added to the iscsi module as CONF.iscsi.portal_port ironic-15.0.0/releasenotes/notes/bug-2002093-9fcb3613d2daeced.yaml0000664000175000017500000000064013652514273023723 0ustar zuulzuul00000000000000--- upgrade: - | To use a CoreOS-based deploy/cleaning ramdisk built using Ironic Python Agent from the Rocky release, Ironic should be upgraded to the Rocky release if PXE is used. 
Otherwise, a node cannot be deployed or cleaned because the IPA fails to boot due to an unsupported parameter passed via PXE. See `bug 2002093 `_ for details. ironic-15.0.0/releasenotes/notes/idrac-add-redfish-inspect-support-ce74bd3d4a97b588.yaml0000664000175000017500000000161613652514273030543 0ustar zuulzuul00000000000000--- features: - | Adds ``idrac`` hardware type support of an inspect interface implementation that utilizes the Redfish out-of-band (OOB) management protocol and is compatible with the integrated Dell Remote Access Controller (iDRAC) baseboard management controller (BMC). It is named ``idrac-redfish``. The ``idrac`` hardware type declares support for that new interface implementation, in addition to all inspect interface implementations it has been supporting. The highest priority inspect interfaces remain the same, those which rely on the Web Services Management (WS-Man) OOB management protocol. The new 'idrac-redfish' immediately follows those. It now supports the following inspect interface implementations, listed in priority order from highest to lowest: ``idrac-wsman``, ``idrac``, ``idrac-redfish``, ``inspector``, and ``no-inspect``. ironic-15.0.0/releasenotes/notes/add_clean_step_reset_idrac_and_known_good_state-cdbebf97d7b87fe7.yaml0000664000175000017500000000045513652514273033572 0ustar zuulzuul00000000000000--- features: - | Adds ``reset_idrac`` and ``known_good_state`` cleaning steps to hardware type ``idrac``. 
``reset_idrac`` actually resets the iDRAC; ``known_good_state`` also resets the iDRAC and clears the Lifecycle Controller job queue to make sure the iDRAC is in good state.ironic-15.0.0/releasenotes/notes/add-snmpv3-security-features-bbefb8b844813a53.yaml0000664000175000017500000000135713652514273027557 0ustar zuulzuul00000000000000--- features: - | Adds SNMPv3 message authentication and encryption features to ironic ``snmp`` hardware type. To enable these features, the following parameters should be used in the node's ``driver_info``: * ``snmp_user`` * ``snmp_auth_protocol`` * ``snmp_auth_key`` * ``snmp_priv_protocol`` * ``snmp_priv_key`` Also adds support for the ``context_engine_id`` and ``context_name`` parameters of SNMPv3 message at ironic ``snmp`` hardware type. They can be configured in the node's ``driver_info``. deprecations: - | Deprecates the ``snmp_security`` field in ``driver_info`` for ironic ``snmp`` hardware type, it will be removed in Stein release. Please use ``snmp_user`` field instead. ironic-15.0.0/releasenotes/notes/deprecate-irmc-031f55c3bb1fb863.yaml0000664000175000017500000000065013652514273024711 0ustar zuulzuul00000000000000--- deprecations: - | The Fujitsu ``irmc`` hardware type has been deprecated. The Third Party CI for the driver stopped responding on or around July 7th, 2019. As such, we cannot claim fixes or changes to the driver are in a working state. We have heard from the Fujitsu team that they intend to return ``irmc`` CI to working order, and as such should that occur this deprecation will be revoked. ironic-15.0.0/releasenotes/notes/pxe-retry-762a00ba1089bd75.yaml0000664000175000017500000000062613652514273023706 0ustar zuulzuul00000000000000--- features: - | Allows retrying PXE/iPXE boot during deployment, cleaning and rescuing. This feature is disabled by default and can be enabled by setting ``[pxe]boot_retry_timeout`` to the timeout (in seconds) after which the boot should be retried. 
The new option ``[pxe]boot_retry_check_interval`` defines how often to check the nodes for timeout and defaults to 90 seconds. ironic-15.0.0/releasenotes/notes/vendor-passthru-shared-lock-6a9e32952ee6c2fe.yaml0000664000175000017500000000030713652514273027464 0ustar zuulzuul00000000000000--- features: - Adds the ability for node vendor passthru methods to use shared locks. The default behavior of always acquiring an exclusive lock for node vendor passthru methods is unchanged. ironic-15.0.0/releasenotes/notes/whole-disk-scsi-install-bootloader-f7e791d82da476ca.yaml0000664000175000017500000000027413652514273030733 0ustar zuulzuul00000000000000--- fixes: - | When installing a whole disk image using iSCSI, set up the bootloader even if a root partition cannot be found. The bootloaders will be located on the disk. ironic-15.0.0/releasenotes/notes/story-2006218-uefi-iso-creation-fails-ba0180991fdd0783.yaml0000664000175000017500000000053113652514273030366 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue in ISO creation for UEFI boot mode when the efiboot.img file is provided and the directory of the grub.cfg file, set using the ``[DEFAULT]/grub_config_path`` config option, is not the same as that of the efiboot.img file. See `story 2006218 `__ for details. ironic-15.0.0/releasenotes/notes/emit-metrics-for-api-calls-69f18fd1b9d54b05.yaml0000664000175000017500000000025213652514273027074 0ustar zuulzuul00000000000000--- features: - Ironic now emits timing metrics for all API methods to statsd, if enabled by the ``[metrics]`` and ``[metrics_statsd]`` configuration sections. ironic-15.0.0/releasenotes/notes/fix-mitaka-ipa-iscsi.yaml0000664000175000017500000000072413652514273023533 0ustar zuulzuul00000000000000--- upgrade: - Fixed Mitaka ironic python agent ramdisk iSCSI deploy compatibility with newer versions of ironic by logging a warning and retrying the deploy if wiping root disk metadata before exposing it over iSCSI fails. 
If custom iSCSI port is requested, an error clarifying the issue is logged and the operator is requested either to use the default iSCSI portal port, or to upgrade ironic python agent ramdisk to version >= 1.3 (Newton). ironic-15.0.0/releasenotes/notes/add-ansible-python-interpreter-2035e0f23d407aaf.yaml0000664000175000017500000000147213652514273030044 0ustar zuulzuul00000000000000--- features: - Adds option ``[ansible]default_python_interpreter`` to choose the python interpreter that ansible uses on managed machines. By default, ansible uses ``/usr/bin/python`` as interpreter, making the assumption that that path is always present on remote managed systems. This might not be always the case, for example in custom build images or Python 3 native distributions. With this option the operator has the ability to set the absolute path of the python interpreter on the remote machines, for example ``/usr/bin/python3``. The same interpreter will be used in all operations that use the ansible deploy interface. It is also possible to override the value set in the configuration for a node by passing ``ansible_python_interpreter`` in its ``driver_info``.ironic-15.0.0/releasenotes/notes/fixes-noop-network-with-grub-8fd99a73b593ddba.yaml0000664000175000017500000000026113652514273027674 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where users attempting to leverage non-iPXE UEFI booting would experience failures when their ``dhcp_provider`` was set to ``none``. ironic-15.0.0/releasenotes/notes/fake_soft_power-32683a848a989fc2.yaml0000664000175000017500000000023213652514273025067 0ustar zuulzuul00000000000000--- other: - | All ``fake`` classic drivers now implement fake soft power actions. The ``fake_soft_power`` driver is now identical to ``fake``. 
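The ansible interpreter option from the ``add-ansible-python-interpreter`` note above would look like this in ``ironic.conf``; the interpreter path is an example and must actually exist on the managed machines:

```ini
[ansible]
# Absolute path of the Python interpreter on remote managed machines.
# Can be overridden per node by setting ansible_python_interpreter
# in that node's driver_info.
default_python_interpreter = /usr/bin/python3
```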
ironic-15.0.0/releasenotes/notes/nodes-classic-drivers-cannot-set-interfaces-620b37c4e5c88b80.yaml0000664000175000017500000000024713652514273032356 0ustar zuulzuul00000000000000--- fixes: - | Nodes with classic drivers cannot have any interfaces (except for network and storage) specified. HTTP status 400 is returned in these cases. ironic-15.0.0/releasenotes/notes/remove-periodic-interval-45f57ebad9aaa14e.yaml0000664000175000017500000000011313652514273027157 0ustar zuulzuul00000000000000--- upgrade: - Removes the deprecated config option "periodic_interval". ironic-15.0.0/releasenotes/notes/fix-policy-checkers-1a08203e3c2cf859.yaml0000664000175000017500000000017613652514273025623 0ustar zuulzuul00000000000000--- fixes: - Fixes a bug where some of the API methods were not using the right context values for checking the policy. ironic-15.0.0/releasenotes/notes/bp-nova-support-instance-power-update-49c531ef13982e62.yaml0000664000175000017500000000336013652514273031176 0ustar zuulzuul00000000000000--- features: - | Adds power state change callbacks of an instance to the Compute service by performing API notifications. This feature is enabled by default and can be disabled via the new ``[nova]send_power_notifications`` configuration option. Whenever there is a change in the power state of a physical instance, the Bare Metal service will send a ``power-update`` external event to the Compute service which will cause the power state of the instance to be updated in the Compute database. It also adds the possibility of bringing up/down a physical instance through the Bare Metal service API even if it was put down/up through the Compute service API. 
fixes: - | By immediately conveying power state changes of a node through external events to the Compute service, the Bare Metal service becomes the source of truth about the node's power state, preventing the Compute service from forcing wrong power states on instances during the periodic power state synchronization between the Compute and Bare Metal services. .. note:: There is a possibility of a race condition due to the nova-ironic power sync task happening during or right before the power state change event is received from the Bare Metal service, in which case the instance state will be forced on the baremetal node. upgrade: - | In order to support power state change call backs to nova, the ``[nova]`` section must be configured in the Bare Metal service configuration. As the functionality to process the event is new to nova's Train release, this should only be set to ``True`` in ironic, once *ALL* ``nova-compute`` instances have been upgraded to the Train release of nova. ironic-15.0.0/releasenotes/notes/deprecated-glance-opts-4825f000d20c2932.yaml0000664000175000017500000000266313652514273026117 0ustar zuulzuul00000000000000--- deprecations: - | Configuration option ``glance_api_servers`` from the ``[glance]`` section in the configuration file is deprecated and will be ignored in the Rocky release. Instead, use ``[glance]/endpoint_override`` configuration option to set a specific (possibly load-balanced) glance API address when automatic discovery of glance API endpoint from keystone catalog is not desired. This new option defaults to ``None`` and must be set explicitly if needed. This new option is mostly suited for standalone ironic deployments without keystone and its service catalog, and it is generally recommended to rely on keystone service catalog for service endpoint discovery. - | Configuration option ``[glance]/glance_api_insecure`` is deprecated and will be ignored in the Rocky release. 
Instead, use ``[glance]/insecure`` configuration option (its default is ``False``). - | Configuration option ``[glance]/glance_cafile`` is deprecated and will be ignored in the Rocky release. Instead, use ``[glance]/cafile`` configuration option (its default is ``None``). - | Configuration option ``[glance]/auth_strategy`` is deprecated and will be ignored in the Rocky release. Instead, to setup glance in noauth mode set ``[glance]/auth_type`` configuration option to ``none`` and provide glance API address as ``[glance]/endpoint_override`` configuration option. ironic-15.0.0/releasenotes/notes/idrac-uefi-boot-mode-86f4694b4247a1ca.yaml0000664000175000017500000000110013652514273025643 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue that caused the integrated Dell Remote Access Controller (iDRAC) ``management`` hardware interface implementation, ``idrac``, to fail to boot nodes in Unified Extensible Firmware Interface (UEFI) boot mode. That interface is supported by the ``idrac`` hardware type. The issue is resolved for Dell EMC PowerEdge 13th and 14th generation servers. It is not resolved for PowerEdge 12th generation and earlier servers. For more information, see `story 1656841 `_. ironic-15.0.0/releasenotes/notes/notimplementederror-misspell-276a181afd652cf6.yaml0000664000175000017500000000042113652514273027756 0ustar zuulzuul00000000000000--- fixes: - In conductor/rpcapi.py, object_backport_version(), object_action() and object_class_action_versions() misspell NotImplementedError with NotImplemented which returns nothing useful to users. See https://bugs.launchpad.net/ironic/+bug/1524163. ironic-15.0.0/releasenotes/notes/disk-label-capability-d36d126e0ad36dca.yaml0000664000175000017500000000026313652514273026310 0ustar zuulzuul00000000000000--- features: - Add support for a new capability called 'disk_label' to allow operators to choose the disk label that will be used when Ironic is partitioning the disk. 
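For a standalone deployment without keystone, the replacement ``[glance]`` options from the deprecation notes above might be combined as follows; the endpoint URL and CA file path are placeholders:

```ini
[glance]
# Replaces auth_strategy = noauth
auth_type = none
# Replaces glance_api_servers (placeholder URL)
endpoint_override = http://glance.example.com:9292
# Replace glance_api_insecure and glance_cafile respectively
insecure = False
cafile = /etc/ssl/certs/ca-bundle.crt
```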
ironic-15.0.0/releasenotes/notes/port_delete-6628b736a1b556f6.yaml0000664000175000017500000000023213652514273024207 0ustar zuulzuul00000000000000--- fixes: - | Prevents deletion of ports for active nodes. It is still possible to delete them after putting the node in the maintenance mode. ironic-15.0.0/releasenotes/notes/check-dynamic-allocation-enabled-e94f3b8963b114d0.yaml0000664000175000017500000000045513652514273030175 0ustar zuulzuul00000000000000--- fixes: - The ``dynamic_allocation`` flag in a node's driver_info previously only accepted a Boolean. It now also accepts the strings 't', 'true', 'on', 'y', 'yes', or '1' as True, and the strings 'f', 'false', 'off', 'n', 'no', or '0' as False. These are matched case-insensitively. ironic-15.0.0/releasenotes/notes/bug-1611556-92cbfde5ee7f44d6.yaml0000664000175000017500000000006613652514273023676 0ustar zuulzuul00000000000000--- features: - Adds timing metrics to iRMC drivers.ironic-15.0.0/releasenotes/notes/portgroup-crud-notifications-91204635528972b2.yaml0000664000175000017500000000046213652514273027324 0ustar zuulzuul00000000000000--- features: - | Adds notifications for creation, updates, or deletions of port groups. Event types are formatted as follows: * baremetal.portgroup.{create, update, delete}.{start,end,error} Also adds portgroup_uuid field to port notifications, port payload version bumped to 1.1. ironic-15.0.0/releasenotes/notes/dynamic-allocation-spt-has-physical-mac-8967a1d926ed9301.yaml0000664000175000017500000000057313652514273031410 0ustar zuulzuul00000000000000--- upgrade: - The minimum version of python-oneviewclient is now 2.5.2. fixes: - A validation step is added to verify that the Server Profile Template's MAC type is set to Physical when dynamic allocation is enabled. The OneView Driver needs this verification because the machine is going to use a MAC that will only be specified at the profile application. 
ironic-15.0.0/releasenotes/notes/sofware_raid_use_rootfs_uuid-f61eb671d696d251.yaml
---
features:
  - |
    Software RAID is no longer limited to images which have the root file system in the first partition.
upgrade:
  - |
    For Software RAID, the IPA no longer assumes that the root file system of the deployed image is in the first partition. Instead, it will use the UUID passed from the conductor. Operators hence need to make sure that the conductor has the correct UUID (which comes either from the ``rootfs_uuid`` field in the image metadata or from ``root_uuid_or_disk_id`` in the node's ``driver_internal_info``).

ironic-15.0.0/releasenotes/notes/add-notifications-97b6c79c18b48073.yaml
---
features:
  - Adds support for inter-service notifications (disabled by default until the ``notification_level`` configuration option is set). For more information, see the notifications documentation in the developer's guide (http://docs.openstack.org/developer/ironic/dev/notifications.html). Notifications are not actually emitted yet, but will be added in a future release.

ironic-15.0.0/releasenotes/notes/iscsi-inband-cleaning-bff87aac16e5d488.yaml
---
features:
  - Adds support for in-band clean steps in the iSCSI deploy driver, when using ironic-python-agent as the ramdisk.

ironic-15.0.0/releasenotes/notes/manual-abort-d3d8985a5de7376a.yaml
---
fixes:
  - |
    Fixes a bug in manual clean step caching, which resulted in all clean steps not being abortable. See https://bugs.launchpad.net/ironic/+bug/1658061.
ironic-15.0.0/releasenotes/notes/hctl-root-device-hints-0cab86673bc4a924.yaml
---
features:
  - Adds ``hctl`` to root device hints. HCTL is the SCSI address and stands for Host, Channel, Target and LUN.

ironic-15.0.0/releasenotes/notes/port-physical-network-a7009dc514353796.yaml
---
features:
  - |
    Adds a ``physical_network`` field to the port object in REST API version 1.34. This field specifies the name of the physical network to which the port is connected, and is empty by default. This field may be set by the operator to allow the Bare Metal service to incorporate physical network information when attaching virtual interfaces (VIFs). The REST API endpoints related to ports provide support for the ``physical_network`` field. The `multi-tenancy documentation `_ provides information on how to configure and use physical networks.
upgrade:
  - |
    Following an upgrade to this release, all ports will have an empty ``physical_network`` field. Attachment of virtual interfaces (VIFs) will continue to function as in the previous release until any ports have their physical network field set. During a live upgrade to this release, the ``physical_network`` field will not be available. It will also not be possible to create ports which are members of a port group during a live upgrade, as the API service will be unable to validate the consistency of the request.

ironic-15.0.0/releasenotes/notes/story-2006316-raid-create-fails-c3661e185fb11c9f.yaml
---
fixes:
  - |
    Fixes an issue in creation of RAID if none of the 'logical_disks' in 'target_raid_config' have the 'controller' parameter. See `story 2006316 `__ for details.
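An ``hctl`` root device hint such as ``1:0:0:0`` encodes the four SCSI address components in order; a small illustrative parser follows (a hypothetical helper for exposition, not Ironic's implementation):

```python
from collections import namedtuple

SCSIAddress = namedtuple('SCSIAddress', 'host channel target lun')


def parse_hctl(hint):
    """Split an ``hctl`` hint like '1:0:0:0' into its SCSI components."""
    parts = hint.split(':')
    if len(parts) != 4:
        raise ValueError(
            "HCTL hint must have 4 colon-separated fields: %r" % hint)
    return SCSIAddress(*(int(p) for p in parts))
```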
ironic-15.0.0/releasenotes/notes/remove-deprecated-hash_distribution_replicas-08351358eba4c9e1.yaml
---
upgrade:
  - |
    Removes the configuration option ``[DEFAULT]/hash_distribution_replicas`` which was deprecated in the Stein cycle.

ironic-15.0.0/releasenotes/notes/inspection-logging-e1172f549ef80b04.yaml
---
fixes:
  - Correctly handles unexpected exceptions during inspection. Returns a more detailed error message to the user and logs the traceback.

ironic-15.0.0/releasenotes/notes/no-classic-drivers-e68d8527491314c3.yaml
upgrade:
  - |
    It is no longer possible to load a classic driver. Only hardware types are supported from now on.
  - |
    The ``/v1/drivers/?type=classic`` API always returns an empty list since classic drivers can no longer be loaded.
deprecations:
  - |
    The ``enabled_drivers`` option is now deprecated. Since classic drivers can no longer be loaded, setting this option to anything non-empty will result in the conductor failing to start.

ironic-15.0.0/releasenotes/notes/idrac-drives-conversion-jbod-to-raid-1a229627708e10b9.yaml
---
fixes:
  - |
    When using the PERC H730P RAID controller, physical disks must be put into RAID mode prior to creating a virtual disk that includes them. If one or more physical disks are in JBOD/Non-RAID mode when creating a virtual disk from them, the iDRAC will return an error. This patch ensures that the physical disks being included in a virtual disk are converted to RAID mode prior to creating the virtual disk.
ironic-15.0.0/releasenotes/notes/deploy-step-error-d343e8cb7d1b2305.yaml
---
fixes:
  - |
    Fixes vague node ``last_error`` field reporting upon deploy step failure by providing the exception error message in addition to the step that failed.

ironic-15.0.0/releasenotes/notes/raid-dell-boss-e9c5da9ddceedd67.yaml
---
features:
  - Adds support for RAID 1 creation on the Dell Boot Optimized Storage Solution (BOSS).

ironic-15.0.0/releasenotes/notes/xclarity-mask-password-9fe7605ece7689c3.yaml
---
security:
  - |
    The XClarity password specified in the configuration file is now properly masked during logging.

ironic-15.0.0/releasenotes/notes/remove-ipxe-tags-with-ipv6-cf4b7937c27590d6.yaml
---
fixes:
  - |
    Fixes the duplication of the "ipxe" tag when using IPv6, which could lead the DHCP server to return an incorrect response to the DHCPv6 client.

ironic-15.0.0/releasenotes/notes/node-owner-policy-d7168976bba70566.yaml
---
features:
  - Adds an ``is_node_owner`` policy rule. This rule can be used with node policy rules in order to expose specific node APIs to a project ID specified by a node's ``owner`` field. Default rules are unaffected, so default behavior is unchanged.

ironic-15.0.0/releasenotes/notes/add-target-raid-config-ansible-deploy-c9ae81d9d25c62fe.yaml
---
features:
  - |
    Adds ``target_raid_config`` data to the ``ironic`` variable under the ``raid_config`` top-level key, which exposes the RAID configuration to the ``ansible`` driver. See `story 2006417 `__ for details.
ironic-15.0.0/releasenotes/notes/story-2002600-return-503-if-no-conductors-online-ead1512628182ec4.yaml
---
fixes:
  - |
    The Ironic API now returns ``503 Service Unavailable`` for actions requiring a conductor when no conductors are online. `Bug: 2002600 `_.

ironic-15.0.0/releasenotes/notes/enhanced-checksum-f5a2b7aa8632b88f.yaml
---
features:
  - In accordance with the `multihash support `_ provided by glance, ironic now supports using the new ``os_hash_algo`` and ``os_hash_value`` fields to compute and validate image checksums when deploying instance images with the ``direct`` deploy interface.

ironic-15.0.0/releasenotes/notes/raise-bad-request-exception-on-validating-inspection-failure-57d7fd2999cf4ecf.yaml
---
fixes:
  - Raises HTTP 400 ``Bad Request`` (instead of HTTP 500 ``Internal Server Error``) on failure to validate ``power`` or ``inspect`` interface parameters before performing a hardware inspection.

ironic-15.0.0/releasenotes/notes/use-current-node-driver_internal_info-5c11de8f2c2b2e87.yaml
---
fixes:
  - |
    During node cleaning, the conductor was using a cached copy of the node's driver_internal_info field. It is possible that the copy is outdated, which would cause issues with the state of the node. This has been fixed. For more information, see `bug 2002688 `_.
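The multihash validation described above boils down to hashing the downloaded image with the algorithm named in ``os_hash_algo`` and comparing the digest against ``os_hash_value``. An illustrative sketch of that check (not the actual Ironic code path):

```python
import hashlib


def validate_image_multihash(image_bytes, os_hash_algo, os_hash_value):
    """Check image bytes against a glance-style multihash pair."""
    if os_hash_algo not in hashlib.algorithms_available:
        raise ValueError("Unsupported hash algorithm: %s" % os_hash_algo)
    # hashlib.new() lets the algorithm name come from data, as it does
    # here, where glance supplies it in the image metadata.
    digest = hashlib.new(os_hash_algo, image_bytes).hexdigest()
    if digest != os_hash_value:
        raise ValueError("Image checksum mismatch")
    return True
```

In practice the image is streamed and the hash updated chunk by chunk rather than held in memory as shown here.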
ironic-15.0.0/releasenotes/notes/bug-1548086-ed88646061b88faf.yaml
---
features:
  - Adds support for passing an optional CA certificate via the [glance]glance_cafile configuration option to validate the SSL certificate served by glance, for secure https communication between Glance and Ironic.
upgrade:
  - Adds a [glance]glance_cafile configuration option to pass an optional certificate for secure https communication. It is used when the [glance]glance_api_insecure configuration option is set to False.

ironic-15.0.0/releasenotes/notes/check-for-whole-disk-image-uefi-3bf2146588de2423.yaml
---
fixes:
  - Removes a check that was preventing whole disk images from being deployed in UEFI mode without explicitly setting the ``boot_option`` capability to ``local``. For whole disk images, ironic already assumes booting from local storage by default.

ironic-15.0.0/releasenotes/notes/ilo-remove-deprecated-power-retry-ba29a21f03fe8dbb.yaml
---
upgrade:
  - |
    Removes the deprecated option ``[ilo]/power_retry``. Please use ``[conductor]/soft_power_off_timeout`` instead.

ironic-15.0.0/releasenotes/notes/ipxe-and-uefi-7722bd5db71df02c.yaml
---
features:
  - Adds support for using iPXE in UEFI mode.

ironic-15.0.0/releasenotes/notes/use_secrets_to_generate_token-55af0f43e5a80b9e.yaml
---
security:
  - |
    The secret token that is used for IPA verification will be generated using the secrets module to be in compliance with the ``FIPS 140-2`` standard.
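Generating a token via Python's ``secrets`` module, as the note above describes, looks roughly like this; the function name and token length are assumptions for illustration, not necessarily what Ironic uses:

```python
import secrets


def generate_agent_token(nbytes=32):
    """Return a URL-safe random token drawn from a CSPRNG.

    Unlike the ``random`` module, ``secrets`` is suitable for
    security-sensitive values such as verification tokens.
    """
    return secrets.token_urlsafe(nbytes)
```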
ironic-15.0.0/releasenotes/notes/oneview-timeout-power-db5125e05831d925.yaml
---
features:
  - |
    Adds support for a ``timeout`` parameter when powering on/off or rebooting a bare metal node managed by the ``oneview`` hardware type.

ironic-15.0.0/releasenotes/notes/fix-capabilities-as-string-agent-7c5c7975560ce280.yaml
---
fixes:
  - |
    Fixes an issue where deploy fails during node preparation if the node ``capabilities`` are passed as a string.

ironic-15.0.0/releasenotes/notes/bug-1648387-92db52cbe007fabd.yaml
---
fixes:
  - Fixes an issue where the API service does not start if audit is enabled with the default value of the ``[audit]/ignore_req_list`` configuration option.

ironic-15.0.0/releasenotes/notes/no-classic-ucs-cimc-7c62bb189ffbe0dd.yaml
---
upgrade:
  - |
    The deprecated classic drivers ``pxe_ucs`` and ``agent_ucs`` have been removed. Please use the ``cisco-ucs-managed`` hardware type.
  - |
    The deprecated classic drivers ``pxe_iscsi_cimc`` and ``pxe_agent_cimc`` have been removed. Please use the ``cisco-ucs-standalone`` hardware type.

ironic-15.0.0/releasenotes/notes/parallel-erasure-1943da9b53a2095d.yaml
---
features:
  - Adds a configuration option ``[deploy]disk_erasure_concurrency`` to define the target pool size used by the Ironic Python Agent ramdisk to erase disk devices. The number of threads created by IPA to erase disk devices is the minimum of the target pool size and the number of disks to be erased. This feature can greatly reduce the operation time for bare metal nodes with multiple disks. For backwards compatibility, the default value is 1.

ironic-15.0.0/releasenotes/notes/ipminative-bootdev-uefi-954a0dd825bcef97.yaml
---
fixes:
  - Fixes a problem where the boot mode (UEFI or BIOS) wasn't being considered when setting the boot device of a node using the "ipminative" management interface. It would incorrectly switch UEFI to legacy BIOS mode as part of the request to change the boot device.

ironic-15.0.0/releasenotes/notes/issue-conntrack-bionic-7483671771cf2e82.yaml
---
issues:
  - |
    As good security practice [0], in Ubuntu Bionic the ``nf_conntrack_helper`` is disabled. This causes an issue when using the ``pxe`` boot interface in a PXE environment and breaks some of the Ironic CI tests, since Ironic needs conntrack for TFTP traffic. It's still possible to use Ironic with PXE on Ubuntu Xenial, and it's also possible to use Ironic with PXE on Ubuntu Bionic using a workaround based on custom firewall rules as shown in [0].

    [0] https://home.regit.org/netfilter-en/secure-use-of-helpers/

ironic-15.0.0/releasenotes/notes/dual-stack-ironic-493ebc7b71263aaa.yaml
---
features:
  - |
    Adds functionality with neutron integration to support dual-stack (IPv4 and IPv6) environment configurations. This enables ironic to look up the attached port(s) and supply DHCP options in alignment with the protocol version allocated on the port.
upgrade:
  - |
    The ``[pxe]ip_version`` setting may no longer be required depending on neutron integration.
  - |
    Operators that used the ``[DEFAULT]my_ip`` setting with an IPv6 address may wish to explore migrating to the ``[DEFAULT]my_ipv6`` setting. Setting both values enables the appropriate IP addresses based on protocol version for PXE/iPXE.
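The thread-count rule for disk erasure above (the minimum of the configured pool size and the number of disks) can be sketched as follows; the function name and structure are illustrative assumptions, not IPA's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor


def erase_disks(disks, disk_erasure_concurrency=1, erase=lambda d: d):
    """Erase disks in parallel, capping threads at min(pool size, disks)."""
    workers = min(disk_erasure_concurrency, len(disks))
    if workers < 1:
        return []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(erase, disks))
```

With the default concurrency of 1 this degenerates to sequential erasure, which preserves the pre-existing behavior the note mentions.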
ironic-15.0.0/releasenotes/notes/ilo-inconsistent-default-boot-mode-ef5a7c56372f89f1.yaml
---
fixes:
  - When no boot mode is explicitly set on a node using an iLO driver, ironic automatically picks a boot mode based on hardware capabilities. This confuses deployers, as these factors are system specific and not configurable. In order to ensure predictable behavior, a new configuration parameter, ``[ilo]/default_boot_mode``, was added to allow deployers to explicitly set a default. The default value of this option keeps behavior consistent for existing deployments.

ironic-15.0.0/releasenotes/notes/bug-2003972-dae9b7d0f6180339.yaml
---
fixes:
  - |
    Fixes a bug where a node's ``console_enabled`` was reset to ``False`` when undeploying the node, which required an operator to set it to ``True`` before deploying again. With this fix, while the console is stopped during tear down, ``console_enabled`` remains ``True``. When the node is deployed again, the console is started automatically.

ironic-15.0.0/releasenotes/notes/upgrade-delete_configuration-0f0bb43c57278734.yaml
---
features:
  - |
    Foreign drives and global and dedicated hot spares will be freed up during the RAID ``delete_configuration`` cleaning step.

ironic-15.0.0/releasenotes/notes/fix-oneview-deploy-return-values-ab2ec6ae568d95a5.yaml
---
fixes:
  - Fixes an issue where the OneView deploy interface did not return the node properties, and in the tear down phase did not return the state of the node.
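The ``capabilities``-as-string fix above concerns the two forms node capabilities can take: a dict, or a comma-separated ``key:value`` string. A hedged sketch of normalizing both forms (an illustrative helper, not Ironic's exact implementation):

```python
def normalize_capabilities(capabilities):
    """Accept capabilities as a dict or a 'k1:v1,k2:v2' string."""
    if isinstance(capabilities, dict):
        return dict(capabilities)
    result = {}
    for pair in str(capabilities).split(','):
        if not pair:
            continue  # tolerate an empty string / trailing comma
        key, sep, value = pair.partition(':')
        if not sep:
            raise ValueError("Malformed capability: %r" % pair)
        result[key.strip()] = value.strip()
    return result
```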
ironic-15.0.0/releasenotes/notes/resource-classes-1bf903547236a473.yaml
---
upgrade:
  - |
    Due to upcoming changes in the way Nova schedules bare metal nodes, all nodes in a deployment using Nova have to get the ``resource_class`` field populated before the upgrade. See `enrollment documentation `_ and `flavor configuration documentation `_ for details.

    Once you've migrated your flavors to resource classes, you should unset the deprecated ``use_baremetal_filters`` option in the Compute service configuration. Otherwise you'll be using filters incompatible with scheduling based on resource classes.

ironic-15.0.0/releasenotes/notes/add-gmr-3c9278d5d785895f.yaml
---
features:
  - Adds support for generating `Guru Meditation Reports `_ (GMR) for both ironic-api and ironic-conductor services. GMR provides debugging information that can be used to obtain an accurate view of the current state of the system. For example, what threads are running, what configuration parameters are in effect, and more.

ironic-15.0.0/releasenotes/notes/ramdisk-grub-use-user-kernel-ramdisk-7d572fe130932605.yaml
---
fixes:
  - Fixes a bug in the grub ramdisk boot template handling, such that the template now properly references the user-provided kernel and ramdisk. Previously, the deployment kernel and ramdisk were referenced in the template.

ironic-15.0.0/releasenotes/notes/ilo-boot-interface-92831b78c5614733.yaml
---
other:
  - iLO drivers are now based on the new BootInterface.
ironic-15.0.0/releasenotes/notes/uefi-grub2-by-default-6b797a9e690d2dd5.yaml
---
upgrade:
  - The default bootloader for PXE + UEFI has changed from ELILO to GRUB2, because ELILO is no longer being actively developed. Operators relying on ELILO should explicitly set the ``[pxe]/uefi_pxe_bootfile_name`` and ``[pxe]/uefi_pxe_config_template`` configuration options to the ELILO ROM and configuration template.

ironic-15.0.0/releasenotes/notes/ilo5-oob-sanitize-disk-erase-cc76ea66eb5fe6df.yaml
---
features:
  - Adds functionality to perform an out-of-band sanitize disk-erase operation for iLO5-based HPE ProLiant servers. A management interface ``ilo5`` has been added to the ``ilo5`` hardware type. A clean step ``erase_devices`` has been added to the ``ilo5`` management interface to support this operation.
upgrade:
  - The ``do_disk_erase``, ``has_disk_erase_completed`` and ``get_available_disk_types`` interfaces of the 'proliantutils' library have been enhanced to support the out-of-band sanitize disk-erase operation for the ``ilo5`` hardware type. To leverage this feature, the 'proliantutils' library needs to be upgraded to version '2.9.0'.

ironic-15.0.0/releasenotes/notes/add-dynamic-allocation-feature-2fd6b4df7943f178.yaml
---
features:
  - OneView drivers now support dynamic allocation of nodes in OneView, allowing for better resource sharing with non-OpenStack users, since Server Hardware will be allocated only when the node is scheduled to be used. To enable the new allocation feature for a node, set the flag ``dynamic_allocation=True`` in the node's ``driver_info``. More information is available at http://docs.openstack.org/developer/ironic/drivers/oneview.html.
deprecations:
  - Deprecates the pre-allocation feature of the OneView drivers, since it requires resource allocation to Ironic prior to boot time, which makes Server Hardware unavailable to non-OpenStack OneView users. Pre-allocation will be removed in the OpenStack Pike release. All nodes with ``dynamic_allocation=False`` set, or that don't have the ``dynamic_allocation`` flag set, will be assumed to be in pre-allocation. Users may use the REST API or the ``ironic-oneview-cli`` to migrate nodes from pre-allocation to dynamic allocation. More information is available at http://docs.openstack.org/developer/ironic/drivers/oneview.html.

ironic-15.0.0/releasenotes/notes/release-4.3.0-cc531ab7190f8a00.yaml
---
prelude: >
  Ironic's 4.3.0 release brings a number of new features, driver enhancements, and bug fixes.

ironic-15.0.0/releasenotes/notes/fix-socat-command-afc840284446870a.yaml
---
fixes:
  - Fixes an issue with socat console support where an unlimited number of connections could be created, resulting in the prior session being destroyed. Connections are now limited to a single connection per server. Socat now closes the console connection upon disconnect or after a 10-minute timeout. To reconnect, users should re-activate the console.

ironic-15.0.0/releasenotes/notes/fix-baremetal-admin-user-not-neutron-admin-f163df90ab520dad.yaml
---
fixes:
  - Changes interactions with neutron to always use the neutron credentials from the ironic configuration, instead of forwarding the credentials from the API client.

ironic-15.0.0/releasenotes/notes/no-fake-308b50d4ab83ca7a.yaml
---
upgrade:
  - |
    All fake classic drivers, deprecated in the Queens release, have been removed. This includes:

    * ``fake``
    * ``fake_agent``
    * ``fake_cimc``
    * ``fake_drac``
    * ``fake_ilo``
    * ``fake_inspector``
    * ``fake_ipmitool``
    * ``fake_ipmitool_socat``
    * ``fake_irmc``
    * ``fake_oneview``
    * ``fake_pxe``
    * ``fake_snmp``
    * ``fake_soft_power``
    * ``fake_ucs``

    Please use the ``fake-hardware`` hardware type instead (you can combine it with any other interfaces, fake or real).

ironic-15.0.0/releasenotes/notes/ipxe-boot-interface-addition-faacb344a72389f2.yaml
---
features:
  - |
    Adds an ``ipxe`` boot interface which allows for instance-level iPXE enablement, as opposed to conductor-wide enablement of iPXE.
upgrade:
  - |
    Deployments utilizing iPXE should consider use of the ``ipxe`` boot interface as opposed to the ``pxe`` boot interface. iPXE functionality in the ``pxe`` boot interface is deprecated and will be removed during the U* development cycle.
deprecations:
  - |
    The ``[pxe]ipxe_enabled`` configuration option has been deprecated in preference for the ``ipxe`` boot interface. The configuration option will be removed during the U* development cycle.
  - |
    Support for iPXE in the ``pxe`` boot interface has been deprecated, and will be removed during the U* development cycle. The ``ipxe`` boot interface should be used instead.

ironic-15.0.0/releasenotes/notes/configure-notifications-72824356e7d8832a.yaml
---
features:
  - It is now possible to configure the notifications to use a different transport URL than the RPCs. These could potentially be completely different message broker hosts (though they don't need to be). If the notification-specific configuration is not provided, the notifier will use the same transport as the RPCs.
ironic-15.0.0/releasenotes/notes/fix-prepare-instance-for-agent-interface-56753bdf04dd581f.yaml
---
fixes:
  - |
    Fixes the ``direct`` deploy interface to invoke ``boot.prepare_instance`` irrespective of the image type being provisioned. It was calling ``boot.prepare_instance`` only if the image being provisioned was a partition image. See bugs `1713916 `_ and `1750958 `_ for details.
upgrade:
  - |
    With a deploy ramdisk based on Ironic Python Agent version 3.1.0 and beyond, drivers using the ``direct`` deploy interface perform ``netboot`` or ``local`` boot for a whole disk image based on the value of the boot option setting. When you upgrade Ironic Python Agent in your deploy ramdisk, ensure that the boot option is set appropriately for the node. The boot option can be set using the configuration ``[deploy]/default_boot_option`` or as a ``boot_option`` capability in the node's ``properties['capabilities']``. Also please note that this functionality requires the ``hexdump`` command in the ramdisk.

ironic-15.0.0/releasenotes/notes/node-fault-8c59c0ecb94ba562.yaml
---
features:
  - |
    Adds support for the ``fault`` field in the node, beginning with API version 1.42. This field records the fault, if any, detected by ironic for a node. If no fault is detected, the ``fault`` is ``None``. The ``fault`` field value is set to one of the following values according to different circumstances:

    * ``power failure``: when a node is put into maintenance due to power sync failures that exceed max retries.
    * ``clean failure``: when a node is put into maintenance due to failure of a cleaning operation.
    * ``rescue abort failure``: when a node is put into maintenance due to failure of cleaning up during rescue abort.

    The ``fault`` field will be set to ``None`` if an operator manually sets maintenance to ``False``. The ``fault`` field can be used as a filter for querying nodes.

ironic-15.0.0/releasenotes/notes/add-inspect-wait-state-948f83dfe342897b.yaml
---
upgrade:
  - |
    Adds an ``inspect wait`` state to handle asynchronous hardware introspection. Caution should be taken: because timeout monitoring is shifted from ``inspecting`` to ``inspect wait``, please stop all running asynchronous hardware inspections, or wait until they are finished, before upgrading to the Rocky release. Otherwise, nodes in asynchronous inspection will be left in the ``inspecting`` state forever unless the database is manually updated.
deprecations:
  - |
    Adds an ``inspect wait`` state to handle asynchronous hardware introspection. The ``[conductor]inspect_timeout`` configuration option is deprecated for removal; please use ``[conductor]inspect_wait_timeout`` instead to specify the timeout of the inspection process.
other:
  - |
    Adds an ``inspect wait`` state to handle asynchronous hardware introspection. Returning ``INSPECTING`` from the ``inspect_hardware`` method of the inspect interface is deprecated; ``INSPECTWAIT`` should be returned instead.

ironic-15.0.0/releasenotes/notes/disable-clean-step-reset-ilo-1869a6e08f39901c.yaml
---
fixes:
  - Disables default execution of the clean step 'reset_ilo' during automated cleaning. Resetting the iLO is not required during every invocation of automated cleaning. If required, the operator can enable it.

ironic-15.0.0/releasenotes/notes/agent-api-bf9f18d8d38075e4.yaml
---
other:
  - The ``continue_deploy`` and ``reboot_to_instance`` methods in the ``BaseAgentVendor`` class stopped accepting ** arguments. They were never used anyway; drivers should stop passing anything there.
ironic-15.0.0/releasenotes/notes/add-agent-api-error-77ec6c272390c488.yaml0000664000175000017500000000025013652514273025430 0ustar zuulzuul00000000000000--- fixes: - Fixes propagation of HTTP errors from **ironic-python-agent** commands. Now an operation is aborted on receiving HTTP error status from the ramdisk. ironic-15.0.0/releasenotes/notes/fix-path-a3a0cfd2c135ace9.yaml0000664000175000017500000000020313652514273023751 0ustar zuulzuul00000000000000--- fixes: - | Fixes virtual media boot when served using a local HTTP server, i.e. ``[redfish]use_swift`` is ``false``. ironic-15.0.0/releasenotes/notes/allow-to-attach-vif-to-active-node-55963be2ec269043.yaml0000664000175000017500000000011713652514273030276 0ustar zuulzuul00000000000000--- features: - Adds possibility to attach/detach VIFs to/from active nodes. ironic-15.0.0/releasenotes/notes/pxe-enabled-ports-check-c1736215dce76e97.yaml0000664000175000017500000000037413652514273026376 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where no error was raised if there were no PXE-enabled ports available for the node, when creating a neutron port. See `bug 2001811 `_ for more details. ironic-15.0.0/releasenotes/notes/name-suffix-47aea2d265fa75ae.yaml0000664000175000017500000000154013652514273024414 0ustar zuulzuul00000000000000--- fixes: - | Nodes and port groups with names ending with known file extensions are now correctly handled by the API. See `bug 1643995 `_ for more details. issues: - | If you have two nodes or port groups with names that only differ in a ``.json`` suffix (for example, ``test`` and ``test.json``) you won't be able to get, update or delete the one with the suffix via the ``/v1/nodes/`` endpoint (``/v1/portgroups/`` for port groups). Similarly, the ``/v1/heartbeat/`` endpoint won't work for the node with the suffix. To work around it, add one more ``.json`` suffix (for example, use ``/v1/nodes/test`` for node ``test`` and ``/v1/nodes/test.json.json`` for ``test.json``). 
This issue will be addressed in one of the future API revisions. ironic-15.0.0/releasenotes/notes/fix-rpc-exceptions-12c70eb6ba177e39.yaml0000664000175000017500000000101513652514273025555 0ustar zuulzuul00000000000000--- fixes: - Ironic exceptions that contained arbitrary objects in ``kwargs`` and were sent via RPC were causing ``oslo_messaging`` serializer to fail. This was leading to 500 errors from ironic API, timing out waiting for response from the conductor. Starting with this release, all non-serializable objects contained in an exception's kwargs are dropped. Whether the error is going to be returned by the service will depend on the configuration option ``[DEFAULT]/fatal_exception_format_errors``. ironic-15.0.0/releasenotes/notes/ilo-do-not-power-off-non-deploying-nodes-0a3aed7c8ea3940a.yaml0000664000175000017500000000062613652514273031740 0ustar zuulzuul00000000000000--- fixes: - A bug was identified in the behavior of the iLO drivers where nodes that are not active but taking part of a conductor takeover could be powered off. In preparation for new features and functionality, that risk encountering this bug, we are limiting the deployment preparation steps to the ``deploying`` state to prevent nodes from being erroneously powered off. ironic-15.0.0/releasenotes/notes/validate-port-info-before-using-it-e26135982d37c698.yaml0000664000175000017500000000025013652514273030333 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue when attaching VIF to a port with missed ``local_link_connection`` field was allowed when node network interface was ``neutron``. ironic-15.0.0/releasenotes/notes/deprecate-agent-passthru-67d1e2cf25b30a30.yaml0000664000175000017500000000041113652514273026711 0ustar zuulzuul00000000000000--- deprecations: - Agent vendor passthru is deprecated and will be removed in Ocata release. Operators should update their IPA image to the Newton version to use the new replacement API. 
    Driver developers should stop using the agent vendor passthru.

ironic-15.0.0/releasenotes/notes/allocation-owner-policy-162c43b3abb91c76.yaml
---
features:
  - |
    Adds an ``owner`` field to allocations. Depending on policy, a non-admin
    can then create an allocation and have the owner set to their project.
    Allocation processing will then ensure that only nodes with the same
    owner are matched.

ironic-15.0.0/releasenotes/notes/clear-hung-iscsi-sessions-d3b55c4c65fa4c8b.yaml
---
fixes:
  - Fixes an issue with hung iSCSI sessions not being cleaned up in case of
    deploy failure.

ironic-15.0.0/releasenotes/notes/remove-driver-object-periodic-tasks-1357a1cd3589becf.yaml
---
upgrade:
  - |
    Removes support for attaching periodic tasks to a driver object rather
    than to an interface.

ironic-15.0.0/releasenotes/notes/invalid_cross_device_link-7ecf3543a8ada09f.yaml
---
fixes:
  - |
    Properly reports an error when the image cache and the image HTTP or TFTP
    location are on different file systems, causing hard linking to fail.

ironic-15.0.0/releasenotes/notes/adopt-ironic-context-5e75540dc2b2f009.yaml
---
fixes:
  - Fixes a bug where ironic would not log the request-id during hardware
    inspection.

ironic-15.0.0/releasenotes/notes/deprecate-dhcp-update-mac-address-f12a4959432c8e20.yaml
---
features:
  - |
    Adds new methods to network interfaces, which will become mandatory in
    the Pike release:

    * ``vif_list``: List attached VIF IDs for a node.
    * ``vif_attach``: Attach a virtual network interface to a node.
    * ``vif_detach``: Detach a virtual network interface from a node.
    * ``port_changed``: Handle any actions required when a port changes.
    * ``portgroup_changed``: Handle any actions required when a port group
      changes.
    * ``get_current_vif``: Return the VIF ID attached to a port or port group
      object.
deprecations:
  - |
    The ``update_mac_address`` method in the DHCP provider interface is
    deprecated and will be removed in the Pike release. The logic should be
    moved to a custom network interface's ``port_changed`` and
    ``portgroup_changed`` methods.
fixes:
  - |
    Fixes an issue where a pre-created tenant port was automatically deleted
    by ironic on instance delete.

ironic-15.0.0/releasenotes/notes/remove-deprecated-drac_host-865be09c6e8fcb90.yaml
---
upgrade:
  - Removes the deprecated ``driver_info["drac_host"]`` property for the
    ``idrac`` hardware type that was marked for removal in Pike. Please use
    ``driver_info["drac_address"]`` instead.

ironic-15.0.0/releasenotes/notes/bug-1626453-e8df46aa5db6dd5a.yaml
---
fixes:
  - Fixes an issue where setting a boot device as persistent did not work
    when ``ipmi_force_boot_device`` is set to ``True``. For more information,
    see https://bugs.launchpad.net/ironic/+bug/1626453.

ironic-15.0.0/releasenotes/notes/soft-reboot-poweroff-9fdb0a4306dd668d.yaml
---
features:
  - Adds support for soft reboot and soft power off requests in REST API
    version 1.27. Also adds an optional ``timeout`` parameter to the node
    power state API, and a new configuration option
    ``[conductor]/soft_power_off_timeout`` to define the default timeout for
    soft power actions. In 7.0.0, this is supported for the ipmitool and iRMC
    drivers.
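As a rough illustration of the soft power note above, the request body a client might send to the node power state endpoint can be sketched as follows. The endpoint path and field names here are assumptions based on the note, not taken from the API reference.

```python
import json


def soft_power_request(target, timeout=None):
    """Build a JSON body for PUT /v1/nodes/<node>/states/power.

    The 'soft power off' target and the optional 'timeout' field are
    what the release note describes; exact field names are illustrative.
    """
    body = {"target": target}
    if timeout is not None:
        body["timeout"] = timeout
    return json.dumps(body)
```

Without an explicit ``timeout``, the conductor-side default (``[conductor]/soft_power_off_timeout``) would apply.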
ironic-15.0.0/releasenotes/notes/ssh-console-58721af6830f8892.yaml
---
features:
  - Adds ShellinaboxConsole support for the virsh SSH driver.

ironic-15.0.0/releasenotes/notes/ramdisk-params-6083bfaa7ffa9dfe.yaml
---
upgrade:
  - |
    Operators using custom PXE/iPXE/Grub templates should update them to
    remove an explicit mention of ``ipa-api-url``. This field is now a part
    of ``pxe_append_params`` when required.

ironic-15.0.0/releasenotes/notes/net-names-b8a36aa30659ce2f.yaml
---
features:
  - Names can now be used instead of UUIDs for the
    ``[neutron]/cleaning_network`` and ``[neutron]/provisioning_network``
    configuration options (formerly called
    ``[neutron]/cleaning_network_uuid`` and
    ``[neutron]/provisioning_network_uuid``). Care has to be taken to ensure
    that the names are unique among all networks in this case. Note that the
    mapping between a name and a UUID is cached for the lifetime of the
    conductor.
deprecations:
  - Configuration options ``[neutron]/cleaning_network_uuid`` and
    ``[neutron]/provisioning_network_uuid`` are deprecated in favor of the
    new configuration options ``[neutron]/cleaning_network`` and
    ``[neutron]/provisioning_network`` respectively.
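A minimal sketch of what the renamed ``[neutron]`` options above look like in a configuration file, parsed here with Python's ``configparser``; the network name and UUID values are purely illustrative.

```python
import configparser

# The renamed options accept either a network name or a UUID; a name
# must be unique among all networks, and its UUID mapping is cached
# for the lifetime of the conductor.
conf = configparser.ConfigParser()
conf.read_string("""
[neutron]
cleaning_network = baremetal-cleaning
provisioning_network = 6a768379-9a8f-4b61-9d59-1e25b164ab64
""")
```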
ironic-15.0.0/releasenotes/notes/remove-deprecated-dhcp-provider-methods-582742f3000be3c7.yaml
---
upgrade:
  - |
    Removes these deprecated methods from the neutron DHCP provider built
    into ironic:

    * create_cleaning_ports
    * delete_cleaning_ports

    Removes these related methods from
    ``ironic.drivers.modules.deploy_utils``:

    * prepare_cleaning_ports
    * tear_down_cleaning_ports

    If you have your own custom ironic DHCP provider that implements cleaning
    methods, you may need to update your code to use the
    ``add_cleaning_network()`` and ``remove_cleaning_network()`` network
    interface methods. See the modules in ``ironic/drivers/modules/network/``
    for more information.

ironic-15.0.0/releasenotes/notes/fix-keystone-parameters-cdb93576d7e7885b.yaml
---
fixes:
  - Fixes a multi-region issue where the region specified in the
    configuration file was ignored when getting the Identity service's
    (keystone) URL.

ironic-15.0.0/releasenotes/notes/5.0-release-afb1fbbe595b6bc8.yaml
---
prelude: >
    This release adds support for manual cleaning and RAID configuration.
    Operators may now manually run clean steps, including setting up RAID on
    a node, while a node is in the manageable state.

ironic-15.0.0/releasenotes/notes/add-snmp-inspection-support-e68fd6d57cb33846.yaml
---
fixes:
  - Fixes disk size detection for out-of-band inspection in iLO drivers, by
    optionally using SNMPv3 to get the disk size for certain types of
    storage.
features:
  - To enable SNMPv3 inspection in iLO drivers, the following parameters must
    be set in the node's ``driver_info``.
    * ``snmp_auth_user``
    * ``snmp_auth_prot_password``
    * ``snmp_auth_priv_password``
    * ``snmp_auth_protocol`` (optional, defaults to the iLO default value
      ``MD5``)
    * ``snmp_auth_priv_protocol`` (optional, defaults to the iLO default
      value ``DES``)

ironic-15.0.0/releasenotes/notes/dont-validate-local_link_connection-when-port-has-client-id-8e584586dc4fca50.yaml
---
fixes:
  - |
    Fixes an issue with validation of Infiniband ports. Infiniband ports do
    not require the ``local_link_connection`` field to be populated, as the
    network topology is discoverable by the Infiniband Subnet Manager. See
    `bug 1753222 <https://bugs.launchpad.net/ironic/+bug/1753222>`_
    for details.

ironic-15.0.0/releasenotes/notes/remove-iscsi-verify-attempts-ede5b56b0545da08.yaml
---
upgrade:
  - |
    The configuration option ``[disk_utils]iscsi_verify_attempts`` was
    deprecated in Train and has now been removed from ironic-lib. Please use
    the ``[iscsi]verify_attempts`` option instead.

ironic-15.0.0/releasenotes/notes/fix-api-access-logs-68b9ca4f411f339c.yaml
---
fixes:
  - The API service once again records HTTP access logs. See
    https://bugs.launchpad.net/ironic/+bug/1536828 for details.

ironic-15.0.0/releasenotes/notes/fix-security-group-list-add-query-filters-f72cfcefa1e093d2.yaml
---
fixes:
  - |
    Fixes an issue where baremetal node deployment would fail on clouds with
    a high number of security groups, because listing all of them took too
    long. Instead of listing all security groups, a query filter was added to
    list only the security groups to be used for the network.
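The SNMPv3 parameter handling described in the iLO inspection note above can be sketched as a small validation helper. The required/optional split and the ``MD5``/``DES`` defaults come from the note; the function itself is illustrative, not ironic's code.

```python
# Required SNMPv3 parameters per the release note; the protocol options
# fall back to the iLO defaults when omitted.
REQUIRED = {"snmp_auth_user", "snmp_auth_prot_password",
            "snmp_auth_priv_password"}


def snmp_settings(driver_info):
    missing = REQUIRED - driver_info.keys()
    if missing:
        raise ValueError("missing SNMPv3 parameters: %s" % sorted(missing))
    return {
        "auth_protocol": driver_info.get("snmp_auth_protocol", "MD5"),
        "priv_protocol": driver_info.get("snmp_auth_priv_protocol", "DES"),
    }
```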
    (See bug `2006256 `_.)

ironic-15.0.0/releasenotes/notes/remove-enabled-drivers-5afcd77b53da1499.yaml
---
upgrade:
  - |
    Removes the configuration option ``[DEFAULT]enabled_drivers``. The option
    was deprecated in Rocky, and setting it has raised an exception
    preventing the conductor from starting since then.
    ``[DEFAULT]enabled_hardware_types`` should be used instead.

ironic-15.0.0/releasenotes/notes/bug-2006266-85da234583ca0c32.yaml
---
fixes:
  - |
    Fixes an issue in the discovery playbook for the ``ansible`` deploy
    interface that prevented gathering WWN and serial numbers under Python 3.

ironic-15.0.0/releasenotes/notes/continue-node-deploy-state-63d9dc9cdcf8e37a.yaml
---
deprecations:
  - |
    Some deploy interfaces use the ``continue_node_deploy`` RPC call to
    notify the conductor when they're ready to leave the ``deploy`` core
    deploy step. Currently ironic allows a node to be in either the
    ``wait call-back`` or ``deploying`` state when entering this call. This
    is deprecated, and in the next release a node will have to be in the
    ``wait call-back`` (``DEPLOYWAIT``) state for this call.

ironic-15.0.0/releasenotes/notes/bug-1749433-363b747d2db67df6.yaml
---
fixes:
  - |
    Fixes a bug preventing a node from booting into the user instance after
    unrescuing if instance netboot is used. See
    `bug 1749433 <https://bugs.launchpad.net/ironic/+bug/1749433>`_
    for details.
ironic-15.0.0/releasenotes/notes/drac-fix-get_bios_config-vendor-passthru-causes-exception-1e1dbeeb3e924f29.yaml
---
fixes:
  - Fixes an issue which caused the DRAC driver (``pxe_drac``)
    ``get_bios_config()`` vendor passthru method to unintentionally raise an
    ``AttributeError`` exception. That method once again returns the current
    BIOS configuration. For more information, see
    https://bugs.launchpad.net/ironic/+bug/1637671.

ironic-15.0.0/releasenotes/notes/check_obj_versions-e86d897df673e833.yaml
---
upgrade:
  - |
    Adds a check to the ``ironic-status upgrade check`` command to check the
    compatibility of the object versions with the release of ironic.

ironic-15.0.0/releasenotes/notes/shellinabox-locking-fix-2fae2a451a8a489a.yaml
---
fixes:
  - |
    Fixes a locking issue where ``ipmitool-shellinabox`` console interface
    users may encounter a situation where the bare metal node is locked until
    the conductor is restarted. See story `1587313 `_ for additional
    information.

ironic-15.0.0/releasenotes/notes/needs-agent-version-in-heartbeat-4e6806b679c53ec5.yaml
---
upgrade:
  - |
    The ``agent_version`` argument of the heartbeat interface is now
    mandatory for all interfaces that inherit from HeartbeatMixin.
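The heartbeat change above can be sketched as follows: ``agent_version`` simply loses its default, so callers must always pass it. The function name and fields below are illustrative and do not mirror ironic's actual HeartbeatMixin code.

```python
def heartbeat(node_uuid, callback_url, agent_version):
    """Hypothetical HeartbeatMixin-style entry point.

    Because agent_version has no default, omitting it raises TypeError,
    which is the "now mandatory" behavior the note describes.
    """
    return {
        "node": node_uuid,
        "callback_url": callback_url,
        "agent_version": agent_version,
    }
```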
ironic-15.0.0/releasenotes/notes/node-credentials-cleaning-b1903f49ffeba029.yaml
---
security:
  - |
    Sensitive information is now removed from a node's ``driver_info`` and
    ``instance_info`` fields before sending it to the ramdisk during
    cleaning.

ironic-15.0.0/releasenotes/notes/config-drive-support-for-whole-disk-images-in-iscsi-deploy-0193c5222a7cd129.yaml
---
features:
  - Adds configdrive support for whole disk images for iSCSI-based deploy.
    This works for UEFI-only or BIOS-only images. It does not work for hybrid
    images that are capable of booting from both BIOS and UEFI boot modes.

ironic-15.0.0/releasenotes/notes/volume-connector-and-target-api-dd172f121ab3af8e.yaml
---
features:
  - |
    Adds support for volume connectors and volume targets with the new API
    endpoints ``/v1/volume/connectors`` and ``/v1/volume/targets``. These
    endpoints are available with API version 1.32 or later. These new
    resources are used to connect a node to a volume. A volume connector
    represents connector information of a node, such as an iSCSI initiator.
    A volume target provides volume information, such as an iSCSI target.
    These endpoints are available:

    * ``GET /v1/volume/connectors`` for listing volume connectors
    * ``POST /v1/volume/connectors`` for creating a volume connector
    * ``GET /v1/volume/connectors/<UUID>`` for showing a volume connector
    * ``PATCH /v1/volume/connectors/<UUID>`` for updating a volume connector
    * ``DELETE /v1/volume/connectors/<UUID>`` for deleting a volume connector
    * ``GET /v1/volume/targets`` for listing volume targets
    * ``POST /v1/volume/targets`` for creating a volume target
    * ``GET /v1/volume/targets/<UUID>`` for showing a volume target
    * ``PATCH /v1/volume/targets/<UUID>`` for updating a volume target
    * ``DELETE /v1/volume/targets/<UUID>`` for deleting a volume target

    The volume resources can also be listed as sub-resources of nodes:

    * ``GET /v1/nodes/<node identifier>/volume/connectors``
    * ``GET /v1/nodes/<node identifier>/volume/targets``

    Root endpoints of volume resources are also added. These endpoints
    provide links to volume connectors and volume targets:

    * ``GET /v1/volume``
    * ``GET /v1/nodes/<node identifier>/volume``

    When a volume connector or a volume target is created, updated, or
    deleted, these CRUD notifications can be emitted:

    * ``baremetal.volumeconnector.create.start``
    * ``baremetal.volumeconnector.create.end``
    * ``baremetal.volumeconnector.create.error``
    * ``baremetal.volumeconnector.update.start``
    * ``baremetal.volumeconnector.update.end``
    * ``baremetal.volumeconnector.update.error``
    * ``baremetal.volumeconnector.delete.start``
    * ``baremetal.volumeconnector.delete.end``
    * ``baremetal.volumeconnector.delete.error``
    * ``baremetal.volumetarget.create.start``
    * ``baremetal.volumetarget.create.end``
    * ``baremetal.volumetarget.create.error``
    * ``baremetal.volumetarget.update.start``
    * ``baremetal.volumetarget.update.end``
    * ``baremetal.volumetarget.update.error``
    * ``baremetal.volumetarget.delete.start``
    * ``baremetal.volumetarget.delete.end``
    * ``baremetal.volumetarget.delete.error``
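As a rough sketch of what a create request for one of these resources might carry, the body below targets ``POST /v1/volume/connectors`` (API version 1.32+). The field names and values are assumptions for illustration and should be checked against the Bare Metal API reference.

```python
import json

# Hypothetical volume connector describing a node's iSCSI initiator;
# node_uuid, type and connector_id are assumed field names here.
connector = {
    "node_uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123",
    "type": "iqn",
    "connector_id": "iqn.2017-07.org.openstack:01:d9a51732c3f",
}
payload = json.dumps(connector)
```

A successful create would then emit the ``baremetal.volumeconnector.create.start``/``end`` notifications listed above.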
ironic-15.0.0/releasenotes/notes/drac_host-deprecated-b181149246eecb47.yaml
---
deprecations:
  - For DRAC drivers, the node's ``driver_info["drac_host"]`` property is
    deprecated and will be ignored starting in the Pike release. Please use
    ``driver_info["drac_address"]`` instead.

ironic-15.0.0/releasenotes/notes/rolling-upgrades-ccad5159ca3cedbe.yaml
---
features:
  - Adds support for rolling upgrades, starting from upgrading Ocata to Pike.
    For details, see
    http://docs.openstack.org/ironic/latest/admin/upgrade-guide.html.

ironic-15.0.0/releasenotes/notes/conductor-groups-c22c17e276e63bed.yaml
---
features:
  - |
    Conductors and nodes may be arbitrarily grouped to provide a basic level
    of affinity between conductors and nodes. Conductors use the
    ``[conductor]/conductor_group`` configuration option to set the group to
    which they belong. The same value may be set on one or more nodes in the
    ``conductor_group`` field (available in API version 1.46), and these will
    be matched such that only conductors with a given group will manage nodes
    with the same group. A group name may be up to 255 characters containing
    ``a-z``, ``0-9``, ``_``, ``-``, and ``.``. The group name is
    case-insensitive. The default group is the empty string (``""``). The
    "node list" API endpoint (``GET /v1/nodes``) may also be filtered by
    conductor group in API version 1.46.

ironic-15.0.0/releasenotes/notes/power-fault-recovery-6e22f0114ceee203.yaml
---
features:
  - |
    Adds power failure recovery to ironic. For nodes that ironic had put into
    maintenance mode due to power failure, ironic periodically checks their
    power state, and moves them out of maintenance mode when the power state
    can be retrieved.
    The interval of this check is configured via the
    ``[conductor]power_failure_recovery_interval`` configuration option; the
    default value is 300 (seconds). Set it to 0 to disable this behavior.
upgrade:
  - |
    Power failure recovery introduces a new configuration option
    ``[conductor]power_failure_recovery_interval``, which is enabled and set
    to 300 seconds by default. In case the default value is not suitable for
    the needs or scale of a deployment, please adjust it or turn it off
    during upgrade.
  - |
    Power failure recovery does not apply to nodes that were in maintenance
    mode due to power failure before the upgrade; they have to be manually
    moved out of maintenance mode.

ironic-15.0.0/releasenotes/notes/deprecate-oneview-drivers-5a487e1940bcbbc6.yaml
---
deprecations:
  - |
    The ``oneview`` hardware type, as well as the supporting driver
    interfaces, have been deprecated and are scheduled to be removed from
    ironic in the Stein development cycle. This is due to the lack of
    operational third-party testing to help ensure that the support for
    OneView is functional. OneView third-party CI was shut down just prior to
    the start of the Rocky development cycle, and at the time of this
    deprecation the ironic community has no indication that testing will be
    re-established. Should testing be re-established, this deprecation shall
    be rescinded.

ironic-15.0.0/releasenotes/notes/fix-noop-net-vif-list-a3d8ecee29097662.yaml
---
fixes:
  - |
    Fixes an issue with the ``noop`` network interface where listing the VIFs
    for a node fails with an HTTP 500 Internal Server Error.
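The conductor group naming rule from the ``conductor-groups`` note above (up to 255 characters drawn from ``a-z``, ``0-9``, ``_``, ``-`` and ``.``, matched case-insensitively, with the empty string as the default group) can be sketched as a small validator:

```python
import re

# Illustrative validator only; ironic performs its own validation.
_GROUP_RE = re.compile(r"^[a-z0-9_.-]*$", re.IGNORECASE)


def valid_conductor_group(name):
    return len(name) <= 255 and bool(_GROUP_RE.match(name))
```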
ironic-15.0.0/releasenotes/notes/adds-ramdisk-deploy-interface-39fc61bc77b57beb.yaml
---
features:
  - |
    Adds a ``ramdisk`` deploy interface for deployments that wish to network
    boot to a ramdisk, as opposed to performing a complete traditional
    deployment to physical media. This may be useful in scientific use cases
    or where ephemeral baremetal machines are desired. The ``ramdisk`` deploy
    interface is intended for advanced users and has some particular
    operational caveats that users should be aware of prior to use, such as
    network access list requirements and the inability to leverage
    configuration drives.

ironic-15.0.0/releasenotes/notes/build-iso-from-esp-d156036aa8ef85fb.yaml
---
features:
  - |
    Allows the user to supply an EFI system partition image to ironic, for
    building UEFI-bootable ISO images, in the form of a local file, UUID or
    URI reference. The new ``[conductor]esp_image`` option can be used to
    configure ironic to use a local file.
fixes:
  - |
    Makes ironic build a UEFI-only bootable ISO image (when asked to build a
    UEFI-bootable image) rather than a hybrid BIOS/UEFI-bootable ISO.

ironic-15.0.0/releasenotes/notes/image_checksum_optional-381acf9e441d2a58.yaml
---
features:
  - |
    Adds the capability for the ``instance_info\image_checksum`` value to be
    optional in stand-alone deployments if the
    ``instance_info\image_os_hash_algo`` and
    ``instance_info\image_os_hash_value`` fields are populated.
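The checksum rule in the note above can be sketched as a small predicate: a checksum may be omitted only when both hash fields are present. This is an illustrative restatement of the rule, not ironic's validation code.

```python
def checksum_info_ok(instance_info):
    """True when image_checksum is set, or when both os_hash fields
    (algorithm and value) are populated instead."""
    if instance_info.get("image_checksum"):
        return True
    return bool(instance_info.get("image_os_hash_algo")
                and instance_info.get("image_os_hash_value"))
```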
ironic-15.0.0/releasenotes/notes/fix-gmr-37332a12065c09dc.yaml
---
fixes:
  - |
    Fixes state reporting via Guru Meditation Reports, which previously did
    not work because of an empty ``log_dir`` and no way to configure this
    configuration option.

ironic-15.0.0/releasenotes/notes/remove-iscsi-deploy-ipa-mitaka-c0efa0d5c31933b6.yaml
---
upgrade:
  - |
    There is no longer any support for doing an iSCSI deploy with
    ironic-python-agent (IPA) ramdisk versions < 1.3 (Mitaka or earlier).
    Please upgrade ironic-python-agent to a newer version.

ironic-15.0.0/releasenotes/notes/set-boot-mode-4c42b3fd0b5f5b37.yaml
---
features:
  - |
    Sets ``boot_mode`` in node properties during out-of-band introspection
    for the ``idrac`` hardware type.

ironic-15.0.0/releasenotes/notes/fix-pagination-marker-with-custom-field-query-65ca29001a03e036.yaml
---
fixes:
  - |
    Fixes an issue where the pagination marker was not being set if ``uuid``
    was not in the list of requested fields when executing a list query. The
    affected API endpoints were: port, portgroup, volume_target,
    volume_connector, node and chassis.
    `See story 2003192 for more details `_.

ironic-15.0.0/releasenotes/notes/no-more-legacy-auth-eeb32f907d0ab5de.yaml
---
upgrade:
  - |
    Ironic no longer falls back to loading authentication configuration
    options for accessing other services from the ``[keystone_authtoken]``
    section.
    As a result, the following configuration sections now must contain proper
    authentication options for the appropriate services:

    - glance
    - neutron
    - swift
    - inspector
    - service_catalog

ironic-15.0.0/releasenotes/notes/fix-sync-power-state-last-error-65fa42bad8e38c3b.yaml
---
fixes:
  - Fixes an issue where ``node.last_error`` did not show the actual issue
    when the periodic power state sync failed.

ironic-15.0.0/releasenotes/notes/remove-agent-heartbeat-timeout-abf8787b8477bae7.yaml
---
upgrade:
  - |
    The configuration option ``[agent]heartbeat_timeout`` was deprecated
    before the Ocata release and is now removed; please use
    ``[api]ramdisk_heartbeat_timeout`` instead.

ironic-15.0.0/releasenotes/notes/remove_vagrant-4472cedd0284557c.yaml
---
other:
  - |
    Removes the Vagrant files and the related information in the
    documentation, since the files were too outdated. This could lead to
    errors if developers tried to set up an environment with Vagrant.

ironic-15.0.0/releasenotes/notes/remove-metric-pxe-boot-option-1aec41aebecc1ce9.yaml
---
other:
  - |
    Removes the software metric named
    ``validate_boot_option_for_trusted_boot``. This was the timing for a
    short-lived, internal function that is already included in the
    ``PXEBoot.validate`` metric.

ironic-15.0.0/releasenotes/notes/neutron-port-update-598183909d44396c.yaml
---
features:
  - |
    Changes neutron port updates to use auth values from ironic's neutron
    conf, preventing issues that can arise when a non-admin user manages
    ironic nodes. A check is added to the port update function to verify that
    the user can actually see the port.
    This adds an additional neutron request call to all port updates.

ironic-15.0.0/releasenotes/notes/add_portgroup_support-7d5c6663bb00684a.yaml
---
features:
  - |
    Adds support for port groups with a new endpoint ``/v1/portgroups/``.
    Ports can be combined into port groups to support static Link Aggregation
    Group (LAG) and Multi-Chassis LAG (MLAG) configurations. Note that if the
    optional ``mode`` field for a port group is not specified, its value will
    be set to the value of the configuration option
    ``[DEFAULT]default_portgroup_mode``, which defaults to ``active-backup``.

    Additionally, adds the following API changes:

    * a new endpoint ``/v1/nodes/<node identifier>/portgroups``.
    * a new endpoint ``/v1/portgroups/<portgroup identifier>/ports``.
    * a new field ``portgroup_uuid`` on the port object. This is the UUID of
      the port group that this port belongs to, or None if it does not belong
      to any port group.

    All port group API functions are available starting with version 1.26 of
    the REST API.

ironic-15.0.0/releasenotes/notes/ilo-erase-device-priority-config-509661955a11c28e.yaml
---
deprecations:
  - The ``[ilo]/clean_priority_erase_devices`` configuration option is
    deprecated and will be removed in the Ocata cycle. Please use the
    ``[deploy]/erase_devices_priority`` option instead.

ironic-15.0.0/releasenotes/notes/fail-when-vif-port-id-is-missing-7640669f9d9e705d.yaml
---
fixes:
  - Fails deployment when no ports or port groups are linked to a node. This
    is to avoid active nodes not connected to any tenant network.

ironic-15.0.0/releasenotes/notes/irmc-add-clean-step-reset-bios-config-a8bed625670b7fdf.yaml
---
features:
  - Adds a new boot interface named ``irmc-pxe`` for PXE booting FUJITSU
    PRIMERGY servers.
  - Adds a clean step ``restore_irmc_bios_config`` to restore the BIOS
    config for a node with an ``irmc``-based driver during automatic
    cleaning.
upgrade:
  - Adds a new configuration option
    ``[irmc]clean_priority_restore_irmc_bios_config``, which enables setting
    the priority for the ``restore_irmc_bios_config`` clean step. The default
    value for this option is 0, which means the clean step is disabled.
deprecations:
  - The use of the ``pxe`` boot interface with the ``irmc`` hardware type has
    been deprecated. It is recommended to switch to the new ``irmc-pxe`` boot
    interface as soon as possible.
issues:
  - The ``restore_irmc_bios_config`` clean step does not work for nodes using
    the ``pxe`` boot interface with the ``irmc`` hardware type. The
    ``irmc-pxe`` boot interface has to be used instead.

ironic-15.0.0/releasenotes/notes/node-owner-policy-ports-1d3193fd897feaa6.yaml
---
features:
  - |
    A port is owned by its associated node's owner. This owner is now exposed
    to policy checks, giving ironic admins the option of modifying the policy
    file to allow users specified by a node's ``owner`` field to perform API
    actions on that node's associated ports through the ``is_node_owner``
    rule.

ironic-15.0.0/releasenotes/notes/resources-crud-notifications-70cba9f761da3afe.yaml
---
features:
  - |
    Adds the following notifications:

    - Creation, updates, or deletions of ironic resources (node, port and
      chassis). Event types are
      ``baremetal.<resource>.{create,update,delete}.{start,end,error}``.
    - Start and stop console on a node. Event types are
      ``baremetal.node.console_{set,restore}.{start,end,error}``.
    - Changes in node maintenance status. Event types are
      ``baremetal.node.maintenance_set.{start,end,error}``.
    - When ironic attempts to set the power state on the node. Event types
      are ``baremetal.node.power_set.{start,end,error}``.
    - When ironic detects that the power state on baremetal hardware has
      changed and updates the node in the database appropriately. Event types
      are ``baremetal.node.power_state_corrected.success``.
    - Node provision state changes. Event types are
      ``baremetal.node.provision_set.{start,end,success,error}``.

    These are only emitted when notifications are enabled. For more details,
    see the developer documentation:
    http://docs.openstack.org/developer/ironic/deploy/notifications.html.

ironic-15.0.0/releasenotes/notes/iscsi-whole-disk-cd464d589d029b01.yaml
---
fixes:
  - |
    No longer validates the requested root partition size for whole-disk
    images using the ``iscsi`` deploy interface; see
    `bug 1742451 <https://bugs.launchpad.net/ironic/+bug/1742451>`_
    for details.

ironic-15.0.0/releasenotes/notes/keystoneauth-adapter-opts-ca4f68f568e6cf6f.yaml
---
features:
  - |
    To facilitate automatic discovery of services from the service catalog,
    the configuration file sections for service clients may include these
    configuration options: ``service_type``, ``service_name``,
    ``valid_interfaces``, ``region_name`` and other keystoneauth options.

    These options together must uniquely specify an endpoint for a service
    registered in the service catalog. Alternatively, the
    ``endpoint_override`` option can be used to specify the endpoint.

    Consult the `keystoneauth library documentation `_ for a full list of
    available options, their meaning and possible values.

    Default values for ``service_type`` are set by ironic to sane defaults
    based on required services and their entries in the
    `service types authority `_.

    The ``valid_interfaces`` option defaults to ``['internal', 'public']``.
    The ``region_name`` option defaults to ``None`` and must be explicitly
    set for a multi-regional setup for endpoint discovery to succeed.

    Currently only the ``[service_catalog]`` section supports these options.
deprecations:
  - |
    The configuration option ``[conductor]api_url`` is deprecated and will be
    removed in the Rocky release. Instead, use the
    ``[service_catalog]endpoint_override`` configuration option to set the
    Bare Metal API endpoint if its automatic discovery from the service
    catalog is not desired. This new option defaults to ``None`` and must be
    set explicitly if needed.

ironic-15.0.0/releasenotes/notes/remove-clustered-compute-manager-6b45ed3803be53d1.yaml
---
upgrade:
  - |
    The deprecated ``ironic.nova.ClusteredComputerManager`` module is now
    removed. It is not required with nova >= 14.0.0 (Newton).

ironic-15.0.0/releasenotes/notes/no-glance-v1-d249e8079f46f40c.yaml
---
upgrade:
  - |
    Support for using the Image API v1 was removed. It was removed from
    Glance in the Rocky release.
  - |
    The deprecated option ``[glance]glance_api_version`` was removed. Only v2
    is now used.

ironic-15.0.0/releasenotes/notes/manual-clean-4cc2437be1aea69a.yaml
---
features:
  - Adds support for manual cleaning. This is available with API version
    1.15. For more information, see
    http://docs.openstack.org/developer/ironic/deploy/cleaning.html#manual-cleaning

ironic-15.0.0/releasenotes/notes/json-rpc-0edc429696aca6f9.yaml
---
features:
  - |
    Adds the ability to use JSON RPC for communication between the API and
    conductor services. To use it, set the new ``rpc_transport``
    configuration option to ``json-rpc`` and configure the credentials and
    the ``host_ip`` in the ``json_rpc`` section. Hostnames of all conductors
    must be resolvable for this implementation to work.
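As a rough illustration of the JSON RPC transport described above, the message below follows the generic JSON-RPC 2.0 envelope; the method name and parameters are purely illustrative and not taken from ironic's conductor RPC API.

```python
import json


def rpc_call(method, params, call_id=1):
    """Build a hypothetical JSON-RPC 2.0 request body, such as the API
    service could send to a conductor over HTTP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": call_id,
    })


msg = rpc_call("get_node", {"node_id": "node-1"})
```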
ironic-15.0.0/releasenotes/notes/deployment-cleaning-polling-flag-be13a866a7c302d7.yaml
---
fixes:
  - |
    Fixes an issue with asynchronous deploy steps that poll for completion,
    where the step could fail to execute. The ``deployment_polling`` and
    ``cleaning_polling`` flags may be used by driver implementations to
    signal that the driver is polling for completion. See `story 2003817 `__
    for details.

ironic-15.0.0/releasenotes/notes/decouple-boot-params-2b05806435ad21e5.yaml
---
fixes:
  - |
    Improves interoperability with Redfish BMCs by untying node boot mode
    changes from other boot parameter changes (such as boot device or boot
    frequency).
upgrade:
  - |
    The required minimum version of the ``sushy`` python Redfish API
    client library is now version ``3.2.0``.

ironic-15.0.0/releasenotes/notes/remove-DEPRECATED-options-from-[agent]-7b6cce21b5f52022.yaml
---
upgrade:
  - |
    In the configuration group ``[agent]``, the following options were
    deprecated in the Liberty cycle and have now been removed:

    * ``[agent]/agent_pxe_append_params``
    * ``[agent]/agent_pxe_config_template``

ironic-15.0.0/releasenotes/notes/node-deletion-update-resources-53862e48ab658f77.yaml
---
fixes:
  - Fixed a performance issue for
    'ironic.nova.compute.ClusteredComputeManager' where, during Nova
    instance termination, resources were updated for all Nova hypervisors.

ironic-15.0.0/releasenotes/notes/tempest_plugin_removal-009f9ce8456b16fe.yaml
---
other:
  - |
    The tempest plugin code that was in ``ironic_tempest_plugin/`` has
    been removed. Tempest plugin code has been migrated to the project
    `openstack/ironic-tempest-plugin `_.
    This was an OpenStack wide `goal for the Queens cycle `_.

ironic-15.0.0/releasenotes/notes/add-neutron-request-timeout-1f7372af81f14ddd.yaml
---
fixes:
  - |
    Fixes an issue where the Networking Service performs a pre-flight
    operation which can exceed the prior default of ``30`` seconds. The
    new default is ``45`` seconds, and operators can tune the setting via
    the ``[neutron]request_timeout`` option.

ironic-15.0.0/releasenotes/notes/drac-fix-prepare-cleaning-d74ba45135d84531.yaml
---
fixes:
  - Fixes a DRAC deploy interface failure when automated cleaning is
    called without any clean step.

ironic-15.0.0/releasenotes/notes/add-port-internal-info-b7e02889416570f7.yaml
---
features:
  - A new dictionary field ``internal_info`` is added to the port API
    object. It is read-only from the API side, and can contain any
    internal information ironic needs to store for the port.
    ``cleaning_vif_port_id`` is being stored inside this dictionary.

ironic-15.0.0/releasenotes/notes/raid-hints-c27097ded0137f7c.yaml
---
features:
  - |
    Target devices for software RAID can now be specified in the form of
    device hints (the same as for root devices) in the ``physical_disks``
    parameter of a logical disk configuration. This requires
    ironic-python-agent from the Ussuri release series.

ironic-15.0.0/releasenotes/notes/ilo5-oob-raid-a0eac60f7d77a4fc.yaml
---
features:
  - Adds the new hardware type ``ilo5``. In addition to all the other
    hardware interfaces that the ``ilo`` hardware type supports, this has
    one new RAID interface, ``ilo5``.
  - Adds functionality to perform out-of-band RAID operations for iLO5
    based HPE ProLiant servers.
upgrade:
  - The ``create_raid_configuration``, ``delete_raid_configuration`` and
    ``read_raid_configuration`` interfaces of the 'proliantutils' library
    have been enhanced to support out-of-band RAID operations for the
    ``ilo5`` hardware type. To leverage this feature, the 'proliantutils'
    library needs to be upgraded to version '2.7.0'.

ironic-15.0.0/releasenotes/notes/cleanup-ipxe-f1349e2ac9ec2825.yaml
---
fixes:
  - |
    Now passes proper flags during clean-up of iPXE boot environments, so
    that no leftovers remain after node tear down.

ironic-15.0.0/releasenotes/notes/remove-deprecated-deploy-erase-devices-iterations-55680ab95cbce3e9.yaml
---
upgrade:
  - |
    The configuration option ``[deploy]/erase_devices_iterations`` was
    deprecated in the Newton cycle (6.0.0). It is no longer supported.
    Please use the option ``[deploy]/shred_random_overwrite_iterations``
    instead.

ironic-15.0.0/releasenotes/notes/add-choice-to-some-options-9fb327c48e6bfda1.yaml
---
upgrade:
  - |
    Adds a ``choices`` parameter to config options. Invalid values will be
    rejected when the option is first accessed, which can happen in the
    middle of deployment.
    ================================= ================
    Option                            Choices
    ================================= ================
    [DEFAULT]/auth_strategy           keystone, noauth
    [glance]/auth_strategy            keystone, noauth
    [glance]/glance_protocol          http, https
    [neutron]/auth_strategy           keystone, noauth
    [amt]/protocol                    http, https
    [irmc]/remote_image_share_type    CIFS, NFS
    [irmc]/port                       443, 80
    [irmc]/auth_method                basic, digest
    [irmc]/sensor_method              ipmitool, scci
    ================================= ================

ironic-15.0.0/releasenotes/notes/drop-ironic-lib-rootwrap-filters-f9224173289c1e30.yaml
---
other:
  - |
    The rootwrap filter file called "ironic-lib.filters" is no longer part
    of Ironic. The same file is available from the ironic-lib module,
    which is already an install requirement.

ironic-15.0.0/releasenotes/notes/raid-to-support-jbod-568f88207b9216e2.yaml
---
features:
  - Added support for JBOD volumes in RAID configuration.

ironic-15.0.0/releasenotes/notes/remove-manage-tftp-0c2f4f417b92b1ee.yaml
---
upgrade:
  - Removes the deprecated option "[agent]/manage_tftp". Configuration
    files should instead use the "[agent]/manage_agent_boot" option.

ironic-15.0.0/releasenotes/notes/add-node-resource-class-c31e26df4196293e.yaml
---
features:
  - Adds a ``resource_class`` field to the node resource, which will be
    used by Nova to define which nodes may quantitatively match a Nova
    flavor. Operators should populate this accordingly before deploying
    the Ocata version of Nova.
upgrade:
  - Adds a ``resource_class`` field to the node resource, which will be
    used by Nova to define which nodes may quantitatively match a Nova
    flavor.
    Operators should populate this accordingly before deploying the Ocata
    version of Nova.

ironic-15.0.0/releasenotes/notes/ansible-loops-de0eef0d5b79a9ff.yaml
---
upgrade:
  - |
    Changes the minimum version of Ansible for use with the ``ansible``
    ``deploy_interface`` to version 2.5.

ironic-15.0.0/releasenotes/notes/add-healthcheck-middleware-86120fa07a7c8151.yaml
---
features:
  - |
    Adds the healthcheck middleware from oslo, configurable via the
    ``[healthcheck]/enabled`` option. This middleware adds a status check
    at `/healthcheck`. This is useful for load balancers to determine if a
    service is up (and add or remove it from rotation), or for monitoring
    tools to see the health of the server. This endpoint is
    unauthenticated, as not all load balancers or monitoring tools support
    authenticating with a health check endpoint.

ironic-15.0.0/releasenotes/notes/fix-instance-master-path-config-fa524c907a7888e5.yaml
---
fixes:
  - |
    Fixes an issue where the master instance image cache could not be
    disabled. The configuration option ``[pxe]/instance_master_path`` may
    now be set to the empty string to disable the cache.

ironic-15.0.0/releasenotes/notes/remove-elilo-support-7fc1227f66e59084.yaml
---
upgrade:
  - |
    Support for `elilo` has been removed, as support was deprecated and
    `elilo` has been dropped by most Linux distributions. Users should
    migrate to another PXE loader.

ironic-15.0.0/releasenotes/notes/no-classic-snmp-b77d267b535da216.yaml
---
upgrade:
  - |
    The deprecated ``pxe_snmp`` classic driver has been removed. Please
    use the ``snmp`` hardware type instead.
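As an illustrative sketch of the healthcheck middleware described in the ``add-healthcheck-middleware`` note above (the option name comes from the note; everything else about exposing the endpoint is deployment-specific):

```ini
# ironic.conf -- minimal sketch for the healthcheck middleware
[healthcheck]
# Expose the unauthenticated /healthcheck endpoint on the ironic-api
# service so load balancers and monitoring tools can probe it.
enabled = true
```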
ironic-15.0.0/releasenotes/notes/dbsync-check-version-c71d5f4fd89ed117.yaml
---
upgrade:
  - |
    The ``ironic-dbsync`` command will check the database object (record)
    versions to make sure they are compatible with the new ironic release,
    before doing the ``upgrade`` or ``online_data_migrations``.
other:
  - |
    The ``ironic-dbsync`` command will check the database object (record)
    versions to make sure they are compatible with the new ironic release,
    before doing the ``upgrade`` or ``online_data_migrations``.

ironic-15.0.0/releasenotes/notes/oslo-proxy-headers-middleware-22188a2976f8f460.yaml
---
features:
  - |
    The Ironic API service now supports HTTP proxy headers parsing with
    the help of the oslo.middleware package, enabled via the new option
    ``[oslo_middleware]/enable_proxy_headers_parsing`` (``False`` by
    default). This enables more complex setups of the Ironic API service,
    for example when the same service instance serves both internal and
    public API endpoints via separate proxies. When proxy headers parsing
    is enabled, the value of the ``[api]/public_endpoint`` option is
    ignored.

ironic-15.0.0/releasenotes/notes/add-db-deadlock-handling-6bc10076537f3727.yaml
---
fixes:
  - Fixes an issue which caused the conductor's periodic tasks to stop
    executing. See https://bugs.launchpad.net/ironic/+bug/1637210.
features:
  - Adds DBDeadlock handling, which may improve stability when using
    Galera. See https://bugs.launchpad.net/ironic/+bug/1639338. The number
    of retries depends on the configuration option
    ``[database]db_max_retries``.
upgrade:
  - All DB API methods doing database writes now retry on deadlock.
    The ``[database]db_max_retries`` configuration option specifies the
    maximum number of times to retry, and can be customised if necessary.

ironic-15.0.0/releasenotes/notes/type-error-str-6826c53d7e5e1243.yaml
---
fixes:
  - |
    Returns the correct error message on providing an invalid reference
    to ``image_source``. Previously an internal error was raised.

ironic-15.0.0/releasenotes/notes/adds-ramdisk-deploy-interface-support-to-ilo-vmedia-1a7228a834465633.yaml
---
features:
  - |
    Adds support for booting a ramdisk using virtual media to the
    ``ilo-virtual-media`` boot interface when an ironic node is configured
    with the ``ramdisk`` deploy interface.

ironic-15.0.0/releasenotes/notes/fix-get-boot-device-not-persistent-de6159d8d2b60656.yaml
---
fixes:
  - |
    The ``oneview`` management interface now correctly detects whether the
    current boot device setting is persistent at the machine's iLO.
    Previously it always returned ``True``. See
    https://bugs.launchpad.net/ironic/+bug/1706725 for details.

ironic-15.0.0/releasenotes/notes/reset-interface-e62036ac76b87486.yaml
---
features:
  - |
    Starting with API version 1.45, PATCH requests to ``/v1/nodes/``
    accept the new query parameter ``reset_interfaces``. It can be
    provided whenever the ``driver`` field is updated. If set to 'true',
    all hardware interfaces will be reset to their defaults, except for
    ones updated in the same request.
ironic-15.0.0/releasenotes/notes/release-reservation-on-conductor-stop-6ebbcdf92da57ca6.yaml
---
fixes:
  - |
    Fixes an issue where a node may be locked from changes if a
    conductor's hostname case is changed before restarting the conductor
    service. The reservation is now cleaned up once the conductor stops.

ironic-15.0.0/releasenotes/notes/json-rpc-bind-a0348cc6f5efe812.yaml
---
fixes:
  - |
    The internal JSON RPC server now binds to ``::`` by default, allowing
    it to work correctly with IPv6.

ironic-15.0.0/releasenotes/notes/async_bios_clean_step-7348efff3f6d02c1.yaml
---
fixes:
  - |
    Fixes a bug in executing an asynchronous BIOS interface clean step by
    honoring the state returned by the BIOS interface clean step, which
    was previously ignored.

ironic-15.0.0/releasenotes/notes/deprecate-support-for-glance-v1-8b194e6b20cbfebb.yaml
---
deprecations:
  - |
    Support for the Image service v1 API has been deprecated along with
    the ``[glance]/glance_api_version`` configuration option and will be
    removed in the `Queens` release.

ironic-15.0.0/releasenotes/notes/no-sensors-in-maintenance-7a0ecf418336d105.yaml
---
other:
  - The conductor no longer tries to collect or report sensors data for
    nodes in maintenance mode. See `bug 1652741 `_.

ironic-15.0.0/releasenotes/notes/conf-debug-ipa-1d75e2283ca83395.yaml
---
upgrade:
  - The ``[DEFAULT]/debug`` configuration option now also enables debug
    logs for the ``ironic-python-agent`` ramdisk.
    If the ``ipa-debug`` kernel option is already present in the
    ``[pxe]/pxe_append_params`` configuration option, ironic will not
    overwrite it.

ironic-15.0.0/releasenotes/notes/story-2006223-ilo-hpsum-firmware-update-fails-622883e4785313c1.yaml
---
fixes:
  - |
    Fixes an issue in updating firmware using the ``update_firmware_sum``
    clean step from the management interface of the ``ilo`` hardware type,
    which failed with an error stating that it was unable to connect to
    the iLO address due to an authentication failure. See `story 2006223 `__
    for details.

ironic-15.0.0/releasenotes/notes/add-deploy-steps-ilo-management-interface-9d0f45954eda643a.yaml
---
features:
  - |
    Adds support for deploy steps to the ``management`` interface of the
    ``ilo`` hardware type. The methods ``reset_ilo``,
    ``reset_ilo_credential``, ``reset_bios_to_default``,
    ``reset_secure_boot_keys_to_default``, ``clear_secure_boot_keys`` and
    ``update_firmware`` can be used as deploy steps.

ironic-15.0.0/releasenotes/notes/fix-ipxe-interface-without-opt-enabled-4fa2f83975295e20.yaml
---
fixes:
  - |
    Fixes an issue where, when the ``ipxe`` interface is in use with
    ``[pxe]ipxe_enabled`` set to false, the PXE configuration was not
    handled properly, which prevented the machine from performing a
    successful iPXE boot.

ironic-15.0.0/releasenotes/notes/agent-http-provisioning-d116b3ff36669d16.yaml
---
features:
  - Adds the ability to provision with the ``direct`` deploy interface and
    a custom HTTP service running on the ironic conductor node. A new
    configuration option ``[agent]image_download_source`` is introduced.
    When set to ``swift``, the ``direct`` deploy interface uses a tempurl
    generated via the Object service as the source of the instance image
    during provisioning; this is the default configuration. When set to
    ``http``, the ``direct`` deploy interface downloads the instance image
    from the Image service and caches the image on the ironic conductor
    node. The cached instance images are referenced by symbolic links
    located in the subdirectory ``[deploy]http_image_subdir`` under the
    path ``[deploy]http_root``. The custom HTTP server running on the
    ironic conductor node must be configured properly so that IPA has
    unauthenticated access to the image URL described above.

ironic-15.0.0/releasenotes/notes/update-port-pxe-enabled-f954f934209cbf5b.yaml
---
fixes:
  - |
    Fixes a bug where an ironic port was not updated in node introspection
    as per the PXE enabled setting for the ``idrac`` hardware type. See
    bug `2004340 `_ for details.

ironic-15.0.0/releasenotes/notes/fix-agent-ilo-temp-image-cleanup-711429d0e67807ae.yaml
---
fixes:
  - Fixes an issue where the `agent_ilo` driver did not correctly clean up
    temporary files created during the deploy process.

ironic-15.0.0/releasenotes/notes/add-id-and-uuid-filtering-to-sqalchemy-api.yaml
---
fixes:
  - Fixes `bug 1749755 `_ causing timeouts to not work properly because an
    unsupported sqlalchemy filter was being used.

ironic-15.0.0/releasenotes/notes/remove-app-wsgi-d5887ca28e4b9f00.yaml
---
upgrade:
  - The deprecated ``ironic/api/app.wsgi`` script has been removed. The
    automatically generated ``ironic-api-wsgi`` script must be used
    instead.
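A minimal configuration sketch for the HTTP-based image download path described in the ``agent-http-provisioning`` note above. The option names come from the note; the paths and URL are illustrative placeholders:

```ini
# ironic.conf -- illustrative sketch for HTTP-based image provisioning
[agent]
# Serve instance images from the conductor's HTTP server instead of
# Object service tempurls.
image_download_source = http

[deploy]
# Root of the conductor's HTTP server (placeholder path) and the
# subdirectory where cached instance images are symlinked.
http_root = /httpboot
http_image_subdir = agent_images
# Base URL the agent uses to reach that HTTP server (placeholder).
http_url = http://192.0.2.10:8080
```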
ironic-15.0.0/releasenotes/notes/remove-ipmi-retry-timeout-c1b2cf7df6771a43.yaml
---
other:
  - |
    The deprecated configuration option ``[ipmi]retry_timeout`` was
    removed; use ``[ipmi]command_retry_timeout`` instead.

ironic-15.0.0/releasenotes/notes/xenserver-ssh-driver-398084fe91ac56f1.yaml
---
features:
  - Adds support to the SSH power driver for XenServer VMs.

ironic-15.0.0/releasenotes/notes/no-classic-ipmi-7ec52a7b01e40536.yaml
---
upgrade:
  - |
    The deprecated classic drivers ``pxe_ipmitool`` and ``agent_ipmitool``
    have been removed. Please use the ``ipmi`` hardware type instead.

ironic-15.0.0/releasenotes/notes/update-boot_mode-for-cleaning-scenario-for-ilo-hardware-type-ebca86da8fc271f6.yaml
---
fixes:
  - |
    Fixes an issue where the ``ilo`` hardware type would not properly
    update the boot mode on the bare metal machine for cleaning as per the
    given ``boot_mode`` in the node's properties/capabilities. See `bug
    1559835 `_ for more details.

ironic-15.0.0/releasenotes/notes/radosgw-temp-url-b04aac50698b4461.yaml
---
features:
  - Adds support for using Glance with a Ceph backend via the RADOS
    Gateway Swift API, with the Agent deploy driver.

ironic-15.0.0/releasenotes/notes/periodic-tasks-drivers-ae9cddab88b546c6.yaml
---
deprecations:
  - Putting periodic tasks on a driver object (rather than an interface)
    is deprecated.
    Driver developers should move periodic tasks from driver objects to
    interface objects.

ironic-15.0.0/releasenotes/notes/erase-devices-metadata-config-f39b6ca415a87757.yaml
---
features:
  - Adds a new ``[deploy]/erase_devices_metadata_priority`` configuration
    option to allow operators to configure the priority of (or disable)
    the "erase_devices_metadata" cleaning step.
upgrade:
  - The new "erase_devices_metadata" cleaning step is enabled by default
    (if available) in the ironic-python-agent project (priority 99).
    Wiping the devices' metadata is usually very fast and shouldn't add
    much time (if any) to the overall cleaning process. Operators wanting
    to disable this cleaning step can do so by setting the
    ``[deploy]/erase_devices_metadata_priority`` configuration option
    to 0.

ironic-15.0.0/releasenotes/notes/remove-oneview-9315c7b926fd4aa2.yaml
---
other:
  - |
    The ``oneview`` hardware type and related interfaces have been removed
    due to a lack of maintainer and 3rd-party CI. Please see `story
    2001924 `_ for additional information.

ironic-15.0.0/releasenotes/notes/remove-deprecated-dhcp-provider-method-89926a8f0f4793a4.yaml
---
upgrade:
  - |
    Removes the deprecated DHCP provider method ``update_port_address``.
    For users who created their own network interfaces or DHCP providers,
    the logic should be moved to a custom network interface's
    ``port_changed`` and ``portgroup_changed`` methods.

    The following methods should be implemented by custom network
    interfaces:

    * ``vif_list``: List attached VIF IDs for a node.
    * ``vif_attach``: Attach a virtual network interface to a node.
    * ``vif_detach``: Detach a virtual network interface from a node.
    * ``port_changed``: Handle any actions required when a port changes.
    * ``portgroup_changed``: Handle any actions required when a port group
      changes.
    * ``get_current_vif``: Return the VIF ID attached to a port or port
      group object.

ironic-15.0.0/releasenotes/notes/dhcpv6-stateful-address-count-0f94ac6a55bd9e51.yaml
---
features:
  - |
    For baremetal operations on DHCPv6-stateful networks, multiple IPv6
    addresses can now be allocated for neutron ports created for
    provisioning, cleaning, rescue or inspection. The new parameter
    ``[neutron]/dhcpv6_stateful_address_count`` controls the number of
    addresses to allocate (Default: 4).
fixes:
  - |
    The 'no address available' problem seen when network booting on
    DHCPv6-stateful networks is fixed with the support for allocating
    multiple IPv6 addresses. See `bug 1861032 `_.

ironic-15.0.0/releasenotes/notes/fix-disk-identifier-overwrite-42b33a5a0f7742d8.yaml
---
fixes:
  - Fixes the handling of whole disk images with disk identifier
    0x00000000. Instances failed to boot as the identifier in the boot
    config was overwritten during config drive creation. See `bug 1685093 `_.

ironic-15.0.0/releasenotes/notes/remove-vifs-on-teardown-707c8e40c46b6e64.yaml
---
upgrade:
  - |
    The behavior for retention of VIF interface attachments has changed.

    If your use of the Bare Metal service is reliant upon the behavior of
    the VIFs being retained, which was introduced as a behavior change
    during the Ocata cycle, then you must update your tooling to
    explicitly re-add the VIF attachments prior to deployment.
fixes:
  - |
    Fixes a potential case of VIF records being orphaned, as the service
    now removes all records of VIF attachments upon the teardown of a
    deployed node.
    This is in order to resolve issues where it is operationally
    impossible in some circumstances to remove a VIF attachment while a
    node is being undeployed, as the Compute service will only attempt to
    remove the VIF for five minutes. See `bug 1743652 `_ for more details.

ironic-15.0.0/releasenotes/notes/rescue-node-87e3b673c61ef628.yaml
---
features:
  - |
    Adds support for rescuing and unrescuing nodes:

    - Adds version 1.38 of the Bare Metal API, which includes:

      * A node in the ``active`` provision state can be rescued via the
        ``PUT /v1/nodes/{node_ident}/states/provision`` API, by specifying
        ``rescue`` as the ``target`` value, and a ``rescue_password``
        value. When the node has been rescued, it will be in the
        ``rescue`` provision state. A rescue ramdisk will be running,
        configured with the specified ``rescue_password``, and listening
        with ssh on the tenant network.
      * A node in the ``rescue`` provision state can be unrescued (to the
        ``active`` state) via the ``PUT
        /v1/nodes/{node_ident}/states/provision`` API, by specifying
        ``unrescue`` as the ``target`` value.
      * The ``rescue_interface`` field of the node resource. A rescue
        interface can be set when creating or updating a node.
      * The ``default_rescue_interface`` and ``enabled_rescue_interfaces``
        fields of the driver resource.

    - Adds new configuration options for the rescue feature:

      * Rescue interfaces are enabled via
        ``[DEFAULT]/enabled_rescue_interfaces``. A default rescue
        interface to use when creating or updating nodes can be specified
        with ``[DEFAULT]/default_rescue_interface``.
      * Adds ``[conductor]/check_rescue_state_interval`` and
        ``[conductor]/rescue_callback_timeout`` to fail the rescue
        operation upon timeout, for the nodes that are stuck in the rescue
        wait state.
      * Adds support for providing a ``rescuing`` network (UUID or name)
        with its security groups, using the new options
        ``[neutron]/rescuing_network`` and
        ``[neutron]/rescuing_network_security_groups`` respectively. It is
        required to provide ``[neutron]/rescuing_network``. Alternatively,
        the rescuing network can be provided per node via the node's
        ``driver_info['rescuing_network']`` field.

    - Adds the ``rescue_interface`` field to the following node-related
      notifications:

      * ``baremetal.node.create.*``, new payload version 1.3
      * ``baremetal.node.update.*``, new payload version 1.3
      * ``baremetal.node.delete.*``, new payload version 1.3
      * ``baremetal.node.maintenance.*``, new payload version 1.5
      * ``baremetal.node.console.*``, new payload version 1.5
      * ``baremetal.node.power_set.*``, new payload version 1.5
      * ``baremetal.node.power_state_corrected.*``, new payload version 1.5
      * ``baremetal.node.provision_set.*``, new payload version 1.5

ironic-15.0.0/releasenotes/notes/rebuild-configdrive-f52479fd55b0f5ce.yaml
---
features:
  - |
    Starting with the Bare Metal API version 1.35, it is possible to
    provide a configdrive when rebuilding a node.
fixes:
  - |
    Fixes the problem of an old configdrive (used for deploying the node)
    being used again when rebuilding the node. Starting with the Bare
    Metal API version 1.35, it is possible to specify a different
    configdrive when rebuilding a node.

ironic-15.0.0/releasenotes/notes/backfill_version_column_db_race_condition-713fa05832b93ca5.yaml
---
fixes:
  - |
    Fixes an issue when running ``ironic-dbsync online_data_migrations``.
    The value of an object's new ``version`` column might have been
    incorrectly changed from a newer object version to an older object
    version, due to a race condition. This is no longer the case.
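A configuration sketch tying together the rescue-related options listed in the ``rescue-node`` note above. The option and section names come from the note; the interface names, network UUID, security group and timeout values are illustrative placeholders:

```ini
# ironic.conf -- illustrative rescue configuration
[DEFAULT]
enabled_rescue_interfaces = agent,no-rescue
default_rescue_interface = agent

[conductor]
# Periodic check and timeout (seconds; placeholder values) used to fail
# rescue operations stuck in the "rescue wait" state.
check_rescue_state_interval = 60
rescue_callback_timeout = 1800

[neutron]
# Network used while the rescue ramdisk is running (placeholder UUID),
# with its security groups.
rescuing_network = 5ebc78f1-4bbb-4c15-8c89-7b6a55e0b1d7
rescuing_network_security_groups = rescue-sg
```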
ironic-15.0.0/releasenotes/notes/sort_key_allowed_field-091f8eeedd0a2ace.yaml
---
fixes:
  - |
    When returning lists of nodes, port groups, or ports, checks the sort
    key to make sure the field is available in the requested API version.
    A 406 (Not Acceptable) HTTP status is returned if the field is not
    available.

ironic-15.0.0/releasenotes/notes/idrac-fix-reboot-failure-c740e765ff41bcf0.yaml
---
fixes:
  - |
    Fixed a bug where rebooting a node managed by the ``idrac`` hardware
    type when using the WS-MAN power interface sometimes fails with a
    ``The command failed to set RequestedState`` error. See bug `2007487 `_
    for details.

ironic-15.0.0/releasenotes/notes/fix-cve-2016-4985-b62abae577025365.yaml
---
security:
  - A critical security vulnerability (CVE-2016-4985) was fixed in this
    release. Previously, a client with network access to the ironic-api
    service was able to bypass Keystone authentication and retrieve all
    information about any Node registered with Ironic, if they knew (or
    were able to guess) the MAC address of a network card belonging to
    that Node, by sending a crafted POST request to the
    /v1/drivers/$DRIVER_NAME/vendor_passthru resource. Ironic's
    policy.json configuration is now respected when responding to this
    request such that, if passwords should be masked for other requests,
    they are also masked for this request.

ironic-15.0.0/releasenotes/notes/ibmc-38-169438974508f62e.yaml
---
fixes:
  - |
    Fixes incorrect parsing of ``ibmc_address`` with a port but without a
    schema in the ``ibmc`` hardware type on Python 3.8.
ironic-15.0.0/releasenotes/notes/allow-allocation-update-94d862c3da454be2.yaml
---
features:
  - |
    API version 1.57 adds a REST API endpoint for updating an existing
    allocation. Only the ``name`` and ``extra`` fields are allowed to be
    updated.

ironic-15.0.0/releasenotes/notes/non-persistent-boot-5e3a0cd78e9dc91b.yaml
---
fixes:
  - |
    Fixes a bug which caused boot device changes to be persistent in
    places where they did not need to be during the cleaning and
    deployment phases, due to the default behavior of the PXE interface
    forcing a persistent change. For more information, see `bug 1701721 `_.

ironic-15.0.0/releasenotes/notes/remove-policy-json-be92ffdba7bda951.yaml
---
upgrade:
  - |
    The default policy file located at ``etc/ironic/policy.json`` was
    removed in this release, as no policy file is required to run the
    ironic-api service.
other:
  - |
    The sample configuration file located at
    ``etc/ironic/ironic.conf.sample`` and the sample policy file located
    at ``etc/ironic/policy.json.sample`` were removed in this release, as
    they are now published with documentation. See `the sample
    configuration file `_ and `the sample policy file `_.

ironic-15.0.0/releasenotes/notes/fix-vif-detach-fca221f1a1c0e9fa.yaml
---
fixes:
  - Fixes an issue where it was impossible to detach a manually attached
    VIF from a port (port.extra) when the port is in a portgroup, using
    the DELETE ``v1/nodes//vifs`` API.
ironic-15.0.0/releasenotes/notes/drac-migrate-to-dracclient-2bd8a6d1dd3fdc69.yaml
---
fixes:
  - The DRAC driver migrated from ``pywsman`` to ``python-dracclient``,
    fixing the driver lockup issue caused by the python interpreter not
    handling signals when execution is handed to the C library.
  - Fixes an issue with setting the boot device multiple times without a
    reboot in the DRAC driver, by setting the boot device only before
    power management operations.
upgrade:
  - The dependency for the DRAC driver changed from ``pywsman`` to
    ``python-dracclient`` with version >= 0.0.5. Exceptions thrown by the
    driver and the return values of the ``set_bios_config``,
    ``commit_bios_config`` and ``abandon_bios_config`` methods changed on
    the vendor-passthru interface.

ironic-15.0.0/releasenotes/notes/glance-v2-83b04fec247cd22f.yaml
---
upgrade:
  - |
    Ironic now uses only the Image Service (glance) v2 API by default. Use
    of the deprecated v1 API for certain basic tasks can still be enabled
    by setting ``[glance]/glance_api_version`` to ``1``. This option,
    however, does not affect temporary URL generation, as it always
    requires the v2 API.

ironic-15.0.0/releasenotes/notes/removal-pre-allocation-for-oneview-09310a215b3aaf3c.yaml
---
upgrade:
  - |
    The pre-allocation model for OneView drivers was deprecated in the
    Newton cycle (Ironic 6.1.0) and all pertaining code was marked for
    removal during the Pike cycle. From now on, OneView drivers work only
    with the dynamic allocation model.
ironic-15.0.0/releasenotes/notes/inspector-session-179f83cbb0dc169b.yaml0000664000175000017500000000050413652514273025604 0ustar zuulzuul00000000000000--- fixes: - The Ironic Inspector inspection interface will now fetch the service endpoint from the service catalog if "service_url" is not provided and keystone support is enabled. upgrade: - The minimum required version of python-ironic-inspector-client was bumped to 1.5.0 (released as part of the Mitaka cycle). ironic-15.0.0/releasenotes/notes/fix-boot-url-for-v6-802abde9de8ba455.yaml0000664000175000017500000000044113652514273025640 0ustar zuulzuul00000000000000--- fixes: - | Fixes TFTP URL generation, which did not account for IPv6 addresses needing to be wrapped in '[]' in order to be parsed. - | Fixes DHCP option parameter generation to correctly return the proper values for IPv6 based booting when iPXE is in use. ironic-15.0.0/releasenotes/notes/apache-multiple-workers-11d4ba52c89a13e3.yaml0000664000175000017500000000031513652514273026564 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue with requests to the ironic API service sometimes timing out when running under Apache. This was due to mixing two concurrency models (for handling multiple threads). ironic-15.0.0/releasenotes/notes/check_protocol_for_ironic_api-32f35c93a140d3ae.yaml0000664000175000017500000000065713652514273030071 0ustar zuulzuul00000000000000--- fixes: - A ``[conductor]/api_url`` value specified in the configuration file that does not start with either ``https://`` or ``http://`` is no longer allowed. An incorrect value led to deployment failure on the ironic-python-agent side. This misconfiguration will now be detected during ironic-conductor and ironic-api startup. An exception will be raised and an error about the invalid value will be logged. 
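The ``[conductor]/api_url`` startup check described in the last note above amounts to a simple scheme test. A stand-alone sketch of the idea — illustrative only, not ironic's actual implementation:

```python
def validate_api_url(api_url):
    """Reject an api_url value that lacks an http(s) scheme.

    Without this check, a bad value would only surface later as a
    deployment failure on the ironic-python-agent side.
    """
    if not api_url.startswith(("http://", "https://")):
        raise ValueError(
            "Invalid [conductor]/api_url %r: the value must start with "
            "http:// or https://" % api_url)
    return api_url

# Example: a host without a scheme is rejected at startup.
validate_api_url("https://ironic.example.com:6385")
```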
ironic-15.0.0/releasenotes/notes/resource-class-change-563797d5a3c35683.yaml0000664000175000017500000000100413652514273026002 0ustar zuulzuul00000000000000--- upgrade: - | Changing the ``resource_class`` field of a node in the ``active`` state or any of the transient states is no longer possible. Please update your scripts to only set a resource class for nodes that are not deployed to. Setting a resource class for nodes that do not have it is still possible. fixes: - | No longer allows changing the ``resource_class`` field for ``active`` nodes if it was already set to a non-empty value. Doing so would break the Compute scheduler. ironic-15.0.0/releasenotes/notes/clean-nodes-stuck-in-cleaning-on-startup-443823ea4f937965.yaml0000664000175000017500000000032713652514273031450 0ustar zuulzuul00000000000000--- fixes: - When a conductor managing a node dies mid-cleaning, the node would get stuck in the CLEANING state. Now, upon conductor startup, nodes in the CLEANING state will be moved to the CLEANFAIL state. ironic-15.0.0/releasenotes/notes/glance-deprecations-21e7014b72a1bcef.yaml0000664000175000017500000000066113652514273026007 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated options ``glance_api_servers``, ``glance_api_insecure``, ``glance_cafile`` and ``auth_strategy`` from the ``[glance]`` section have been removed. Please use the corresponding keystoneauth options instead. deprecations: - | The configuration option ``[glance]glance_num_retries`` has been renamed to ``[glance]num_retries``. The old name will be removed in a future release. ironic-15.0.0/releasenotes/notes/pxe-takeover-d8f14bcb60e5b121.yaml0000664000175000017500000000041213652514273024507 0ustar zuulzuul00000000000000--- fixes: - | Drivers using the ``PXEBoot`` boot interface now correctly support node take-over for netboot-ed nodes in ``ACTIVE`` state. During take-over, the PXE environment is first re-created before attempting to switch it to "service mode". 
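The ``resource_class`` restriction in the first note of this group reduces to a small state check. A stand-alone sketch — illustrative only; the state names follow ironic's provision states, and the helper is not part of ironic:

```python
# "active" plus transient provision states in which a non-empty
# resource_class may no longer be changed.
PROTECTED_STATES = {"active", "deploying", "wait call-back",
                    "cleaning", "clean wait", "deleting"}

def can_change_resource_class(provision_state, current, new):
    """Return True when updating resource_class is allowed."""
    if current == new:
        return True
    # Filling in a previously empty resource class is always allowed.
    if not current:
        return True
    return provision_state not in PROTECTED_STATES
```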
ironic-15.0.0/releasenotes/notes/remove-exception-message-92100debeb40d4c7.yaml0000664000175000017500000000034413652514273027014 0ustar zuulzuul00000000000000--- upgrade: - Removes support for the "message" attribute from the "IronicException" class. Subclasses of "IronicException" should instead use the "_msg_fmt" attribute. This change is only relevant to developers. ironic-15.0.0/releasenotes/notes/add-vif-attach-detach-support-99eca43eea6e5a30.yaml0000664000175000017500000000142013652514273027706 0ustar zuulzuul00000000000000--- features: - Adds support for attaching and detaching network VIFs to ironic ports and port groups by using the ``/v1/nodes//vifs`` API endpoint that was added in API version 1.28. When attaching a VIF to a node, it is attached to the first free port group. A port group is considered free if it has no VIFs attached to any of its ports. Otherwise, only the unattached ports of this port group are available for attachment. If there are no free port groups, the first available port is used instead, where ports with ``pxe_enabled`` set to ``True`` have higher priority. deprecations: - Using ``port.extra['vif_port_id']`` for attaching and detaching VIFs to ports or port groups is deprecated and will be removed in the Pike release. ironic-15.0.0/releasenotes/notes/fix-virtualbox-localboot-not-working-558a3dec72b5116b.yaml0000664000175000017500000000060513652514273031244 0ustar zuulzuul00000000000000--- fixes: - Fixed a VirtualBox issue where Ironic failed to set a VirtualBox VM's boot device while the VM was powered on. This bug caused two problems: 1. VirtualBox could not deploy VMs in local boot mode. 2. Ironic failed to set the boot device while a VirtualBox VM was powered on, and also failed to get the correct boot device from the Ironic API call while the VM was powered on. 
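The attachment-point selection order described in the VIF feature note above can be sketched with plain data structures. This is a simplified, illustrative model — not ironic's internal objects — which ignores the partially-used port group case:

```python
def pick_attachment_point(portgroups, ports):
    """Pick where a new VIF lands, following the order described above.

    portgroups: list of (name, [vif-or-None per member port]) tuples.
    ports: list of (name, pxe_enabled, vif-or-None) tuples.
    """
    # 1. The first port group with no VIFs attached to any of its ports.
    for name, vifs in portgroups:
        if not any(vifs):
            return name
    # 2. Otherwise, the first available port; pxe_enabled ports win.
    free = [(name, pxe) for name, pxe, vif in ports if vif is None]
    free.sort(key=lambda p: not p[1])  # pxe_enabled=True sorts first
    if free:
        return free[0][0]
    raise RuntimeError("no free port group or port")
```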
ironic-15.0.0/releasenotes/notes/add_automated_clean_field-b3e7d56f4aeaf512.yaml0000664000175000017500000000071013652514273027271 0ustar zuulzuul00000000000000--- features: - | Allows enabling automated cleaning per node if it is disabled globally. A new ``automated_clean`` field has been created on the node object, allowing automated cleaning to be controlled for individual nodes. When automated cleaning is disabled at the global level but enabled at the node level, automated cleaning will be performed only on those nodes. The new field is accessible starting with API version 1.47. ironic-15.0.0/releasenotes/notes/ironic-python-agent-multidevice-fix-3daa0760696b46b7.yaml0000664000175000017500000000114113652514273030743 0ustar zuulzuul00000000000000--- fixes: - | The `ironic-python-agent `_ version 3.5.0 contains a fix that allows multi-device objects to be selected as a root disk. These devices MAY be created automatically in the case of some ATARAID controllers with pre-existing configuration, or via the actions of a custom hardware manager. Operators who require this functionality are encouraged to ensure that their deployment ramdisks are up to date. See `story 2003445 `_ for more information. ironic-15.0.0/releasenotes/notes/cleaning-retry-fix-89a5d0e65920a064.yaml0000664000175000017500000000046213652514273025406 0ustar zuulzuul00000000000000--- fixes: - A bug has been corrected where a node's current clean_step was not purged upon that node timing out from a CLEANWAIT state. Previously, this bug would prevent a user from retrying cleaning operations. For more information, see https://bugs.launchpad.net/ironic/+bug/1590146. ironic-15.0.0/releasenotes/notes/ilo-fix-inspection-b169ad0a22aea2ff.yaml0000664000175000017500000000023113652514273025751 0ustar zuulzuul00000000000000--- fixes: - Fixes a bug in the iLO drivers' inspection where an existing ``local_gb`` node property was overwritten with "0" if not detected. 
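The interaction between the global option and the per-node ``automated_clean`` field described in the first note of this group can be sketched as follows — an illustrative helper, not ironic's actual code:

```python
def should_clean(node_automated_clean, global_automated_clean):
    """Decide whether a node undergoes automated cleaning.

    Per the note above: a node-level True enables cleaning even when
    the global option is off; None means "not set, defer to the
    global option".
    """
    if node_automated_clean is None:
        return global_automated_clean
    return bool(node_automated_clean)
```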
ironic-15.0.0/releasenotes/notes/no-downward-sql-migration-52279e875cd8b7a3.yaml0000664000175000017500000000033113652514273027012 0ustar zuulzuul00000000000000--- features: - Support for downgrading database migrations was removed. More information about database migration/rollback can be found at http://docs.openstack.org/openstack-ops/content/ops_upgrades-roll-back.html ironic-15.0.0/releasenotes/notes/boot-ipxe-inc-workaround-548e10d1d6616752.yaml0000664000175000017500000000026713652514273026470 0ustar zuulzuul00000000000000--- fixes: - Makes boot.ipxe fall back to its previous behavior on *really* old iPXE ROMs where the 'inc' command is not available at all, see https://launchpad.net/bugs/1507738. ironic-15.0.0/releasenotes/notes/pin-api-version-029748f7d3be68d1.yaml0000664000175000017500000000055513652514273025015 0ustar zuulzuul00000000000000--- upgrade: - | During a `rolling upgrade `_ when the new services are pinned to the old release, the Bare Metal API version will also be pinned to the old release. This will prevent new features from being accessed until after the upgrade is done. ironic-15.0.0/releasenotes/notes/fix-ipmi-numeric-password-75e080aa8bdfb9a2.yaml0000664000175000017500000000020613652514273027211 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue where ironic could not communicate with IPMI endpoints when the password consisted of only numbers. ironic-15.0.0/releasenotes/notes/lookup-heartbeat-f9772521d12a0549.yaml0000664000175000017500000000165113652514273025066 0ustar zuulzuul00000000000000--- features: - New API endpoint for deploy ramdisk lookup ``/v1/lookup``. This endpoint is not authenticated to allow ramdisks to access it without passing the credentials to them. - New API endpoint for deploy ramdisk heartbeat ``/v1/heartbeat/``. This endpoint is not authenticated to allow ramdisks to access it without passing the credentials to them. 
deprecations: - The configuration option ``[agent]/heartbeat_timeout`` was renamed to ``[api]/ramdisk_heartbeat_timeout``. The old variant is deprecated. upgrade: - A new configuration option ``[api]/restrict_lookup`` is added, which restricts the lookup API (normally only used by ramdisks) to only work when the node is in specific states used by the ramdisk, and defaults to True. Operators that need this endpoint to work in any state may set this to False, though this is insecure and should not be used in normal operation. ironic-15.0.0/releasenotes/notes/debug-sensor-data-fix-for-ipmitool-eb13e80ccdd984db.yaml0000664000175000017500000000075313652514273030770 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where the sensor data parsing method for the ``ipmitool`` interface lacked the ability to handle the automatically included `ipmitool` debugging information when the ``debug`` option is set to ``True`` in the ironic.conf file. As such, extra debugging information supplied by the underlying ``ipmitool`` command is disregarded. More information can be found in `story 2005331 `_. ironic-15.0.0/releasenotes/notes/deprecate-clustered-compute-manager-3dd68557446bcc5c.yaml0000664000175000017500000000061113652514273031050 0ustar zuulzuul00000000000000--- deprecations: - | The ClusteredComputeManager is now deprecated. The Newton version of Nova adds functionality to the ironic virt driver to support multiple compute hosts without using the hack we call ClusteredComputeManager. As such, we are marking this unsupported component as deprecated, and plan to remove it before the end of the Ocata development cycle. ironic-15.0.0/releasenotes/notes/ipa-command-retries-and-timeout-29b0be3f2c21328c.yaml0000664000175000017500000000035113652514273030104 0ustar zuulzuul00000000000000--- fixes: - | Adds ``command_timeout`` and ``max_command_attempts`` configuration options to IPA, so when connection errors occur the command will be executed again. 
The options are located in the ``[agent]`` section. ironic-15.0.0/releasenotes/notes/add-deploy-steps-ilo-raid-interface-732314cea19fe8ac.yaml0000664000175000017500000000031513652514273030723 0ustar zuulzuul00000000000000--- features: - | Adds support for deploy steps to ``raid`` interface of ``ilo5`` hardware type. The methods ``apply_configuration`` and ``delete_configuration`` can be used as deploy steps. ironic-15.0.0/releasenotes/notes/add-support-for-smart-nic-0fc5b10ba6772f7f.yaml0000664000175000017500000000052313652514273027032 0ustar zuulzuul00000000000000--- features: - | Adds support to enable deployment workflow changes necessary to support the use of Smart NICs in the ``ansible``, ``direct``, ``iscsi`` and ``ramdisk`` deployment interfaces. Networking service integration for this functionality is not anticipated until the Train release of the Networking service. ironic-15.0.0/releasenotes/notes/bug-35702-25da234580ca0c31.yaml0000664000175000017500000000013713652514273023165 0ustar zuulzuul00000000000000--- fixes: - | Fixes deploying non-public images using the ``ansible`` deploy interface. ironic-15.0.0/releasenotes/notes/bug-2004947-e5f27e11b8f9c96d.yaml0000664000175000017500000000036113652514273023537 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where setting the ``conductor_group`` for a node was not entirely case-sensitive, in that this could fail if case-sensitivity did not match between the conductor configuration and the API request. ironic-15.0.0/releasenotes/notes/adopt-oslo-config-generator-15afd2e7c2f008b4.yaml0000664000175000017500000000063413652514273027417 0ustar zuulzuul00000000000000--- other: - Adopt oslo-config-generator to generate sample config files. New config options from Ironic code should register with ironic/conf/opts.py. New external libraries should register with tools/config/ironic-config-generator.conf. 
A deprecated option should add a deprecated group even if it didn't alter its group, otherwise the deprecated group will use 'DEFAULT' by default. ironic-15.0.0/releasenotes/notes/create-on-conductor-c1c52a1f022c4048.yaml0000664000175000017500000000120013652514273025575 0ustar zuulzuul00000000000000--- upgrade: - Moves node creation logic from the API service to the conductor service. This is more consistent with other node operations and opens opportunities for conductor-side validations on nodes. However, with this change, node creation may take longer, and this may limit the number of nodes that can be enrolled in parallel. - The ``[DEFAULT]/default_network_interface`` and ``[dhcp]/dhcp_provider`` configuration options were previously required for the ironic-api service to calculate the correct "network_interface" default. Now these options are only required by the ironic-conductor service. ironic-15.0.0/releasenotes/notes/allow-deleting-unbound-ports-fa78069b52f099ac.yaml0000664000175000017500000000024013652514273027566 0ustar zuulzuul00000000000000--- fixes: - | Allows deleting unbound ports on an active node. See `story 2006385 `_ for details. ironic-15.0.0/releasenotes/notes/add-boot-mode-redfish-inspect-48e2b27ef022932a.yaml0000664000175000017500000000033513652514273027451 0ustar zuulzuul00000000000000--- features: - | Adds currently used boot mode into node ``properties/capabilities`` upon ``redfish`` inspect interface run. The idea behind this change is to align with the in-band ``inspector`` behavior. ironic-15.0.0/releasenotes/notes/add-deploy-steps-drac-raid-interface-7023c03a96996265.yaml0000664000175000017500000000066613652514273030513 0ustar zuulzuul00000000000000--- features: - | Adds support for deploy steps to the ``idrac-wsman`` ``raid`` interface. The methods ``apply_configuration`` and ``delete_configuration`` can be used as deploy steps. 
- | Adds a new ``delete_existing`` argument to the ``create_configuration`` clean step on the ``idrac-wsman`` ``raid`` interface which can be used to delete existing virtual disks. The default for this argument is ``False``. ironic-15.0.0/releasenotes/notes/irmc-manual-clean-bios-configuration-1ad24831501456d5.yaml0000664000175000017500000000033213652514273030664 0ustar zuulzuul00000000000000--- features: - | Adds a new ``bios`` interface to the ``irmc`` hardware type. This provides an out-of-band BIOS configuration solution for the iRMC driver which makes the functionality available via manual cleaning. ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000ironic-15.0.0/releasenotes/notes/story-2006288-ilo-power-on-fails-with-no-boot-device-b698fef59b04e515.yamlironic-15.0.0/releasenotes/notes/story-2006288-ilo-power-on-fails-with-no-boot-device-b698fef59b04e50000664000175000017500000000042713652514273032157 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue in powering-on of a server in the ``ilo`` hardware type. The server was failing to return success for the power-on operation if no bootable device was found. See `story 2006288 `__ for details. ironic-15.0.0/releasenotes/notes/add-snmp-pdu-driver-type-discovery-1f280b7f06fd1ca5.yaml0000664000175000017500000000037713652514273030663 0ustar zuulzuul00000000000000--- features: - | Adds a new ``auto`` type of the ``driver_info/snmp_driver`` setting which makes ironic automatically select a suitable SNMP driver type based on the ``SNMPv2-MIB::sysObjectID`` value as reported by the PDU being managed. ironic-15.0.0/releasenotes/notes/default_boot_option-f22c01f976bc2de7.yaml0000664000175000017500000000060013652514273026150 0ustar zuulzuul00000000000000--- features: - Adds a new option ``[deploy]/default_boot_option`` for setting the default boot option when no explicit boot option is requested via capabilities. 
upgrade: - A future release will change the default value of ``[deploy]/default_boot_option`` from "netboot" to "local". To avoid disruptions, it is recommended to set an explicit value for this option. ironic-15.0.0/releasenotes/notes/inspection-agent-drivers-cad619ec8a4874b1.yaml0000664000175000017500000000014013652514273027035 0ustar zuulzuul00000000000000--- features: - Adds inspection support for the `agent_ipmitool` and `agent_ssh` drivers. ironic-15.0.0/releasenotes/notes/drac-inspection-interface-b0abbad98fec1c2e.yaml0000664000175000017500000000045213652514273027423 0ustar zuulzuul00000000000000--- features: - Adds an out-of-band inspection interface usable by DRAC drivers. upgrade: - The ``inspect`` interface of the ``pxe_drac`` driver has switched to use out-of-band inspection. For in-band inspection, the node should be updated to use the ``pxe_drac_inspector`` driver instead. ironic-15.0.0/releasenotes/notes/add-pxe-per-node-526fd79df17efda8.yaml0000664000175000017500000000042313652514273025245 0ustar zuulzuul00000000000000--- features: - | Adds a new field, ``pxe_template``, that can be set at the driver-info level. This specifies a path to a custom PXE boot template. If present, this template will be read and will take priority over the per-arch and general PXE templates. ironic-15.0.0/releasenotes/notes/only_default_flat_network_if_enabled-b5c6ea415239a53c.yaml0000664000175000017500000000063513652514273031426 0ustar zuulzuul00000000000000--- fixes: - | Fixes a bug seen when no ``default_network_interface`` is set, because the conductor tries to use the ``flat`` network interface instead even if it is not included in the conductor's ``enabled_network_interfaces`` config option, resulting in a `Failed to register hardware types` error. 
See `bug 1744332 `_ for more information.ironic-15.0.0/releasenotes/notes/inspection-boot-network-59fd23ca62b09e81.yaml0000664000175000017500000000137713652514273026651 0ustar zuulzuul00000000000000--- features: - | It's now possible to force booting for in-band inspection to be managed by ironic by setting the new ``[inspector]require_managed_boot`` option to ``True``. In-band inspection will fail if the node's driver does not support managing boot for it. other: - | Boot and network interface implementations can now manage boot for in-band inspection by implementing the new methods: * ``BootInterface.validate_inspection`` * ``NetworkInterface.validate_inspection`` * ``NetworkInterface.add_inspection_network`` * ``NetworkInterface.remove_inspection_network`` Previously only ironic-inspector itself could manage boot for it. This change opens a way for non-PXE implementations of in-band inspection. ironic-15.0.0/releasenotes/notes/bfv-pxe-boot-3375d331ee2f04f2.yaml0000664000175000017500000000031713652514273024260 0ustar zuulzuul00000000000000--- fixes: - Fixes a problem when using boot from volume with the ``pxe`` boot interface (`bug 1724275 `_). Now the correct iSCSI initiator is used. ironic-15.0.0/releasenotes/notes/mdns-a5f4034257139e31.yaml0000664000175000017500000000035713652514273022555 0ustar zuulzuul00000000000000--- features: - | Adds a new option ``enable_mdns`` which enables publishing the baremetal API endpoint via mDNS as specified in the `API SIG guideline `_. ironic-15.0.0/releasenotes/notes/bug-2007567-wsman-raid-48483affdd9f9894.yaml0000664000175000017500000000034213652514273025535 0ustar zuulzuul00000000000000--- fixes: - | Fixes RAID configuration using `idrac-wsman` RAID interface where node remains in 'clean wait' provisioning state forever. See `story 2007567 `_. 
ironic-15.0.0/releasenotes/notes/configdrive-vendordata-122049bd7c6e1b67.yaml0000664000175000017500000000052213652514273026374 0ustar zuulzuul00000000000000--- features: - | Adds support for specifying vendor_data when building config drives. Starting with API version 1.59, a JSON based ``configdrive`` parameter to ``/v1/nodes//states/provision`` can include the key ``vendor_data``. This data will be built into the configdrive contents as ``vendor_data2.json``. ironic-15.0.0/releasenotes/notes/metrics-notifier-information-17858c8e27c795d7.yaml0000664000175000017500000000102313652514273027527 0ustar zuulzuul00000000000000--- features: - | Notification events for metrics data now contain a ``node_name`` field to assist operators with relating metrics data being transmitted by the conductor service. fixes: - | Notification event types now include the hardware type name string as opposed to a static string of "ipmi". This allows event processors and operators to understand what the actual notification event data source is, as opposed to having to rely upon fingerprints of the data to make such determinations. ironic-15.0.0/releasenotes/notes/bug-1579635-cffd990b51bcb5ab.yaml0000664000175000017500000000014313652514273023744 0ustar zuulzuul00000000000000--- fixes: - This fixes the issue of the RAID interface not being supported in the iscsi_ilo driver. ironic-15.0.0/releasenotes/notes/validate-instance-traits-525dd3150aa6afa2.yaml0000664000175000017500000000066413652514273027000 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where a node's ``instance_info.traits`` field could be incorrectly formatted, or contain traits that are not traits of the node. When validating drivers and prior to deployment, the Bare Metal service now validates that a node's traits include all the traits in its ``instance_info.traits`` field. See `bug 1755146 `_ for details. 
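The validation described in the last note above amounts to a subset check between the node's traits and its ``instance_info.traits``. A minimal stand-alone sketch, with illustrative trait names:

```python
def validate_instance_traits(node_traits, instance_traits):
    """Ensure every trait in instance_info.traits is a trait of the node.

    Mirrors the check described above, performed when validating
    drivers and prior to deployment.
    """
    missing = set(instance_traits) - set(node_traits)
    if missing:
        raise ValueError(
            "instance traits %s are not traits of the node"
            % sorted(missing))

# A node with both traits may request a subset of them.
validate_instance_traits(
    ["CUSTOM_GPU", "HW_CPU_X86_VMX"], ["CUSTOM_GPU"])
```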
ironic-15.0.0/releasenotes/notes/uefi-first-prepare-e7fa1e2a78b4af99.yaml0000664000175000017500000000071213652514273025717 0ustar zuulzuul00000000000000--- upgrade: - | A future release will change the default value of ``[deploy]/default_boot_mode`` from "bios" to "uefi". It is recommended to set an explicit value for this option. For hardware types which don't support setting the boot mode, a future release will assume the boot mode is UEFI if no boot mode is set in the node's capabilities. It is also recommended to set ``boot_mode`` in the ``properties/capabilities`` of a node. ironic-15.0.0/releasenotes/notes/vif-detach-locking-fix-7be66f8150e19819.yaml0000664000175000017500000000057713652514273026146 0ustar zuulzuul00000000000000--- fixes: - | Addresses a condition where the Compute Service may have been unable to remove VIF attachment records while a baremetal node is being unprovisioned. This condition resulted in VIF records being orphaned, blocking future deployments without manual intervention. See `bug 1743652 `_ for more details. ironic-15.0.0/releasenotes/notes/jsonschema_draft04-1cb5fc4a3852f9ae.yaml0000664000175000017500000000062413652514273025654 0ustar zuulzuul00000000000000--- fixes: - This fix binds jsonschema to use the draft-04 validator for the RAID schema. jsonschema 3.0.1 supports draft-03, draft-04, draft-06 and draft-07, and by default the validate function uses the latest draft validator. Draft-04 is the latest draft in jsonschema 2.6. Hence, binding the schema to the draft-04 validator makes it compliant with both jsonschema 2.6 and jsonschema 3.0.1. ironic-15.0.0/releasenotes/notes/proliantutils_version_update-b6e5ff0e496215a5.yaml0000664000175000017500000000041413652514273030057 0ustar zuulzuul00000000000000--- upgrade: - The minimum required version of proliantutils (needed for iLO drivers) was bumped to 2.1.11. This version includes fixes for the bugs caused by the Python requests library version 2.11.0, Proliant Gen7 support and iLO based RAID configuration. 
ironic-15.0.0/releasenotes/notes/drop-py-2-7-5140cb76e321cdd1.yaml0000664000175000017500000000031213652514273023710 0ustar zuulzuul00000000000000--- upgrade: - | Python 2.7 support has been dropped. The last release of Ironic to support Python 2.7 is OpenStack Train. The minimum version of Python now supported by Ironic is Python 3.6. ironic-15.0.0/releasenotes/notes/bug-30317-a972c8d879c98941.yaml0000664000175000017500000000017613652514273023164 0ustar zuulzuul00000000000000--- fixes: - | Fixes deployment with the ``ansible`` deploy interface and instance images with a GPT partition table. ironic-15.0.0/releasenotes/notes/default-resource-class-e11bacfb01d6841b.yaml0000664000175000017500000000033213652514273026525 0ustar zuulzuul00000000000000--- features: - | Adds a new configuration option ``[DEFAULT]default_resource_class`` that specifies the resource class to use for new nodes when no resource class is provided in the node creation request. ironic-15.0.0/releasenotes/notes/allocation-api-6ac2d262689f5f59.yaml0000664000175000017500000000052713652514273024674 0ustar zuulzuul00000000000000--- features: - | Introduces the allocation API. This API allows finding and reserving a node by its resource class, traits and an optional list of candidate nodes. Introduces new API endpoints: * ``GET/POST /v1/allocations`` * ``GET/DELETE /v1/allocations/`` * ``GET/DELETE /v1/nodes//allocation`` ironic-15.0.0/releasenotes/notes/jsonrpc-logging-21670015bb845182.yaml0000664000175000017500000000021413652514273024621 0ustar zuulzuul00000000000000--- security: - | Node secrets (such as BMC credentials) are no longer logged when JSON RPC is used and DEBUG logging is enabled. ironic-15.0.0/releasenotes/notes/ipmi_hex_kg_key-8f6caabe5b7d7a9b.yaml0000664000175000017500000000037413652514273025500 0ustar zuulzuul00000000000000--- features: - | New property ``ipmi_hex_kg_key`` for the IPMI based interfaces. The property enables the user to set the Kg key for IPMIv2 authentication in hexadecimal format. 
This value is provided to ``ipmitool`` as the -y argument. ironic-15.0.0/releasenotes/notes/network-flat-use-node-uuid-for-binding-hostid-afb43097e7204b99.yaml0000664000175000017500000000373613652514273032555 0ustar zuulzuul00000000000000--- features: - Adds support for `routed networks `_ when using the ``flat`` network interface. This feature requires the ``baremetal`` ML2 mechanism driver and L2 agent from the `networking-baremetal `_ plugin. See the `networking configuration documentation `_ for more details. upgrade: - | The ``baremetal`` ML2 mechanism driver and L2 agent should now be used with the ``flat`` network interface. When installed, the ``baremetal`` mechanism driver and agent ensure that ports are properly bound in the Networking service. Installation and configuration of the ML2 components are documented in the `networking-baremetal project documentation `_. Without the ML2 mechanism driver and L2 agent, the Networking service's ports will not be correctly bound. In the Networking service, ports will have a ``DOWN`` status, and the ``binding_vif_type`` field equal to ``binding_failed``. This was always the status for the ``flat`` network interface ports prior to the introduction of the ``baremetal`` mechanism driver. For a non-routed network, bare metal nodes can still be deployed and are functional, despite this port binding state in the Networking service. fixes: - Fixes an issue where the Networking service would reject port bindings with the ``flat`` network interface because no host would match the *host-id* used in such configurations. The ``flat`` network interface no longer requires a networking agent (such as ``neutron-openvswitch-agent``) to be run on the ``nova-compute`` proxy node which executes the ironic virt driver. Instead, the interface uses the `baremetal mechanism driver `_. 
ironic-15.0.0/releasenotes/notes/add-realtime-support-d814d5917836e9e2.yaml0000664000175000017500000000065413652514273025764 0ustar zuulzuul00000000000000--- features: - | Adds capability to the hardware type ``idrac`` for creating and deleting RAID sets without rebooting the baremetal node. This realtime mechanism is supported on PERC H730 and H740 RAID controllers that are running firmware version 25.5.5.0005 or later. upgrade: - | Updates the minimum required version of ``python-dracclient`` to ``3.0.0`` when using the ``idrac`` hardware type. ironic-15.0.0/releasenotes/notes/deprecated-inspector-opts-0520b08dbcd10681.yaml0000664000175000017500000000045113652514273027021 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated configuration options ``enabled`` and ``service_url`` from the ``inspector`` section have been removed. - | The python-ironic-inspector-client package is no longer required for the ``inspector`` inspect interface (openstacksdk is used instead). ironic-15.0.0/releasenotes/notes/add-agent-iboot-0a4b5471c6ace461.yaml0000664000175000017500000000017013652514273024755 0ustar zuulzuul00000000000000--- features: - Adds an `agent_iboot` driver to allow use of the Iboot power driver with the Agent deploy driver. ironic-15.0.0/releasenotes/notes/remove-driver-periodic-task-f5e513b06b601ce4.yaml0000664000175000017500000000025313652514273027344 0ustar zuulzuul00000000000000--- upgrade: - Removes the deprecated decorator "driver_periodic_task". Drivers should use the "periodics.periodic" decorator from the futurist library instead. ironic-15.0.0/releasenotes/notes/undeprecate-xclarity-4f4752017e8310e7.yaml0000664000175000017500000000054213652514273025752 0ustar zuulzuul00000000000000--- deprecations: - | The ``xclarity`` hardware type, which was previously deprecated, is no longer deprecated. Lenovo has instituted third-party CI, which is a requirement for a driver to remain in-tree. 
other: - | The ``xclarity`` hardware type is no longer deprecated as Lenovo has implemented third-party CI to enable testing. ironic-15.0.0/releasenotes/notes/update-live-port-ee3fa9b77f5d0cf7.yaml0000664000175000017500000000041613652514273025472 0ustar zuulzuul00000000000000--- fixes: - Fixed updating a MAC on a port for active instances in maintenance mode (previously returned HTTP 500). - Return HTTP 400 for requests to update a MAC on a port for an active instance without maintenance mode set (previously returned HTTP 500). ironic-15.0.0/releasenotes/notes/inspector-periodics-34449c9d77830b3c.yaml0000664000175000017500000000056413652514273025677 0ustar zuulzuul00000000000000--- fixes: - | The periodic tasks for the ``inspector`` inspect interface are no longer disabled if the ``[inspector]enabled`` option is not set to ``True``. The help string of this option claims that it does not apply to hardware types. In any case, the periodic tasks are only run if any enabled classic driver or hardware interface requires them. ironic-15.0.0/releasenotes/notes/irmc-oob-inspection-6d072c60f6c88ecb.yaml0000664000175000017500000000011013652514273025771 0ustar zuulzuul00000000000000--- features: - Adds out-of-band inspection support for iRMC drivers. ironic-15.0.0/releasenotes/notes/agent-command-status-retry-f9b6f53a823c6b01.yaml0000664000175000017500000000034313652514273027231 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue with the agent client code where checks of the agent command status had no logic to prevent an intermittent or transient connection failure from causing the entire operation to fail. ironic-15.0.0/releasenotes/notes/update-irmc-set-boot-device-fd50d9dce42aaa89.yaml0000664000175000017500000000027313652514273027461 0ustar zuulzuul00000000000000--- fixes: - This forces iRMC vmedia boot from remotely connected (redirected) CD/DVD instead of default CD/DVD. See https://bugs.launchpad.net/ironic/+bug/1561852 for details. 
ironic-15.0.0/releasenotes/notes/ironic-11-prelude-6dae469633823f8d.yaml0000664000175000017500000000104613652514273025133 0ustar zuulzuul00000000000000--- prelude: | I R O N I C turns the dial to `11` In preparation for the OpenStack Rocky development cycle release, the "ironic" Bare Metal as a Service team announces the release of version 11.0. While it is not quite like a volume knob, this release lays the foundation for features coming in future releases and user experience enhancements. Some of these include the BIOS configuration framework, power fault recovery, additional error handling, refactoring, removal of classic drivers, and many bug fixes. ironic-15.0.0/releasenotes/notes/fix-provisioning-port-cleanup-79ee7930ca206c42.yaml0000664000175000017500000000057713652514273027710 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue whereby in certain deployment failure scenarios a node's provisioning ports are not deleted. The issue would typically have been masked by nova, which deletes all ports with a device ID matching the instance's UUID during instance termination. See `bug 1732412 `_ for details. ironic-15.0.0/releasenotes/notes/add-snmp-pdu-driver-type-baytech-mrp27-5007d1d7e0a52162.yaml0000664000175000017500000000015113652514273030776 0ustar zuulzuul00000000000000--- features: - | Adds new Power Distribution Unit (PDU) ``snmp`` driver type - BayTech MRP27. ironic-15.0.0/releasenotes/notes/irmc-boot-from-volume-4bc5d20a0a780669.yaml0000664000175000017500000000053713652514273026114 0ustar zuulzuul00000000000000--- features: - | Adds support for booting from remote volumes via the ``irmc-virtual-media`` boot interface. It enables boot configuration for iSCSI or FibreChannel via out-of-band network. For details, see the `iRMC driver documentation `_. 
ironic-15.0.0/releasenotes/notes/idrac-add-redfish-boot-support-036396b48d3f71f4.yaml0000664000175000017500000000166313652514273027617 0ustar zuulzuul00000000000000--- features: - | Adds ``idrac`` hardware type support of a virtual media boot interface implementation that utilizes the Redfish out-of-band (OOB) management protocol and is compatible with the integrated Dell Remote Access Controller (iDRAC) baseboard management controller (BMC). It is named ``idrac-redfish-virtual-media``. The ``idrac`` hardware type declares support for that new interface implementation, in addition to all boot interface implementations it has been supporting. The highest priority boot interfaces remain the same. It now supports the following boot interface implementations, listed in priority order from highest to lowest: ``ipxe``, ``pxe``, and ``idrac-redfish-virtual-media``. To use the new boot interface, install the ``sushy-oem-idrac`` Python package. For more information, see `story 2006570 `_. ironic-15.0.0/releasenotes/notes/add-node-bios-9c1c3d442e8acdac.yaml0000664000175000017500000000046313652514273024656 0ustar zuulzuul00000000000000--- features: - Adds support for reading and changing the node's ``bios_interface`` field and enables the GET endpoints to check BIOS settings, if they have already been cached. This requires a compatible ``bios_interface`` to be set. This feature is available starting with API version 1.40. ironic-15.0.0/releasenotes/notes/node-name-remove-720aa8007f2f8b75.yaml0000664000175000017500000000015113652514273025105 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue that prevented the node name from being removed as part of a node update. 
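The cached BIOS settings endpoint mentioned above is gated on API version 1.40, which clients request via the bare metal API microversion header. A minimal sketch of building such a request follows; the endpoint URL and node UUID are illustrative assumptions, not values from these notes.

```python
# Illustrative sketch: the endpoint URL and node UUID below are
# assumptions, not values from the release notes.
IRONIC_API = "http://ironic.example.com:6385"
NODE_UUID = "1be26c0b-03f2-4d2e-ae87-c02d7f33c123"

def bios_settings_request(api_root, node_uuid, microversion="1.40"):
    """Build the URL and headers for GET /v1/nodes/<node>/bios.

    Reading cached BIOS settings is only available starting with API
    version 1.40, which clients request via the microversion header.
    """
    url = "%s/v1/nodes/%s/bios" % (api_root.rstrip("/"), node_uuid)
    headers = {"X-OpenStack-Ironic-API-Version": microversion}
    return url, headers

url, headers = bios_settings_request(IRONIC_API, NODE_UUID)
# A real client would now issue, e.g.: requests.get(url, headers=headers)
print(url)
```

An older microversion in the header would make the ``/bios`` endpoint unavailable, so pinning the version explicitly avoids surprises when the service default changes.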
ironic-15.0.0/releasenotes/notes/sum-based-update-firmware-manual-clean-step-e69ade488060cf27.yaml0000664000175000017500000000044713652514273032331 0ustar zuulzuul00000000000000--- features: - iLO drivers now support firmware update based on `Smart Update Manager `_ (SUM) as an in-band manual cleaning step ``update_firmware_sum`` for all the hardware components. ironic-15.0.0/releasenotes/notes/remove-verbose-option-261f1b9e24212ee2.yaml0000664000175000017500000000071313652514273026212 0ustar zuulzuul00000000000000--- upgrade: - The 'verbose' configuration option was removed; consequently, the "--verbose, -v" parameter was also removed from all command lines. This affects the ironic-api, ironic-conductor, ironic-dbsync, and ironic-rootwrap commands. The verbose config/parameter was originally a shortcut to set the log level to INFO; however, the log level has defaulted to INFO since this option was deprecated, so this option was a noop. ironic-15.0.0/releasenotes/notes/always-return-chassis-uuid-4eecbc8da2170cb1.yaml0000664000175000017500000000034313652514273027453 0ustar zuulzuul00000000000000--- fixes: - Fixed an issue where the ``chassis_uuid`` field of a node was not returned in API responses if the node does not belong to a chassis. It is now always returned, either set to None or to the corresponding chassis UUID. ironic-15.0.0/releasenotes/notes/create-port-on-conductor-b921738b4b2a5def.yaml0000664000175000017500000000054213652514273026751 0ustar zuulzuul00000000000000--- upgrade: - Moves port creation logic from the API service to the conductor service. This is more consistent with port update operations and opens opportunities for conductor-side validations on ports. However, with this change, port creation may take longer, and this may limit the number of ports that can be created in parallel. 
ironic-15.0.0/releasenotes/notes/remove-ssh-power-port-delay-7ae6e5eb893439cd.yaml0000664000175000017500000000051713652514273027436 0ustar zuulzuul00000000000000--- upgrade: - | For SSH power drivers, if the configuration option ``[neutron]/port_setup_delay`` had been set to 0, a delay of 15 seconds was used. This is no longer the case. Please set the configuration option to the desired value; otherwise the service will not wait for Neutron agents to set up a port. ironic-15.0.0/releasenotes/notes/ilo-soft-power-operations-eaef33a3ff56b047.yaml0000664000175000017500000000046013652514273027242 0ustar zuulzuul00000000000000--- features: - | Adds support for ``soft power off`` and ``soft reboot`` operations to the ``ilo`` power interface. deprecations: - | The ``[ilo]/power_retry`` configuration option is deprecated and will be removed in a future release. Please use ``[conductor]/soft_power_off_timeout`` instead. ironic-15.0.0/releasenotes/notes/drac-fix-power-on-reboot-race-condition-fe712aa9c79ee252.yaml0000664000175000017500000000140113652514273031543 0ustar zuulzuul00000000000000--- features: - | Adds a new configuration option ``[drac]boot_device_job_status_timeout`` that specifies the maximum amount of time (in seconds) to wait for the boot device configuration job to transition to the scheduled state to allow a reboot or power on action to complete. fixes: - | Fixes an issue in the ``idrac`` hardware type where a configuration job does not transition to the correct state and start execution during a power on or reboot operation. If the boot device is being changed, the system might complete its POST before the job is ready, leaving the job in the queue, and the system will boot from the wrong device. See bug `2004909 `_ for details. ironic-15.0.0/releasenotes/notes/enable-osprofiler-support-e3839b0fa90d3831.yaml0000664000175000017500000000135313652514273027101 0ustar zuulzuul00000000000000--- features: - | Adds `OSProfiler `_ support. 
This cross-project profiling library provides the ability to trace various OpenStack requests through all OpenStack services that support it. For more information, see https://docs.openstack.org/ironic/latest/contributor/osprofiler-support.html. security: - | `OSProfiler `_ support requires passing of trace information between various OpenStack services. This information is securely signed by one of the HMAC keys, defined in the ``ironic.conf`` configuration file. To allow cross-project tracing, the same key should be used for all OpenStack services. ironic-15.0.0/releasenotes/notes/oslopolicy-scripts-bdcaeaf7dd9ce2ac.yaml0000664000175000017500000000125113652514273026430 0ustar zuulzuul00000000000000--- features: - | Ironic is now configured to work with two oslo.policy CLI scripts that have been added. The first of these can be called like ``oslopolicy-list-redundant --namespace ironic`` and will output a list of policy rules in policy.[json|yaml] that match the project defaults. These rules can be removed from the policy file as they have no effect there. The second script can be called like ``oslopolicy-policy-generator --namespace ironic --output-file policy-merged.yaml`` and will populate the policy-merged.yaml file with the effective policy. This is the merged results of project defaults and config file overrides. ironic-15.0.0/releasenotes/notes/validate-image-url-wnen-deploying-8820f4398ea9de9f.yaml0000664000175000017500000000015013652514273030465 0ustar zuulzuul00000000000000--- fixes: - Ironic now validates any swift temporary URL when preparing for deployment of nodes. 
ironic-15.0.0/releasenotes/notes/drac-fix-double-manage-provide-cycle-6ac8a427068f87fe.yaml0000664000175000017500000000063013652514273031007 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue that caused a node using a Dell EMC integrated Dell Remote Access Controller (iDRAC) *classic driver*, ``pxe_drac`` or ``pxe_drac_inspector``, to be placed in the ``clean failed`` state after a double ``manage``/``provide`` cycle, instead of the ``available`` state. For more information, see `bug 1676387 `_. ironic-15.0.0/releasenotes/notes/ilo-license-activate-manual-clean-step-84d335998d708b49.yaml0000664000175000017500000000015713652514273031153 0ustar zuulzuul00000000000000--- features: - Support for activation of iLO Advanced license as a manual cleaning step in iLO drivers. ironic-15.0.0/releasenotes/notes/oslo-reports-optional-59469955eaffdf1d.yaml0000664000175000017500000000031713652514273026435 0ustar zuulzuul00000000000000--- upgrade: - | The guru meditation reporting functionality is now optional and the ``oslo.reports`` package is no longer a part of requirements. Install it manually if you need this feature. ironic-15.0.0/releasenotes/notes/use-dhcp-option-numbers-8b0b0efae912ff5f.yaml0000664000175000017500000000043013652514273026740 0ustar zuulzuul00000000000000--- fixes: - | Uses standard DHCP option codes instead of dnsmasq-specific option names, because different backends use different option names. This fixes the `compatibility issues with neutron's DHCP backends `_. ironic-15.0.0/releasenotes/notes/add-option-persistent-boot-device-139cf280fb66f4f7.yaml0000664000175000017500000000146313652514273030505 0ustar zuulzuul00000000000000--- features: - | Adds capability to control the persistency of boot order changes during instance deployment via (i)PXE on a per-node level. 
The option 'force_persistent_boot_device' in the node's driver info for the (i)PXE drivers is extended to allow the values 'Default' (make all changes but the last one upon deployment non-persistent), 'Always' (make all changes persistent), and 'Never' (make all boot order changes non-persistent). deprecations: - | The values 'True'/'False' for the option 'force_persistent_boot_device' in the node's driver info for the (i)PXE drivers are deprecated and support for them may be removed in a future release. The former default value 'False' is replaced by the new value 'Default', the value 'True' is replaced by 'Always'. ironic-15.0.0/releasenotes/notes/fix-drac-job-state-8c5422bbeaf15226.yaml0000664000175000017500000000025013652514273025404 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue in the ``idrac`` RAID interface seen when creating RAID configurations using ``python-dracclient`` version ``2.0.0`` or higher. ironic-15.0.0/releasenotes/notes/api-none-cdb95e58b69a5c50.yaml0000664000175000017500000000040313652514273023627 0ustar zuulzuul00000000000000--- fixes: - | Fixes a confusing ``AttributeError`` if an adapter returns ``None`` for the bare metal API. - | Prevents the adapter configuration options from getting ignored if a matching endpoint cannot be found. An error is now raised. ironic-15.0.0/releasenotes/notes/fix_raid0_creation_for_multiple_disks-f47957754fca0312.yaml0000664000175000017500000000040113652514273031404 0ustar zuulzuul00000000000000--- fixes: - | Fixed a bug in the ``create_configuration`` cleaning step for disks of a PERC H740P controller, where the first disks were created but the controller then became busy and did not allow the next disks to be created. 
ironic-15.0.0/releasenotes/notes/fixes-execution-of-out-of-band-deploy-steps-1f5967e7bfcabbf9.yaml0000664000175000017500000000033213652514273032547 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue wherein asynchronous out-of-band deploy steps in a deployment template fail to execute. See `story 2006342 `__ for details. ironic-15.0.0/releasenotes/notes/duplicated-driver-entry-775370ad84736206.yaml0000664000175000017500000000025213652514273026307 0ustar zuulzuul00000000000000--- fixes: - Fixes a problem which causes the conductor to error out on startup in case there's a duplicated entry in the enabled_drivers configuration option. ironic-15.0.0/releasenotes/notes/support_to_hash_rescue_password-0915927e41e6d845.yaml0000664000175000017500000000144713652514273030346 0ustar zuulzuul00000000000000--- features: - | Passwords for ``rescue`` operation are now hashed for transmission to the ``ironic-python-agent``. This functionality requires ``ironic-python-agent`` version ``6.0.0``. The setting ``[conductor]rescue_password_hash_algorithm`` now defaults to ``sha256``, and may be set to ``sha256``, or ``sha512``. upgrade: - | The version of ``ironic-python-agent`` should be upgraded to at least version ``6.0.0`` for rescue passwords to be hashed for transmission. security: - | Operators wishing to enforce all rescue passwords to be hashed should use the ``[conductor]require_rescue_password_hashed`` setting and set it to a value of ``True``. This setting will be changed to a default of ``True`` in the Victoria development cycle. ironic-15.0.0/releasenotes/notes/vif-detach-locking-fix-revert-3961d47fe419460a.yaml0000664000175000017500000000031513652514273027430 0ustar zuulzuul00000000000000--- fixes: - | Reverts the fix for orphaned VIF records from the previous release, as it causes a regression. See `bug 1750785 `_ for details. 
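The rescue-password note above says passwords are hashed with sha256 (or sha512) before transmission to the agent. The following sketch only illustrates what choosing between those digest algorithms looks like; it is not ironic's actual implementation, and the function name is an assumption for illustration.

```python
import hashlib

def hash_rescue_password(password, algorithm="sha256"):
    """Illustrative sketch of hashing a rescue password for transmission.

    The [conductor]rescue_password_hash_algorithm setting defaults to
    sha256 and may also be set to sha512. This simplified sketch is not
    ironic's actual implementation.
    """
    if algorithm not in ("sha256", "sha512"):
        raise ValueError("unsupported hash algorithm: %s" % algorithm)
    return hashlib.new(algorithm, password.encode("utf-8")).hexdigest()

digest = hash_rescue_password("correct horse battery staple")
print(len(digest))  # a sha256 hex digest is 64 characters long
```

Restricting the accepted algorithms up front mirrors the setting's behavior: only the two values the option documents are treated as valid.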
ironic-15.0.0/releasenotes/notes/rely-on-standalone-ports-supported-8153e1135787828b.yaml0000664000175000017500000000227613652514273030460 0ustar zuulzuul00000000000000--- other: - | Some combinations of port group protocols and hardware might not support falling back to single interface mode. If a static port group was created under such circumstances (where ``portgroup.standalone_ports_supported = False``), additional restrictions apply to such ports and port groups; for example, such ports will not support booting over PXE. Certain restrictions are imposed on values of port properties for ports belonging to a port group: * ``port.pxe_enabled`` cannot be set to True if the port is a member of a port group with portgroup.standalone_ports_supported already set to False. * ``portgroup.standalone_ports_supported`` cannot be set to False on a port group if at least one port in that port group has ``port.pxe_enabled=True`` * ``port.extra.vif_port_id`` cannot be set on a port that is a member of a port group with ``portgroup.standalone_ports_supported=False``, as setting it means that we are using the port in single interface mode. * ``portgroup.standalone_ports_supported`` cannot be set to False on a port group if it has at least one port with ``port.extra.vif_port_id`` set. ironic-15.0.0/releasenotes/notes/remove-agent_last_heartbeat-65a9fe02f20465c5.yaml0000664000175000017500000000025113652514273027406 0ustar zuulzuul00000000000000--- deprecations: - The ``agent_last_heartbeat`` field of ``driver_internal_info`` has been removed from all agent drivers, since this field was unused by ironic. ironic-15.0.0/releasenotes/notes/add-secure-boot-suport-irmc-9509f3735df2aa5d.yaml0000664000175000017500000000042313652514273027300 0ustar zuulzuul00000000000000--- features: - | Adds support to provision an instance in secure boot mode for ``irmc-virtual-media`` boot interface. 
For details, see the `iRMC driver documentation `_.././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000ironic-15.0.0/releasenotes/notes/fix-ilo-firmware-update-swift-path-with-pseudo-folder-0660345510ec0bb4.yamlironic-15.0.0/releasenotes/notes/fix-ilo-firmware-update-swift-path-with-pseudo-folder-0660345510ec00000664000175000017500000000023513652514273032565 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue where iLO drivers fail to download the firmware file from swift when the swift file path includes swift pseudo folder. ironic-15.0.0/releasenotes/notes/rescue-interface-for-irmc-hardware-type-17e38197849748e0.yaml0000664000175000017500000000047113652514273031276 0ustar zuulzuul00000000000000--- features: - | Adds support for rescue interface ``agent`` for the ``irmc`` hardware type when the corresponding boot interface is ``irmc-virtual-media``. The supported values of rescue interface for ``irmc`` hardware type are ``agent`` and ``no-rescue``. The default value is ``no-rescue``. ironic-15.0.0/releasenotes/notes/ironic-11.1-prelude-b5ba8134953db4c2.yaml0000664000175000017500000000143413652514273025327 0ustar zuulzuul00000000000000--- prelude: | Ironic `11.1`... Where the volume dial turned more! While Pixie Boots has rocked out to Rock and Roll, the Bare Metal as a Service team has wrapped up our Rocky release with 11.1. This new release contains a number of major features that we hope will improve the lives of bare metal operators everywhere! * Conductor grouping enabling nodes to be assigned to groups of different conductors. * Deployment steps framework enabling greater flexibility for deployers to request specific steps. * Bios setting interfaces for the ``ilo`` and ``irmc`` hardware types. * Ramdisk deployment interface for disk-less deployments. * Capability to reset nodes to their default interfaces via the API when resetting the node's driver. 
ironic-15.0.0/releasenotes/notes/consider_embedded_ipa_error_codes-c8fdfaa9e6a1ed06.yaml0000664000175000017500000000064213652514273031176 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue in the ``ironic-python-agent`` client code where a command exception may not be captured in the interaction with the agent REST API. The client code would return the resulting error message and a static error code. We now look within the error to detect if the error may be a compatibility error, in order to raise the appropriate exception for fallback logic to engage. ironic-15.0.0/releasenotes/notes/port-list-bad-request-078512862c22118e.yaml0000664000175000017500000000036413652514273025703 0ustar zuulzuul00000000000000--- fixes: - | Fixes a rare race condition which resulted in the port list API returning HTTP 400 (bad request) if some nodes were being removed in parallel. See `bug 1748893 `_ for details. ironic-15.0.0/releasenotes/notes/update-python-scciclient-required-version-71398d5d5e1c0bf8.yaml0000664000175000017500000000043013652514273032260 0ustar zuulzuul00000000000000--- upgrade: - Updated the required version of python-scciclient for the iRMC driver to 0.3.0, which fixed bugs #1518999 and #1519000. fixes: - Updated the required version of python-scciclient for the iRMC driver to 0.3.0, which fixed bugs #1518999 and #1519000. ironic-15.0.0/releasenotes/notes/fix-conductor-list-raise-131ac76719b74032.yaml0000664000175000017500000000027213652514273026444 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where listing nodes with their conductors fails if any of the nodes has an invalid hardware type, which may happen when some conductor is out of service. 
ironic-15.0.0/releasenotes/notes/fix-drives-conversion-before-raid-creation-ea1f7eb425f79f2f.yaml0000664000175000017500000000215713652514273032435 0ustar zuulzuul00000000000000--- fixes: - | Certain RAID controllers (PERC H730P) require physical disks to be switched from non-RAID (JBOD) mode to RAID mode to be included in a virtual disk. When this conversion happens, the available free space on the physical disk is reduced due to some space being allocated to RAID mode housekeeping. If the user requests a virtual disk (a RAID 1 for example) with a size close to the max size of the physical disks when they are in JBOD mode, then creation of the virtual disk following conversion of the physical disks from JBOD to RAID mode will fail since there is not enough space due to the space used by RAID mode housekeeping. This patch works around this issue by recalculating the RAID volume size after physical disk conversion has completed and the free space on the converted drives is known. Note that this may result in a virtual disk that is slightly smaller than the requested size, but still the max size that the drives can support. See `bug 2007359 `_ for more details. ironic-15.0.0/releasenotes/notes/add-validate-rescue-to-boot-interface-bd74aff9e250334b.yaml0000664000175000017500000000032413652514273031234 0ustar zuulzuul00000000000000--- other: - | Adds a new method ``validate_rescue()`` to the boot interface to validate the node's properties related to the rescue operation. This method is called by the ``validate()`` method of the rescue interface. ironic-15.0.0/releasenotes/notes/opentack-baremetal-request-id-daa72b785eaaaa8d.yaml0000664000175000017500000000013113652514273030137 0ustar zuulzuul00000000000000--- features: - Appends the request_id as an ``Openstack-Request-Id`` header to the response. 
ironic-15.0.0/releasenotes/notes/no-instance-uuid-workaround-fc458deb168c7a8b.yaml0000664000175000017500000000024513652514273027561 0ustar zuulzuul00000000000000--- upgrade: - Removed the workaround in API allowing removing "instance_uuid" during cleaning. It was only required for Nova during introduction of cleaning. ironic-15.0.0/releasenotes/notes/min-sushy-version-change-3b697530e0c05dee.yaml0000664000175000017500000000046313652514273026677 0ustar zuulzuul00000000000000--- fixes: - Support for some hardware, including some Dell EMC servers, is broken when using the Redfish hardware type with sushy 1.9.0. The minimum version for the sushy library is now 2.0.0. See `story 2006702 `_ for more information. ironic-15.0.0/releasenotes/notes/fix-oneview-periodics-0f535fe7a0ad83cd.yaml0000664000175000017500000000021213652514273026403 0ustar zuulzuul00000000000000--- fixes: - Fixes a bug in the OneView driver where the periodic task to check if a node is in use by OneView may end prematurely. ironic-15.0.0/releasenotes/notes/add-kernel-params-redfish-72b87075465c87f6.yaml0000664000175000017500000000064413652514273026554 0ustar zuulzuul00000000000000--- features: - | Adds ``instance_info/kernel_append_params`` property support to ``redfish`` hardware type. If given, this property overrides ``[redfish]/kernel_append_params`` ironic option. The rationale for adding this property is to allow passing node-specific kernel parameters to instance kernel. One of the use-cases for this is to pass node static network configuration to the kernel. ironic-15.0.0/releasenotes/notes/add-secure-boot-suport-irmc-2c1f09271f96424d.yaml0000664000175000017500000000016713652514273027141 0ustar zuulzuul00000000000000--- features: - | Adds support to provision an instance in UEFI secure boot for ``irmc-pxe`` boot interface. 
ironic-15.0.0/releasenotes/notes/software-raid-4a88e6c5af9ea742.yaml0000664000175000017500000000124413652514273024677 0ustar zuulzuul00000000000000--- features: - | Adds support for software RAID via the generic hardware manager when using a Train release ``ironic-python-agent`` deployment or cleaning ramdisk. By means of the ``target_raid_config``, a single RAID-1 or one RAID-1 plus one RAID-N can be configured (where N can be 0, 1, or 1+0). The RAID is created/deleted during manual cleaning. Note that this initial implementation will use all available devices for the setup of the software RAID device(s). More information is available in the Ironic Administrator `documentation `_. ironic-15.0.0/releasenotes/notes/no-classic-ilo-7822af6821d2f1cc.yaml0000664000175000017500000000024413652514273024646 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated iLO classic drivers ``pxe_ilo``, ``iscsi_ilo`` and ``agent_ilo`` have been removed. Please use the ``ilo`` hardware type. ironic-15.0.0/releasenotes/notes/software-raid-with-uefi-5b88e6c5af9ea743.yaml0000664000175000017500000000012713652514273026600 0ustar zuulzuul00000000000000--- features: - | Adds support for bootable software RAID with UEFI boot mode. ironic-15.0.0/releasenotes/notes/bug-1592335-7c5835868fe364ea.yaml0000664000175000017500000000024113652514273023402 0ustar zuulzuul00000000000000--- fixes: - A node using the ``agent_ilo`` or ``iscsi_ilo`` driver now has its ``driver_info/ilo_deploy_iso`` field validated during node validation. ironic-15.0.0/releasenotes/notes/fix_pending_non_bios_job_execution-4b22e168ac915f4f.yaml0000664000175000017500000000051113652514273031127 0ustar zuulzuul00000000000000--- fixes: - | Fixes a bug in the ``idrac`` hardware type where, when executing the ``clear_job_queue`` clean step, pending non-BIOS config jobs (e.g. create/delete virtual disk) were not deleted before job execution. See bug `2006580 `_ for details. 
ironic-15.0.0/releasenotes/notes/add-snmp-read-write-community-names-7589a8d1899c142c.yaml0000664000175000017500000000055013652514273030605 0ustar zuulzuul00000000000000--- features: - | Adds new optional ``snmp_community_read`` and ``snmp_community_write`` properties to ``snmp`` driver configuration (specified via a node's ``driver_info`` field). If present, the value(s) will be used respectively for SNMP reads and/or writes to the PDU. When not present, ``snmp_community`` value will be used instead. ironic-15.0.0/releasenotes/notes/improve-conductor-shutdown-42687d8b9dac4054.yaml0000664000175000017500000000020013652514273027303 0ustar zuulzuul00000000000000--- fixes: - Shutdown of conductor process should take less time, as we do not wait for completion of all periodic tasks. ironic-15.0.0/releasenotes/notes/snmp-hardware-type-ee3d471cf5c596f4.yaml0000664000175000017500000000036113652514273025657 0ustar zuulzuul00000000000000--- features: - | Adds a new hardware type ``snmp`` for SNMP powered systems. It supports the following driver interfaces: * boot: ``pxe`` * deploy: ``iscsi``, ``direct`` * power: ``snmp`` * management: ``fake`` ironic-15.0.0/releasenotes/notes/dbsync-online_data_migration-edcf0b1cc3667582.yaml0000664000175000017500000000061213652514273027721 0ustar zuulzuul00000000000000--- upgrade: - The new ``ironic-dbsync online_data_migrations`` command should be run after each upgrade to ensure all DB records are converted to the newest format. It must be run before starting the software as part of a new upgrade to the next named release. For more information about this command, see https://docs.openstack.org/ironic/latest/cli/ironic-dbsync.html. ironic-15.0.0/releasenotes/notes/fix-ipxe-template-for-whole-disk-image-943da0311ca7aeb5.yaml0000664000175000017500000000024313652514273031347 0ustar zuulzuul00000000000000--- fixes: - Fixes an issue with the provided iPXE template where whole disk images could not be booted. 
See https://bugs.launchpad.net/ironic/+bug/1524403. ironic-15.0.0/releasenotes/notes/soft-power-operations-oneview-e7ac054668235998.yaml0000664000175000017500000000016113652514273027573 0ustar zuulzuul00000000000000--- features: - | Enables support for soft power off and soft reboot in the ``oneview`` hardware type. ironic-15.0.0/releasenotes/notes/bmc_reset-warm-9396ac444cafd734.yaml0000664000175000017500000000021313652514273024746 0ustar zuulzuul00000000000000--- fixes: - Fixes a problem that caused the bmc_reset() vendor passthru method from the IPMI drivers to always be executed as "warm". ironic-15.0.0/releasenotes/notes/ipmitool-vendor-3f0f52240ebbe489.yaml0000664000175000017500000000021113652514273025154 0ustar zuulzuul00000000000000--- features: - | The ``ipmi`` hardware type now supports the ``ipmitool`` vendor interface (similar to classic ipmitool drivers). ironic-15.0.0/releasenotes/notes/fast-track-deployment-f09a8b921b3aae36.yaml0000664000175000017500000000145013652514273026324 0ustar zuulzuul00000000000000--- features: - | Adds a new feature called `fast-track` which allows an operator to optionally configure the Bare Metal API Service and the Bare Metal conductor service to permit lookup and heartbeat for nodes that are in the process of being enrolled and created. These nodes can be left online after a process such as discovery. If ironic-python-agent has communicated with the Bare Metal Service API endpoint within the last `300` seconds, then the setup steps normally involved in preparing to launch a ramdisk on the node are skipped, along with power operations, enabling a bare metal node to undergo discovery through to deployment with a single power cycle. Fast track functionality may be enabled through the ``[deploy]fast_track`` option. 
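Operators enabling the fast-track behavior described above set the option in the conductor's configuration file. A minimal fragment follows; the file path is assumed to be the usual default location.

```ini
# /etc/ironic/ironic.conf (assumed default location)
[deploy]
# Permit lookup/heartbeat for nodes being enrolled and skip redundant
# ramdisk setup and power cycles when the agent checked in recently.
fast_track = true
```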
ironic-15.0.0/releasenotes/notes/add_detail_true_api_query-cb6944847830cd1a.yaml0000664000175000017500000000055713652514273027155 0ustar zuulzuul00000000000000--- features: - | Adds a ``?detail=`` boolean query to the API list endpoints to provide a more RESTful alternative to the existing ``/nodes/detail`` and similar endpoints. The default is False. Now these API requests are possible: * ``/nodes?detail=True`` * ``/ports?detail=True`` * ``/chassis?detail=True`` * ``/portgroups?detail=True`` ironic-15.0.0/releasenotes/notes/fix-tftp-master-path-config-77face94f5db9af7.yaml0000664000175000017500000000046613652514273027533 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where the master TFTP image cache could not be disabled. The configuration option ``[pxe]/tftp_master_path`` may now be set to the empty string to disable the cache. For more information, see story `2004608 `_. ironic-15.0.0/releasenotes/notes/oob-power-off-7bbdf5947ed24bf8.yaml0000664000175000017500000000057313652514273024677 0ustar zuulzuul00000000000000--- fixes: - Fixes a problem where some hardware/firmware (especially faulty ones) won't come back online after an in-band ACPI soft power off by adding a new driver property called "deploy_forces_oob_reboot" that can be set on the nodes being deployed by the IPA ramdisk. If the value of this property is True, Ironic will power cycle the node via out-of-band. ironic-15.0.0/releasenotes/notes/console-port-allocation-bb07c43e3890c54c.yaml0000664000175000017500000000062213652514273026600 0ustar zuulzuul00000000000000--- features: - | Adds a new configuration option ``[console]port_range``, which specifies the range of ports that can be consumed for the IPMI serial console. The default value is ``None`` for backwards compatibility. 
If the ``ipmi_terminal_port`` is not specified in the driver information for a node, a free port will be allocated from the configured port range for further use.ironic-15.0.0/releasenotes/notes/migrate_vif_port_id-5e1496638240933d.yaml0000664000175000017500000000107713652514273025565 0ustar zuulzuul00000000000000--- upgrade: - | ``ironic-dbsync online_data_migrations`` will migrate any port's and port group's extra['vif_port_id'] value to their internal_info['tenant_vif_port_id']. For API versions >= 1.28, the ability to attach/detach the VIF via the port's or port group's extra['vif_port_id'] will not be supported starting with the Stein release. Any out-of-tree network interface implementations that had a different behavior in support of attach/detach VIFs via the port or port group's extra['vif_port_id'] must be updated appropriately. ironic-15.0.0/releasenotes/notes/fix-boot-from-volume-for-iscsi-deploy-60bc0790ada62b26.yaml0000664000175000017500000000062113652514273031201 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue in boot from volume for ``iscsi`` deploy interface. Booting from a volume would fail for a node with the ``iscsi`` deploy interface because the pxelinux.cfg file for the MAC address wasn't created and the node would fail to boot. The pxelinux.cfg file is now created. See `bug 1714436 `_ for details. ironic-15.0.0/releasenotes/notes/hw-ifaces-periodics-af8c9b93ecca9fcd.yaml0000664000175000017500000000033713652514273026257 0ustar zuulzuul00000000000000--- fixes: - | Fixes collection of periodic tasks from hardware interfaces that are not used in any enabled classic drivers. See `bug 2001884 `_ for details. 
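The ``[console]port_range`` behavior described above (allocating a free port when ``ipmi_terminal_port`` is unset) can be sketched as a simple scan over the configured range. This is an illustrative simplification, not ironic's actual allocator, and the function name is an assumption.

```python
import socket

def allocate_console_port(start, end):
    """Sketch: return the first free TCP port in [start, end].

    Mimics the spirit of [console]port_range allocation for nodes
    without an explicit ipmi_terminal_port. Illustrative only --
    not ironic's actual implementation.
    """
    for port in range(start, end + 1):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
        except OSError:
            continue  # port is busy; try the next one in the range
        else:
            return port  # bind succeeded, so the port was free
        finally:
            sock.close()
    raise RuntimeError("no free port in range %d-%d" % (start, end))

port = allocate_console_port(10000, 10100)
print(port)
```

A real allocator must also track ports it has already handed out to other consoles, since a port released by the probe above could be claimed again before the console process starts.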
ironic-15.0.0/releasenotes/notes/bug-1702158-79bf57bd4d8087b6.yaml0000664000175000017500000000262213652514273023461 0ustar zuulzuul00000000000000--- fixes: - | Fixes database schema that could cause the wrong database engine to be utilized for the ``conductor_hardware_interfaces`` table, if the system is using MySQL prior to version 5.5 or the ``default_storage_engine`` option is set explicitly to 'MyISAM' in ``my.cnf``. In this case, a table could be created with MyISAM engine, and the foreign key constraint ``conductor_id(conductors.id)`` was ignored. See `bug 1702158 `_ for details. upgrade: - | Due to `bug 1702158 `_, the ``conductor_hardware_interfaces`` table could be created with MyISAM database engine, while all other tables in ironic database are using InnoDB engine. This could happen during initial installation, or upgrade to the Ocata release, if the system was using MySQL prior to version 5.5 or the ``default_storage_engine`` option was set explicitly to 'MyISAM' in ``my.cnf``. If this is the case, the ``conductor_hardware_interfaces`` table needs to be manually migrated to InnoDB, and the foreign key constraint needs to be re-created:: alter table conductor_hardware_interfaces engine='InnoDB'; alter table conductor_hardware_interfaces add constraint conductor_hardware_interfaces_ibfk_1 foreign key (conductor_id) references conductors(id); ironic-15.0.0/releasenotes/notes/deploy-steps-required-aa72cdf1c0ec0e84.yaml0000664000175000017500000000014213652514273026504 0ustar zuulzuul00000000000000--- upgrade: - | Removes compatibility with deploy interfaces that do not use deploy steps. ironic-15.0.0/releasenotes/notes/wipe-disk-before-deployment-0a8b9cede4a659e9.yaml0000664000175000017500000000076613652514273027533 0ustar zuulzuul00000000000000--- fixes: - Fixed a bug that was causing grub installation failure. 
If the disk was already coming with a partition table, the conductor was not able to wipe it properly and the new partition table would conflict with the old one. The issue was only impacting new nodes and installations with automated_clean disabled in the configuration. A disk instance without preserve_ephemeral is now purged before new deployment. See https://bugs.launchpad.net/ironic-lib/+bug/1550604 ironic-15.0.0/releasenotes/notes/oneview-timing-metrics-0b6c1b54e80eb683.yaml0000664000175000017500000000007213652514273026434 0ustar zuulzuul00000000000000--- features: - Adds timing metrics to OneView drivers. ironic-15.0.0/releasenotes/notes/no-ssh-drivers-6ee5ff4c3ecdd3fb.yaml0000664000175000017500000000057413652514273025313 0ustar zuulzuul00000000000000--- upgrade: - | SSH-based power and management driver interfaces were removed from ironic. The drivers ``pxe_ssh``, ``agent_ssh`` and ``fake_ssh`` are no longer available. Operators are required to ensure that these drivers are not used or enabled (in ``[DEFAULT]enabled_drivers`` configuration file option) in their ironic installation before upgrade. ironic-15.0.0/releasenotes/notes/online_data_migration_update_versions-ea03aff12d9c036f.yaml0000664000175000017500000000123313652514273032003 0ustar zuulzuul00000000000000--- critical: - The ``ironic-dbsync online_data_migrations`` command was not updating the objects to their latest versions, which could prevent upgrades from working (i.e. when running the next release's ``ironic-dbsync upgrade``). Objects are updated to their latest versions now when running that command. See `story 2004174 `_ for more information. upgrade: - If you are doing a minor version upgrade, please re-run the ``ironic-dbsync online_data_migrations`` command to properly update the versions of the Objects in the database. Otherwise, the next major upgrade may fail. 
ironic-15.0.0/releasenotes/notes/wsgi-applications-5d36cf2a8885a56d.yaml0000664000175000017500000000071013652514273025502 0ustar zuulzuul00000000000000--- upgrade: - A new WSGI application script ``ironic-api-wsgi`` is now available. It is auto-generated by ``pbr`` and provides the ability to serve the bare metal API using a WSGI server (for example Nginx and uWSGI or Apache with mod_wsgi). deprecations: - Using ``ironic/api/app.wsgi`` script is deprecated and it will be removed in Rocky release. Please switch to automatically generated ``ironic-api-wsgi`` script instead. ironic-15.0.0/releasenotes/notes/add-ssl-support-4547801eedba5942.yaml0000664000175000017500000000022013652514273025011 0ustar zuulzuul00000000000000--- features: - The ironic-api service now supports SSL when running the service directly (as opposed to behind mod_wsgi or similar). ironic-15.0.0/releasenotes/notes/bug-1596421-0cb8f59073f56240.yaml0000664000175000017500000000067413652514273023313 0ustar zuulzuul00000000000000--- upgrade: - Extends the ``instance_info`` column in the nodes table for MySQL/MariaDB from up to 64KiB to up to 4GiB (type is changed from TEXT to LONGTEXT). This upgrade will not be executed on PostgreSQL as its TEXT is unlimited. fixes: - The config drive passed to the node can now contain more than 64KiB in case of MySQL/MariaDB. For more details see `bug 1596421 `_. ironic-15.0.0/releasenotes/notes/ipmi-debug-1c7e090c6cc71903.yaml0000664000175000017500000000126613652514273023777 0ustar zuulzuul00000000000000--- features: - | Adds a new ``[ipmi]debug`` option that allows users to explicitly turn IPMI command debugging on, as opposed to relying upon the system debug setting ``[DEFAULT]debug``. Users wishing to continue to log this output should set ``[ipmi]debug`` to ``True`` in their ironic.conf. 
upgrade: - Debug logging control has been moved to the ``[ipmi]debug`` configuration setting as opposed to the "conductor" ``[DEFAULT]debug`` setting as the existing ``ipmitool`` output can be extremely misleading for users. Operators who wish to continue to log ``ipmitool`` verbose output in their logs should explicitly set the ``[ipmi]debug`` command to True. ironic-15.0.0/releasenotes/notes/ensure-unbind-flat-vifs-and-clear-macs-34eec149618e5964.yaml0000664000175000017500000000115513652514273031130 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where Neutron ports would be left with a baremetal MAC address associated after an instance is deleted from a baremetal host. This caused problems with MAC address conflicts in follow up deployments to the same baremetal host. `bug 2004428 `_. - | Fixes an issue where a flat Neutron port would be left with a host ID associated with it after an instance is deleted from a baremetal host. This caused problems with reusing the same port for a new instance as it is already bound to the old instance. ironic-15.0.0/releasenotes/notes/port-local-link-connection-network-type-71103d919e27fc5d.yaml0000664000175000017500000000042013652514273031565 0ustar zuulzuul00000000000000--- features: - | To allow use of the ``neutron`` network interface in combination with ``flat`` provider networks where no actual switch management is done. The ``local_link_connection`` field on ports is extended to support the ``network_type`` field. ironic-15.0.0/releasenotes/notes/bug-1607527-75885e145db62d69.yaml0000664000175000017500000000015613652514273023325 0ustar zuulzuul00000000000000--- fixes: - Fixes SSH driver validation when using a private key with a passphrase for authentication. 
ironic-15.0.0/releasenotes/notes/dhcp-provider-clean-dhcp-9352717903d6047e.yaml0000664000175000017500000000024113652514273026303 0ustar zuulzuul00000000000000--- other: - Adds a `clean_dhcp_opts` method to the DHCP provider base class, to give DHCP providers a method to clean up DHCP reservations if needed. ironic-15.0.0/releasenotes/notes/whole-disk-root-gb-9132e5a354e6cb9d.yaml0000664000175000017500000000032013652514273025447 0ustar zuulzuul00000000000000--- fixes: - | The ``instance_info[root_gb]`` property is no longer required for whole-disk images. It has always been ignored for them, but the validation code still expected it to be present. ironic-15.0.0/releasenotes/notes/no-classic-irmc-3a606045e87119b7.yaml0000664000175000017500000000024413652514273024600 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated classic drivers ``pxe_irmc``, ``agent_irmc`` and ``iscsi_irmc`` have been removed. Please use the ``irmc`` hardware type. ironic-15.0.0/releasenotes/notes/validate-ilo-certificates-3ab98bb8cfad7d60.yaml0000664000175000017500000000042513652514273027274 0ustar zuulzuul00000000000000--- features: - Adds support for validating the iLO SSL certificate in iLO drivers. A new configuration option ``[ilo]/ca_file`` is added to specify the iLO CA certificate file. If ``[ilo]/ca_file`` is specified, the iLO drivers will validate iLO SSL certificates. ironic-15.0.0/releasenotes/notes/add-configurable-ipmi-retriables-b6056f722f6ed3b0.yaml0000664000175000017500000000041013652514273030301 0ustar zuulzuul00000000000000--- other: - | This release allows configuring retryable ipmitool errors via ``[ipmi]additional_retryable_ipmi_errors`` so that, depending on the environment, operators can allow retrying ipmitool commands whose output contains the specified substrings.
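The substring matching that ``[ipmi]additional_retryable_ipmi_errors`` enables can be sketched like this. Everything here is illustrative: the function name and the built-in retryable message are assumptions, not ironic's actual code or default error list.

```python
# Assumed built-in retryable message, for illustration only.
BUILTIN_RETRYABLE = ("insufficient resources for session",)

def is_retryable_ipmi_error(stderr, additional=()):
    """Return True when ipmitool error output matches a retryable pattern.

    Sketch of the configurable behavior described above; ``additional``
    plays the role of [ipmi]additional_retryable_ipmi_errors.
    """
    patterns = BUILTIN_RETRYABLE + tuple(additional)
    return any(p in stderr for p in patterns)

print(is_retryable_ipmi_error("Error: node busy", additional=["node busy"]))
# → True
print(is_retryable_ipmi_error("Unknown failure"))
# → False
```

A command whose error output matches is retried; anything else fails immediately, which is why the option is environment-specific.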
ironic-15.0.0/releasenotes/notes/idrac-add-initial-redfish-support-27f27f18f3c1cd91.yaml0000664000175000017500000000407013652514273030435 0ustar zuulzuul00000000000000--- features: - | Adds initial ``idrac`` hardware type support of interface implementations that utilize the Redfish out-of-band (OOB) management protocol and are compatible with the integrated Dell Remote Access Controller (iDRAC) baseboard management controller (BMC), presently those of the management and power hardware interfaces. They are named ``idrac-redfish``. Introduces a new name for the ``idrac`` interface implementations, ``idrac-wsman``, and deprecates ``idrac``. They both use the Web Services Management (WS-Man) OOB management protocol. The ``idrac`` hardware type declares support for those new interface implementations, in addition to all interface implementations it has been supporting. The priority order of supported interfaces remains the same. Interface implementations which rely on WS-Man continue to have the highest priority, and the new ``idrac-wsman`` is listed before the deprecated ``idrac``. It now supports the following interface implementations, which are listed in priority order from highest to lowest: * bios: ``no-bios`` * boot: ``ipxe``, ``pxe`` * console: ``no-console`` * deploy: ``iscsi``, ``direct``, ``ansible``, ``ramdisk`` * inspect: ``idrac-wsman``, ``idrac``, ``inspector``, ``no-inspect`` * management: ``idrac-wsman``, ``idrac``, ``idrac-redfish`` * network: ``flat``, ``neutron``, ``noop`` * power: ``idrac-wsman``, ``idrac``, ``idrac-redfish`` * raid: ``idrac-wsman``, ``idrac``, ``no-raid`` * rescue: ``no-rescue``, ``agent`` * storage: ``noop``, ``cinder``, ``external`` * vendor: ``idrac-wsman``, ``idrac``, ``no-vendor`` For more information, see `story 2004592 `_. deprecations: - | The ``idrac`` interface implementation name is deprecated in favor of a new name, ``idrac-wsman``, and may be removed in a future release. 
A deprecation warning will be logged for every loaded ``idrac`` interface implementation. Use ``idrac-wsman`` instead. ironic-15.0.0/releasenotes/notes/allocation-added-owner-policy-c650074e68d03289.yaml0000664000175000017500000000052113652514273027431 0ustar zuulzuul00000000000000--- features: - | Adds ``is_allocation_owner`` policy rule, which can be applied to allocation get/update/delete rules. Also adds ``baremetal:allocation:list`` and ``baremetal:allocation:list_all`` rules for listing owned allocations and all allocations. Default rules are unaffected, so default behavior is unchanged. ironic-15.0.0/releasenotes/notes/remove-discoverd-group-03eaf75e9f94d7be.yaml0000664000175000017500000000024413652514273026621 0ustar zuulzuul00000000000000--- upgrade: - Removes support for the deprecated "discoverd" group for inspection options. Configuration files should use the "inspector" group instead. ironic-15.0.0/releasenotes/notes/fix-bug-1675529-479357c217819420.yaml0000664000175000017500000000176013652514273023671 0ustar zuulzuul00000000000000--- deprecations: - | Configuration option ``[ipmi]/retry_timeout`` is deprecated in favor of these new options: * ``[ipmi]/command_retry_timeout``: timeout value to wait for an IPMI command to complete (be acknowledged by the baremetal node) * ``[conductor]/power_state_change_timeout``: timeout value to wait for a power operation to complete, so that the baremetal node is in the desired new power state fixes: - | Prevents the IPMI driver from needlessly checking status of the baremetal node if a power change action fails. Additionally, stops retrying power actions and power status polls on receipt of a non-retryable error from ipmitool. For more information, see `bug 1675529 `_. A new configuration option ``[conductor]/power_state_change_timeout`` can be used to specify how many seconds to wait for a baremetal node to change the power state when a power action is requested. 
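The ``[conductor]/power_state_change_timeout`` behavior described in the note above amounts to a bounded polling loop. The sketch below is a simplified illustration; ironic's real loop also stops early on non-retryable ipmitool errors, and the injectable ``clock``/``sleep`` parameters exist only to make the sketch easy to exercise.

```python
import time

def wait_for_power_state(get_state, target, timeout,
                         interval=1.0, clock=time.monotonic,
                         sleep=time.sleep):
    """Poll until get_state() reports ``target`` or ``timeout`` seconds pass.

    Sketch of what [conductor]/power_state_change_timeout bounds;
    not ironic's actual implementation.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if get_state() == target:
            return True
        sleep(interval)
    return False

# Exercise the sketch with a fake clock instead of real sleeping.
states = iter(["power off", "power off", "power on"])
fake_now = [0.0]
def fake_clock():
    return fake_now[0]
def fake_sleep(seconds):
    fake_now[0] += seconds

print(wait_for_power_state(lambda: next(states), "power on",
                           timeout=10, clock=fake_clock, sleep=fake_sleep))
# → True
```

Returning ``False`` (timeout) is the case the configuration option controls: raising the value gives slow BMCs more time before the power action is declared failed.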
ironic-15.0.0/releasenotes/notes/drac-list-unfinished-jobs-10400419b6bc3c6e.yaml0000664000175000017500000000041513652514273026704 0ustar zuulzuul00000000000000--- features: - Adds ``list_unfinished_jobs`` method to the vendor-passthru interface of the DRAC driver. It provides a way to check the status of the remote config job after a BIOS configuration change was submitted using the ``set_bios_config`` method. ironic-15.0.0/releasenotes/notes/any-wsgi-8d6ccb0590104146.yaml0000664000175000017500000000036113652514273023422 0ustar zuulzuul00000000000000--- fixes: - | Makes ``ironic.api.wsgi`` compatible with WSGI containers that cannot use an executable WSGI entry point. For example, with gunicorn:: gunicorn -b 0.0.0.0:6385 'ironic.api.wsgi:initialize_wsgi_app(argv=[])' ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000ironic-15.0.0/releasenotes/notes/remove-deprecated-build-instance-info-for-deploy-2fe165fc018010e4.yamlironic-15.0.0/releasenotes/notes/remove-deprecated-build-instance-info-for-deploy-2fe165fc018010e4.y0000664000175000017500000000052013652514273032545 0ustar zuulzuul00000000000000--- other: - | The method ``build_instance_info_for_deploy()`` from the ``ironic.drivers.modules.agent`` module was deprecated in the Ocata cycle (version 7.0.0). It is no longer available. Please use the method ``build_instance_info_for_deploy()`` from the ``ironic.drivers.modules.deploy_utils`` module instead. ironic-15.0.0/releasenotes/notes/agent_partition_image-48a03700f41a3980.yaml0000664000175000017500000000012113652514273026126 0ustar zuulzuul00000000000000--- features: - Adds support for partition images for agent based drivers. ironic-15.0.0/releasenotes/notes/ipmi-noop-mgmt-8fad89dc2b4665b8.yaml0000664000175000017500000000046713652514273025012 0ustar zuulzuul00000000000000--- features: - | Adds support for the new ``noop`` interface to the ``ipmi`` hardware type. 
This interface targets hardware that does not correctly change boot mode via the IPMI protocol. Using it requires pre-configuring the boot order on a node to try PXE, then fall back to local booting. ironic-15.0.0/releasenotes/notes/fix-ipa-ephemeral-partition-1f1e020727a49078.yaml0000664000175000017500000000030313652514273027110 0ustar zuulzuul00000000000000--- fixes: - Fixed a bug where the ironic python agent ramdisk was not creating an ephemeral partition because the ephemeral partition size was not being passed correctly to the agent. ironic-15.0.0/releasenotes/notes/fix-sensors-storage-ed5d5bbda9b46645.yaml0000664000175000017500000000041113652514273026116 0ustar zuulzuul00000000000000--- fixes: - | Fixes drive sensor information collection in the ``redfish`` management interface. Prior to this fix, the wrong Redfish schema was used for the Drive resource, which caused an exception and ultimately a sensor data collection failure. ironic-15.0.0/releasenotes/notes/bug-1696296-a972c8d879b98940.yaml0000664000175000017500000000072113652514273023347 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where an ironic-conductor service was deemed dead because the service could not report its heartbeat due to the database connection experiencing an unexpected failure. Full tracebacks of these exceptions are now logged, and if the database connection recovers in a reasonable amount of time the service will still be available. See `bug 1696296 `_ for details. ironic-15.0.0/releasenotes/notes/collect-deployment-logs-2ec1634847c3f6a5.yaml0000664000175000017500000000144713652514273026532 0ustar zuulzuul00000000000000--- features: - | Adds support for collecting deployment logs from the IPA ramdisk. Five new configuration options were added: * ``[agent]/deploy_logs_collect`` * ``[agent]/deploy_logs_storage_backend`` * ``[agent]/deploy_logs_local_path`` * ``[agent]/deploy_logs_swift_container`` * ``[agent]/deploy_logs_swift_days_to_expire``.
upgrade: - Collecting logs on deploy failure is enabled by default and the logs will be saved to the local disk at the location specified by the configuration option ``[agent]/deploy_logs_local_path`` (by default, ``/var/log/ironic/deploy``). Operators upgrading may want to disable this feature, enable some form of rotation for the logs, or change the configuration to store the logs in Swift to avoid disk space problems. ironic-15.0.0/releasenotes/notes/flag_always_reboot-62468a7058b58823.yaml0000664000175000017500000000061013652514273025407 0ustar zuulzuul00000000000000--- features: - | Adds a boolean flag called ``force_persistent_boot_device`` into a node's ``driver_info`` to enable persistent behavior when you set the boot device during deploy and cleaning operations. This flag will override a non-persistent behavior in the cleaning and deploy process. For more information, see https://bugs.launchpad.net/ironic/+bug/1703945. ironic-15.0.0/releasenotes/notes/bug-1506657-3bcb4ef46623124d.yaml0000664000175000017500000000100013652514273023424 0ustar zuulzuul00000000000000--- upgrade: - Adds a new configuration option, hash_ring_reset_interval, to control how often the conductor's view of the hash ring is reset. This has a default of 180 seconds, the same as the default for the sync_local_state periodic task that used to handle this reset. critical: - Fixes a bug where the conductor's view of the hash ring was never refreshed if the sync_local_state periodic task was disabled. For more info, see https://bugs.launchpad.net/ironic/+bug/1506657. ironic-15.0.0/releasenotes/notes/classic-drivers-deprecation-de464065187d4c14.yaml0000664000175000017500000000120413652514273027263 0ustar zuulzuul00000000000000--- deprecations: - | The classic drivers, as well as the ``enabled_drivers`` configuration option, are now deprecated and may be removed in the Rocky release. A deprecation warning will be logged for every loaded classic driver.
Check `the migration guide `_ for information on how to update your nodes. .. note:: Check `the classic drivers future specification `_ for technical information behind this deprecation. ironic-15.0.0/releasenotes/notes/ipmi-cmd-for-ipmi-consoles-2e1104f22df3efcd.yaml0000664000175000017500000000014413652514273027223 0ustar zuulzuul00000000000000fixes: - Fixes a bug with incorrect base socat command, which prevented the usage of console. ironic-15.0.0/releasenotes/notes/port-0-is-valid-d7188af3be6f3ecb.yaml0000664000175000017500000000032613652514273025112 0ustar zuulzuul00000000000000--- fixes: - Fixes the issue of port number 0 (zero) being considered invalid (`bug 1729628 `_). Zero is a valid port number and is now recognized as such. ironic-15.0.0/releasenotes/notes/add-deploy-steps-ilo-bios-interface-c73152269701ef80.yaml0000664000175000017500000000030513652514273030442 0ustar zuulzuul00000000000000--- features: - | Adds support for deploy steps to ``bios`` interface of ``ilo`` hardware type. The methods ``factory_reset`` and ``apply_configuration`` can be used as deploy steps. ironic-15.0.0/releasenotes/notes/snmp-outlet-validate-ffbe8e6687172efc.yaml0000664000175000017500000000011413652514273026271 0ustar zuulzuul00000000000000--- fixes: - Adds validation of ``snmp_outlet`` parameter to SNMP driver. ironic-15.0.0/releasenotes/notes/deprecate-global-region-4dbea91de71ebf59.yaml0000664000175000017500000000064313652514273026740 0ustar zuulzuul00000000000000--- deprecations: - | Configuration option ``[keystone]/region_name`` is deprecated and will be ignored in the Rocky release. Instead, provide per-service ``region_name`` option in the following configuration file sections: ``[service_catalog]`` (for bare metal API endpoint discovery from keystone service catalog), ``[glance]``, ``[neutron]``, ``[cinder]``, ``[inspector]`` and ``[swift]``. 
ironic-15.0.0/releasenotes/notes/ipxe-dhcp-b799bc326cd2529a.yaml0000664000175000017500000000024313652514273023721 0ustar zuulzuul00000000000000--- fixes: - Remove "dhcp" command from the default iPXE script. It is redundant, and may even break booting when the provisioning NIC is not the first one. ironic-15.0.0/releasenotes/notes/change-default-boot-option-to-local-8c326077770ab672.yaml0000664000175000017500000000047513652514273030454 0ustar zuulzuul00000000000000--- upgrade: - | The default value of ``[deploy]/default_boot_option`` is changed from ``netboot`` to ``local``. - Due to the default boot option change, partition images without ``grub2`` will be unable to be deployed without the ``boot_option`` for the node to be explicitly set to ``netboot``. ironic-15.0.0/releasenotes/notes/oslo-i18n-optional-76bab4d2697c6f94.yaml0000664000175000017500000000025013652514273025423 0ustar zuulzuul00000000000000--- upgrade: - | The dependency on ``oslo.i18n`` is now optional. If you would like messages from ironic to be translated, you need to install it explicitly. ironic-15.0.0/releasenotes/notes/conductor-power-sync-timeout-extension-fa5e7b5fdd679d84.yaml0000664000175000017500000000064313652514273032024 0ustar zuulzuul00000000000000--- other: - | The ``[conductor]power_state_change_timeout`` default value has been extended to ``60`` seconds from ``30`` seconds. This is due to some API interfaces with Redfish, may cache the power state and thus may take longer than thirty seconds to update after a change has been requested. Please see `here `_ for more information. ironic-15.0.0/releasenotes/notes/migrate_to_hardware_types-0c85c6707c4f296d.yaml0000664000175000017500000000272013652514273027221 0ustar zuulzuul00000000000000--- upgrade: - | Adds new data migration ``migrate_to_hardware_types`` that will try to migrate nodes from classic drivers to hardware types on upgrade. Nodes that cannot be migrated are skipped. 
This may happen due to one of these reasons: * migration is not implemented for the classic driver, * the matching hardware type is not enabled, * one or more matching hardware interfaces are not enabled. In the latter case, the new migration command line option ``reset_unsupported_interfaces`` can be used to reset optional interfaces (all except for ``boot``, ``deploy``, ``management`` and ``power``) to their no-op implementations (e.g. ``no-inspect``) if the matching implementation is not enabled. Use it like:: ironic-dbsync online_data_migrations --option migrate_to_hardware_types.reset_unsupported_interfaces=true This migration can be repeated several times to migrate skipped nodes after the configuration is changed. other: - | A classic driver implementation can now provide matching hardware type and interfaces to enable automatic migration to hardware types. See `the specification `_ for an explanation on how to do it. .. note:: This feature will only be available until the classic drivers support is removed (presumably in the Rocky release). ironic-15.0.0/releasenotes/notes/fix-agent-clean-up-9a25deb85bc53d9b.yaml0000664000175000017500000000023113652514273025561 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue wherein an agent-based deploy did not clean up the instance-related configuration done on the Ironic node. ironic-15.0.0/releasenotes/notes/oneview-onetime-boot-64a68e135a45f5e2.yaml0000664000175000017500000000022513652514273026031 0ustar zuulzuul00000000000000--- fixes: - Fixes the OneView driver to make the ``set_boot_device`` method work as expected with the ``persistent`` option set to ``False``. ironic-15.0.0/releasenotes/notes/allow-set-interface-to-node-in-available-bd6f695620c2d77f.yaml0000664000175000017500000000014313652514273031577 0ustar zuulzuul00000000000000--- features: - | Allows updating hardware interfaces on nodes in the ``available`` state.
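The ``reset_unsupported_interfaces`` fallback described in the migration note above can be sketched like this. This is a simplified model under stated assumptions: the real migration inspects driver and node objects rather than plain dicts, and the required-interface set below comes from the note's ``boot``/``deploy``/``management``/``power`` exceptions.

```python
# Interfaces the migration will never reset, per the note above.
REQUIRED = {"boot", "deploy", "management", "power"}

def migrate_interfaces(node_ifaces, enabled, reset_unsupported=False):
    """Map a node's interface assignments during hardware-type migration.

    Optional interfaces whose implementation is not enabled fall back to
    the matching no-op (e.g. ``no-inspect``) when reset_unsupported is
    True; otherwise the node is skipped (returns None) so the migration
    can be re-run after the configuration changes. Sketch only.
    """
    migrated = {}
    for iface, impl in node_ifaces.items():
        if impl in enabled.get(iface, ()):
            migrated[iface] = impl
        elif reset_unsupported and iface not in REQUIRED:
            migrated[iface] = "no-" + iface
        else:
            return None  # skipped: enable the interface and re-run
    return migrated

print(migrate_interfaces(
    {"power": "ipmitool", "inspect": "inspector"},
    {"power": ["ipmitool"], "inspect": ["no-inspect"]},
    reset_unsupported=True))
# → {'power': 'ipmitool', 'inspect': 'no-inspect'}
```

Returning ``None`` for required interfaces models why some nodes stay on classic drivers until the matching hardware interfaces are enabled.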
ironic-15.0.0/releasenotes/notes/fix-xclarity-management-defect-ec5af0cc6d1045d9.yaml0000664000175000017500000000044513652514273030157 0ustar zuulzuul00000000000000--- fixes: - | Fixes an issue where ``xclarity`` management interface fails to get boot order. Now the driver correctly gets boot device and this has been verified in the 3rd party CI. See story `2004576 `_ for details. ironic-15.0.0/releasenotes/notes/implement-policy-in-code-cbb0216ef5f8224f.yaml0000664000175000017500000000207613652514273026720 0ustar zuulzuul00000000000000--- features: - | RESTful access to every API resource may now be controlled by adjusting policy settings. Defaults are set in code, and remain backwards compatible with the previously-included policy.json file. Two new roles are checked by default, "baremetal_admin" and "baremetal_observer", though these may be replaced or overridden by configuration. The "baremetal_observer" role grants read-only access to Ironic's API. security: - | Previously, access to Ironic's REST API was "all or nothing". With this release, it is now possible to restrict read and write access to API resources to specific cloud roles. upgrade: - | During an upgrade, it is recommended that all deployers re-evaluate the settings in their ``/etc/ironic/policy.json`` file. This file should now be used only to override default configuration, such as by limiting access to the ironic service to specific tenants or restricting access to specific API endpoints. A ``policy.json.sample`` file is provided that lists all supported policies. ironic-15.0.0/releasenotes/source/0000775000175000017500000000000013652514443017102 5ustar zuulzuul00000000000000ironic-15.0.0/releasenotes/source/mitaka.rst0000664000175000017500000000032513652514273021103 0ustar zuulzuul00000000000000=========================================== Mitaka Series (4.3.0 - 5.1.x) Release Notes =========================================== .. 
release-notes:: :branch: origin/stable/mitaka :earliest-version: 4.3.0 ironic-15.0.0/releasenotes/source/train.rst0000664000175000017500000000026413652514273020754 0ustar zuulzuul00000000000000============================================ Train Series (12.2.0 - 13.0.x) Release Notes ============================================ .. release-notes:: :branch: stable/train ironic-15.0.0/releasenotes/source/liberty.rst0000664000175000017500000002032613652514273021312 0ustar zuulzuul00000000000000============================================ Liberty Series (4.0.0 - 4.2.5) Release Notes ============================================ .. release-notes:: :branch: origin/stable/liberty :earliest-version: 4.2.2 .. _V4-2-1: 4.2.1 ===== This release is a patch release on top of 4.2.0, as part of the stable Liberty series. Full details are available on Launchpad: https://launchpad.net/ironic/liberty/4.2.1. * Import Japanese translations - our first major translation addition! * Fix a couple of locale issues with deployments, when running on a system using the Japanese locale .. _V4-2-0: 4.2.0 ===== This release is proposed as the stable Liberty release for Ironic, and brings with it some bug fixes and small features. Full release details are available on Launchpad: https://launchpad.net/ironic/liberty/4.2.0. * Deprecated the bash ramdisk The older bash ramdisk built by diskimage-builder is now deprecated and support will be removed at the beginning of the "N" development cycle. Users should migrate to a ramdisk running ironic-python-agent, which now also supports the pxe_* drivers that the bash ramdisk was responsible for. 
For more info on building an ironic-python-agent ramdisk, see: https://docs.openstack.org/developer/ironic/deploy/install-guide.html#building-or-downloading-a-deploy-ramdisk-image * Raised API version to 1.14 * 1.12 allows setting RAID properties for a node; however support for putting this configuration on a node is not yet implemented for in-tree drivers; this will be added in a future release. * 1.13 adds a new 'abort' verb to the provision state API. This may be used to abort cleaning for nodes in the CLEANWAIT state. * 1.14 makes the following endpoints discoverable in the API: * /v1/nodes//states * /v1/drivers//properties * Implemented a new Boot interface for drivers This change enhances the driver interface for driver authors, and should not affect users of Ironic, by splitting control of booting a server from the DeployInterface. The BootInterface is responsible for booting an image on a server, while the DeployInterface is responsible for deploying a tenant image to a server. This has been implemented in most in-tree drivers, and is a backwards-compatible change for out-of-tree drivers. The following in-tree drivers will be updated in a forth-coming release: * agent_ilo * agent_irmc * iscsi_ilo * iscsi_irmc * Implemented a new RAID interface for drivers This change enhances the driver interface for driver authors. Drivers may begin implementing this interface to support RAID configuration for nodes. This is not yet implemented for any in-tree drivers. * Image size is now checked before deployment with agent drivers The agent must download the tenant image in full before writing it to disk. As such, the server being deployed must have enough RAM for running the agent and storing the image. This is now checked before Ironic tells the agent to deploy an image. An optional config [agent]memory_consumed_by_agent is provided. When Ironic does this check, this config option may be set to factor in the amount of RAM to reserve for running the agent. 
* Added Cisco IMC driver This driver supports managing Cisco UCS C-series servers through the CIMC API, rather than IPMI. Documentation is available at: https://docs.openstack.org/developer/ironic/drivers/cimc.html * iLO virtual media drivers can work without Swift iLO virtual media drivers (iscsi_ilo and agent_ilo) can work standalone without Swift, by configuring an HTTP(S) server for hosting the deploy/boot images. A web server needs to be running on every conductor node and needs to be configured in ironic.conf. iLO driver documentation is available at: https://docs.openstack.org/developer/ironic/drivers/ilo.html Known issues ~~~~~~~~~~~~ * Out of tree drivers may be broken by this release. The AgentDeploy and ISCSIDeploy (formerly known as PXEDeploy) classes now depend on drivers to utilize an instance of a BootInterface. For drivers that exist out of tree, that use these deploy classes, an error will be thrown during deployment. There is a simple fix. For drivers that expect these deploy classes to handle PXE booting, one can add the following code to the driver's `__init__` method:: from ironic.drivers.modules import pxe class YourDriver(...): def __init__(self): # ... self.boot = pxe.PXEBoot() A driver that handles booting itself (for example, a driver that implements booting from virtual media) should use the following to make calls to the boot interface a no-op:: from ironic.drivers.modules import fake class YourDriver(...) def __init__(self): # ... self.boot = fake.FakeBoot() Additionally, as mentioned before, `ironic.drivers.modules.pxe.PXEDeploy` has moved to `ironic.drivers.modules.iscsi_deploy.ISCSIDeploy`, which will break drivers that use this class. The Ironic team apologizes profusely for this inconvenience. .. _V4-1-0: 4.1.0 ===== This brings some bug fixes and small features on top of Ironic 4.0.0. Major changes are listed below, and full release details are available on Launchpad: https://launchpad.net/ironic/liberty/4.1.0. 
* Added CORS support

  The Ironic API now has support for CORS requests, which may be used by, for
  example, web browser-based clients. This is configured in the [cors]
  section of ironic.conf.

* Removed deprecated 'admin_api' policy rule

* Deprecated the 'parallel' option to periodic task decorator

.. _V4-0-0:

4.0.0 First semver release
==========================

This is the first semver-versioned release of Ironic, created during the
OpenStack "Liberty" development cycle. It marks a pivot in our versioning
schema from date-based versioning; the previous released version was 2015.1.
Full release details are available on Launchpad:
https://launchpad.net/ironic/liberty/4.0.0.

* Raised API version to 1.11

  - v1.7 exposes a new 'clean_step' property on the Node resource.
  - v1.8 and v1.9 improve query and filter support
  - v1.10 fixes Node logical names to support all `RFC 3986`_ unreserved
    characters
  - v1.11 changes the default state of newly created Nodes from AVAILABLE to
    ENROLL

* Support for the new ENROLL workflow during Node creation

  Previously, all Nodes were created in the "available" provision state -
  before management credentials were validated, hardware was burned in, etc.
  This could lead to workloads being scheduled to Nodes that were not yet
  ready for them. Beginning with API v1.11, newly created Nodes begin in the
  ENROLL state, and must be "managed" and "provided" before they are made
  available for provisioning. API clients must be updated to handle the new
  workflow when they begin sending the X-OpenStack-Ironic-API-Version header
  with a value >= 1.11.

* Migrations from Nova "baremetal" have been removed

  After a deprecation period, the scripts and support for migrating from the
  old Nova "baremetal" driver to the new Nova "ironic" driver have been
  removed from Ironic's tree.

* Removal of deprecated vendor driver methods

  A new @passthru decorator was introduced to the driver API in a previous
  release.
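The version-gated default provision state described above can be illustrated with a small sketch. This is a simplification of the documented behavior, not ironic's actual code; the helper name is made up:

```python
def default_provision_state(api_version):
    """Return the provision state a newly created Node starts in.

    Mirrors the behavior described above: clients requesting API
    version 1.11 or later get the new ENROLL workflow, while older
    clients keep the legacy "available" starting state.
    (Illustrative only - not ironic's real implementation.)
    """
    # api_version is a (major, minor) tuple, e.g. (1, 11) for the
    # "X-OpenStack-Ironic-API-Version: 1.11" header.
    return "enroll" if api_version >= (1, 11) else "available"


# A client sending API version 1.11 or higher gets the ENROLL workflow:
default_provision_state((1, 11))  # -> "enroll"
# An older client (e.g. 1.10) keeps the pre-Liberty behavior:
default_provision_state((1, 10))  # -> "available"
```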
In this release, support for vendor_passthru and driver_vendor_passthru
methods has been removed. All in-tree drivers have been updated. Any
out-of-tree drivers which did not update to the @passthru decorator during
the previous release will need to do so to be compatible with this release.

* Introduce new BootInterface to the Driver API

  Drivers may optionally add a new BootInterface. This is merely a
  refactoring of the Driver API to support future improvements.

* Several hardware drivers have been added or enhanced

  - Add OCS Driver
  - Add UCS Driver
  - Add Wake-On-Lan Power Driver
  - ipmitool driver supports IPMI v1.5
  - Add support to SNMP driver for "APC MasterSwitchPlus" series PDUs
  - pxe_ilo driver now supports UEFI Secure Boot (previous releases of the
    iLO driver only supported this for agent_ilo and iscsi_ilo)
  - Add Virtual Media support to iRMC Driver
  - Add BIOS config to DRAC Driver
  - PXE drivers now support GRUB2

.. _`RFC 3986`: https://www.ietf.org/rfc/rfc3986.txt

ironic-15.0.0/releasenotes/source/_templates/0000775000175000017500000000000013652514443021237 0ustar zuulzuul00000000000000
ironic-15.0.0/releasenotes/source/_templates/.placeholder0000664000175000017500000000000013652514273023511 0ustar zuulzuul00000000000000
ironic-15.0.0/releasenotes/source/_static/0000775000175000017500000000000013652514443020530 0ustar zuulzuul00000000000000
ironic-15.0.0/releasenotes/source/_static/.placeholder0000664000175000017500000000000013652514273023002 0ustar zuulzuul00000000000000
ironic-15.0.0/releasenotes/source/pike.rst0000664000175000017500000000031113652514273020560 0ustar zuulzuul00000000000000
==========================================
Pike Series (8.0.0 - 9.1.x) Release Notes
==========================================

.. release-notes::
   :branch: stable/pike
   :earliest-version: 8.0.0

ironic-15.0.0/releasenotes/source/rocky.rst0000664000175000017500000000027113652514273020764 0ustar zuulzuul00000000000000
==============================================
Rocky Series (11.0.0 - 11.1.x) Release Notes
==============================================

.. release-notes::
   :branch: stable/rocky

ironic-15.0.0/releasenotes/source/newton.rst0000664000175000017500000000032513652514273021147 0ustar zuulzuul00000000000000
===========================================
Newton Series (6.0.0 - 6.2.x) Release Notes
===========================================

.. release-notes::
   :branch: origin/stable/newton
   :earliest-version: 6.0.0

ironic-15.0.0/releasenotes/source/ocata.rst0000664000175000017500000000032113652514273020720 0ustar zuulzuul00000000000000
==========================================
Ocata Series (7.0.0 - 7.0.x) Release Notes
==========================================

.. release-notes::
   :branch: origin/stable/ocata
   :earliest-version: 7.0.0

ironic-15.0.0/releasenotes/source/locale/0000775000175000017500000000000013652514443020341 5ustar zuulzuul00000000000000
ironic-15.0.0/releasenotes/source/locale/en_GB/0000775000175000017500000000000013652514443021313 5ustar zuulzuul00000000000000
ironic-15.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/0000775000175000017500000000000013652514443023100 5ustar zuulzuul00000000000000
ironic-15.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po0000664000175000017500000014414513652514273026143 0ustar zuulzuul00000000000000
# Andi Chandler , 2017. #zanata
# Andi Chandler , 2018.
#zanata msgid "" msgstr "" "Project-Id-Version: ironic\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2019-03-21 20:13+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-10-11 09:38+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en_GB\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "" "\"Dynamic drivers\" is a revamp of how drivers are composed. Rather than a " "huge matrix of hardware drivers supporting different things, now users " "select a \"hardware type\" for a machine, and can independently change the " "deploy method, console manager, RAID management, power control interface, " "etc. This is experimental, as not all \"classic\" drivers have a dynamic " "equivalent yet, but we encourage users to try this feature out and submit " "feedback." msgstr "" "\"Dynamic drivers\" is a revamp of how drivers are composed. Rather than a " "huge matrix of hardware drivers supporting different things, now users " "select a \"hardware type\" for a machine, and can independently change the " "deploy method, console manager, RAID management, power control interface, " "etc. This is experimental, as not all \"classic\" drivers have a dynamic " "equivalent yet, but we encourage users to try this feature out and submit " "feedback." msgid "" "\"Port group\" support allows users to take advantage of bonded network " "interfaces." msgstr "" "\"Port group\" support allows users to take advantage of bonded network " "interfaces." 
msgid "" "**WARNING: don't set the option ``[DEFAULT]/default_network_interface`` " "before upgrading to this release without reading the upgrade notes about it, " "due to data migrations depending on the value.**" msgstr "" "**WARNING: don't set the option ``[DEFAULT]/default_network_interface`` " "before upgrading to this release without reading the upgrade notes about it, " "due to data migrations depending on the value.**" msgid "" "*python-scciclient* of version 0.6.0 or newer is required by the ``irmc`` " "hardware type to support new out-of-band inspection capabilities. If an " "older version is used, the new capabilities will not be discovered." msgstr "" "*python-scciclient* of version 0.6.0 or newer is required by the ``irmc`` " "hardware type to support new out-of-band inspection capabilities. If an " "older version is used, the new capabilities will not be discovered." msgid "10.0.0" msgstr "10.0.0" msgid "10.1.0" msgstr "10.1.0" msgid "10.1.1" msgstr "10.1.1" msgid "10.1.2" msgstr "10.1.2" msgid "10.1.3" msgstr "10.1.3" msgid "10.1.4" msgstr "10.1.4" msgid "10.1.6" msgstr "10.1.6" msgid "11.0.0" msgstr "11.0.0" msgid "11.1.0" msgstr "11.1.0" msgid "4.2.2" msgstr "4.2.2" msgid "4.2.3" msgstr "4.2.3" msgid "4.2.4" msgstr "4.2.4" msgid "4.2.5" msgstr "4.2.5" msgid "4.3.0" msgstr "4.3.0" msgid "443, 80" msgstr "443, 80" msgid "5.0.0" msgstr "5.0.0" msgid "5.1.0" msgstr "5.1.0" msgid "5.1.1" msgstr "5.1.1" msgid "5.1.2" msgstr "5.1.2" msgid "5.1.3" msgstr "5.1.3" msgid "6.0.0" msgstr "6.0.0" msgid "6.1.0" msgstr "6.1.0" msgid "6.2.0" msgstr "6.2.0" msgid "6.2.2" msgstr "6.2.2" msgid "6.2.3" msgstr "6.2.3" msgid "6.2.4" msgstr "6.2.4" msgid "6.3.0" msgstr "6.3.0" msgid "7.0.0" msgstr "7.0.0" msgid "7.0.1" msgstr "7.0.1" msgid "7.0.2" msgstr "7.0.2" msgid "7.0.3" msgstr "7.0.3" msgid "7.0.4" msgstr "7.0.4" msgid "7.0.5" msgstr "7.0.5" msgid "7.0.7" msgstr "7.0.7" msgid "8.0.0" msgstr "8.0.0" msgid "9.0.0" msgstr "9.0.0" msgid "9.0.1" msgstr "9.0.1" msgid 
"9.1.0" msgstr "9.1.0" msgid "9.1.1" msgstr "9.1.1" msgid "9.1.2" msgstr "9.1.2" msgid "9.1.3" msgstr "9.1.3" msgid "9.1.4" msgstr "9.1.4" msgid "9.1.5" msgstr "9.1.5" msgid "9.2.0" msgstr "9.2.0" msgid "" "A ``[DEFAULT]/enabled_network_interfaces`` option (which must be set for " "both ironic-api and ironic-conductor services) controls which network " "interfaces are available for use." msgstr "" "A ``[DEFAULT]/enabled_network_interfaces`` option (which must be set for " "both Ironic-API and Ironic-conductor services) controls which network " "interfaces are available for use." msgid "" "A ``[conductor]/api_url`` value specified in the configuration file that " "does not start with either ``https://`` or ``http://`` is no longer allowed. " "An incorrect value led to deployment failure on ironic-python-agent side. " "This misconfiguration will now be detected during ironic-conductor and " "ironic-api startup. An exception will be raised and an error about the " "invalid value will be logged." msgstr "" "A ``[conductor]/api_url`` value specified in the configuration file that " "does not start with either ``https://`` or ``http://`` is no longer allowed. " "An incorrect value led to deployment failure on ironic-python-agent side. " "This misconfiguration will now be detected during ironic-conductor and " "ironic-api startup. An exception will be raised and an error about the " "invalid value will be logged." msgid "" "A bug has been corrected where a node's current clean_step was not purged " "upon that node timing out from a CLEANWAIT state. Previously, this bug would " "prevent a user from retrying cleaning operations. For more information, see " "https://bugs.launchpad.net/ironic/+bug/1590146." msgstr "" "A bug has been corrected where a node's current clean_step was not purged " "upon that node timing out from a CLEANWAIT state. Previously, this bug would " "prevent a user from retrying cleaning operations. 
For more information, see " "https://bugs.launchpad.net/ironic/+bug/1590146." msgid "" "A bug was identified in the behavior of the iLO drivers where nodes that are " "not active but taking part of a conductor takeover could be powered off. In " "preparation for new features and functionality, that risk encountering this " "bug, we are limiting the deployment preparation steps to the ``deploying`` " "state to prevent nodes from being erroneously powered off." msgstr "" "A bug was identified in the behaviour of the iLO drivers where nodes that " "are not active but taking part of a conductor takeover could be powered off. " "In preparation for new features and functionality, that risk encountering " "this bug, we are limiting the deployment preparation steps to the " "``deploying`` state to prevent nodes from being erroneously powered off." msgid "" "A classic driver implementation can now provide matching hardware type and " "interfaces to enable automatic migration to hardware types. See `the " "specification `_ for an " "explanation on how to do it." msgstr "" "A classic driver implementation can now provide matching hardware type and " "interfaces to enable automatic migration to hardware types. See `the " "specification `_ for an " "explanation on how to do it." msgid "" "A critical security vulnerability (CVE-2016-4985) was fixed in this release. " "Previously, a client with network access to the ironic-api service was able " "to bypass Keystone authentication and retrieve all information about any " "Node registered with Ironic, if they knew (or were able to guess) the MAC " "address of a network card belonging to that Node, by sending a crafted POST " "request to the /v1/drivers/$DRIVER_NAME/vendor_passthru resource. Ironic's " "policy.json configuration is now respected when responding to this request " "such that, if passwords should be masked for other requests, they are also " "masked for this request." 
msgstr "" "A critical security vulnerability (CVE-2016-4985) was fixed in this release. " "Previously, a client with network access to the ironic-api service was able " "to bypass Keystone authentication and retrieve all information about any " "Node registered with Ironic, if they knew (or were able to guess) the MAC " "address of a network card belonging to that Node, by sending a crafted POST " "request to the /v1/drivers/$DRIVER_NAME/vendor_passthru resource. Ironic's " "policy.json configuration is now respected when responding to this request " "such that, if passwords should be masked for other requests, they are also " "masked for this request." msgid "" "A few major changes are worth mentioning. This is not an exhaustive list, " "and mostly includes changes from 9.0.0:" msgstr "" "A few major changes are worth mentioning. This is not an exhaustive list, " "and mostly includes changes from 9.0.0:" msgid "" "A few major changes are worth mentioning. This is not an exhaustive list:" msgstr "" "A few major changes are worth mentioning. This is not an exhaustive list:" msgid "A few major changes since 9.1.x (Pike) are worth mentioning:" msgstr "A few major changes since 9.1.x (Pike) are worth mentioning:" msgid "" "A future release will change the default value of ``[deploy]/" "default_boot_option`` from \"netboot\" to \"local\". To avoid disruptions, " "it is recommended to set an explicit value for this option." msgstr "" "A future release will change the default value of ``[deploy]/" "default_boot_option`` from \"netboot\" to \"local\". To avoid disruptions, " "it is recommended to set an explicit value for this option." msgid "" "A group name may be up to 255 characters containing ``a-z``, ``0-9``, ``_``, " "``-``, and ``.``. The group is case-insensitive. The default group is the " "empty string (``\"\"``)." msgstr "" "A group name may be up to 255 characters containing ``a-z``, ``0-9``, ``_``, " "``-``, and ``.``. The group is case-insensitive. 
The default group is the " "empty string (``\"\"``)." msgid "A major bug was fixed where clean steps do not run." msgstr "A major bug was fixed where clean steps do not run." msgid "" "A network UUID for provisioning and cleaning network is no longer cached " "locally if the requested network (either via node's ``driver_info`` or via " "configuration options) is specified as a network name. Fixes the situation " "when a network is re-created with the same name." msgstr "" "A network UUID for provisioning and cleaning network is no longer cached " "locally if the requested network (either via node's ``driver_info`` or via " "configuration options) is specified as a network name. Fixes the situation " "when a network is re-created with the same name." msgid "" "A network interface is set for a node by setting the ``network_interface`` " "field for the node via the REST API. This field is available in API version " "1.20 and above. Changing the network interface may only be done in the " "``enroll``, ``inspecting``, and ``manageable`` states." msgstr "" "A network interface is set for a node by setting the ``network_interface`` " "field for the node via the REST API. This field is available in API version " "1.20 and above. Changing the network interface may only be done in the " "``enroll``, ``inspecting``, and ``manageable`` states." msgid "" "A new WSGI application script ``ironic-api-wsgi`` is now available. It is " "auto-generated by ``pbr`` and provides the ability to serve the bare metal " "API using a WSGI server (for example Nginx and uWSGI or Apache with " "mod_wsgi)." msgstr "" "A new WSGI application script ``ironic-api-wsgi`` is now available. It is " "auto-generated by ``pbr`` and provides the ability to serve the bare metal " "API using a WSGI server (for example Nginx and uWSGI or Apache with " "mod_wsgi)." 
msgid "" "A new configuration option ``[api]/restrict_lookup`` is added, which " "restricts the lookup API (normally only used by ramdisks) to only work when " "the node is in specific states used by the ramdisk, and defaults to True. " "Operators that need this endpoint to work in any state may set this to " "False, though this is insecure and should not be used in normal operation." msgstr "" "A new configuration option ``[api]/restrict_lookup`` is added, which " "restricts the lookup API (normally only used by ramdisks) to only work when " "the node is in specific states used by the ramdisk, and defaults to True. " "Operators that need this endpoint to work in any state may set this to " "False, though this is insecure and should not be used in normal operation." msgid "" "A new configuration option ``[conductor]/power_state_change_timeout`` can be " "used to specify how many seconds to wait for a baremetal node to change the " "power state when a power action is requested." msgstr "" "A new configuration option ``[conductor]/power_state_change_timeout`` can be " "used to specify how many seconds to wait for a baremetal node to change the " "power state when a power action is requested." msgid "" "A new configuration option ``[deploy]continue_if_disk_secure_erase_fails``, " "which has a default value of False, has been added. If set to True, the " "Ironic Python Agent will revert to a disk shred operation if an ATA secure " "erase operation fails. Under normal circumstances, the failure of an ATA " "secure erase operation results in the node being put in ``clean failed`` " "state." msgstr "" "A new configuration option ``[deploy]continue_if_disk_secure_erase_fails``, " "which has a default value of False, has been added. If set to True, the " "Ironic Python Agent will revert to a disk shred operation if an ATA secure " "erase operation fails. 
Under normal circumstances, the failure of an ATA " "secure erase operation results in the node being put in ``clean failed`` " "state." msgid "" "A new configuration option ``[deploy]continue_if_disk_secure_erase_fails``, " "which has a default value of False, has been added. The default setting " "represents the standard behavior of the Ironic Python Agent during a " "cleaning failure." msgstr "" "A new configuration option ``[deploy]continue_if_disk_secure_erase_fails``, " "which has a default value of False, has been added. The default setting " "represents the standard behaviour of the Ironic Python Agent during a " "cleaning failure." msgid "" "A new configuration option, `shred_final_overwrite_with_zeros` is now " "available. This option controls the final overwrite with zeros done on all " "block devices for a node under cleaning. This feature was previously always " "enabled and not configurable. This option is only used when a block device " "could not be ATA Secure Erased." msgstr "" "A new configuration option, `shred_final_overwrite_with_zeros` is now " "available. This option controls the final overwrite with zeros done on all " "block devices for a node under cleaning. This feature was previously always " "enabled and not configurable. This option is only used when a block device " "could not be ATA Secure Erased." msgid "" "A new dictionary field ``internal_info`` is added to the port API object. It " "is readonly from the API side, and can contain any internal information " "ironic needs to store for the port. ``cleaning_vif_port_id`` is being stored " "inside this dictionary." msgstr "" "A new dictionary field ``internal_info`` is added to the port API object. It " "is readonly from the API side, and can contain any internal information " "ironic needs to store for the port. ``cleaning_vif_port_id`` is being stored " "inside this dictionary." 
msgid "" "A node in the ``active`` provision state can be rescued via the ``GET /v1/" "nodes/{node_ident}/states/provision`` API, by specifying ``rescue`` as the " "``target`` value, and a ``rescue_password`` value. When the node has been " "rescued, it will be in the ``rescue`` provision state. A rescue ramdisk will " "be running, configured with the specified ``rescue_password``, and listening " "with ssh on the tenant network." msgstr "" "A node in the ``active`` provision state can be rescued via the ``GET /v1/" "nodes/{node_ident}/states/provision`` API, by specifying ``rescue`` as the " "``target`` value, and a ``rescue_password`` value. When the node has been " "rescued, it will be in the ``rescue`` provision state. A rescue ramdisk will " "be running, configured with the specified ``rescue_password``, and listening " "with SSH on the tenant network." msgid "" "A node in the ``rescue`` provision state can be unrescued (to the ``active`` " "state) via the ``GET /v1/nodes/{node_ident}/states/provision`` API, by " "specifying ``unrescue`` as the ``target`` value." msgstr "" "A node in the ``rescue`` provision state can be unrescued (to the ``active`` " "state) via the ``GET /v1/nodes/{node_ident}/states/provision`` API, by " "specifying ``unrescue`` as the ``target`` value." msgid "" "A node using 'agent_ilo' or 'iscsi_ilo' driver has their 'driver_info/" "ilo_deploy_iso' field validated during node validate. This closes bug" msgstr "" "A node using 'agent_ilo' or 'iscsi_ilo' driver has their 'driver_info/" "ilo_deploy_iso' field validated during node validate. This closes bug" msgid "" "A node using the ``agent_ilo`` or ``iscsi_ilo`` driver now has its " "``driver_info/ilo_deploy_iso`` field validated during node validation." msgstr "" "A node using the ``agent_ilo`` or ``iscsi_ilo`` driver now has its " "``driver_info/ilo_deploy_iso`` field validated during node validation." 
msgid "" "A node's traits are also included in the following node query and list " "responses:" msgstr "" "A node's traits are also included in the following node query and list " "responses:" msgid "" "A number of drivers that were declared as unsupported in Newton release have " "been removed from ironic tree. This includes drivers with power and/or " "management driver interfaces based on:" msgstr "" "A number of drivers that were declared as unsupported in Newton release have " "been removed from ironic tree. This includes drivers with power and/or " "management driver interfaces based on:" msgid "" "A storage interface can be set when creating or updating a node. Enabled " "storage interfaces are defined via the ``[DEFAULT]/" "enabled_storage_interfaces`` configuration option. A default interface for a " "created node can be specified with ``[DEFAULT]/default_storage_interface`` " "configuration option." msgstr "" "A storage interface can be set when creating or updating a node. Enabled " "storage interfaces are defined via the ``[DEFAULT]/" "enabled_storage_interfaces`` configuration option. A default interface for a " "created node can be specified with ``[DEFAULT]/default_storage_interface`` " "configuration option." msgid "" "A validation step is added to verify that the Server Profile Template's MAC " "type is set to Physical when dynamic allocation is enabled. The OneView " "Driver needs this verification because the machine is going to use a MAC " "that will only be specified at the profile application." msgstr "" "A validation step is added to verify that the Server Profile Template's MAC " "type is set to Physical when dynamic allocation is enabled. The OneView " "Driver needs this verification because the machine is going to use a MAC " "that will only be specified at the profile application." msgid "A warning is logged for any changes to immutable configuration options." 
msgstr "" "A warning is logged for any changes to immutable configuration options." msgid "" "API service once again records HTTP access logs. See https://bugs.launchpad." "net/ironic/+bug/1536828 for details." msgstr "" "API service once again records HTTP access logs. See https://bugs.launchpad." "net/ironic/+bug/1536828 for details." msgid "Add BIOS config to DRAC Driver" msgstr "Add BIOS config to DRAC Driver" msgid "" "Add Neutron ``port_setup_delay`` configuration option. This delay allows " "Ironic to wait for Neutron port operations until we have a mechanism for " "synchronizing events with Neutron. Set to 0 by default." msgstr "" "Add Neutron ``port_setup_delay`` configuration option. This delay allows " "Ironic to wait for Neutron port operations until we have a mechanism for " "synchronising events with Neutron. Set to 0 by default." msgid "Add UCS Driver" msgstr "Add UCS Driver" msgid "Add Virtual Media support to iRMC Driver" msgstr "Add Virtual Media support to iRMC Driver" msgid "Add Wake-On-Lan Power Driver" msgstr "Add Wake-On-LAN Power Driver" msgid "" "Add ``?detail=`` boolean query to the API list endpoints to provide a more " "RESTful alternative to the existing ``/nodes/detail`` and similar endpoints. " "The default is False. Now these API requests are possible:" msgstr "" "Add ``?detail=`` boolean query to the API list endpoints to provide a more " "RESTful alternative to the existing ``/nodes/detail`` and similar endpoints. " "The default is False. Now these API requests are possible:" msgid "" "Add ``choices`` parameter to config options. Invalid values will be rejected " "when first accessing them, which can happen in the middle of deployment." msgstr "" "Add ``choices`` parameter to config options. Invalid values will be rejected " "when first accessing them, which can happen in the middle of deployment." msgid "" "Add ``hctl`` to root device hints. HCTL is the SCSI address and stands for " "Host, Channel, Target and Lun." 
msgstr "" "Add ``hctl`` to root device hints. HCTL is the SCSI address and stands for " "Host, Channel, Target and LUN." msgid "" "Add missing \"lookup\" method to the pxe_drac driver vendor interface " "enabling it to be deployed using the IPA ramdisk." msgstr "" "Add missing \"lookup\" method to the pxe_drac driver vendor interface " "enabling it to be deployed using the IPA ramdisk." msgid "" "Add support for a new capability called 'disk_label' to allow operators to " "choose the disk label that will be used when Ironic is partitioning the disk." msgstr "" "Add support for a new capability called 'disk_label' to allow operators to " "choose the disk label that will be used when Ironic is partitioning the disk." msgid "Add support for filtering nodes using the same driver via the API." msgstr "Add support for filtering nodes using the same driver via the API." msgid "" "Add support for ipmitool's port (-p) option. This allows ipmitool support " "for operators that do not use the default port (623) as their IPMI port." msgstr "" "Add support for ipmitool's port (-p) option. This allows ipmitool support " "for operators that do not use the default port (623) as their IPMI port." msgid "" "Add support for the injection of Non-Masking Interrupts (NMI) for a node in " "REST API version 1.29. This feature can be used for hardware diagnostics, " "and actual support depends on the driver. In 7.0.0, this is available in the " "ipmitool and iRMC drivers." msgstr "" "Add support for the injection of Non-Masking Interrupts (NMI) for a node in " "REST API version 1.29. This feature can be used for hardware diagnostics, " "and actual support depends on the driver. In 7.0.0, this is available in the " "ipmitool and iRMC drivers." 
msgid "Add support to SNMP driver for \"APC MasterSwitchPlus\" series PDU's" msgstr "Add support to SNMP driver for \"APC MasterSwitchPlus\" series PDUs" msgid "" "Add the ability to adjust ipxe timeout during image downloading, default is " "still unlimited (0)." msgstr "" "Add the ability to adjust ipxe timeout during image downloading, default is " "still unlimited (0)." msgid "" "Add the field `standalone_ports_supported` to the portgroup object. This " "field indicates whether ports that are members of this portgroup can be used " "as stand-alone ports. The default is True." msgstr "" "Add the field `standalone_ports_supported` to the portgroup object. This " "field indicates whether ports that are members of this portgroup can be used " "as stand-alone ports. The default is True." msgid "" "Added configdrive support for whole disk images for iSCSI based deploy. This " "will work for UEFI only or BIOS only images. It will not work for hybrid " "images which are capable of booting from BIOS and UEFI boot mode." msgstr "" "Added configdrive support for whole disk images for iSCSI based deploy. This " "will work for UEFI only or BIOS only images. It will not work for hybrid " "images which are capable of booting from BIOS and UEFI boot mode." msgid "Added support for JBOD volumes in RAID configuration." msgstr "Added support for JBOD volumes in RAID configuration." msgid "" "Added support for local booting a partition image for ppc64* hardware. If a " "PReP partition is detected when deploying to a ppc64* machine, the partition " "will be specified to IPA causing the bootloader to be installed there " "directly. This feature requires a ironic-python-agent ramdisk with ironic-" "lib >=2.14." msgstr "" "Added support for local booting a partition image for ppc64* hardware. If a " "PReP partition is detected when deploying to a ppc64* machine, the partition " "will be specified to IPA causing the bootloader to be installed there " "directly. 
This feature requires a ironic-python-agent ramdisk with ironic-" "lib >=2.14." msgid "" "Added support to validate iLO SSL certificate in iLO drivers. A new " "configuration option ``[ilo]/ca_file`` is added to specify the iLO CA " "certificate file. If ``[ilo]/ca_file`` is specified, the iLO drivers will " "validate iLO SSL certificates." msgstr "" "Added support to validate iLO SSL certificate in iLO drivers. A new " "configuration option ``[ilo]/ca_file`` is added to specify the iLO CA " "certificate file. If ``[ilo]/ca_file`` is specified, the iLO drivers will " "validate iLO SSL certificates." msgid "" "Addition of the provision state target verb of ``adopt`` which allows an " "operator to move a node into an ``active`` state from ``manageable`` state, " "without performing a deployment operation on the node. This can be used to " "represent nodes that have been previously deployed by other means that will " "now be managed by ironic and be later released to the available hardware " "pool." msgstr "" "Addition of the provision state target verb of ``adopt`` which allows an " "operator to move a node into an ``active`` state from ``manageable`` state, " "without performing a deployment operation on the node. This can be used to " "represent nodes that have been previously deployed by other means that will " "now be managed by ironic and be later released to the available hardware " "pool." msgid "Additionally, adds the following API changes:" msgstr "Additionally, adds the following API changes:" msgid "" "Addresses a condition where the Compute Service may have been unable to " "remove VIF attachment records while a baremetal node is being unprovisiond. " "This condition resulted in VIF records being orphaned, blocking future " "deployments without manual intervention. See `bug 1743652 `_ for more details." 
msgstr "" "Addresses a condition where the Compute Service may have been unable to " "remove VIF attachment records while a baremetal node is being unprovisioned. " "This condition resulted in VIF records being orphaned, blocking future " "deployments without manual intervention. See `bug 1743652 `_ for more details." msgid "" "Adds '9.0' and 'pike' as choices for the configuration option [default]/" "pin_release_version. This addresses failures with the unit and grenade tests." msgstr "" "Adds '9.0' and 'pike' as choices for the configuration option [default]/" "pin_release_version. This addresses failures with the unit and grenade tests." msgid "" "Adds DBDeadlock handling which may improve stability when using Galera. See " "https://bugs.launchpad.net/ironic/+bug/1639338. Number of retries depends on " "the configuration option ``[database]db_max_retries``." msgstr "" "Adds DBDeadlock handling which may improve stability when using Galera. See " "https://bugs.launchpad.net/ironic/+bug/1639338. Number of retries depends on " "the configuration option ``[database]db_max_retries``." msgid "" "Adds SNMP request timeout and retries settings for the SNMP UDP transport. " "Some SNMP devices take longer than others to respond. The new Ironic " "configuration settings ``[snmp]/udp_transport_retries`` and ``[snmp]/" "udp_transport_timeout`` allow to change the number of retries and the " "timeout values respectively for the SNMP driver." msgstr "" "Adds SNMP request timeout and retries settings for the SNMP UDP transport. " "Some SNMP devices take longer than others to respond. The new Ironic " "configuration settings ``[snmp]/udp_transport_retries`` and ``[snmp]/" "udp_transport_timeout`` allow to change the number of retries and the " "timeout values respectively for the SNMP driver." msgid "" "Adds SNMP request timeout and retries settings for the SNMP UDP transport. " "Some SNMP devices take longer than others to respond. 
The new Ironic " "configuration settings ``[snmp]/udp_transport_retries`` and ``[snmp]/" "udp_transport_timeout`` allow to change the number of retries and the " "timeout values respectively for the the SNMP driver." msgstr "" "Adds SNMP request timeout and retries settings for the SNMP UDP transport. " "Some SNMP devices take longer than others to respond. The new Ironic " "configuration settings ``[snmp]/udp_transport_retries`` and ``[snmp]/" "udp_transport_timeout`` allow to change the number of retries and the " "timeout values respectively for the SNMP driver." msgid "" "Adds SNMPv3 message authentication and encryption features to ironic " "``snmp`` hardware type. To enable these features, the following parameters " "should be used in the node's ``driver_info``:" msgstr "" "Adds SNMPv3 message authentication and encryption features to ironic " "``snmp`` hardware type. To enable these features, the following parameters " "should be used in the node's ``driver_info``:" msgid "Adds ShellinaboxConsole support for virsh SSH driver." msgstr "Adds ShellinaboxConsole support for virsh SSH driver." msgid "" "Adds `OSProfiler `_ support. " "This cross-project profiling library provides the ability to trace various " "OpenStack requests through all OpenStack services that support it. For more " "information, see https://docs.openstack.org/ironic/latest/contributor/" "osprofiler-support.html." msgstr "" "Adds `OSProfiler `_ support. " "This cross-project profiling library provides the ability to trace various " "OpenStack requests through all OpenStack services that support it. For more " "information, see https://docs.openstack.org/ironic/latest/contributor/" "osprofiler-support.html." msgid "" "Adds ``[conductor]/check_rescue_state_interval`` and ``[conductor]/" "rescue_callback_timeout`` to fail the rescue operation upon timeout, for the " "nodes that are stuck in the rescue wait state." 
msgstr "" "Adds ``[conductor]/check_rescue_state_interval`` and ``[conductor]/" "rescue_callback_timeout`` to fail the rescue operation upon timeout, for the " "nodes that are stuck in the rescue wait state." msgid "" "Adds ``[swift]/endpoint_override`` option to explicitly set the endpoint URL " "used for Swift. Ironic uses the Swift connection URL as a base for " "generation of some TempURLs. Added parameter enables operators to fix the " "problem when image is attached (via TempURL) as vmedia (e.g. in iLO driver) " "and BMC doesn't have connectivity to public network. By default this " "parameter is not set for backward compatibility." msgstr "" "Adds ``[swift]/endpoint_override`` option to explicitly set the endpoint URL " "used for Swift. Ironic uses the Swift connection URL as a base for " "generation of some TempURLs. Added parameter enables operators to fix the " "problem when image is attached (via TempURL) as vmedia (e.g. in iLO driver) " "and BMC doesn't have connectivity to public network. By default this " "parameter is not set for backward compatibility." msgid "" "Adds ``external`` storage interface which is short for \"externally managed" "\". This adds logic to allow the Bare Metal service to identify when a BFV " "scenario is being requested based upon the configuration set for ``volume " "targets``." msgstr "" "Adds ``external`` storage interface which is short for \"externally managed" "\". This adds logic to allow the Bare Metal service to identify when a BFV " "scenario is being requested based upon the configuration set for ``volume " "targets``." msgid "" "Adds ``get_boot_mode``, ``set_boot_mode`` and ``get_supported_boot_modes`` " "methods to driver management interface. Drivers can override these methods " "implementing boot mode management calls to the BMC of the baremetal nodes " "being managed." msgstr "" "Adds ``get_boot_mode``, ``set_boot_mode`` and ``get_supported_boot_modes`` " "methods to driver management interface. 
Drivers can override these methods " "implementing boot mode management calls to the BMC of the baremetal nodes " "being managed." msgid "" "Adds ``list_unfinished_jobs`` method to the vendor-passthru interface of the " "DRAC driver. It provides a way to check the status of the remote config job " "after a BIOS configuration change was submitted using the " "``set_bios_config`` method." msgstr "" "Adds ``list_unfinished_jobs`` method to the vendor-passthru interface of the " "DRAC driver. It provides a way to check the status of the remote config job " "after a BIOS configuration change was submitted using the " "``set_bios_config`` method." msgid "" "Adds ``rescue_interface`` field to the following node-related notifications:" msgstr "" "Adds ``rescue_interface`` field to the following node-related notifications:" msgid "Adds ``storage_interface`` field to the node-related notifications:" msgstr "Adds ``storage_interface`` field to the node-related notifications:" msgid "" "Adds `agent_pxe_oneview` and `iscsi_pxe_oneview` drivers for integration " "with the HP OneView Management System." msgstr "" "Adds `agent_pxe_oneview` and `iscsi_pxe_oneview` drivers for integration " "with the HP OneView Management System." msgid "" "Adds a [glance]glance_cafile configuration option to pass a optional " "certificate for secured https communication. It is used when " "[glance]glance_api_insecure configuration option is set to False." msgstr "" "Adds a [glance]glance_cafile configuration option to pass an optional " "certificate for secured https communication. It is used when " "[glance]glance_api_insecure configuration option is set to False." msgid "" "Adds a [glance]swift_temp_url_cache_enabled configuration option to enable " "Swift temporary URL caching. It is only useful if the caching proxy is used. 
" "Also adds [glance]swift_temp_url_expected_download_start_delay, which is " "used to check if the Swift temporary URL duration is long enough to let the " "image download to start, and, if temporary URL caching is enabled, to " "determine if a cached entry will be still valid when download starts. The " "value of [glance]swift_temp_url_expected_download_start_delay must be less " "than the value for the [glance]swift_temp_url_duration configuration option." msgstr "" "Adds a [glance]swift_temp_url_cache_enabled configuration option to enable " "Swift temporary URL caching. It is only useful if the caching proxy is used. " "Also adds [glance]swift_temp_url_expected_download_start_delay, which is " "used to check if the Swift temporary URL duration is long enough to let the " "image download to start, and, if temporary URL caching is enabled, to " "determine if a cached entry will be still valid when download starts. The " "value of [glance]swift_temp_url_expected_download_start_delay must be less " "than the value for the [glance]swift_temp_url_duration configuration option." msgid "" "Adds a ``physical_network`` field to the port object in REST API version " "1.34." msgstr "" "Adds a ``physical_network`` field to the port object in REST API version " "1.34." msgid "" "Adds a ``ramdisk`` deploy interface for deployments that wish to network " "boot to a ramdisk, as opposed to perform a complete traditional deployment " "to a physical media. This may be useful in scientific use cases or where " "ephemeral baremetal machines are desired." msgstr "" "Adds a ``ramdisk`` deploy interface for deployments that wish to network " "boot to a ramdisk, as opposed to perform a complete traditional deployment " "to a physical media. This may be useful in scientific use cases or where " "ephemeral baremetal machines are desired." 
msgid "" "Adds a ``resource_class`` field to the node resource, which will be used by " "Nova to define which nodes may quantitatively match a Nova flavor. Operators " "should populate this accordingly before deploying the Ocata version of Nova." msgstr "" "Adds a ``resource_class`` field to the node resource, which will be used by " "Nova to define which nodes may quantitatively match a Nova flavour. " "Operators should populate this accordingly before deploying the Ocata " "version of Nova." msgid "" "Adds a ``traits`` field to the node resource, which will be used by the " "Compute service to define which nodes may match a Compute flavor using " "qualitative attributes." msgstr "" "Adds a ``traits`` field to the node resource, which will be used by the " "Compute service to define which nodes may match a Compute flavour using " "qualitative attributes." msgid "" "Adds a `clean_dhcp_opts` method to the DHCP provider base class, to give " "DHCP providers a method to clean up DHCP reservations if needed." msgstr "" "Adds a `clean_dhcp_opts` method to the DHCP provider base class, to give " "DHCP providers a method to clean up DHCP reservations if needed." msgid "" "Adds a boolean flag called ``force_persistent_boot_device`` into a node's " "``driver_info`` to enable persistent behavior when you set the boot device " "during deploy and cleaning operations. This flag will override a non-" "persistent behavior in the cleaning and deploy process. For more " "information, see https://bugs.launchpad.net/ironic/+bug/1703945." msgstr "" "Adds a boolean flag called ``force_persistent_boot_device`` into a node's " "``driver_info`` to enable persistent behaviour when you set the boot device " "during deploy and cleaning operations. This flag will override a non-" "persistent behaviour in the cleaning and deploy process. For more " "information, see https://bugs.launchpad.net/ironic/+bug/1703945." 
msgid "" "Adds a config [amt]awake_interval for the interval to wake up the AMT " "interface for a node. This should correspond to the IdleTimeout config " "option on the AMT interface. Setting to 0 will disable waking the AMT " "interface, just like setting IdleTimeout=0 on the AMT interface will disable " "the AMT interface from sleeping when idle." msgstr "" "Adds a config [amt]awake_interval for the interval to wake up the AMT " "interface for a node. This should correspond to the IdleTimeout config " "option on the AMT interface. Setting to 0 will disable waking the AMT " "interface, just like setting IdleTimeout=0 on the AMT interface will disable " "the AMT interface from sleeping when idle." msgid "" "Adds a config option 'debug_tracebacks_in_api' to allow the API service to " "return tracebacks in API responses in an error condition." msgstr "" "Adds a config option 'debug_tracebacks_in_api' to allow the API service to " "return tracebacks in API responses in an error condition." msgid "" "Adds a configuration option for the Iboot driver, [iboot]reboot_delay, to " "allow adding a pause between power off and power on." msgstr "" "Adds a configuration option for the Iboot driver, [iboot]reboot_delay, to " "allow adding a pause between power off and power on." msgid "" "Adds a configuration section ``cinder`` and a requirement of cinder client " "(python-cinderclient)." msgstr "" "Adds a configuration section ``cinder`` and a requirement of Cinder client " "(python-cinderclient)." msgid "" "Adds a missing error check into ``ipmitool`` power driver's reboot method so " "that the reboot can fail properly if power off failed." msgstr "" "Adds a missing error check into ``ipmitool`` power driver's reboot method so " "that the reboot can fail properly if power off failed." 
msgid "" "Adds a new ``[deploy]/erase_devices_metadata_priority`` configuration option " "to allow operators to configure the priority of (or disable) the " "\"erase_devices_metadata\" cleaning step." msgstr "" "Adds a new ``[deploy]/erase_devices_metadata_priority`` configuration option " "to allow operators to configure the priority of (or disable) the " "\"erase_devices_metadata\" cleaning step." msgid "" "Adds a new ``ansible`` deploy interface. It targets mostly undercloud use-" "case by allowing greater customization of provisioning process." msgstr "" "Adds a new ``ansible`` deploy interface. It targets mostly undercloud use-" "case by allowing greater customisation of provisioning process." msgid "" "Adds a new configuration option ``[disk_utils]partprobe_attempts`` which " "defaults to 10. This is the maximum number of times to try to read a " "partition (if creating a config drive) via a ``partprobe`` command. " "Previously, no retries were done which caused failures. This addresses `bug " "1756760 `_." msgstr "" "Adds a new configuration option ``[disk_utils]partprobe_attempts`` which " "defaults to 10. This is the maximum number of times to try to read a " "partition (if creating a config drive) via a ``partprobe`` command. " "Previously, no retries were done which caused failures. This addresses `bug " "1756760 `_." msgid "" "Adds a new configuration option ``[disk_utils]partprobe_attempts`` which " "defaults to 10. This is the maximum number of times to try to read a " "partition (if creating a config drive) via a ``partprobe`` command. Set it " "to 1 if you want the previous behavior, where no retries were done." msgstr "" "Adds a new configuration option ``[disk_utils]partprobe_attempts`` which " "defaults to 10. This is the maximum number of times to try to read a " "partition (if creating a config drive) via a ``partprobe`` command. Set it " "to 1 if you want the previous behaviour, where no retries were done." 
msgid "" "Adds a new configuration option ``[pxe]pxe_config_subdir`` to allow " "operators to define the specific directory that may be used inside of ``/" "tftpboot`` or ``/httpboot`` for a boot loader to locate the configuration " "file for the node. This option defaults to ``pxelinux.cfg`` which is the " "directory that the Syslinux `pxelinux.0` bootloader utilized. Operators may " "wish to change the directory name if they are using other boot loaders such " "as `GRUB` or `iPXE`." msgstr "" "Adds a new configuration option ``[pxe]pxe_config_subdir`` to allow " "operators to define the specific directory that may be used inside of ``/" "tftpboot`` or ``/httpboot`` for a boot loader to locate the configuration " "file for the node. This option defaults to ``pxelinux.cfg`` which is the " "directory that the Syslinux `pxelinux.0` bootloader utilised. Operators may " "wish to change the directory name if they are using other boot loaders such " "as `GRUB` or `iPXE`." msgid "" "Adds a new configuration option, hash_ring_reset_interval, to control how " "often the conductor's view of the hash ring is reset. This has a default of " "180 seconds, the same as the default for the sync_local_state periodic task " "that used to handle this reset." msgstr "" "Adds a new configuration option, hash_ring_reset_interval, to control how " "often the conductor's view of the hash ring is reset. This has a default of " "180 seconds, the same as the default for the sync_local_state periodic task " "that used to handle this reset." msgid "" "Adds a new dependency on the `tooz library `_, as the consistent hash ring code was moved out of ironic and into " "tooz." msgstr "" "Adds a new dependency on the `tooz library `_, as the consistent hash ring code was moved out of ironic and into " "tooz." msgid "" "Adds a new hardware type ``ilo`` for iLO 4 based Proliant Gen 8 and Gen 9 " "servers. This hardware type supports virtual media and PXE based boot using " "HPE iLO 4 management engine. 
The following driver interfaces are supported:" msgstr "" "Adds a new hardware type ``ilo`` for iLO 4 based Proliant Gen 8 and Gen 9 " "servers. This hardware type supports virtual media and PXE based boot using " "HPE iLO 4 management engine. The following driver interfaces are supported:" msgid "" "Adds a new hardware type ``oneview`` for HPE OneView supported servers. This " "hardware type supports the following driver interfaces:" msgstr "" "Adds a new hardware type ``oneview`` for HPE OneView supported servers. This " "hardware type supports the following driver interfaces:" msgid "" "Adds a new hardware type ``snmp`` for SNMP powered systems. It supports the " "following driver interfaces:" msgstr "" "Adds a new hardware type ``snmp`` for SNMP powered systems. It supports the " "following driver interfaces:" msgid "" "Adds a new hardware type, ``idrac``, for Dell EMC integrated Dell Remote " "Access Controllers (iDRAC). ``idrac`` hardware type supports PXE-based " "provisioning using an iDRAC. It supports the following driver interfaces:" msgstr "" "Adds a new hardware type, ``idrac``, for Dell EMC integrated Dell Remote " "Access Controllers (iDRAC). ``idrac`` hardware type supports PXE-based " "provisioning using an iDRAC. It supports the following driver interfaces:" msgid "" "Adds a new policy rule that may be used to mask instance-specific secrets, " "such as configdrive contents or the temp URL used to store a configdrive or " "instance image. This is similar to how passwords are already masked." msgstr "" "Adds a new policy rule that may be used to mask instance-specific secrets, " "such as configdrive contents or the temp URL used to store a configdrive or " "instance image. This is similar to how passwords are already masked." msgid "" "Adds additional parameters and response fields for GET /v1/drivers and GET /" "v1/drivers/." msgstr "" "Adds additional parameters and response fields for GET /v1/drivers and GET /" "v1/drivers/." 
msgid "" "Adds an ``inspect wait`` state to handle asynchronous hardware " "introspection. Caution should be taken due to the timeout monitoring is " "shifted from ``inspecting`` to ``inspect wait``, please stop all running " "asynchronous hardware inspection or wait until it is finished before " "upgrading to the Rocky release. Otherwise nodes in asynchronous inspection " "will be left at ``inspecting`` state forever unless the database is manually " "updated." msgstr "" "Adds an ``inspect wait`` state to handle asynchronous hardware " "introspection. Caution should be taken due to the timeout monitoring is " "shifted from ``inspecting`` to ``inspect wait``, please stop all running " "asynchronous hardware inspection or wait until it is finished before " "upgrading to the Rocky release. Otherwise nodes in asynchronous inspection " "will be left at ``inspecting`` state forever unless the database is manually " "updated." msgid "" "Adds an ``inspect wait`` state to handle asynchronous hardware " "introspection. Returning ``INSPECTING`` from the ``inspect_hardware`` method " "of inspect interface is deprecated, ``INSPECTWAIT`` should be returned " "instead." msgstr "" "Adds an ``inspect wait`` state to handle asynchronous hardware " "introspection. Returning ``INSPECTING`` from the ``inspect_hardware`` method " "of inspect interface is deprecated, ``INSPECTWAIT`` should be returned " "instead." msgid "" "Adds an ``inspect wait`` state to handle asynchronous hardware " "introspection. The ``[conductor]inspect_timeout`` configuration option is " "deprecated for removal, please use ``[conductor]inspect_wait_timeout`` " "instead to specify the timeout of inspection process." msgstr "" "Adds an ``inspect wait`` state to handle asynchronous hardware " "introspection. The ``[conductor]inspect_timeout`` configuration option is " "deprecated for removal, please use ``[conductor]inspect_wait_timeout`` " "instead to specify the timeout of inspection process." 
msgid "" "Adds an `agent_iboot` driver to allow use of the Iboot power driver with the " "Agent deploy driver." msgstr "" "Adds an `agent_iboot` driver to allow use of the Iboot power driver with the " "Agent deploy driver." msgid "" "Adds an `agent_wol` driver that combines the Agent deploy interface with the " "Wake-On-LAN power driver." msgstr "" "Adds an `agent_wol` driver that combines the Agent deploy interface with the " "Wake-On-LAN power driver." msgid "" "Adds clean step ``restore_irmc_bios_config`` to restore BIOS config for a " "node with an ``irmc``-based driver during automatic cleaning." msgstr "" "Adds clean step ``restore_irmc_bios_config`` to restore BIOS config for a " "node with an ``irmc``-based driver during automatic cleaning." msgid "" "Adds configuration option ``[console]terminal_timeout`` to allow setting the " "time (in seconds) of inactivity, after which a socat-based console " "terminates." msgstr "" "Adds configuration option ``[console]terminal_timeout`` to allow setting the " "time (in seconds) of inactivity, after which a socat-based console " "terminates." msgid "" "Adds experimental support for IPv6 PXE booting. This is configurable via the " "[pxe]ip_version configuration option." msgstr "" "Adds experimental support for IPv6 PXE booting. This is configurable via the " "[pxe]ip_version configuration option." msgid "Adds in-band inspection interface usable by OneView drivers." msgstr "Adds in-band inspection interface usable by OneView drivers." msgid "" "Adds inspection support for the `agent_ipmitool` and `agent_ssh` drivers." msgstr "" "Adds inspection support for the `agent_ipmitool` and `agent_ssh` drivers." 
msgid "Current Series Release Notes" msgstr "Current Series Release Notes" msgid "Mitaka Series (4.3.0 - 5.1.x) Release Notes" msgstr "Mitaka Series (4.3.0 - 5.1.x) Release Notes" msgid "Newton Series (6.0.0 - 6.2.x) Release Notes" msgstr "Newton Series (6.0.0 - 6.2.x) Release Notes" msgid "Ocata Series (7.0.0 - 7.0.x) Release Notes" msgstr "Ocata Series (7.0.0 - 7.0.x) Release Notes" msgid "PXE drivers now support GRUB2" msgstr "PXE drivers now support GRUB2" msgid "Pike Series (8.0.0 - 9.1.x) Release Notes" msgstr "Pike Series (8.0.0 - 9.1.x) Release Notes" msgid "Queens Series (9.2.0 - 10.1.x) Release Notes" msgstr "Queens Series (9.2.0 - 10.1.x) Release Notes" msgid "ipmitool driver supports IPMI v1.5" msgstr "ipmitool driver supports IPMI v1.5" msgid "" "pxe_ilo driver now supports UEFI Secure Boot (previous releases of the iLO " "driver only supported this for agent_ilo and iscsi_ilo)" msgstr "" "pxe_ilo driver now supports UEFI Secure Boot (previous releases of the iLO " "driver only supported this for agent_ilo and iscsi_ilo)" ironic-15.0.0/releasenotes/source/locale/ja/0000775000175000017500000000000013652514443020733 5ustar zuulzuul00000000000000ironic-15.0.0/releasenotes/source/locale/ja/LC_MESSAGES/0000775000175000017500000000000013652514443022520 5ustar zuulzuul00000000000000ironic-15.0.0/releasenotes/source/locale/ja/LC_MESSAGES/releasenotes.po0000664000175000017500000000531413652514273025555 0ustar zuulzuul00000000000000# OpenStack Infra , 2015. #zanata # Akihiro Motoki , 2016. #zanata # Akihito INOH , 2018. 
#zanata msgid "" msgstr "" "Project-Id-Version: ironic\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2018-08-09 13:46+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-02-15 11:45+0000\n" "Last-Translator: Akihito INOH \n" "Language-Team: Japanese\n" "Language: ja\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=1; plural=0\n" msgid "" "\"Port group\" support allows users to take advantage of bonded network " "interfaces." msgstr "" "\"Port group\" のサポートにより、ユーザーはボンディングされたネットワークイン" "ターフェースが利用できるようになります。" msgid "10.0.0" msgstr "10.0.0" msgid "10.1.0" msgstr "10.1.0" msgid "4.2.2" msgstr "4.2.2" msgid "4.2.3" msgstr "4.2.3" msgid "4.2.4" msgstr "4.2.4" msgid "4.2.5" msgstr "4.2.5" msgid "4.3.0" msgstr "4.3.0" msgid "443, 80" msgstr "443, 80" msgid "5.0.0" msgstr "5.0.0" msgid "5.1.0" msgstr "5.1.0" msgid "5.1.1" msgstr "5.1.1" msgid "5.1.2" msgstr "5.1.2" msgid "5.1.3" msgstr "5.1.3" msgid "6.0.0" msgstr "6.0.0" msgid "6.1.0" msgstr "6.1.0" msgid "6.2.0" msgstr "6.2.0" msgid "6.2.2" msgstr "6.2.2" msgid "6.2.3" msgstr "6.2.3" msgid "6.2.4" msgstr "6.2.4" msgid "6.3.0" msgstr "6.3.0" msgid "7.0.0" msgstr "7.0.0" msgid "7.0.1" msgstr "7.0.1" msgid "7.0.2" msgstr "7.0.2" msgid "7.0.3" msgstr "7.0.3" msgid "7.0.4" msgstr "7.0.4" msgid "8.0.0" msgstr "8.0.0" msgid "9.0.0" msgstr "9.0.0" msgid "9.0.1" msgstr "9.0.1" msgid "9.1.0" msgstr "9.1.0" msgid "9.1.1" msgstr "9.1.1" msgid "9.1.2" msgstr "9.1.2" msgid "9.1.3" msgstr "9.1.3" msgid "9.2.0" msgstr "9.2.0" msgid "" "A few major changes are worth mentioning. 
This is not an exhaustive list:" msgstr "" "いくつかの主要な変更がありました。全てではありませんが以下にリストを示しま" "す。" msgid "A few major changes since 9.1.x (Pike) are worth mentioning:" msgstr "9.1.x (Pike) からの主要な変更がいくつかありました。" msgid "Bug Fixes" msgstr "バグ修正" msgid "Current Series Release Notes" msgstr "開発中バージョンのリリースノート" msgid "Deprecation Notes" msgstr "廃止予定の機能" msgid "Known Issues" msgstr "既知の問題" msgid "New Features" msgstr "新機能" msgid "Option" msgstr "オプション" msgid "Other Notes" msgstr "その他の注意点" msgid "Security Issues" msgstr "セキュリティー上の問題" msgid "Upgrade Notes" msgstr "アップグレード時の注意" ironic-15.0.0/releasenotes/source/conf.py0000664000175000017500000002202313652514273020401 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Ironic Release Notes documentation build configuration file, created by # sphinx-quickstart on Tue Nov 3 17:40:50 2015. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
# sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'reno.sphinxext', ] try: import openstackdocstheme extensions.append('openstackdocstheme') except ImportError: openstackdocstheme = None repository_name = 'openstack/ironic' bug_project = 'ironic' bug_tag = '' html_last_updated_fmt = '%Y-%m-%d %H:%M' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Ironic Release Notes' copyright = u'2015, Ironic Developers' # Release notes do not need a version number in the title, they # cover multiple releases. # The full version, including alpha/beta/rc tags. release = '' # The short X.Y version. version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. 
# add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. if openstackdocstheme is not None: html_theme = 'openstackdocs' else: html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". 
html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'IronicReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. 
# 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'IronicReleaseNotes.tex', u'Ironic Release Notes Documentation', u'Ironic Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'ironicreleasenotes', u'Ironic Release Notes Documentation', [u'Ironic Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'IronicReleaseNotes', u'Ironic Release Notes Documentation', u'Ironic Developers', 'IronicReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. 
# texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] ironic-15.0.0/releasenotes/source/unreleased.rst0000664000175000017500000000015313652514273021763 0ustar zuulzuul00000000000000============================ Current Series Release Notes ============================ .. release-notes:: ironic-15.0.0/releasenotes/source/stein.rst0000664000175000017500000000027113652514273020757 0ustar zuulzuul00000000000000============================================== Stein Series (12.0.0 - 12.1.x) Release Notes ============================================== .. release-notes:: :branch: stable/stein ironic-15.0.0/releasenotes/source/index.rst0000664000175000017500000000066013652514273020746 0ustar zuulzuul00000000000000===================== Ironic Release Notes ===================== .. toctree:: :maxdepth: 1 unreleased train stein rocky queens pike ocata newton mitaka liberty Kilo (2015.1) Juno (2014.2) Icehouse (2014.1) ironic-15.0.0/releasenotes/source/queens.rst0000664000175000017500000000032613652514273021136 0ustar zuulzuul00000000000000============================================== Queens Series (9.2.0 - 10.1.x) Release Notes ============================================== .. release-notes:: :branch: stable/queens :earliest-version: 9.2.0 ironic-15.0.0/api-ref/0000775000175000017500000000000013652514443014434 5ustar zuulzuul00000000000000ironic-15.0.0/api-ref/source/0000775000175000017500000000000013652514443015734 5ustar zuulzuul00000000000000ironic-15.0.0/api-ref/source/baremetal-api-v1-drivers.inc0000664000175000017500000001777413652514273023153 0ustar zuulzuul00000000000000.. -*- rst -*- ================= Drivers (drivers) ================= .. versionchanged:: 1.30 The REST API now also exposes information about *dynamic* drivers. Ironic has two types of drivers: *classic* drivers and *dynamic* drivers. 
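Both kinds appear in the ``GET /v1/drivers`` response and can be told apart by the ``type`` field of each entry. As a purely illustrative sketch (the payload below is made up, and the helper is not part of any Ironic client library), a client might group them like this:

```python
# Illustrative shape of a GET /v1/drivers response (abridged, made-up data).
sample_response = {
    "drivers": [
        {"name": "agent_ipmitool", "type": "classic",
         "hosts": ["conductor-1"]},
        {"name": "ipmi", "type": "dynamic",
         "hosts": ["conductor-1", "conductor-2"]},
    ]
}


def drivers_by_type(response):
    """Group driver names from a drivers-list response by their 'type'."""
    grouped = {}
    for driver in response["drivers"]:
        grouped.setdefault(driver["type"], []).append(driver["name"])
    return grouped


print(drivers_by_type(sample_response))
# {'classic': ['agent_ipmitool'], 'dynamic': ['ipmi']}
```

Note that the ``type`` field is only returned with API microversion 1.30 or later.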
A *classic* driver is a Python object containing all the logic to manage the bare metal nodes enrolled within Ironic. A driver may be loaded within one or more ``ironic-conductor`` services. Each driver contains a pre-determined set of instantiated interfaces. Each type of interface (eg, ``power`` or ``boot``) performs a specific hardware function. *Dynamic* drivers are supported via hardware types, which are Python classes enabled via entry points. Unlike *classic* drivers, which have pre-determined interfaces, a hardware type may support multiple types of interfaces. For example, the ``ipmi`` hardware type may support multiple methods for enabling node console. Which interface a node of a particular hardware type uses is determined at runtime. This collection of interfaces is called a *dynamic* driver. For more information about this, see the node API documentation. The REST API exposes the list of drivers and which ``ironic-conductor`` processes have loaded that driver via the Driver resource (``/v1/drivers`` endpoint). This can be useful for operators to validate their configuration in a heterogeneous hardware environment. Each ``ironic-conductor`` process may load one or more drivers, and does not necessarily need to load the same *classic* drivers as another ``ironic-conductor``. Each ``ironic-conductor`` with the same hardware types must have the same hardware interfaces enabled. The REST API also exposes details about each driver, such as what properties must be supplied to a node's ``driver_info`` for that driver to manage hardware. Lastly, some drivers may expose methods through a ``driver_vendor_passthru`` endpoint, allowing one to interact with the driver directly (i.e., without knowing a specific node identifier). For example, this is used by the ironic python agent ramdisk to get the UUID of the node being deployed/cleaned by using MAC addresses of the node's network interfaces the agent has discovered. List drivers ============ .. 
rest_method:: GET /v1/drivers Lists all drivers. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - type: driver_type - detail: driver_detail Response Parameters ------------------- The response BODY contains a single key, "drivers", whose value is a list of drivers supported by this Ironic service. .. rest_parameters:: parameters.yaml - drivers: drivers - name: driver_name - hosts: hosts - type: response_driver_type - links: links - properties: driver_property_links .. versionchanged:: 1.30 If the request has the "detail" URL parameter set to true, each driver will also include the following fields. .. rest_parameters:: parameters.yaml - default_bios_interface: default_bios_interface - default_boot_interface: default_boot_interface - default_console_interface: default_console_interface - default_deploy_interface: default_deploy_interface - default_inspect_interface: default_inspect_interface - default_management_interface: default_management_interface - default_network_interface: default_network_interface - default_power_interface: default_power_interface - default_raid_interface: default_raid_interface - default_rescue_interface: default_rescue_interface - default_storage_interface: default_storage_interface - default_vendor_interface: default_vendor_interface - enabled_bios_interfaces: enabled_bios_interfaces - enabled_boot_interfaces: enabled_boot_interfaces - enabled_console_interfaces: enabled_console_interfaces - enabled_deploy_interfaces: enabled_deploy_interfaces - enabled_inspect_interfaces: enabled_inspect_interfaces - enabled_management_interfaces: enabled_management_interfaces - enabled_network_interfaces: enabled_network_interfaces - enabled_power_interfaces: enabled_power_interfaces - enabled_rescue_interfaces: enabled_rescue_interfaces - enabled_raid_interfaces: enabled_raid_interfaces - enabled_storage_interfaces: enabled_storage_interfaces - enabled_vendor_interfaces: enabled_vendor_interfaces Response Example 
---------------- Example for a request with detail=false (the default): .. literalinclude:: samples/drivers-list-response.json :language: javascript Example for a request with detail=true: .. literalinclude:: samples/drivers-list-detail-response.json :language: javascript Show driver details =================== .. rest_method:: GET /v1/drivers/{driver_name} Shows details for a driver. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - driver_name: driver_ident Response Parameters ------------------- .. rest_parameters:: parameters.yaml - name: driver_name - hosts: hosts - type: response_driver_type - default_bios_interface: default_bios_interface - default_boot_interface: default_boot_interface - default_console_interface: default_console_interface - default_deploy_interface: default_deploy_interface - default_inspect_interface: default_inspect_interface - default_management_interface: default_management_interface - default_network_interface: default_network_interface - default_power_interface: default_power_interface - default_raid_interface: default_raid_interface - default_rescue_interface: default_rescue_interface - default_storage_interface: default_storage_interface - default_vendor_interface: default_vendor_interface - enabled_bios_interfaces: enabled_bios_interfaces - enabled_boot_interfaces: enabled_boot_interfaces - enabled_console_interfaces: enabled_console_interfaces - enabled_deploy_interfaces: enabled_deploy_interfaces - enabled_inspect_interfaces: enabled_inspect_interfaces - enabled_management_interfaces: enabled_management_interfaces - enabled_network_interfaces: enabled_network_interfaces - enabled_power_interfaces: enabled_power_interfaces - enabled_raid_interfaces: enabled_raid_interfaces - enabled_rescue_interfaces: enabled_rescue_interfaces - enabled_storage_interfaces: enabled_storage_interfaces - enabled_vendor_interfaces: enabled_vendor_interfaces - links: links - properties: driver_property_links Response 
Example ---------------- .. literalinclude:: samples/driver-get-response.json :language: javascript Show driver properties ====================== .. rest_method:: GET /v1/drivers/{driver_name}/properties Shows the required and optional parameters that ``driver_name`` expects to be supplied in the ``driver_info`` field for every Node it manages. To check if all required parameters have been supplied to a Node, you should query the ``/v1/nodes/{node_ident}/validate`` endpoint. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - driver_name: driver_ident Response Example ---------------- The response BODY is a dictionary, but the keys are unique to each driver. The structure of the response is ``property`` : ``description``. The following example is returned from the ``agent_ipmitool`` driver. .. literalinclude:: samples/driver-property-response.json :language: javascript Show driver logical disk properties =================================== .. versionadded:: 1.12 .. rest_method:: GET /v1/drivers/{driver_name}/raid/logical_disk_properties Show the required and optional parameters that ``driver_name`` expects to be supplied in the node's ``raid_config`` field, if a RAID configuration change is requested. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - driver_name: driver_ident Response Example ---------------- The response BODY is a dictionary, but the keys are unique to each driver. The structure of the response is ``property`` : ``description``. The following example is returned from the ``agent_ipmitool`` driver. .. literalinclude:: samples/driver-logical-disk-properties-response.json :language: javascript ironic-15.0.0/api-ref/source/baremetal-api-v1-portgroups-ports.inc0000664000175000017500000000506313652514273025052 0ustar zuulzuul00000000000000.. -*- rst -*- ============================================= Listing Ports by Portgroup (portgroup, ports) ============================================= .. 
versionadded:: 1.24 Given a Portgroup identifier (``uuid`` or ``name``), the API exposes the list of, and details of, all Ports associated with that Portgroup. These endpoints do not allow modification of the Ports; that should be done by accessing the Port resources under the ``/v1/ports`` endpoint. List Ports by Portgroup ======================= .. rest_method:: GET /v1/portgroups/{portgroup_ident}/ports Return a list of bare metal Ports associated with ``portgroup_ident``. When specified, the ``fields`` request parameter causes the content of the Response to include only the specified fields, rather than the default set. .. versionadded:: 1.34 Added the ``physical_network`` field. .. versionadded:: 1.53 Added the ``is_smartnic`` response fields. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - portgroup_ident: portgroup_ident - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - ports: ports - uuid: uuid - address: port_address - links: links **Example list of a Portgroup's Ports:** .. literalinclude:: samples/portgroup-port-list-response.json List detailed Ports by Portgroup ================================ .. rest_method:: GET /v1/portgroups/{portgroup_ident}/ports/detail Return a detailed list of bare metal Ports associated with ``portgroup_ident``. .. versionadded:: 1.34 Added the ``physical_network`` field. .. versionadded:: 1.53 Added the ``is_smartnic`` response fields. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - portgroup_ident: portgroup_ident - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. 
rest_parameters:: parameters.yaml - ports: ports - uuid: uuid - address: port_address - node_uuid: node_uuid - local_link_connection: local_link_connection - pxe_enabled: pxe_enabled - physical_network: physical_network - internal_info: internal_info - extra: extra - portgroup_uuid: portgroup_uuid - created_at: created_at - updated_at: updated_at - links: links - is_smartnic: is_smartnic **Example details of a Portgroup's Ports:** .. literalinclude:: samples/portgroup-port-detail-response.json ironic-15.0.0/api-ref/source/baremetal-api-v1-allocation.inc0000664000175000017500000001502213652514273023602 0ustar zuulzuul00000000000000.. -*- rst -*- ========================= Allocations (allocations) ========================= The Allocation resource represents a request to find and allocate a Node for deployment. .. versionadded:: 1.52 Allocation API was introduced. Create Allocation ================= .. rest_method:: POST /v1/allocations Creates an allocation. A Node can be requested by its resource class and traits. Additionally, Nodes can be pre-filtered on the client side, and the resulting list of UUIDs and/or names can be submitted as ``candidate_nodes``. Otherwise all nodes are considered. A Node is suitable for an Allocation if all of the following holds: * ``provision_state`` is ``available`` * ``power_state`` is not ``null`` * ``maintenance`` is ``false`` * ``instance_uuid`` is ``null`` * ``resource_class`` matches requested one * ``traits`` list contains all of the requested ones The allocation process is asynchronous. The new Allocation is returned in the ``allocating`` state, and the process continues in the background. If it succeeds, the ``node_uuid`` field is populated with the Node's UUID, and the Node's ``instance_uuid`` field is set to the Allocation's UUID. If you want to backfill an allocation for an already deployed node, you can pass the UUID or name of this node to ``node``. 
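The suitability rules above, and the way a backfill request sidesteps them, can be sketched as follows (the helper and all node data are illustrative only, not part of Ironic):

```python
def is_suitable(node, resource_class, traits):
    """Mirror the suitability rules listed above (illustrative only)."""
    return (
        node["provision_state"] == "available"
        and node["power_state"] is not None
        and node["maintenance"] is False
        and node["instance_uuid"] is None
        and node["resource_class"] == resource_class
        and set(traits).issubset(node["traits"])
    )


# Made-up node records; a real client would fetch these from /v1/nodes.
nodes = [
    {"provision_state": "available", "power_state": "power off",
     "maintenance": False, "instance_uuid": None,
     "resource_class": "baremetal", "traits": ["CUSTOM_RAID"]},
    {"provision_state": "active", "power_state": "power on",
     "maintenance": False, "instance_uuid": "some-instance-uuid",
     "resource_class": "baremetal", "traits": ["CUSTOM_RAID"]},
]

candidates = [n for n in nodes if is_suitable(n, "baremetal", ["CUSTOM_RAID"])]
print(len(candidates))  # only the first node qualifies

# A backfill request body, by contrast, names the node directly
# (example values only):
backfill_request = {"node": "compute-03", "resource_class": "baremetal"}
```

The backfill request at the end names the node directly instead of relying on this filtering.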
In this case the allocation is created immediately, bypassing the normal allocation process. Other parameters must be missing or match the provided node. .. versionadded:: 1.52 Allocation API was introduced. .. versionadded:: 1.58 Added support for backfilling allocations. .. versionadded:: 1.60 Introduced the ``owner`` field. Normal response codes: 201 Error response codes: 400, 401, 403, 409, 503 Request ------- .. rest_parameters:: parameters.yaml - resource_class: req_allocation_resource_class - candidate_nodes: req_candidate_nodes - name: req_allocation_name - traits: req_allocation_traits - uuid: req_uuid - extra: req_extra - node: req_allocation_node - owner: owner Request Example --------------- .. literalinclude:: samples/allocation-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - candidate_nodes: candidate_nodes - last_error: allocation_last_error - name: allocation_name - node_uuid: allocation_node - resource_class: allocation_resource_class - state: allocation_state - traits: allocation_traits - owner: owner - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- .. literalinclude:: samples/allocation-create-response.json :language: javascript List Allocations ================ .. rest_method:: GET /v1/allocations Lists all Allocations. .. versionadded:: 1.52 Allocation API was introduced. .. versionadded:: 1.60 Introduced the ``owner`` field. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: parameters.yaml - node: r_allocation_node - resource_class: r_resource_class - state: r_allocation_state - owner: owner - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - uuid: uuid - candidate_nodes: candidate_nodes - last_error: allocation_last_error - name: allocation_name - node_uuid: allocation_node - resource_class: allocation_resource_class - state: allocation_state - traits: allocation_traits - owner: owner - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- .. literalinclude:: samples/allocations-list-response.json :language: javascript Show Allocation Details ======================= .. rest_method:: GET /v1/allocations/{allocation_id} Shows details for an Allocation. .. versionadded:: 1.52 Allocation API was introduced. .. versionadded:: 1.60 Introduced the ``owner`` field. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: parameters.yaml - fields: fields - allocation_id: allocation_ident Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - candidate_nodes: candidate_nodes - last_error: allocation_last_error - name: allocation_name - node_uuid: allocation_node - resource_class: allocation_resource_class - state: allocation_state - traits: allocation_traits - owner: owner - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- .. literalinclude:: samples/allocation-show-response.json :language: javascript Update Allocation ================= .. rest_method:: PATCH /v1/allocations/{allocation_id} Updates an allocation. Allows updating only name and extra fields. .. versionadded:: 1.57 Allocation update API was introduced. Normal response codes: 200 Error response codes: 400, 401, 403, 404, 409, 503 Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 `_. .. rest_parameters:: parameters.yaml - allocation_id: allocation_ident - name: req_allocation_name - extra: req_extra Request Example --------------- .. 
literalinclude:: samples/allocation-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - candidate_nodes: candidate_nodes - last_error: allocation_last_error - name: allocation_name - node_uuid: allocation_node - resource_class: allocation_resource_class - state: allocation_state - traits: allocation_traits - owner: owner - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- .. literalinclude:: samples/allocation-update-response.json :language: javascript Delete Allocation ================= .. rest_method:: DELETE /v1/allocations/{allocation_id} Deletes an Allocation. If the Allocation has a Node associated, the Node's ``instance_uuid`` is reset. The deletion will fail if the Allocation has a Node assigned and the Node is ``active`` and not in the maintenance mode. .. versionadded:: 1.52 Allocation API was introduced. Normal response codes: 204 Error response codes: 400, 401, 403, 404, 409, 503 Request ------- .. rest_parameters:: parameters.yaml - allocation_id: allocation_ident ironic-15.0.0/api-ref/source/baremetal-api-v1-nodes-bios.inc0000664000175000017500000000323213652514273023517 0ustar zuulzuul00000000000000.. -*- rst -*- ================= Node Bios (nodes) ================= .. versionadded:: 1.40 Given a Node identifier (``uuid`` or ``name``), the API exposes the list of all Bios settings associated with that Node. These endpoints do not allow modification of the Bios settings; that should be done by using ``clean steps``. List all Bios settings by Node ============================== .. rest_method:: GET /v1/nodes/{node_ident}/bios Return a list of Bios settings associated with ``node_ident``. Normal response code: 200 Error codes: 404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- .. 
rest_parameters:: parameters.yaml - bios: bios_settings - created_at: created_at - updated_at: updated_at - links: links - name: bios_setting_name - value: bios_setting_value **Example list of a Node's Bios settings:** .. literalinclude:: samples/node-bios-list-response.json Show single Bios setting of a Node ================================== .. rest_method:: GET /v1/nodes/{node_ident}/bios/{bios_setting} Return the content of the specific Bios setting ``bios_setting`` associated with ``node_ident``. Normal response code: 200 Error codes: 404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - bios_setting: bios_setting Response -------- .. rest_parameters:: parameters.yaml - : d_bios_setting - created_at: created_at - updated_at: updated_at - links: links - name: bios_setting_name - value: bios_setting_value **Example details of a Node's Bios setting:** .. literalinclude:: samples/node-bios-detail-response.json ironic-15.0.0/api-ref/source/baremetal-api-v1-conductors.inc0000664000175000017500000000401213652514273023637 0ustar zuulzuul00000000000000.. -*- rst -*- ======================= Conductors (conductors) ======================= .. versionadded:: 1.49 Listing Conductor resources is done through the ``conductors`` resource. Conductor resources are read-only; they cannot be created, updated, or removed. List Conductors =============== .. rest_method:: GET /v1/conductors Return a list of conductors known by the Bare Metal service. By default, this query will return the hostname, conductor group, and alive status for each Conductor. When ``detail`` is set to True in the query string, the response will include the full representation of each resource. Normal response code: 200 Request ------- .. rest_parameters:: parameters.yaml - fields: fields_for_conductor - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key - detail: detail Response -------- ..
rest_parameters:: parameters.yaml - hostname: hostname - conductor_group: conductor_group - alive: alive - drivers: drivers - links: links **Example Conductor list response:** .. literalinclude:: samples/conductor-list-response.json :language: javascript **Example detailed Conductor list response:** .. literalinclude:: samples/conductor-list-details-response.json :language: javascript Show Conductor Details ====================== .. rest_method:: GET /v1/conductors/{hostname} Shows details for a conductor. By default, this will return the full representation of the resource; an optional ``fields`` parameter can be supplied to return only the specified set. Normal response codes: 200 Error codes: 400,403,404,406 Request ------- .. rest_parameters:: parameters.yaml - hostname: hostname_ident - fields: fields_for_conductor Response -------- .. rest_parameters:: parameters.yaml - hostname: hostname - conductor_group: conductor_group - alive: alive - drivers: drivers - links: links **Example JSON representation of a Conductor:** .. literalinclude:: samples/conductor-show-response.json :language: javascript ironic-15.0.0/api-ref/source/baremetal-api-v1-nodes-portgroups.inc0000664000175000017500000000426413652514273025015 0ustar zuulzuul00000000000000.. -*- rst -*- ============================================== Listing Portgroups by Node (nodes, portgroups) ============================================== .. versionadded:: 1.24 Given a Node identifier (``uuid`` or ``name``), the API exposes the list of, and details of, all Portgroups associated with that Node. These endpoints do not allow modification of the Portgroups; that should be done by accessing the Portgroup resources under the ``/v1/portgroups`` endpoint. List Portgroups by Node ======================= .. rest_method:: GET /v1/nodes/{node_ident}/portgroups Return a list of bare metal Portgroups associated with ``node_ident``. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. 
rest_parameters:: parameters.yaml - node_ident: node_ident - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - portgroups: portgroups - uuid: uuid - address: portgroup_address - name: portgroup_name - links: links **Example list of a Node's Portgroups:** .. literalinclude:: samples/node-portgroup-list-response.json List detailed Portgroups by Node ================================ .. rest_method:: GET /v1/nodes/{node_ident}/portgroups/detail Return a detailed list of bare metal Portgroups associated with ``node_ident``. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - portgroups: portgroups - uuid: uuid - address: portgroup_address - name: portgroup_name - node_uuid: node_uuid - standalone_ports_supported: standalone_ports_supported - internal_info: portgroup_internal_info - extra: extra - mode: portgroup_mode - properties: portgroup_properties - ports: pg_ports - created_at: created_at - updated_at: updated_at - links: links **Example details of a Node's Portgroups:** .. literalinclude:: samples/node-portgroup-detail-response.json ironic-15.0.0/api-ref/source/parameters.yaml0000664000175000017500000013427613652514273021001 0ustar zuulzuul00000000000000# variables in header header_version: description: | Specific API microversion used to generate this response. in: header required: true type: string openstack-request-id: description: > A unique ID for tracking the request. The request ID associated with the request appears in the log lines for that request. By default, the middleware configuration ensures that the request ID appears in the log files. 
in: header required: false type: string x-openstack-ironic-api-max-version: description: | Maximum API microversion supported by this endpoint, eg. "1.22" in: header required: true type: string x-openstack-ironic-api-min-version: description: | Minimum API microversion supported by this endpoint, eg. "1.1" in: header required: true type: string x-openstack-ironic-api-version: description: > A request SHOULD include this header to indicate to the Ironic API service what version the client supports. The server will transform the response object into compliance with the requested version, if it is supported, or return a 406 Not Acceptable error. If this header is not supplied, the server will default to ``min_version`` in all responses. in: header required: true type: string # variables in path allocation_ident: description: | The UUID or name of the allocation. in: path required: true type: string bios_setting: description: | The name of the Bios setting. in: path required: true type: string chassis_ident: description: | The UUID of the chassis. in: path required: true type: string deploy_template_ident: description: | The UUID or name of the deploy template. in: path required: true type: string driver_ident: description: | The name of the driver. in: path required: true type: string hostname_ident: description: | The hostname of the conductor. in: path required: true type: string node_id: description: | The UUID of the node. in: path required: false type: string node_ident: description: | The UUID or Name of the node. in: path required: true type: string port_ident: description: | The UUID of the port. in: path required: true type: string portgroup_ident: description: | The UUID or Name of the portgroup. in: path required: true type: string trait: description: | A single trait for this node. in: path required: true type: string volume_connector_id: description: | The UUID of the Volume connector.
in: path required: true type: string volume_target_id: description: | The UUID of the Volume target. in: path required: true type: string agent_version: description: | The version of the ironic-python-agent ramdisk, sent back to the Bare Metal service and stored during provisioning. in: query required: true type: string callback_url: description: | The URL of an active ironic-python-agent ramdisk, sent back to the Bare Metal service and stored temporarily during a provisioning action. in: query required: true type: string detail: description: | Whether to show detailed information about the resource. This cannot be set to True if ``fields`` parameter is specified. in: query required: false type: boolean # variables in driver query string driver_detail: description: | Whether to show detailed information about the drivers (e.g. the "boot_interface" field). in: query required: false type: boolean driver_type: description: | Only list drivers of this type. Options are "classic" or "dynamic". in: query required: false type: string # variables common to all query strings fields: description: | One or more fields to be returned in the response. For example, the following request returns only the ``uuid`` and ``name`` fields for each node: :: GET /v1/nodes?fields=uuid,name in: query required: false type: array fields_for_conductor: description: | One or more fields to be returned in the response. For example, the following request returns only the ``hostname`` and ``alive`` fields for each conductor: :: GET /v1/conductors?fields=hostname,alive in: query required: false type: array limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. This value cannot be larger than the ``max_limit`` option in the ``[api]`` section of the configuration. 
If it is higher than ``max_limit``, only ``max_limit`` resources will be returned. in: query required: false type: integer marker: description: | The ID of the last-seen item. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string # variables in the vendor_passthru query string method_name: description: | Driver-specific method name. in: query required: true type: string # variable in the lookup query string r_addresses: description: | Optional list of one or more Port addresses. in: query required: false type: array # variables in the query string r_allocation_node: description: | Filter the list of allocations by the node UUID or name. in: query required: false type: string r_allocation_state: description: | Filter the list of allocations by the allocation state, one of ``active``, ``allocating`` or ``error``. in: query required: false type: string r_associated: description: | Filter the list of returned nodes and only return those which are, or are not, associated with an ``instance_uuid``. in: query required: false type: boolean r_conductor: description: | Filter the list of returned nodes, and only return those with the specified ``conductor``. in: query required: false type: string r_conductor_group: description: | Filter the list of returned nodes, and only return those with the specified ``conductor_group``. Case-insensitive string up to 255 characters, containing ``a-z``, ``0-9``, ``_``, ``-``, and ``.``. in: query required: false type: string r_description_contains: description: | Filter the list of returned nodes, and only return those whose description contains the substring specified by ``description_contains``. in: query required: false type: string r_driver: description: | Filter the list of returned nodes, and only return those with the specified ``driver``.
in: query required: false type: string r_fault: description: | Filter the list of returned nodes, and only return those with the specified ``fault``. Possible values are determined by faults supported by ironic, e.g., ``power failure``, ``clean failure`` or ``rescue abort failure``. in: query required: false type: string r_instance_uuid: description: | Filter the list of returned nodes, and only return the node with this specific instance UUID, or an empty set if not found. in: query required: false type: string r_maintenance: description: | Filter the list of returned nodes and only return those with ``maintenance`` set to ``True`` or ``False``. in: query required: false type: boolean # variable in the lookup query string r_node_uuid: description: | Optional Node UUID. in: query required: false type: string r_port_address: description: | Filter the list of returned Ports, and only return the ones with the specified physical hardware address, typically MAC, or an empty set if not found. in: query required: false type: string r_port_node_ident: description: | Filter the list of returned Ports, and only return the ones associated with this specific node (name or UUID), or an empty set if not found. in: query required: false type: string r_port_node_uuid: description: | Filter the list of returned Ports, and only return the ones associated with this specific node UUID, or an empty set if not found. in: query required: false type: string r_port_portgroup_ident: description: | Filter the list of returned Ports, and only return the ones associated with this specific Portgroup (name or UUID), or an empty set if not found. in: query required: false type: string r_portgroup_address: description: | Filter the list of returned Portgroups, and only return the ones with the specified physical hardware address, typically MAC, or an empty set if not found. 
in: query required: false type: string r_portgroup_node_ident: description: | Filter the list of returned Portgroups, and only return the ones associated with this specific node (name or UUID), or an empty set if not found. in: query required: false type: string r_provision_state: description: | Filter the list of returned nodes, and only return those with the specified ``provision_state``. in: query required: false type: string r_resource_class: description: | Filter the list of returned nodes, and only return the ones with the specified resource class. in: query required: false type: string r_volume_connector_node_ident: description: | Filter the list of returned Volume connectors, and only return the ones associated with this specific node (name or UUID), or an empty set if not found. in: query required: false type: string r_volume_target_node_ident: description: | Filter the list of returned Volume targets, and only return the ones associated with this specific node (name or UUID), or an empty set if not found. in: query required: false type: string sort_dir: description: | Sorts the response by the requested sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``asc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. in: query required: false type: string sort_key: description: | Sorts the response by this attribute value. Default is ``id``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. in: query required: false type: string # variable returned from /lookup agent_config: description: | JSON document of configuration data for the ironic-python-agent process. 
in: body required: true type: JSON agent_node: description: | JSON document containing the Node fields "uuid", "properties", "instance_info", and "driver_internal_info"; used by the ironic-python-agent process as it operates on the Node. in: body required: true type: JSON # variables in the API body alive: description: | The conductor status indicates whether a conductor is considered alive or not. in: body required: true type: boolean allocation_last_error: description: | The error message for the allocation if it is in the ``error`` state, ``null`` otherwise. in: body required: true type: string allocation_name: description: | The unique name of the allocation. in: body required: true type: string allocation_node: description: | The UUID of the node assigned to the allocation. Will be ``null`` if a node is not yet assigned. in: body required: true type: string allocation_resource_class: description: | The resource class requested for the allocation. Can be ``null`` if the allocation was created via backfilling and the target node did not have the resource class set. in: body required: true type: string allocation_state: description: | The current state of the allocation. One of: * ``allocating`` - allocation is in progress. * ``active`` - allocation is finished and ``node_uuid`` is assigned. * ``error`` - allocation has failed, see ``last_error`` for details. in: body required: true type: string allocation_traits: description: | The list of the traits requested for the allocation. in: body required: true type: array allocation_uuid: description: | The UUID of the allocation associated with the node. If not ``null``, will be the same as ``instance_uuid`` (the opposite is not always true). Unlike ``instance_uuid``, this field is read-only. Please use the Allocation API to remove allocations. in: body required: true type: string bios_setting_name: description: | The name of a Bios setting for a Node, eg. "virtualization". 
in: body required: true type: string bios_setting_value: description: | The value of a Bios setting for a Node, eg. "on". in: body required: true type: string bios_settings: description: | Optional list of one or more Bios settings. It includes following fields "created_at", "updated_at", "links", "name", "value". in: body required: true type: array boot_device: description: | The boot device for a Node, eg. "pxe" or "disk". in: body required: true type: string boot_interface: description: | The boot interface for a Node, e.g. "pxe". in: body required: true type: string candidate_nodes: description: | A list of UUIDs of the nodes that are candidates for this allocation. in: body required: true type: array chassis: description: | A ``chassis`` object. in: body required: true type: array chassis_uuid: description: | UUID of the chassis associated with this Node. May be empty or None. in: body required: true type: string clean_step: description: | The current clean step. Introduced with the cleaning feature. in: body required: false type: string clean_steps: description: | An ordered list of cleaning steps that will be performed on the node. A cleaning step is a dictionary with required keys 'interface' and 'step', and optional key 'args'. If specified, the value for 'args' is a keyword variable argument dictionary that is passed to the cleaning step method. in: body required: false type: array conductor: description: | The conductor currently servicing a node. This field is read-only. in: body required: false type: string conductor_group: description: | The conductor group for a node. Case-insensitive string up to 255 characters, containing ``a-z``, ``0-9``, ``_``, ``-``, and ``.``. in: body required: true type: string configdrive: description: | A config drive to be written to a partition on the Node's boot disk. Can be a full gzip'ed and base-64 encoded image or a JSON object with the keys: * ``meta_data`` (optional) - JSON object with the standard meta data. 
Ironic will provide the defaults for the ``uuid`` and ``name`` fields. * ``network_data`` (optional) - JSON object with networking configuration. * ``user_data`` (optional) - user data. May be a string (which will be UTF-8 encoded); a JSON object, or a JSON array. * ``vendor_data`` (optional) - JSON object with extra vendor data. This parameter is only accepted when setting the state to "active" or "rebuild". in: body required: false type: string or object console_enabled: description: | Indicates whether console access is enabled or disabled on this node. in: body required: true type: boolean console_interface: description: | The console interface for a node, e.g. "no-console". in: body required: true type: string created_at: description: | The UTC date and time when the resource was created, `ISO 8601 `_ format. in: body required: true type: string d_bios_setting: description: | Dictionary containing the definition of a Bios setting. It includes the following fields "created_at", "updated_at", "links", "name", "value". in: body required: true type: dictionary default_bios_interface: description: | The default bios interface used for a node with a dynamic driver, if no bios interface is specified for the node. in: body required: true type: string default_boot_interface: description: | The default boot interface used for a node with a dynamic driver, if no boot interface is specified for the node. in: body required: true type: string default_console_interface: description: | The default console interface used for a node with a dynamic driver, if no console interface is specified for the node. in: body required: true type: string default_deploy_interface: description: | The default deploy interface used for a node with a dynamic driver, if no deploy interface is specified for the node. 
in: body required: true type: string default_inspect_interface: description: | The default inspection interface used for a node with a dynamic driver, if no inspection interface is specified for the node. in: body required: true type: string default_management_interface: description: | The default management interface used for a node with a dynamic driver, if no management interface is specified for the node. in: body required: true type: string default_network_interface: description: | The default network interface used for a node with a dynamic driver, if no network interface is specified for the node. in: body required: true type: string default_power_interface: description: | The default power interface used for a node with a dynamic driver, if no power interface is specified for the node. in: body required: true type: string default_raid_interface: description: | The default RAID interface used for a node with a dynamic driver, if no RAID interface is specified for the node. in: body required: true type: string default_rescue_interface: description: | The default rescue interface used for a node with a dynamic driver, if no rescue interface is specified for the node. in: body required: true type: string default_storage_interface: description: | The default storage interface used for a node with a dynamic driver, if no storage interface is specified for the node. in: body required: true type: string default_vendor_interface: description: | The default vendor interface used for a node with a dynamic driver, if no vendor interface is specified for the node. in: body required: true type: string deploy_interface: description: | The deploy interface for a node, e.g. "iscsi". in: body required: true type: string deploy_step: description: | The current deploy step. in: body required: false type: string deploy_template_name: description: | The unique name of the deploy template. 
in: body required: true type: string deploy_template_steps: description: | The deploy steps of the deploy template. Must be a list containing at least one deploy step. A deploy step is a dictionary with required keys ``interface``, ``step``, ``args``, and ``priority``. The value for ``interface`` is the name of the driver interface. The value for ``step`` is the name of the deploy step method on the driver interface. The value for ``args`` is a dictionary of arguments that are passed to the deploy step method. The value for ``priority`` is a non-negative integer priority for the step. A value of ``0`` for ``priority`` will disable that step. in: body required: true type: array description: description: | Descriptive text about the Ironic service. in: body required: true type: string driver_info: description: | All the metadata required by the driver to manage this Node. List of fields varies between drivers, and can be retrieved from the ``/v1/drivers//properties`` resource. in: body required: true type: JSON driver_internal_info: description: | Internal metadata set and stored by the Node's driver. This field is read-only. in: body required: false type: JSON driver_name: description: | The name of the driver. in: body required: true type: string driver_property_links: description: | A list of links to driver properties. in: body required: true type: array drivers: description: | A list of driver objects. in: body required: true type: array enabled_bios_interfaces: description: | The enabled bios interfaces for this driver. in: body required: true type: list enabled_boot_interfaces: description: | The enabled boot interfaces for this driver. in: body required: true type: list enabled_console_interfaces: description: | The enabled console interfaces for this driver. in: body required: true type: list enabled_deploy_interfaces: description: | The enabled deploy interfaces for this driver. 
in: body required: true type: list enabled_inspect_interfaces: description: | The enabled inspection interfaces for this driver. in: body required: true type: list enabled_management_interfaces: description: | The enabled management interfaces for this driver. in: body required: true type: list enabled_network_interfaces: description: | The enabled network interfaces for this driver. in: body required: true type: list enabled_power_interfaces: description: | The enabled power interfaces for this driver. in: body required: true type: list enabled_raid_interfaces: description: | The enabled RAID interfaces for this driver. in: body required: true type: list enabled_rescue_interfaces: description: | The enabled rescue interfaces for this driver. in: body required: true type: list enabled_storage_interfaces: description: | The enabled storage interfaces for this driver. in: body required: true type: list enabled_vendor_interfaces: description: | The enabled vendor interfaces for this driver. in: body required: true type: list extra: description: | A set of one or more arbitrary metadata key and value pairs. in: body required: true type: object fault: description: | The fault indicates the active fault detected by ironic, typically when the Node is in "maintenance mode". None means no fault has been detected by ironic. "power failure" indicates ironic failed to retrieve power state from this node. There are other possible types, e.g., "clean failure" and "rescue abort failure". in: body required: false type: string hostname: description: | The hostname of this conductor. in: body required: true type: string hosts: description: | A list of active hosts that support this driver. in: body required: true type: array id: description: | Major API version, e.g. "v1". in: body required: true type: string inspect_interface: description: | The interface used for node inspection, e.g. "no-inspect". 
in: body required: true type: string inspection_finished_at: description: | The UTC date and time when the last hardware inspection finished successfully, `ISO 8601 `_ format. May be "null". in: body required: true type: string inspection_started_at: description: | The UTC date and time when the hardware inspection was started, `ISO 8601 `_ format. May be "null". in: body required: true type: string instance_info: description: | Information used to customize the deployed image. May include root partition size, a base 64 encoded config drive, and other metadata. Note that this field is erased automatically when the instance is deleted (this is done by requesting the Node provision state be changed to DELETED). in: body required: true type: JSON instance_uuid: description: | UUID of the Nova instance associated with this Node. in: body required: true type: string internal_info: description: | Internal metadata set and stored by the Port. This field is read-only. in: body required: true type: JSON is_smartnic: description: | Indicates whether the Port is a Smart NIC port. in: body required: false type: boolean last_error: description: | Any error from the most recent (last) transaction that started but failed to finish. in: body required: true type: string lessee: description: | A string or UUID of the tenant who is leasing the object. in: body required: false type: string links: description: | A list of relative links. Includes the self and bookmark links. in: body required: true type: array local_link_connection: description: | The Port binding profile. If specified, must contain ``switch_id`` (only a MAC address or an OpenFlow based datapath_id of the switch are accepted in this field) and ``port_id`` (identifier of the physical port on the switch to which the node's port is connected) fields. ``switch_info`` is an optional string field to be used to store any vendor-specific information. 
in: body required: true type: JSON maintenance: description: | Whether or not this Node is currently in "maintenance mode". Setting a Node into maintenance mode removes it from the available resource pool and halts some internal automation. This can happen manually (eg, via an API request) or automatically when Ironic detects a hardware fault that prevents communication with the machine. in: body required: true type: boolean maintenance_reason: description: | User-settable description of the reason why this Node was placed into maintenance mode. in: body required: false type: string management_interface: description: | Interface for out-of-band node management, e.g. "ipmitool". in: body required: true type: string n_description: description: | Informational text about this node. in: body required: true type: string n_portgroups: description: | Links to the collection of portgroups on this node. in: body required: true type: array n_ports: description: | Links to the collection of ports on this node. in: body required: true type: array n_properties: description: | Physical characteristics of this Node. Populated by ironic-inspector during inspection. May be edited via the REST API at any time. in: body required: true type: JSON n_states: description: | Links to the collection of states. Note that this resource is also used to request state transitions. in: body required: true type: array n_traits: description: | List of traits for this node. in: body required: true type: array n_vifs: description: | VIFs attached to this node. in: body required: true type: array n_volume: description: | Links to the volume resources. in: body required: true type: array name: description: | The name of the driver. in: body required: true type: string network_interface: description: | Which Network Interface provider to use when plumbing the network connections for this Node. in: body required: true type: string next: description: | A URL to request a next collection of the resource. 
This parameter is returned when ``limit`` is specified in a request and more items remain. in: body required: false type: string node_name: description: | Human-readable identifier for the Node resource. May be undefined. Certain words are reserved. in: body required: false type: string node_uuid: description: | UUID of the Node this resource belongs to. in: body required: true type: string node_vif_ident: description: | The UUID or name of the VIF. in: body required: true type: string nodes: description: | Links to the collection of nodes contained in this chassis. in: body required: true type: array owner: description: | A string or UUID of the tenant who owns the object. in: body required: false type: string passthru_async: description: | If True, the passthru function is invoked asynchronously; if False, synchronously. in: body required: true type: boolean passthru_attach: description: | True if the return value will be attached to the response object, and False if the return value will be returned in the response body. in: body required: true type: boolean passthru_description: description: | A description of what the method does, including any method parameters. in: body required: true type: string passthru_http_methods: description: | A list of HTTP methods supported by the vendor function. in: body required: true type: array persistent: description: | Whether the boot device should be set only for the next reboot, or persistently. in: body required: true type: boolean pg_ports: description: | Links to the collection of ports belonging to this portgroup. in: body required: true type: array physical_network: description: | The name of the physical network to which a port is connected. May be empty. in: body required: true type: string port_address: description: | Physical hardware address of this network Port, typically the hardware MAC address. 
in: body required: true type: string portgroup_address: description: | Physical hardware address of this Portgroup, typically the hardware MAC address. in: body required: false type: string portgroup_internal_info: description: | Internal metadata set and stored by the Portgroup. This field is read-only. in: body required: true type: JSON portgroup_mode: description: | Mode of the port group. For possible values, refer to https://www.kernel.org/doc/Documentation/networking/bonding.txt. If not specified in a request to create a port group, it will be set to the value of the ``[DEFAULT]default_portgroup_mode`` configuration option. When set, can not be removed from the port group. in: body required: true type: string portgroup_name: description: | Human-readable identifier for the Portgroup resource. May be undefined. in: body required: false type: string portgroup_properties: description: | Key/value properties related to the port group's configuration. in: body required: true type: JSON portgroup_uuid: description: | UUID of the Portgroup this resource belongs to. in: body required: true type: string portgroups: description: | A collection of Portgroup resources. in: body required: true type: array ports: description: | A collection of Port resources. in: body required: true type: array power_interface: description: | Interface used for performing power actions on the node, e.g. "ipmitool". in: body required: true type: string power_state: description: | The current power state of this Node. Usually, "power on" or "power off", but may be "None" if Ironic is unable to determine the power state (eg, due to hardware failure). in: body required: true type: string power_timeout: description: | Timeout (in seconds) for a power state transition. in: body required: false type: integer properties: description: | A list of links to driver properties. 
in: body required: true type: array protected: description: | Whether the node is protected from undeploying, rebuilding and deletion. in: body required: false type: boolean protected_reason: description: | The reason the node is marked as protected. in: body required: false type: string provision_state: description: | The current provisioning state of this Node. in: body required: true type: string provision_updated_at: description: | The UTC date and time when the provision state was last updated, `ISO 8601 `_ format. ``null`` if the node is not being provisioned. in: body required: true type: string pxe_enabled: description: | Indicates whether PXE is enabled or disabled on the Port. in: body required: true type: boolean raid_config: description: | Represents the current RAID configuration of the node. Introduced with the cleaning feature. in: body required: false type: JSON raid_interface: description: | Interface used for configuring RAID on this node, e.g. "no-raid". in: body required: true type: string reason: description: | Specify the reason for setting the Node into maintenance mode. in: body required: false type: string req_allocation_name: description: | The unique name of the Allocation. in: body required: false type: string req_allocation_node: description: | The node UUID or name to create the allocation against, bypassing the normal allocation process. .. warning:: This field must not be used to request a normal allocation with one candidate node, use ``candidate_nodes`` instead. in: body required: false type: string req_allocation_resource_class: description: | The requested resource class for the allocation. Can only be missing when backfilling an allocation (will be set to the node's ``resource_class`` in such case). in: body required: true type: string req_allocation_traits: description: | The list of requested traits for the allocation. in: body required: false type: array req_boot_device: description: | The boot device for a Node, eg. "pxe" or "disk". 
in: body required: true type: string req_boot_interface: description: | The boot interface for a Node, e.g. "pxe". in: body required: false type: string req_candidate_nodes: description: | The list of nodes (names or UUIDs) that should be considered for this allocation. If not provided, all available nodes will be considered. in: body required: false type: array req_chassis: description: | A ``chassis`` object. in: body required: true type: array req_conductor_group: description: | The conductor group for a node. Case-insensitive string up to 255 characters, containing ``a-z``, ``0-9``, ``_``, ``-``, and ``.``. in: body required: false type: string req_console_enabled: description: | Indicates whether console access is enabled or disabled on this node. in: body required: true type: boolean req_console_interface: description: | The console interface for a node, e.g. "no-console". in: body required: false type: string req_deploy_interface: description: | The deploy interface for a node, e.g. "iscsi". in: body required: false type: string req_description: description: | Descriptive text about the Ironic service. in: body required: false type: string req_driver_info: description: | All the metadata required by the driver to manage this Node. List of fields varies between drivers, and can be retrieved from the ``/v1/drivers//properties`` resource. in: body required: false type: JSON req_driver_name: description: | The name of the driver used to manage this Node. in: body required: true type: string req_extra: description: | A set of one or more arbitrary metadata key and value pairs. in: body required: false type: object req_inspect_interface: description: | The interface used for node inspection, e.g. "no-inspect". in: body required: false type: string req_is_smartnic: description: | Indicates whether the Port is a Smart NIC port. in: body required: false type: boolean req_local_link_connection: description: | The Port binding profile. 
If specified, must contain ``switch_id`` (only a MAC address or an OpenFlow based datapath_id of the switch are accepted in this field) and ``port_id`` (identifier of the physical port on the switch to which the node's port is connected) fields. ``switch_info`` is an optional string field to be used to store any vendor-specific information. in: body required: false type: JSON req_management_interface: description: | Interface for out-of-band node management, e.g. "ipmitool". in: body required: false type: string req_network_interface: description: | Which Network Interface provider to use when plumbing the network connections for this Node. in: body required: false type: string req_node_uuid: description: | UUID of the Node this resource belongs to. in: body required: true type: string req_node_vif_ident: description: | The UUID or name of the VIF. in: body required: true type: string req_persistent: description: | Whether the boot device should be set only for the next reboot, or persistently. in: body required: false type: boolean req_physical_network: description: | The name of the physical network to which a port is connected. May be empty. in: body required: false type: string req_port_address: description: | Physical hardware address of this network Port, typically the hardware MAC address. in: body required: true type: string req_portgroup_address: description: | Physical hardware address of this Portgroup, typically the hardware MAC address. in: body required: false type: string req_portgroup_uuid: description: | UUID of the Portgroup this resource belongs to. in: body required: false type: string req_power_interface: description: | Interface used for performing power actions on the node, e.g. "ipmitool". in: body required: false type: string req_properties: description: | Physical characteristics of this Node. Populated during inspection, if performed. Can be edited via the REST API at any time. 
in: body required: false type: JSON req_provision_state: description: | The requested provisioning state of this Node. in: body required: true type: string req_pxe_enabled: description: | Indicates whether PXE is enabled or disabled on the Port. in: body required: false type: boolean req_raid_interface: description: | Interface used for configuring RAID on this node, e.g. "no-raid". in: body required: false type: string req_rescue_interface: description: | The interface used for node rescue, e.g. "no-rescue". in: body required: false type: string req_resource_class_create: description: | A string which can be used by external schedulers to identify this Node as a unit of a specific type of resource. in: body required: false type: string req_storage_interface: description: | Interface used for attaching and detaching volumes on this node, e.g. "cinder". in: body required: false type: string req_target_power_state: description: | If a power state transition has been requested, this field represents the requested (ie, "target") state either "power on", "power off", "rebooting", "soft power off" or "soft rebooting". in: body required: true type: string req_target_raid_config: description: | Represents the requested RAID configuration of the node, which will be applied when the Node next transitions through the CLEANING state. Introduced with the cleaning feature. in: body required: true type: JSON req_uuid: description: | The UUID for the resource. in: body required: false type: string req_vendor_interface: description: | Interface for vendor-specific functionality on this node, e.g. "no-vendor". in: body required: false type: string requested_provision_state: description: | One of the provisioning verbs: manage, provide, inspect, clean, active, rebuild, delete (deleted), abort, adopt, rescue, unrescue. in: body required: true type: string rescue_interface: description: | The interface used for node rescue, e.g. "no-rescue". 
in: body required: true type: string rescue_password: description: | Non-empty password used to configure rescue ramdisk during node rescue operation. in: body required: false type: string reservation: description: | The ``name`` of an Ironic Conductor host which is holding a lock on this node, if a lock is held. Usually "null", but this field can be useful for debugging. in: body required: true type: string resource_class: description: | A string which can be used by external schedulers to identify this Node as a unit of a specific type of resource. For more details, see: https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html in: body required: true type: string response_driver_type: description: | Type of this driver ("classic" or "dynamic"). in: body required: true type: string retired: description: | Whether the node is retired and can hence no longer be provided, i.e. move from ``manageable`` to ``available``, and will end up in ``manageable`` after cleaning (rather than ``available``). in: body required: false type: boolean retired_reason: description: | The reason the node is marked as retired. in: body required: false type: string standalone_ports_supported: description: | Indicates whether ports that are members of this portgroup can be used as stand-alone ports. in: body required: true type: boolean storage_interface: description: | Interface used for attaching and detaching volumes on this node, e.g. "cinder". in: body required: true type: string supported_boot_devices: description: | List of boot devices which this Node's driver supports. in: body required: true type: array target_power_state: description: | If a power state transition has been requested, this field represents the requested (ie, "target") state, either "power on" or "power off". in: body required: true type: string target_provision_state: description: | If a provisioning action has been requested, this field represents the requested (ie, "target") state. 
Note that a Node may go through several states during its transition to this target state. For instance, when requesting an instance be deployed to an AVAILABLE Node, the Node may go through the following state change progression: AVAILABLE -> DEPLOYING -> DEPLOYWAIT -> DEPLOYING -> ACTIVE in: body required: true type: string target_raid_config: description: | Represents the requested RAID configuration of the node, which will be applied when the Node next transitions through the CLEANING state. Introduced with the cleaning feature. in: body required: true type: JSON updated_at: description: | The UTC date and time when the resource was updated, `ISO 8601 `_ format. May be "null". in: body required: true type: string uuid: description: | The UUID for the resource. in: body required: true type: string # variables returned from node-validate v_boot: description: | Status of the "boot" interface in: body required: true type: object v_console: description: | Status of the "console" interface in: body required: true type: object v_deploy: description: | Status of the "deploy" interface in: body required: true type: object v_inspect: description: | Status of the "inspect" interface in: body required: true type: object v_management: description: | Status of the "management" interface in: body required: true type: object v_network: description: | Status of the "network" interface in: body required: true type: object v_power: description: | Status of the "power" interface in: body required: true type: object v_raid: description: | Status of the "raid" interface in: body required: true type: object v_rescue: description: | Status of the "rescue" interface in: body required: true type: object v_storage: description: | Status of the "storage" interface in: body required: true type: object vendor_interface: description: | Interface for vendor-specific functionality on this node, e.g. "no-vendor". 
in: body required: true type: string version: description: | Versioning of this API response, eg. "1.22". in: body required: true type: string versions: description: | Array of information about currently supported versions. in: body required: true type: array # variables returned from volume-connector volume_connector_connector_id: description: | The identifier of Volume connector. The identifier format depends on the ``type`` of the Volume connector, eg "iqn.2017-05.org.openstack:01:d9a51732c3f" if the ``type`` is "iqn", "192.168.1.2" if the ``type`` is "ip". in: body required: true type: string volume_connector_type: description: | The type of Volume connector such as "iqn", "ip", "wwnn" and "wwpn". in: body required: true type: string volume_connectors: description: | A collection of Volume connector resources. in: body required: true type: array volume_connectors_link: description: | Links to a collection of Volume connector resources. in: body required: true type: array # variables returned from volume-target volume_target_boot_index: description: | The boot index of the Volume target. "0" indicates that this volume is used as a boot volume. in: body required: true type: string volume_target_properties: description: | A set of physical information of the volume such as the identifier (eg. IQN) and LUN number of the volume. This information is used to connect the node to the volume by the storage interface. The contents depend on the volume type. in: body required: true type: object volume_target_volume_id: description: | The identifier of the volume. This ID is used by storage interface to distinguish volumes. in: body required: true type: string volume_target_volume_type: description: | The type of Volume target such as 'iscsi' and 'fibre_channel'. in: body required: true type: string volume_targets: description: | A collection of Volume target resources. 
in: body required: true type: array volume_targets_link: description: | Links to a collection of Volume target resources. in: body required: true type: array ironic-15.0.0/api-ref/source/baremetal-api-v1-node-allocation.inc .. -*- rst -*- ==================================== Node Allocation (allocations, nodes) ==================================== Given a Node identifier (``uuid`` or ``name``), the API allows getting and deleting the associated allocation. .. versionadded:: 1.52 Allocation API was introduced. Show Allocation by Node ======================= .. rest_method:: GET /v1/nodes/{node_ident}/allocation Shows details for an allocation. .. versionadded:: 1.52 Allocation API was introduced. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - fields: fields Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - candidate_nodes: candidate_nodes - last_error: allocation_last_error - name: allocation_name - node_uuid: allocation_node - resource_class: allocation_resource_class - state: allocation_state - traits: allocation_traits - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- .. literalinclude:: samples/allocation-show-response.json :language: javascript Delete Allocation by Node ========================= .. rest_method:: DELETE /v1/nodes/{node_ident}/allocation Deletes the allocation of this node and resets its ``instance_uuid``. The deletion will fail if the node is ``active`` and not in ``maintenance`` mode. .. versionadded:: 1.52 Allocation API was introduced. Normal response codes: 204 Error response codes: 400, 401, 403, 404, 409, 503 Request ------- ..
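The deletion precondition stated above can also be checked client-side before issuing the DELETE. The sketch below is illustrative only (the helper name is not part of Ironic or its client libraries); the API itself remains authoritative:

```python
def allocation_delete_allowed(provision_state, maintenance):
    # Per the description above: deleting a node's allocation fails
    # when the node is "active" and not in maintenance mode.
    return not (provision_state == "active" and not maintenance)
```

..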
rest_parameters:: parameters.yaml - node_ident: node_ident ironic-15.0.0/api-ref/source/baremetal-api-v1-ports.inc .. -*- rst -*- ============= Ports (ports) ============= Listing, Searching, Creating, Updating, and Deleting of bare metal Port resources are done through the ``ports`` resource. All Ports must be associated with a Node when created. This association can be changed, though the request may be rejected if either the current or destination Node is in a transitional state (e.g., in the process of deploying) or is in a state that would be non-deterministically affected by such a change (e.g., there is an active user instance on the Node). List Ports ========== .. rest_method:: GET /v1/ports Return a list of bare metal Ports. Some filtering is possible by passing in some parameters with the request. By default, this query will return the uuid and address for each Port. .. versionadded:: 1.6 Added the ``node`` query parameter. If both ``node_uuid`` and ``node`` are specified in the request, ``node_uuid`` will be used to filter results. .. versionadded:: 1.8 Added the ``fields`` request parameter. When specified, this causes the content of the response to include only the specified fields, rather than the default set. .. versionadded:: 1.19 Added the ``pxe_enabled`` and ``local_link_connection`` fields. .. versionadded:: 1.24 Added the ``portgroup_uuid`` field. .. versionadded:: 1.34 Added the ``physical_network`` field. .. versionadded:: 1.43 Added the ``detail`` boolean request parameter. When specified as ``True``, this causes the response to include complete details about each port. .. versionadded:: 1.53 Added the ``is_smartnic`` field. Normal response code: 200 Request ------- ..
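The filter parameters listed below can be combined freely, and unset filters are simply omitted from the query string. A minimal sketch of assembling such a request URL (the helper is hypothetical, shown only to illustrate the encoding):

```python
from urllib.parse import urlencode

def build_port_list_query(**filters):
    # Encode only the filters the caller actually set, e.g. node,
    # address, fields, limit, marker, sort_key, sort_dir, detail.
    params = {k: v for k, v in filters.items() if v is not None}
    if "fields" in params:
        # The fields parameter takes a comma-separated list.
        params["fields"] = ",".join(params["fields"])
    query = urlencode(params)
    return "/v1/ports" + ("?" + query if query else "")
```

The same pattern applies to the other list endpoints in this reference.

..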
rest_parameters:: parameters.yaml - node: r_port_node_ident - node_uuid: r_port_node_uuid - portgroup: r_port_portgroup_ident - address: r_port_address - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key - detail: detail Response -------- .. rest_parameters:: parameters.yaml - ports: ports - uuid: uuid - address: port_address - links: links **Example Port list response:** .. literalinclude:: samples/port-list-response.json :language: javascript Create Port =========== .. rest_method:: POST /v1/ports Creates a new Port resource. This method requires a Node UUID and the physical hardware address for the Port (MAC address in most cases). .. versionadded:: 1.19 Added the ``pxe_enabled`` and ``local_link_connection`` request and response fields. .. versionadded:: 1.24 Added the ``portgroup_uuid`` request and response fields. .. versionadded:: 1.34 Added the ``physical_network`` request and response fields. .. versionadded:: 1.53 Added the ``is_smartnic`` request and response fields. Normal response code: 201 Request ------- .. rest_parameters:: parameters.yaml - node_uuid: req_node_uuid - address: req_port_address - portgroup_uuid: req_portgroup_uuid - local_link_connection: req_local_link_connection - pxe_enabled: req_pxe_enabled - physical_network: req_physical_network - extra: req_extra - is_smartnic: req_is_smartnic **Example Port creation request:** .. literalinclude:: samples/port-create-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - address: port_address - node_uuid: node_uuid - portgroup_uuid: portgroup_uuid - local_link_connection: local_link_connection - pxe_enabled: pxe_enabled - physical_network: physical_network - internal_info: internal_info - extra: extra - created_at: created_at - updated_at: updated_at - links: links - is_smartnic: is_smartnic **Example Port creation response:** .. 
literalinclude:: samples/port-create-response.json :language: javascript List Detailed Ports =================== .. rest_method:: GET /v1/ports/detail Return a list of bare metal Ports, with detailed information. .. versionadded:: 1.6 Added the ``node`` query parameter. If both ``node_uuid`` and ``node`` are specified in the request, ``node_uuid`` will be used to filter results. .. versionadded:: 1.19 Added the ``pxe_enabled`` and ``local_link_connection`` response fields. .. versionadded:: 1.24 Added the ``portgroup`` query parameter and ``portgroup_uuid`` response field. .. versionadded:: 1.34 Added the ``physical_network`` response field. .. versionadded:: 1.53 Added the ``is_smartnic`` response fields. Normal response code: 200 Request ------- .. rest_parameters:: parameters.yaml - node: r_port_node_ident - node_uuid: r_port_node_uuid - portgroup: r_port_portgroup_ident - address: r_port_address - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - ports: ports - uuid: uuid - address: port_address - node_uuid: node_uuid - portgroup_uuid: portgroup_uuid - local_link_connection: local_link_connection - pxe_enabled: pxe_enabled - physical_network: physical_network - internal_info: internal_info - extra: extra - created_at: created_at - updated_at: updated_at - links: links - is_smartnic: is_smartnic **Example detailed Port list response:** .. literalinclude:: samples/port-list-detail-response.json :language: javascript Show Port Details ================= .. rest_method:: GET /v1/ports/{port_id} Show details for the given Port. .. versionadded:: 1.8 Added the ``fields`` request parameter. When specified, this causes the content of the response to include only the specified fields, rather than the default set. .. versionadded:: 1.19 Added the ``pxe_enabled`` and ``local_link_connection`` response fields. .. versionadded:: 1.24 Added the ``portgroup_uuid`` response field. .. 
versionadded:: 1.34 Added the ``physical_network`` response field. .. versionadded:: 1.53 Added the ``is_smartnic`` response fields. Normal response code: 200 Request ------- .. rest_parameters:: parameters.yaml - port_id: port_ident - fields: fields Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - address: port_address - node_uuid: node_uuid - portgroup_uuid: portgroup_uuid - local_link_connection: local_link_connection - pxe_enabled: pxe_enabled - physical_network: physical_network - internal_info: internal_info - extra: extra - created_at: created_at - updated_at: updated_at - links: links - is_smartnic: is_smartnic **Example Port details:** .. literalinclude:: samples/port-create-response.json :language: javascript Update a Port ============= .. rest_method:: PATCH /v1/ports/{port_id} Update a Port. .. versionadded:: 1.19 Added the ``pxe_enabled`` and ``local_link_connection`` fields. .. versionadded:: 1.24 Added the ``portgroup_uuid`` field. .. versionadded:: 1.34 Added the ``physical_network`` field. .. versionadded:: 1.53 Added the ``is_smartnic`` fields. Normal response code: 200 Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 `_. .. rest_parameters:: parameters.yaml - port_id: port_ident **Example Port update request:** .. literalinclude:: samples/port-update-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - address: port_address - node_uuid: node_uuid - portgroup_uuid: portgroup_uuid - local_link_connection: local_link_connection - pxe_enabled: pxe_enabled - physical_network: physical_network - internal_info: internal_info - extra: extra - created_at: created_at - updated_at: updated_at - links: links - is_smartnic: is_smartnic **Example Port update response:** .. literalinclude:: samples/port-update-response.json :language: javascript Delete Port =========== .. rest_method:: DELETE /v1/ports/{port_id} Delete a Port. 
Normal response code: 204 Request ------- .. rest_parameters:: parameters.yaml - port_id: port_ident ironic-15.0.0/api-ref/source/baremetal-api-v1-volume.inc .. -*- rst -*- =============== Volume (volume) =============== .. versionadded:: 1.32 Information used to connect remote volumes to a Node can be associated with that Node. There are two types of resources: Volume connectors and Volume targets. Volume connectors contain the initiator information of Nodes. Volume targets contain the target information of remote volumes. Listing, Searching, Creating, Updating, and Deleting of Volume connector resources are done through the ``v1/volume/connectors`` resource. The same operations for Volume targets are done through the ``v1/volume/targets`` resource. List Links of Volume Resources ============================== .. rest_method:: GET /v1/volume Return a list of links to all volume resources. Normal response code: 200 Request ------- Response -------- .. rest_parameters:: parameters.yaml - connectors: volume_connectors_link - targets: volume_targets_link - links: links **Example Volume list response:** .. literalinclude:: samples/volume-list-response.json :language: javascript List Volume Connectors ====================== .. rest_method:: GET /v1/volume/connectors Return a list of Volume connectors for all nodes. By default, this query will return the UUID, node UUID, type, and connector ID for each Volume connector. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node: r_volume_connector_node_ident - fields: fields - detail: detail - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- ..
rest_parameters:: parameters.yaml - connectors: volume_connectors - uuid: uuid - type: volume_connector_type - connector_id: volume_connector_connector_id - node_uuid: node_uuid - extra: extra - links: links - next: next **Example Volume connector list response:** .. literalinclude:: samples/volume-connector-list-response.json :language: javascript **Example detailed Volume connector list response:** .. literalinclude:: samples/volume-connector-list-detail-response.json :language: javascript Create Volume Connector ======================= .. rest_method:: POST /v1/volume/connectors Creates a new Volume connector resource. This method requires a Node UUID, a connector type and a connector ID. Normal response code: 201 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - node_uuid: req_node_uuid - type: volume_connector_type - connector_id: volume_connector_connector_id - extra: req_extra **Example Volume connector creation request:** .. literalinclude:: samples/volume-connector-create-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - type: volume_connector_type - connector_id: volume_connector_connector_id - node_uuid: node_uuid - extra: extra - links: links **Example Volume connector creation response:** .. literalinclude:: samples/volume-connector-create-response.json :language: javascript Show Volume Connector Details ============================= .. rest_method:: GET /v1/volume/connectors/{volume_connector_id} Show details for the given Volume connector. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - volume_connector_id: volume_connector_id - fields: fields Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - type: volume_connector_type - connector_id: volume_connector_connector_id - node_uuid: node_uuid - extra: extra - links: links **Example Volume connector details:** .. 
literalinclude:: samples/volume-connector-create-response.json :language: javascript Update a Volume Connector ========================= .. rest_method:: PATCH /v1/volume/connectors/{volume_connector_id} Update a Volume connector. A Volume connector can be updated only while a node associated with the Volume connector is powered off. Normal response code: 200 Error codes: 400,401,403,404,409 Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 `_. .. rest_parameters:: parameters.yaml - volume_connector_id: volume_connector_id **Example Volume connector update request:** .. literalinclude:: samples/volume-connector-update-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - type: volume_connector_type - connector_id: volume_connector_connector_id - node_uuid: node_uuid - extra: extra - links: links **Example Volume connector update response:** .. literalinclude:: samples/volume-connector-update-response.json :language: javascript Delete Volume Connector ======================= .. rest_method:: DELETE /v1/volume/connectors/{volume_connector_id} Delete a Volume connector. A Volume connector can be deleted only while a node associated with the Volume connector is powered off. Normal response code: 204 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - volume_connector_id: volume_connector_id List Volume Targets =================== .. rest_method:: GET /v1/volume/targets Return a list of Volume targets for all nodes. By default, this query will return the UUID, node UUID, volume type, boot index, and volume ID for each Volume target. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node: r_volume_target_node_ident - fields: fields - detail: detail - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- ..
rest_parameters:: parameters.yaml - targets: volume_targets - uuid: uuid - volume_type: volume_target_volume_type - properties: volume_target_properties - boot_index: volume_target_boot_index - volume_id: volume_target_volume_id - extra: extra - node_uuid: node_uuid - links: links - next: next **Example Volume target list response:** .. literalinclude:: samples/volume-target-list-response.json :language: javascript **Example detailed Volume target list response:** .. literalinclude:: samples/volume-target-list-detail-response.json :language: javascript Create Volume Target ==================== .. rest_method:: POST /v1/volume/targets Creates a new Volume target resource. This method requires a Node UUID, volume type, volume ID, and boot index. Normal response code: 201 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - node_uuid: req_node_uuid - volume_type: volume_target_volume_type - properties: volume_target_properties - boot_index: volume_target_boot_index - volume_id: volume_target_volume_id - extra: req_extra **Example Volume target creation request:** .. literalinclude:: samples/volume-target-create-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - volume_type: volume_target_volume_type - properties: volume_target_properties - boot_index: volume_target_boot_index - volume_id: volume_target_volume_id - extra: extra - node_uuid: node_uuid - links: links **Example Volume target creation response:** .. literalinclude:: samples/volume-target-create-response.json :language: javascript Show Volume Target Details ========================== .. rest_method:: GET /v1/volume/targets/{volume_target_id} Show details for the given Volume target. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - volume_target_id: volume_target_id - fields: fields Response -------- ..
rest_parameters:: parameters.yaml - uuid: uuid - volume_type: volume_target_volume_type - properties: volume_target_properties - boot_index: volume_target_boot_index - volume_id: volume_target_volume_id - extra: extra - node_uuid: node_uuid - links: links **Example Volume target details:** .. literalinclude:: samples/volume-target-create-response.json :language: javascript Update a Volume Target ====================== .. rest_method:: PATCH /v1/volume/targets/{volume_target_id} Update a Volume target. A Volume target can be updated only while a node associated with the Volume target is powered off. Normal response code: 200 Error codes: 400,401,403,404,409 Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 `_. .. rest_parameters:: parameters.yaml - volume_target_id: volume_target_id **Example Volume target update request:** .. literalinclude:: samples/volume-target-update-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - volume_type: volume_target_volume_type - properties: volume_target_properties - boot_index: volume_target_boot_index - volume_id: volume_target_volume_id - extra: extra - node_uuid: node_uuid - links: links **Example Volume target update response:** .. literalinclude:: samples/volume-target-update-response.json :language: javascript Delete Volume Target ==================== .. rest_method:: DELETE /v1/volume/targets/{volume_target_id} Delete a Volume target. A Volume target can be deleted only while a node associated with the Volume target is powered off. Normal response code: 204 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - volume_target_id: volume_target_id ironic-15.0.0/api-ref/source/baremetal-api-v1-driver-passthru.inc ..
-*- rst -*- ================================ Driver Vendor Passthru (drivers) ================================ Each driver MAY support vendor-specific extensions, called "passthru" methods. Internally, Ironic's driver API supports flexibly exposing functions via the common HTTP methods GET, PUT, POST, and DELETE. To call a passthru method, the query string must contain the name of the method. For example, if the method name was ``my_passthru_method``, the request would look like ``/vendor_passthru?method=my_passthru_method``. The contents of the HTTP request are forwarded to the driver and validated there. Ironic's REST API provides a means to discover these methods, but does not provide support, testing, or documentation for these endpoints. The Ironic development team does not guarantee any compatibility within these methods between releases, though we encourage driver authors to provide documentation and support for them. Besides the endpoints documented here, all other resources and endpoints under the heading ``vendor_passthru`` should be considered unsupported APIs, and could be changed without warning by the driver authors. List Methods ============ .. rest_method:: GET /v1/drivers/{driver_name}/vendor_passthru/methods Retrieve a list of the available vendor passthru methods for the given Driver. The response will indicate which HTTP method(s) each vendor passthru method allows, whether the method call will be synchronous or asynchronous, and whether the response will include any attachment. Normal response code: 200 Request ------- .. rest_parameters:: parameters.yaml - driver_name: driver_ident Response -------- The response BODY is a dictionary whose keys are the method names. The value of each item is itself a dictionary describing how to interact with that method. .. rest_parameters:: parameters.yaml - async: passthru_async - attach: passthru_attach - description: passthru_description - http_methods: passthru_http_methods Call a Method ============= .. 
rest_method:: METHOD /v1/drivers/{driver_name}/vendor_passthru?method={method_name} The HTTP METHOD may be one of GET, POST, PUT, DELETE, depending on the driver and method. This endpoint passes the request directly to the hardware driver. The HTTP BODY must be parseable JSON, which will be converted to parameters passed to that function. Unparseable JSON, missing parameters, or excess parameters will cause the request to be rejected with an HTTP 400 error. Normal response codes: 200, 202 Error codes: 400 Request ------- .. rest_parameters:: parameters.yaml - driver_name: driver_ident - method_name: method_name All other parameters should be passed in the BODY. Parameter list varies by method_name. Response -------- Varies. ironic-15.0.0/api-ref/source/baremetal-api-v1-nodes-traits.inc .. -*- rst -*- =================== Node Traits (nodes) =================== .. versionadded:: 1.37 Node traits are used for scheduling in the Compute service, using qualitative attributes to influence the placement of instances onto bare metal compute nodes. Traits specified for a node in the Bare Metal service will be registered on the corresponding resource provider in the Compute service's placement API. Traits can be either standard or custom. Standard traits are listed in the `os_traits library `_. Custom traits must meet the following requirements: * prefixed with ``CUSTOM_`` * contain only upper case characters A to Z, digits 0 to 9, or underscores * no longer than 255 characters in length A bare metal node can have a maximum of 50 traits. List Traits of a Node ===================== .. rest_method:: GET /v1/nodes/{node_ident}/traits Return a list of traits for the node. Normal response code: 200 Error codes: 400,401,403,404 Request ------- ..
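Constructing the target URL for a passthru call can be sketched as follows (the helper name is illustrative; the method name is carried in the ``method`` query parameter exactly as described above, and the HTTP verb to use depends on the method itself):

```python
from urllib.parse import quote, urlencode

def vendor_passthru_url(driver_name, method_name):
    # The passthru method to invoke is selected via the "method"
    # query parameter of the vendor_passthru endpoint.
    return "/v1/drivers/%s/vendor_passthru?%s" % (
        quote(driver_name, safe=""), urlencode({"method": method_name}))
```

..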
rest_parameters:: parameters.yaml - traits: n_traits **Example list of traits for the node:** .. literalinclude:: samples/node-traits-list-response.json :language: javascript Set all traits of a node ======================== .. rest_method:: PUT /v1/nodes/{node_ident}/traits Set all traits of a node, replacing any existing traits. Normal response code: 204 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - traits: n_traits **Example request to set all traits of a Node:** .. literalinclude:: samples/node-set-traits-request.json Add a trait to a node ===================== .. rest_method:: PUT /v1/nodes/{node_ident}/traits/{trait} Add a single trait to a node. Normal response code: 204 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - trait: trait Remove all traits from a node ============================= .. rest_method:: DELETE /v1/nodes/{node_ident}/traits Remove all traits from a node. Normal response code: 204 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Remove a trait from a node ========================== Remove a single trait from a node. .. rest_method:: DELETE /v1/nodes/{node_ident}/traits/{trait} Normal response code: 204 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - trait: trait ironic-15.0.0/api-ref/source/baremetal-api-v1-node-management.inc .. -*- rst -*- ======================= Node Management (nodes) ======================= Nodes can be managed through several sub-resources. Maintenance mode can be set by the operator, with an optional "reason" stored by Ironic. The supplied ``driver_info`` can be validated to ensure that the selected ``driver`` has all the information it requires to manage the Node.
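For instance, the maintenance sub-resource documented later in this section stores an optional "reason" alongside the flag. A sketch of composing that request body (hedged: only the ``reason`` key comes from this document; the helper itself is illustrative):

```python
import json

def maintenance_body(reason=None):
    # Body for PUT /v1/nodes/{node_ident}/maintenance; the reason is
    # optional and, when given, is stored by Ironic with the flag.
    return json.dumps({"reason": reason} if reason is not None else {})
```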
A Node can be rebooted, turned on, or turned off by requesting a change to its power state. This is handled asynchronously and tracked in the ``target_power_state`` field after the request is received. A Node's boot device can be changed, and the set of supported boot devices can be queried. A request to change a Node's provision state is also tracked asynchronously; the ``target_provision_state`` represents the requested state. A Node may transition through several discrete ``provision_state`` steps before arriving at the requested state. This can vary between drivers and based on configuration. For example, a Node in the ``available`` state can have an instance deployed to it by requesting the provision state of ``active``. During this transition, the Node's ``provision_state`` will temporarily be set to ``deploying``, and depending on the driver, it may also be ``wait call-back``. When the transitions are complete, ``target_provision_state`` will be set to ``None`` and ``provision_state`` will be set to ``active``. To destroy the instance, request the provision state of ``delete``. During this transition, the Node may or may not go through a ``cleaning`` state, depending on the service configuration. Validate Node =============== .. rest_method:: GET /v1/nodes/{node_ident}/validate Request that Ironic validate whether the Node's ``driver`` has enough information to manage the Node. This polls each ``interface`` on the driver, and returns the status of that ``interface`` as an element in the response. Note that each ``driver`` may require different information to be supplied, and not all drivers support all interfaces. Normal response codes: 200 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- Each element in the response will contain a "result" variable, which will have a value of "true" or "false", indicating that the interface either has or does not have sufficient information to function. 
A value of ``null`` indicates that the Node's driver does not support that interface. .. rest_parameters:: parameters.yaml - boot: v_boot - console: v_console - deploy: v_deploy - inspect: v_inspect - management: v_management - network: v_network - power: v_power - raid: v_raid - rescue: v_rescue - storage: v_storage **Example node validation response:** .. literalinclude:: samples/node-validate-response.json :language: javascript Set Maintenance Flag ============================= .. rest_method:: PUT /v1/nodes/{node_ident}/maintenance Request that Ironic set the maintenance flag on the Node. This will disable certain automatic actions that the Node's driver may take, and remove the Node from Nova's available resource pool. Normal response code: 202 .. TODO: Add link to user / operator documentation on the Maintenance flag Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - reason: reason **Example request: mark a node for maintenance:** .. literalinclude:: samples/node-maintenance-request.json Clear Maintenance Flag ============================== .. rest_method:: DELETE /v1/nodes/{node_ident}/maintenance The maintenance flag is unset by sending a DELETE request to this endpoint. If the request is accepted, Ironic will also clear the ``maintenance_reason`` field. Normal response code: 202 .. TODO: Add link to user / operator documentation on the Maintenance flag Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Set Boot Device =============== .. rest_method:: PUT /v1/nodes/{node_ident}/management/boot_device Set the boot device for the given Node, and set it persistently or for one-time boot. The exact behaviour of this depends on the hardware driver. .. note:: In some drivers, eg. the ``*_ipmitool`` family, this method initiates a synchronous call to the hardware management device (BMC). It should be used with caution! This is `a known bug `_. .. 
note:: Some drivers do not support one-time boot, and always set the boot device persistently. Normal response code: 204 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - boot_device: req_boot_device - persistent: req_persistent **Example JSON request body to set boot device:** .. literalinclude:: samples/node-set-boot-device.json Get Boot Device =============== .. rest_method:: GET /v1/nodes/{node_ident}/management/boot_device Get the current boot device for the given Node. .. note:: In some drivers, eg. the ``*_ipmitool`` family, this method initiates a synchronous call to the hardware management device (BMC). It should be used with caution! This is `a known bug `_. Normal response code: 200 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- .. rest_parameters:: parameters.yaml - boot_device: boot_device - persistent: persistent **Example JSON response to get boot device:** .. literalinclude:: samples/node-get-boot-device-response.json Get Supported Boot Devices =========================== .. rest_method:: GET /v1/nodes/{node_ident}/management/boot_device/supported Retrieve the acceptable set of supported boot devices for a specific Node. Normal response code: 200 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- .. rest_parameters:: parameters.yaml - supported_boot_devices: supported_boot_devices **Example response listing supported boot devices:** .. literalinclude:: samples/node-get-supported-boot-devices-response.json Inject NMI (Non-Maskable Interrupts) ==================================== .. rest_method:: PUT /v1/nodes/{node_ident}/management/inject_nmi .. versionadded:: 1.29 Inject NMI (Non-Maskable Interrupts) for the given Node. This feature can be used for hardware diagnostics, and actual support depends on the driver.
Normal response code: 204 (No content) Error codes: - 400 (Invalid) - 403 (Forbidden) - 404 (NotFound) - 406 (NotAcceptable) - 409 (NodeLocked, ClientError) Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident **The request body to inject an NMI must be an empty dictionary:** .. literalinclude:: samples/node-inject-nmi.json Node State Summary ================== .. rest_method:: GET /v1/nodes/{node_ident}/states Get a summary of the Node's current power, provision, RAID, and console status. Normal response code: 200 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- .. rest_parameters:: parameters.yaml - power_state: power_state - target_power_state: target_power_state - provision_state: provision_state - target_provision_state: target_provision_state - provision_updated_at: provision_updated_at - last_error: last_error - console_enabled: console_enabled - raid_config: raid_config - target_raid_config: target_raid_config **Example node state:** .. literalinclude:: samples/node-get-state-response.json Change Node Power State ======================= .. rest_method:: PUT /v1/nodes/{node_ident}/states/power Request a change to the Node's power state. Normal response code: 202 (Accepted) .. versionadded:: 1.27 In the request, the ``target`` value can also be one of ``soft power off`` or ``soft rebooting``. .. versionadded:: 1.27 In the request, a ``timeout`` can be specified. Error codes: - 409 (NodeLocked, ClientError) - 400 (Invalid, InvalidStateRequested, InvalidParameterValue) - 406 (NotAcceptable) - 503 (NoFreeConductorWorkers) Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - target: req_target_power_state - timeout: power_timeout **Example request to power off a Node:** .. literalinclude:: samples/node-set-power-off.json **Example request to soft power off a Node with timeout:** ..
literalinclude:: samples/node-set-soft-power-off.json Change Node Provision State =========================== .. rest_method:: PUT /v1/nodes/{node_ident}/states/provision Request a change to the Node's provision state. Acceptable target states depend on the Node's current provision state. More detailed documentation of the Ironic State Machine is available `in the developer docs `_. .. versionadded:: 1.35 A ``configdrive`` can be provided when setting the node's provision target state to ``rebuild``. .. versionadded:: 1.38 A node can be rescued or unrescued by setting the node's provision target state to ``rescue`` or ``unrescue`` respectively. .. versionadded:: 1.56 A ``configdrive`` can be a JSON object with ``meta_data``, ``network_data`` and ``user_data``. .. versionadded:: 1.59 A ``configdrive`` now accepts ``vendor_data``. Normal response code: 202 Error codes: - 409 (NodeLocked, ClientError) - 400 (InvalidState, NodeInMaintenance) - 406 (NotAcceptable) - 503 (NoFreeConductorWorkers) Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - target: req_provision_state - configdrive: configdrive - clean_steps: clean_steps - rescue_password: rescue_password **Example request to deploy a Node, using a configdrive served via local webserver:** .. literalinclude:: samples/node-set-active-state.json **Example request to clean a Node, with custom clean step:** .. literalinclude:: samples/node-set-clean-state.json Set RAID Config =============== .. rest_method:: PUT /v1/nodes/{node_ident}/states/raid .. versionadded:: 1.12 Store the supplied configuration on the Node's ``target_raid_config`` property. This property must be structured JSON, and will be validated by the driver upon receipt. The request schema is defined in the `documentation for the RAID feature `_ .. note:: Calling this API only stores the requested configuration; it will be applied the next time that the Node transitions through the ``cleaning`` phase. 
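The kind of configuration stored by this call can be sketched as follows (the payload shown is a hypothetical single RAID-1 root volume; the accepted fields depend on the driver and the RAID feature documentation):

```python
import json

# Illustrative target_raid_config payload: one 100 GB RAID-1 root volume.
# The exact schema is driver-dependent; treat these fields as an example.
target_raid_config = {
    "logical_disks": [
        {"size_gb": 100, "raid_level": "1", "is_root_volume": True},
    ]
}

def set_raid_config_request(node_ident, config):
    """Build the PUT request that stores config as target_raid_config."""
    url = "/v1/nodes/%s/states/raid" % node_ident
    return ("PUT", url, json.dumps(config))

method_r, url_r, body_r = set_raid_config_request("node-1", target_raid_config)
```

Remember that this request only stores the configuration; it is applied during the next ``cleaning`` phase.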
Normal response code: 204 .. TODO: add more description, response code, sample response Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - target_raid_config: req_target_raid_config **Example requested RAID config:** .. literalinclude:: samples/node-set-raid-request.json .. TODO: add more description, response code, sample response Get Console =========== .. rest_method:: GET /v1/nodes/{node_ident}/states/console Get connection information about the console. .. TODO: add more description, response code, sample response Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident .. TODO: add more description, response code, sample response Start/Stop Console =================== .. rest_method:: PUT /v1/nodes/{node_ident}/states/console Start or stop the serial console. .. TODO: add more description, response code, sample response Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - enabled: req_console_enabled ironic-15.0.0/api-ref/source/baremetal-api-v1-nodes-volume.inc .. -*- rst -*- ================================================ Listing Volume resources by Node (nodes, volume) ================================================ .. versionadded:: 1.32 Given a Node identifier (``uuid`` or ``name``), the API exposes the list of, and details of, all Volume resources associated with that Node. These endpoints do not allow modification of the Volume connectors and Volume targets; that should be done by accessing the Volume resources under the ``/v1/volume/connectors`` and ``/v1/volume/targets`` endpoints. List Links of Volume Resources by Node ====================================== .. rest_method:: GET /v1/nodes/{node_ident}/volume Return a list of links to all volume resources associated with ``node_ident``. Normal response code: 200 Request ------- ..
rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- .. rest_parameters:: parameters.yaml - connectors: volume_connectors_link - targets: volume_targets_link - links: links **Example Volume list response:** .. literalinclude:: samples/node-volume-list-response.json :language: javascript List Volume connectors by Node ============================== .. rest_method:: GET /v1/nodes/{node_ident}/volume/connectors Return a list of bare metal Volume connectors associated with ``node_ident``. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - connectors: volume_connectors - uuid: uuid - type: volume_connector_type - connector_id: volume_connector_connector_id - node_uuid: node_uuid - extra: extra - links: links - next: next **Example list of Node's Volume connectors:** .. literalinclude:: samples/node-volume-connector-list-response.json **Example detailed list of Node's Volume connectors:** .. literalinclude:: samples/node-volume-connector-detail-response.json List Volume targets by Node =========================== .. rest_method:: GET /v1/nodes/{node_ident}/volume/targets Return a list of bare metal Volume targets associated with ``node_ident``. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - targets: volume_targets - uuid: uuid - volume_type: volume_target_volume_type - properties: volume_target_properties - boot_index: volume_target_boot_index - volume_id: volume_target_volume_id - extra: extra - node_uuid: node_uuid - links: links - next: next **Example list of Node's Volume targets:** .. 
literalinclude:: samples/node-volume-target-list-response.json **Example detailed list of Node's Volume targets:** .. literalinclude:: samples/node-volume-target-detail-response.json ironic-15.0.0/api-ref/source/baremetal-api-v1-chassis.inc0000664000175000017500000001026413652514273023115 0ustar zuulzuul00000000000000.. -*- rst -*- ================= Chassis (chassis) ================= The Chassis resource type was originally conceived as a means to group Node resources. Support for this continues to exist in the REST API, however, it is very minimal. The Chassis object does not provide any functionality today aside from a means to list a group of Nodes. Use of this resource is discouraged, and may be deprecated and removed in a future release. List chassis with details ========================= .. rest_method:: GET /v1/chassis/detail Lists all chassis with details. Normal response codes: 200 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - chassis: chassis - description: description - extra: extra Response Example ---------------- .. literalinclude:: samples/chassis-list-details-response.json :language: javascript Show chassis details ==================== .. rest_method:: GET /v1/chassis/{chassis_id} Shows details for a chassis. Normal response codes: 200 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - fields: fields - chassis_id: chassis_ident Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - chassis: chassis - description: description - extra: extra Response Example ---------------- .. literalinclude:: samples/chassis-show-response.json :language: javascript Update chassis ============== .. rest_method:: PATCH /v1/chassis/{chassis_id} Updates a chassis. Normal response codes: 200 .. 
TODO: add error codes Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 `_. .. rest_parameters:: parameters.yaml - chassis_id: chassis_ident - description: req_description - extra: req_extra Request Example --------------- .. literalinclude:: samples/chassis-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: description - links: links - extra: extra - created_at: created_at - updated_at: updated_at - chassis: chassis - nodes: nodes - uuid: uuid Response Example ---------------- .. literalinclude:: samples/chassis-update-response.json :language: javascript Delete chassis ============== .. rest_method:: DELETE /v1/chassis/{chassis_id} Deletes a chassis. .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - chassis_id: chassis_ident Create chassis ============== .. rest_method:: POST /v1/chassis Creates a chassis. Normal response codes: 201 Error response codes: 400, 401, 403, 404, 405, 409, 413, 415, 503 Request ------- .. rest_parameters:: parameters.yaml - chassis: req_chassis - description: req_description - extra: req_extra Request Example --------------- .. literalinclude:: samples/chassis-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: description - links: links - extra: extra - created_at: created_at - updated_at: updated_at - nodes: nodes - uuid: uuid Response Example ---------------- .. literalinclude:: samples/chassis-show-response.json :language: javascript List chassis ============ .. rest_method:: GET /v1/chassis Lists all chassis. .. versionadded:: 1.43 Added the ``detail`` boolean request parameter. When specified as ``True``, this causes the response to include complete details about each chassis. Normal response codes: 200 .. TODO: add error codes Request ------- ..
rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key - fields: fields - detail: detail Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - chassis: chassis - description: description - extra: extra Response Example ---------------- .. literalinclude:: samples/chassis-list-response.json :language: javascript ironic-15.0.0/api-ref/source/baremetal-api-v1-nodes-vifs.inc0000664000175000017500000000317313652514273023536 0ustar zuulzuul00000000000000.. -*- rst -*- ================================== VIFs (Virtual Interfaces) of nodes ================================== .. versionadded:: 1.28 Attaching and detaching VIFs (Virtual Interfaces) to or from a node are done via the ``v1/nodes/{node_ident}/vifs`` endpoint. Attaching a VIF to a node means that a VIF will be mapped to a free port or port group of the specified node. List attached VIFs of a Node ============================ .. rest_method:: GET /v1/nodes/{node_ident}/vifs Return a list of VIFs that are attached to the node. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- .. rest_parameters:: parameters.yaml - vifs: n_vifs - id: node_vif_ident **Example list of VIFs that are attached to the node:** .. literalinclude:: samples/node-vif-list-response.json :language: javascript Attach a VIF to a node ====================== .. rest_method:: POST /v1/nodes/{node_ident}/vifs Attach a VIF to a node. Normal response code: 204 Error codes: 400,401,403,404,409 Request ------- .. rest_parameters:: parameters.yaml - id: req_node_vif_ident - node_ident: node_ident **Example request to attach a VIF to a Node:** .. literalinclude:: samples/node-vif-attach-request.json Detach VIF from a node ====================== .. rest_method:: DELETE /v1/nodes/{node_ident}/vifs/{node_vif_ident} Detach VIF from a Node. 
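The attach and detach calls can be sketched as follows (the node identifier and VIF UUID are hypothetical; a VIF ID is typically a Neutron port UUID):

```python
import json

def attach_vif_request(node_ident, vif_id):
    """POST body maps the VIF (e.g. a Neutron port UUID) to a free port."""
    url = "/v1/nodes/%s/vifs" % node_ident
    return ("POST", url, json.dumps({"id": vif_id}))

def detach_vif_request(node_ident, vif_id):
    """DELETE carries no body; the VIF is identified in the URL itself."""
    url = "/v1/nodes/%s/vifs/%s" % (node_ident, vif_id)
    return ("DELETE", url, None)

m1, u1, b1 = attach_vif_request("node-1", "1974dcfa-836f-41b2-b541-686c100900e5")
m2, u2, b2 = detach_vif_request("node-1", "1974dcfa-836f-41b2-b541-686c100900e5")
```

Both calls return 204 on success, so a client only needs to check the status code.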
Normal response code: 204 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - node_vif_ident: req_node_vif_ident ironic-15.0.0/api-ref/source/baremetal-api-v1-deploy-templates.inc0000664000175000017500000001061313652514273024746 0ustar zuulzuul00000000000000.. -*- rst -*- =================================== Deploy Templates (deploy_templates) =================================== The Deploy Template resource represents a collection of Deploy Steps that may be executed during deployment of a node. A deploy template is matched for a node if at the time of deployment, the template's name matches a trait in the node's ``instance_info.traits``. .. versionadded:: 1.55 Deploy Template API was introduced. Create Deploy Template ====================== .. rest_method:: POST /v1/deploy_templates Creates a deploy template. .. versionadded:: 1.55 Deploy Template API was introduced. Normal response codes: 201 Error response codes: 400, 401, 403, 409 Request ------- .. rest_parameters:: parameters.yaml - name: deploy_template_name - steps: deploy_template_steps - uuid: req_uuid - extra: req_extra Request Example --------------- .. literalinclude:: samples/deploy-template-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - name: deploy_template_name - steps: deploy_template_steps - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- .. literalinclude:: samples/deploy-template-create-response.json :language: javascript List Deploy Templates ===================== .. rest_method:: GET /v1/deploy_templates Lists all deploy templates. .. versionadded:: 1.55 Deploy Template API was introduced. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. 
rest_parameters:: parameters.yaml - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key - detail: detail Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - name: deploy_template_name - steps: deploy_template_steps - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- **Example deploy template list response:** .. literalinclude:: samples/deploy-template-list-response.json :language: javascript **Example detailed deploy template list response:** .. literalinclude:: samples/deploy-template-detail-response.json :language: javascript Show Deploy Template Details ============================ .. rest_method:: GET /v1/deploy_templates/{deploy_template_id} Shows details for a deploy template. .. versionadded:: 1.55 Deploy Template API was introduced. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: parameters.yaml - fields: fields - deploy_template_id: deploy_template_ident Response Parameters ------------------- .. rest_parameters:: parameters.yaml - uuid: uuid - name: deploy_template_name - steps: deploy_template_steps - extra: extra - created_at: created_at - updated_at: updated_at - links: links Response Example ---------------- .. literalinclude:: samples/deploy-template-show-response.json :language: javascript Update a Deploy Template ======================== .. rest_method:: PATCH /v1/deploy_templates/{deploy_template_id} Update a deploy template. .. versionadded:: 1.55 Deploy Template API was introduced. Normal response code: 200 Error response codes: 400, 401, 403, 404, 409 Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 `_. .. rest_parameters:: parameters.yaml - deploy_template_id: deploy_template_ident .. literalinclude:: samples/deploy-template-update-request.json :language: javascript Response -------- ..
rest_parameters:: parameters.yaml - uuid: uuid - name: deploy_template_name - steps: deploy_template_steps - extra: extra - created_at: created_at - updated_at: updated_at - links: links .. literalinclude:: samples/deploy-template-update-response.json :language: javascript Delete Deploy Template ====================== .. rest_method:: DELETE /v1/deploy_templates/{deploy_template_id} Deletes a deploy template. .. versionadded:: 1.55 Deploy Template API was introduced. Normal response codes: 204 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: parameters.yaml - deploy_template_id: deploy_template_ident ironic-15.0.0/api-ref/source/baremetal-api-v1-node-passthru.inc .. -*- rst -*- ============================ Node Vendor Passthru (nodes) ============================ Each driver MAY support vendor-specific extensions, called "passthru" methods. Internally, Ironic's driver API supports flexibly exposing functions via the common HTTP methods GET, PUT, POST, and DELETE. To call a passthru method, the query string must contain the name of the method, e.g. ``/vendor_passthru?method=reset_bmc``. The contents of the HTTP request are forwarded to the Node's driver and validated there. Ironic's REST API provides a means to discover these methods, but does not provide support, testing, or documentation for these endpoints. The Ironic development team does not guarantee any compatibility within these methods between releases, though we encourage driver authors to provide documentation and support for them. Besides the endpoints documented here, all other resources and endpoints under the heading ``vendor_passthru`` should be considered unsupported APIs, and could be changed without warning by the driver authors. List Methods ============ ..
rest_method:: GET /v1/nodes/{node_ident}/vendor_passthru/methods Retrieve a list of the available vendor passthru methods for the given Node. The response will indicate which HTTP method(s) each vendor passthru method allows, whether the method call will be synchronous or asynchronous, and whether the response will include any attachment. Normal response code: 200 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident Response -------- **Example passthru methods listing:** .. literalinclude:: samples/node-vendor-passthru-response.json Call a Method ============= .. rest_method:: METHOD /v1/nodes/{node_ident}/vendor_passthru?method={method_name} The HTTP METHOD may be one of GET, POST, PUT, DELETE, depending on the driver and method. This endpoint passes the request directly to the Node's hardware driver. The HTTP BODY must be parseable JSON, which will be converted to parameters passed to that function. Unparseable JSON, missing parameters, or excess parameters will cause the request to be rejected with an HTTP 400 error. Normal response codes: 200, 202 .. TODO: add error codes Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - method_name: method_name All other parameters should be passed in the BODY. The parameter list varies by method_name. Response -------- Varies. ironic-15.0.0/api-ref/source/baremetal-api-v1-nodes-ports.inc .. -*- rst -*- ==================================== Listing Ports by Node (nodes, ports) ==================================== Given a Node identifier (``uuid`` or ``name``), the API exposes the list of, and details of, all Ports associated with that Node. These endpoints do not allow modification of the Ports; that should be done by accessing the Port resources under the ``/v1/ports`` endpoint. List Ports by Node =================== ..
rest_method:: GET /v1/nodes/{node_ident}/ports Return a list of bare metal Ports associated with ``node_ident``. .. versionadded:: 1.8 Added the ``fields`` request parameter. When specified, this causes the content of the response to include only the specified fields, rather than the default set. .. versionadded:: 1.19 Added the ``pxe_enabled`` and ``local_link_connection`` fields. .. versionadded:: 1.24 Added the ``portgroup_uuid`` field. .. versionadded:: 1.34 Added the ``physical_network`` field. .. versionadded:: 1.53 Added the ``is_smartnic`` response fields. Normal response code: 200 Error codes: TBD Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - ports: ports - uuid: uuid - address: port_address - links: links **Example list of a Node's Ports:** .. literalinclude:: samples/node-port-list-response.json List detailed Ports by Node =========================== .. rest_method:: GET /v1/nodes/{node_ident}/ports/detail Return a detailed list of bare metal Ports associated with ``node_ident``. .. versionadded:: 1.19 Added the ``pxe_enabled`` and ``local_link_connection`` fields. .. versionadded:: 1.24 Added the ``portgroup_uuid`` field. .. versionadded:: 1.34 Added the ``physical_network`` field. .. versionadded:: 1.53 Added the ``is_smartnic`` response fields. Normal response code: 200 Error codes: TBD Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. 
rest_parameters:: parameters.yaml - ports: ports - uuid: uuid - address: port_address - node_uuid: node_uuid - local_link_connection: local_link_connection - pxe_enabled: pxe_enabled - physical_network: physical_network - internal_info: internal_info - extra: extra - created_at: created_at - updated_at: updated_at - links: links - is_smartnic: is_smartnic **Example details of a Node's Ports:** .. literalinclude:: samples/node-port-detail-response.json ironic-15.0.0/api-ref/source/conf.py0000664000175000017500000001513413652514273017240 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # ironic documentation build configuration file, created by # sphinx-quickstart on Sat May 1 15:17:47 2010. # # This file is execfile()d with the current directory set to # its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys html_theme = 'openstackdocs' html_theme_options = { "sidebar_mode": "toc", } extensions = [ 'os_api_ref', 'openstackdocstheme' ] repository_name = 'openstack/ironic' use_storyboard = True # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # # source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Ironic API Reference' copyright = u'OpenStack Foundation' # html_context allows us to pass arbitrary values into the html template html_context = {"bug_tag": "api-ref", "bug_project": "ironic"} # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # The reST default role (used for this markup: `text`) to use # for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for man page output ---------------------------------------------- # Grouping the document tree for man pages. 
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_use_modindex = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. 
# html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'ironicdoc' # -- Options for LaTeX output ------------------------------------------------- # The paper size ('letter' or 'a4'). # latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'Ironic.tex', u'OpenStack Bare Metal API Documentation', u'OpenStack Foundation', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # Additional stuff for the LaTeX preamble. # latex_preamble = '' # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_use_modindex = True ironic-15.0.0/api-ref/source/baremetal-api-v1-misc.inc0000664000175000017500000000624313652514273022415 0ustar zuulzuul00000000000000.. -*- rst -*- ======= Utility ======= This section describes two API endpoints used by the ``ironic-python-agent`` ramdisk as it communicates with the Bare Metal service. These were previously exposed as vendor passthrough methods, however, as ironic-python-agent has become the standard ramdisk agent, these methods have been made a part of the official REST API. .. 
note:: **Operators are reminded not to expose the Bare Metal Service's API to unsecured networks.** Both API endpoints listed below are available to *unauthenticated* clients because the default method for booting the ``ironic-python-agent`` ramdisk does not provide the agent with keystone credentials. .. note:: It is possible to include keys in your ramdisk, or pass keys in via the boot method, if your driver supports it; if that is done, you may configure these endpoints to require authentication by changing the policy rules ``baremetal:driver:ipa_lookup`` and ``baremetal:node:ipa_heartbeat``. In light of that, operators are recommended to ensure that this endpoint is only available on the ``provisioning`` and ``cleaning`` networks. Agent Lookup ============ .. rest_method:: GET /v1/lookup .. versionadded:: 1.22 A ``/lookup`` method is exposed at the root of the REST API. This should only be used by the ``ironic-python-agent`` ramdisk to retrieve required configuration data from the Bare Metal service. By default, ``/v1/lookup`` will only match Nodes that are expected to be running the ``ironic-python-agent`` ramdisk (for instance, because the Bare Metal service has just initiated a deployment). It can not be used as a generic search mechanism, though this behaviour may be changed by setting the ``[api] restrict_lookup = false`` configuration option for the ironic-api service. The query string should include either or both a ``node_uuid`` or an ``addresses`` query parameter. If a matching Node is found, information about that Node shall be returned. Normal response codes: 200 Error response codes: 400 404 Request ------- .. rest_parameters:: parameters.yaml - node_uuid: r_node_uuid - addresses: r_addresses Response -------- Returns only the information about the corresponding Node that the ``ironic-python-agent`` process requires. .. rest_parameters:: parameters.yaml - node: agent_node - config: agent_config Response Example ---------------- .. 
literalinclude:: samples/lookup-node-response.json :language: javascript Agent Heartbeat =============== .. rest_method:: POST /v1/heartbeat/{node_ident} .. versionadded:: 1.22 A ``/heartbeat`` method is exposed at the root of the REST API. This is used as a callback from within the ``ironic-python-agent`` ramdisk, so that an active ramdisk may periodically contact the Bare Metal service and provide the current URL at which to contact the agent. Normal response codes: 202 Error response codes: 400 404 .. versionadded:: 1.36 ``agent_version`` parameter for passing the version of the Ironic Python Agent to Ironic during heartbeat Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - callback_url: callback_url - agent_version: agent_version ironic-15.0.0/api-ref/source/baremetal-api-versions.inc0000664000175000017500000000471413652514273023007 0ustar zuulzuul00000000000000.. -*- rst -*- ============ API versions ============ Concepts ======== In order to bring new features to users over time, the Ironic API supports versioning. There are two kinds of versions in Ironic. - *major versions*, which have dedicated URLs. - *microversions*, which can be requested through the use of the ``X-OpenStack-Ironic-API-Version`` header. The Version APIs work differently from other APIs as they *do not* require authentication. Beginning with the Kilo release, all API requests support the ``X-OpenStack-Ironic-API-Version`` header. This header SHOULD be supplied with every request; in the absence of this header, each request is treated as though coming from an older pre-Kilo client. This was done to preserve backwards compatibility as we introduced new features in the server. If you try to use a feature with an API version older than the version in which that feature was introduced, the ironic service will respond as it would have before that feature existed.
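The version negotiation described above can be sketched in code. The following is a minimal, illustrative Python helper — the function names are not part of ironic or its client libraries — that picks the highest mutually supported microversion to send in the ``X-OpenStack-Ironic-API-Version`` header, given the range the server advertises in its ``x-openstack-ironic-api-min-version`` and ``x-openstack-ironic-api-max-version`` response headers:

```python
# Illustrative client-side microversion negotiation. "choose_version"
# and "parse_version" are sketch names, not ironic APIs.

def parse_version(value):
    """Turn a string like '1.65' into a comparable (major, minor) tuple."""
    major, minor = value.split('.')
    return int(major), int(minor)

def choose_version(requested, server_min, server_max):
    """Pick the highest version <= requested that the server supports.

    Raises ValueError when the requested version is below the server's
    minimum, mirroring the error the API itself would return.
    """
    req = parse_version(requested)
    if req < parse_version(server_min):
        raise ValueError('requested %s < server minimum %s'
                         % (requested, server_min))
    chosen = min(req, parse_version(server_max))
    return '%d.%d' % chosen

# The negotiated value is then sent with every request.
headers = {'X-OpenStack-Ironic-API-Version':
           choose_version('1.99', '1.1', '1.65')}
```

A client wanting ``1.99`` from a server whose range is ``1.1``–``1.65`` would settle on ``1.65``; a request for ``1.22`` within that range is used as-is.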
For example, if a new API URL was added and you try to request it with an older API version, you will get a ``Not Found (404)`` error; if a new field was added to an existing API and you request an older API version, you will get an ``Invalid Parameter`` response. List API versions ================= .. rest_method:: GET / This fetches all the information about all known major API versions in the deployment. Links to more specific information will be provided for each major API version, as well as information about supported min and max microversions. Normal response codes: 200 Request ------- Response Example ---------------- .. rest_parameters:: parameters.yaml - description: description - versions: versions - version: version - id: id - links: links - min_version: x-openstack-ironic-api-min-version .. literalinclude:: samples/api-root-response.json :language: javascript Show v1 API =========== .. rest_method:: GET /v1/ Show all the resources within the Ironic v1 API. Normal response codes: 200 Request ------- Response Example ---------------- .. rest_parameters:: parameters.yaml - id: id - links: links - openstack-request-id: openstack-request-id - x-openstack-ironic-api-version: header_version - x-openstack-ironic-api-min-version: x-openstack-ironic-api-min-version - x-openstack-ironic-api-max-version: x-openstack-ironic-api-max-version .. literalinclude:: samples/api-v1-root-response.json :language: javascript ironic-15.0.0/api-ref/source/baremetal-api-v1-nodes.inc0000664000175000017500000004576613652514273022573 0ustar zuulzuul00000000000000.. -*- rst -*- ============= Nodes (nodes) ============= Listing, Searching, Creating, Updating, and Deleting of bare metal Node resources are done through the ``/v1/nodes`` resource. There are also several sub-resources, which allow further actions to be performed on a bare metal Node. A Node is the canonical representation of a discretely allocatable server, capable of running an Operating System.
Each Node must be associated with a ``driver``; this informs Ironic what protocol to use when managing the Node. .. versionchanged:: 1.6 A Node may be referenced both by its UUID and by a unique human-readable "name" in any request. Throughout this documentation, this is referred to as the ``node_ident``. Responses clearly indicate whether a given field is a ``uuid`` or a ``name``. Depending on the Roles assigned to the authenticated OpenStack User, and upon the configuration of the Bare Metal service, API responses may change. For example, the default value of the "show_password" setting causes all API responses to mask passwords within ``driver_info`` with the literal string "\*\*\*\*\*\*". Create Node =========== .. rest_method:: POST /v1/nodes Creates a new Node resource. This method requires that a ``driver`` be supplied in the request body. Most subresources of a Node (e.g., ``properties``, ``driver_info``, etc.) may be supplied when the Node is created, or the resource may be updated later. .. versionadded:: 1.2 Added the ``available`` state name, which replaced ``None`` as the status of an unprovisioned Node. All clients should be updated to use the new ``available`` state name. Nodes in the ``available`` state may have workloads provisioned on them; they are "available" for use. .. versionadded:: 1.5 Introduced the ``name`` field. .. versionadded:: 1.7 Introduced the ``clean_step`` field. .. versionchanged:: 1.11 The default initial state of newly-created Nodes changed from ``available`` to ``enroll``. This provides users a workflow to verify the manageability of a Node and perform necessary operational functions (e.g., building a RAID array) before making the Node available for provisioning. .. versionadded:: 1.12 Introduced support for the ``raid_config`` and ``target_raid_config`` fields. .. versionadded:: 1.20 Introduced the ``network_interface`` field. If this field is not supplied when creating the Node, the default value will be used. ..
versionadded:: 1.21 Introduced the ``resource_class`` field, which may be used to store a resource designation for the proposed OpenStack Placement Engine. This field has no effect within Ironic. .. versionadded:: 1.31 Introduced the ``boot_interface``, ``deploy_interface``, ``management_interface``, ``power_interface``, ``inspect_interface``, ``console_interface``, ``vendor_interface`` and ``raid_interface`` fields. If any of these fields are not supplied when creating the Node, their default value will be used. .. versionchanged:: 1.31 If the specified driver is a dynamic driver, then all the interfaces (boot_interface, deploy_interface, etc.) will be set to the default interface for that driver unless another enabled interface is specified in the creation request. .. versionadded:: 1.33 Introduced the ``storage_interface`` field. If this field is not supplied when creating the Node, the default value will be used. .. versionadded:: 1.38 Introduced the ``rescue_interface`` field. If this field is not supplied when creating the Node, the default value will be used. .. versionadded:: 1.44 Introduced the ``deploy_step`` field. .. versionadded:: 1.46 Introduced the ``conductor_group`` field. .. versionadded:: 1.50 Introduced the ``owner`` field. .. versionadded:: 1.51 Introduced the ``description`` field. .. versionadded:: 1.52 Introduced the ``allocation_uuid`` field. .. versionadded:: 1.65 Introduced the ``lessee`` field. Normal response codes: 201 Error codes: 400,403,406 Request ------- .. 
rest_parameters:: parameters.yaml - boot_interface: req_boot_interface - conductor_group: req_conductor_group - console_interface: req_console_interface - deploy_interface: req_deploy_interface - driver_info: req_driver_info - driver: req_driver_name - extra: req_extra - inspect_interface: req_inspect_interface - management_interface: req_management_interface - name: node_name - network_interface: req_network_interface - power_interface: req_power_interface - properties: req_properties - raid_interface: req_raid_interface - rescue_interface: req_rescue_interface - resource_class: req_resource_class_create - storage_interface: req_storage_interface - uuid: req_uuid - vendor_interface: req_vendor_interface - owner: owner - description: n_description - lessee: lessee **Example Node creation request with a dynamic driver:** .. literalinclude:: samples/node-create-request-dynamic.json :language: javascript **Example Node creation request with a classic driver:** .. literalinclude:: samples/node-create-request-classic.json :language: javascript Response -------- The response will contain the complete Node record, with the supplied data, and any defaults added for non-specified fields. Most fields default to "null" or "". The list and example below are representative of the response as of API microversion 1.48. .. 
rest_parameters:: parameters.yaml - uuid: uuid - name: node_name - power_state: power_state - target_power_state: target_power_state - provision_state: provision_state - target_provision_state: target_provision_state - maintenance: maintenance - maintenance_reason: maintenance_reason - fault: fault - last_error: last_error - reservation: reservation - driver: driver_name - driver_info: driver_info - driver_internal_info: driver_internal_info - properties: n_properties - instance_info: instance_info - instance_uuid: instance_uuid - chassis_uuid: chassis_uuid - extra: extra - console_enabled: console_enabled - raid_config: raid_config - target_raid_config: target_raid_config - clean_step: clean_step - deploy_step: deploy_step - links: links - ports: n_ports - portgroups: n_portgroups - states: n_states - resource_class: resource_class - boot_interface: boot_interface - console_interface: console_interface - deploy_interface: deploy_interface - inspect_interface: inspect_interface - management_interface: management_interface - network_interface: network_interface - power_interface: power_interface - raid_interface: raid_interface - rescue_interface: rescue_interface - storage_interface: storage_interface - traits: n_traits - vendor_interface: vendor_interface - volume: n_volume - conductor_group: conductor_group - protected: protected - protected_reason: protected_reason - conductor: conductor - owner: owner - lessee: lessee - description: n_description - allocation_uuid: allocation_uuid **Example JSON representation of a Node:** .. literalinclude:: samples/node-create-response.json :language: javascript List Nodes ========== .. rest_method:: GET /v1/nodes Return a list of bare metal Nodes, with some useful information about each Node. Some filtering is possible by passing in flags with the request. By default, this query will return the name, uuid, instance uuid, power state, provision state, and maintenance setting for each Node. .. 
versionadded:: 1.8 Added the ``fields`` Request parameter. When specified, this causes the content of the Response to include only the specified fields, rather than the default set. .. versionadded:: 1.9 Added the ``provision_state`` Request parameter, allowing the list of returned Nodes to be filtered by their current state. .. versionadded:: 1.16 Added the ``driver`` Request parameter, allowing the list of returned Nodes to be filtered by their driver name. .. versionadded:: 1.21 Added the ``resource_class`` Request parameter, allowing the list of returned Nodes to be filtered by this field. .. versionadded:: 1.42 Introduced the ``fault`` field. .. versionadded:: 1.43 Added the ``detail`` boolean request parameter. When specified ``True`` this causes the response to include complete details about each node, as shown in the "List Nodes Detailed" section below. .. versionadded:: 1.46 Introduced the ``conductor_group`` request parameter, to allow filtering the list of returned nodes by conductor group. .. versionadded:: 1.49 Introduced the ``conductor`` request parameter, to allow filtering the list of returned nodes by conductor. .. versionadded:: 1.50 Introduced the ``owner`` field. .. versionadded:: 1.51 Introduced the ``description`` field. .. versionadded:: 1.65 Introduced the ``lessee`` field. Normal response codes: 200 Error codes: 400,403,406 Request ------- .. rest_parameters:: parameters.yaml - instance_uuid: r_instance_uuid - maintenance: r_maintenance - associated: r_associated - provision_state: r_provision_state - driver: r_driver - resource_class: r_resource_class - conductor_group: r_conductor_group - conductor: r_conductor - fault: r_fault - owner: owner - lessee: lessee - description_contains: r_description_contains - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key - detail: detail Response -------- .. 
rest_parameters:: parameters.yaml - uuid: uuid - name: node_name - instance_uuid: instance_uuid - power_state: power_state - provision_state: provision_state - maintenance: maintenance - links: links **Example list of Nodes:** .. literalinclude:: samples/nodes-list-response.json :language: javascript List Nodes Detailed =================== .. rest_method:: GET /v1/nodes/detail .. deprecated:: Use the ``?detail=True`` query string instead. Return a list of bare metal Nodes with complete details. Some filtering is possible by passing in flags with the request. This method is particularly useful for locating the Node associated with a given Nova instance, e.g. with a request to ``v1/nodes/detail?instance_uuid={NOVA INSTANCE UUID}`` .. versionadded:: 1.37 Introduced the ``traits`` field. .. versionadded:: 1.38 Introduced the ``rescue_interface`` field. .. versionadded:: 1.42 Introduced the ``fault`` field. .. versionadded:: 1.46 Introduced the ``conductor_group`` field. .. versionadded:: 1.48 Introduced the ``protected`` and ``protected_reason`` fields. .. versionadded:: 1.49 Introduced the ``conductor`` request parameter and ``conductor`` field. .. versionadded:: 1.50 Introduced the ``owner`` field. .. versionadded:: 1.51 Introduced the ``description`` field. .. versionadded:: 1.52 Introduced the ``allocation_uuid`` field. .. versionadded:: 1.65 Introduced the ``lessee`` field. Normal response codes: 200 Error codes: 400,403,406 Request ------- .. rest_parameters:: parameters.yaml - instance_uuid: r_instance_uuid - maintenance: r_maintenance - fault: r_fault - associated: r_associated - provision_state: r_provision_state - driver: r_driver - resource_class: r_resource_class - conductor_group: r_conductor_group - conductor: r_conductor - owner: owner - lessee: lessee - description_contains: r_description_contains - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- ..
rest_parameters:: parameters.yaml - uuid: uuid - name: node_name - power_state: power_state - target_power_state: target_power_state - provision_state: provision_state - target_provision_state: target_provision_state - maintenance: maintenance - maintenance_reason: maintenance_reason - fault: fault - last_error: last_error - reservation: reservation - driver: driver_name - driver_info: driver_info - driver_internal_info: driver_internal_info - properties: n_properties - instance_info: instance_info - instance_uuid: instance_uuid - chassis_uuid: chassis_uuid - extra: extra - console_enabled: console_enabled - raid_config: raid_config - target_raid_config: target_raid_config - clean_step: clean_step - deploy_step: deploy_step - links: links - ports: n_ports - portgroups: n_portgroups - states: n_states - resource_class: resource_class - boot_interface: boot_interface - console_interface: console_interface - deploy_interface: deploy_interface - inspect_interface: inspect_interface - management_interface: management_interface - network_interface: network_interface - power_interface: power_interface - raid_interface: raid_interface - rescue_interface: rescue_interface - storage_interface: storage_interface - traits: n_traits - vendor_interface: vendor_interface - volume: n_volume - conductor_group: conductor_group - protected: protected - protected_reason: protected_reason - owner: owner - lessee: lessee - description: n_description - conductor: conductor - allocation_uuid: allocation_uuid - retired: retired - retired_reason: retired_reason **Example detailed list of Nodes:** .. literalinclude:: samples/nodes-list-details-response.json :language: javascript Show Node Details ================= .. rest_method:: GET /v1/nodes/{node_ident} Shows details for a node. By default, this will return the full representation of the resource; an optional ``fields`` parameter can be supplied to return only the specified set. .. versionadded:: 1.37 Introduced the ``traits`` field. .. 
versionadded:: 1.38 Introduced the ``rescue_interface`` field. .. versionadded:: 1.42 Introduced the ``fault`` field. .. versionadded:: 1.46 Introduced the ``conductor_group`` field. .. versionadded:: 1.48 Introduced the ``protected`` and ``protected_reason`` fields. .. versionadded:: 1.49 Introduced the ``conductor`` field .. versionadded:: 1.50 Introduced the ``owner`` field. .. versionadded:: 1.51 Introduced the ``description`` field. .. versionadded:: 1.52 Introduced the ``allocation_uuid`` field. .. versionadded:: 1.61 Introduced the ``retired`` and ``retired_reason`` fields. .. versionadded:: 1.65 Introduced the ``lessee`` field. Normal response codes: 200 Error codes: 400,403,404,406 Request ------- .. rest_parameters:: parameters.yaml - node_ident: node_ident - fields: fields Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - name: node_name - power_state: power_state - target_power_state: target_power_state - provision_state: provision_state - target_provision_state: target_provision_state - maintenance: maintenance - maintenance_reason: maintenance_reason - fault: fault - last_error: last_error - reservation: reservation - driver: driver_name - driver_info: driver_info - driver_internal_info: driver_internal_info - properties: n_properties - instance_info: instance_info - instance_uuid: instance_uuid - chassis_uuid: chassis_uuid - extra: extra - console_enabled: console_enabled - raid_config: raid_config - target_raid_config: target_raid_config - clean_step: clean_step - deploy_step: deploy_step - links: links - ports: n_ports - portgroups: n_portgroups - states: n_states - resource_class: resource_class - boot_interface: boot_interface - console_interface: console_interface - deploy_interface: deploy_interface - inspect_interface: inspect_interface - management_interface: management_interface - network_interface: network_interface - power_interface: power_interface - raid_interface: raid_interface - rescue_interface: rescue_interface - 
storage_interface: storage_interface - traits: n_traits - vendor_interface: vendor_interface - volume: n_volume - conductor_group: conductor_group - protected: protected - protected_reason: protected_reason - owner: owner - lessee: lessee - description: n_description - conductor: conductor - allocation_uuid: allocation_uuid **Example JSON representation of a Node:** .. literalinclude:: samples/node-show-response.json :language: javascript Update Node =========== .. rest_method:: PATCH /v1/nodes/{node_ident} Updates the information stored about a Node. Note that this endpoint cannot be used to request state changes, which are managed through sub-resources. .. versionadded:: 1.25 Introduced the ability to unset a node's chassis UUID. .. versionadded:: 1.51 Introduced the ability to set/unset a node's description. Normal response codes: 200 Error codes: 400,403,404,406,409 Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 <https://tools.ietf.org/html/rfc6902>`_. .. rest_parameters:: parameters.yaml - node_ident: node_ident **Example PATCH document updating Node driver_info:** .. literalinclude:: samples/node-update-driver-info-request.json Response -------- ..
rest_parameters:: parameters.yaml - uuid: uuid - name: node_name - power_state: power_state - target_power_state: target_power_state - provision_state: provision_state - target_provision_state: target_provision_state - maintenance: maintenance - maintenance_reason: maintenance_reason - fault: fault - last_error: last_error - reservation: reservation - driver: driver_name - driver_info: driver_info - driver_internal_info: driver_internal_info - properties: n_properties - instance_info: instance_info - instance_uuid: instance_uuid - chassis_uuid: chassis_uuid - extra: extra - console_enabled: console_enabled - raid_config: raid_config - target_raid_config: target_raid_config - clean_step: clean_step - deploy_step: deploy_step - links: links - ports: n_ports - portgroups: n_portgroups - states: n_states - resource_class: resource_class - boot_interface: boot_interface - console_interface: console_interface - deploy_interface: deploy_interface - inspect_interface: inspect_interface - management_interface: management_interface - network_interface: network_interface - power_interface: power_interface - raid_interface: raid_interface - rescue_interface: rescue_interface - storage_interface: storage_interface - traits: n_traits - vendor_interface: vendor_interface - volume: n_volume - conductor_group: conductor_group - protected: protected - protected_reason: protected_reason - owner: owner - lessee: lessee - description: n_description - conductor: conductor - allocation_uuid: allocation_uuid **Example JSON representation of a Node:** .. literalinclude:: samples/node-update-driver-info-response.json :language: javascript Delete Node =========== .. rest_method:: DELETE /v1/nodes/{node_ident} Deletes a node. Normal response codes: 204 Error codes: 400,403,404,409 Request ------- .. 
rest_parameters:: parameters.yaml - node_ident: node_ident ironic-15.0.0/api-ref/source/baremetal-api-v1-portgroups.inc0000664000175000017500000001420713652514273023705 0ustar zuulzuul00000000000000.. -*- rst -*- ======================= Portgroups (portgroups) ======================= .. versionadded:: 1.23 Ports can be combined into portgroups to support static link aggregation group (LAG) or multi-chassis link aggregation group (MLAG) configurations. Listing, Searching, Creating, Updating, and Deleting of bare metal Portgroup resources are done through the ``v1/portgroups`` resource. All Portgroups must be associated with a Node when created. This association can be changed, though the request may be rejected if either the current or destination Node is in a transient state (for example, in the process of deploying) or is in a state that would be non-deterministically affected by such a change (for example, there is an active user instance on the Node). List Portgroups =============== .. rest_method:: GET /v1/portgroups Return a list of bare metal Portgroups. Some filtering is possible by passing parameters with the request. By default, this query will return the UUID, name and address for each Portgroup. .. versionadded:: 1.43 Added the ``detail`` boolean request parameter. When set to ``True``, this causes the response to include complete details about each portgroup. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node: r_portgroup_node_ident - address: r_portgroup_address - fields: fields - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key - detail: detail Response -------- .. rest_parameters:: parameters.yaml - portgroups: portgroups - uuid: uuid - address: portgroup_address - name: portgroup_name - links: links **Example Portgroup list response:** .. literalinclude:: samples/portgroup-list-response.json :language: javascript Create Portgroup ================ ..
rest_method:: POST /v1/portgroups Creates a new Portgroup resource. This method requires a Node UUID and the physical hardware address for the Portgroup (MAC address in most cases). Normal response code: 201 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node_uuid: req_node_uuid - address: req_portgroup_address - name: portgroup_name **Example Portgroup creation request:** .. literalinclude:: samples/portgroup-create-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - name: portgroup_name - address: portgroup_address - node_uuid: node_uuid - standalone_ports_supported: standalone_ports_supported - internal_info: portgroup_internal_info - extra: extra - mode: portgroup_mode - properties: portgroup_properties - created_at: created_at - updated_at: updated_at - links: links - ports: pg_ports **Example Portgroup creation response:** .. literalinclude:: samples/portgroup-create-response.json :language: javascript List Detailed Portgroups ======================== .. rest_method:: GET /v1/portgroups/detail Return a list of bare metal Portgroups, with detailed information. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - node: r_portgroup_node_ident - address: r_portgroup_address - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key Response -------- .. rest_parameters:: parameters.yaml - portgroups: portgroups - name: portgroup_name - uuid: uuid - address: portgroup_address - node_uuid: node_uuid - standalone_ports_supported: standalone_ports_supported - internal_info: portgroup_internal_info - extra: extra - mode: portgroup_mode - properties: portgroup_properties - created_at: created_at - updated_at: updated_at - links: links - ports: pg_ports **Example detailed Portgroup list response:** .. 
literalinclude:: samples/portgroup-list-detail-response.json :language: javascript Show Portgroup Details ====================== .. rest_method:: GET /v1/portgroups/{portgroup_ident} Show details for the given Portgroup. Normal response code: 200 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - portgroup_ident: portgroup_ident - fields: fields Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - name: portgroup_name - address: portgroup_address - node_uuid: node_uuid - standalone_ports_supported: standalone_ports_supported - internal_info: portgroup_internal_info - extra: extra - mode: portgroup_mode - properties: portgroup_properties - created_at: created_at - updated_at: updated_at - links: links - ports: pg_ports **Example Portgroup details:** .. literalinclude:: samples/portgroup-create-response.json :language: javascript Update a Portgroup ================== .. rest_method:: PATCH /v1/portgroups/{portgroup_ident} Update a Portgroup. Normal response code: 200 Error codes: 400,401,403,404 Request ------- The BODY of the PATCH request must be a JSON PATCH document, adhering to `RFC 6902 <https://tools.ietf.org/html/rfc6902>`_. .. rest_parameters:: parameters.yaml - portgroup_ident: portgroup_ident **Example Portgroup update request:** .. literalinclude:: samples/portgroup-update-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - uuid: uuid - name: portgroup_name - address: portgroup_address - node_uuid: node_uuid - standalone_ports_supported: standalone_ports_supported - internal_info: portgroup_internal_info - extra: extra - mode: portgroup_mode - properties: portgroup_properties - created_at: created_at - updated_at: updated_at - links: links - ports: pg_ports **Example Portgroup update response:** .. literalinclude:: samples/portgroup-update-response.json :language: javascript Delete Portgroup ================ .. rest_method:: DELETE /v1/portgroups/{portgroup_ident} Delete a Portgroup.
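As an illustrative sketch of the delete call, only the portgroup's UUID or name is needed in the URL. The base URL, portgroup UUID, and token below are placeholders taken from the samples in this document, not defaults:

```python
# Sketch of issuing DELETE /v1/portgroups/{portgroup_ident} with only
# the Python standard library. Endpoint and token are placeholders.
import urllib.request

def build_delete_request(base_url, portgroup_ident, token):
    """Build a DELETE request for the given portgroup UUID or name."""
    url = '%s/v1/portgroups/%s' % (base_url.rstrip('/'), portgroup_ident)
    return urllib.request.Request(
        url, method='DELETE', headers={'X-Auth-Token': token})

req = build_delete_request('http://127.0.0.1:6385',
                           'e43c722c-248e-4c6e-8ce8-0d8ff129387a',
                           'placeholder-token')
# urllib.request.urlopen(req) would then perform the call; on success
# the service responds 204 No Content with an empty body.
```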
Normal response code: 204 Error codes: 400,401,403,404 Request ------- .. rest_parameters:: parameters.yaml - portgroup_ident: portgroup_ident ironic-15.0.0/api-ref/source/samples/0000775000175000017500000000000013652514443017400 5ustar zuulzuul00000000000000ironic-15.0.0/api-ref/source/samples/portgroup-list-response.json0000664000175000017500000000072113652514273025142 0ustar zuulzuul00000000000000{ "portgroups": [ { "address": "11:11:11:11:11:11", "links": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "bookmark" } ], "name": "test_portgroup", "uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a" } ] } ironic-15.0.0/api-ref/source/samples/node-create-request-dynamic.json0000664000175000017500000000034113652514273025570 0ustar zuulzuul00000000000000{ "name": "test_node_dynamic", "driver": "ipmi", "driver_info": { "ipmi_username": "ADMIN", "ipmi_password": "password" }, "power_interface": "ipmitool", "resource_class": "bm-large" } ironic-15.0.0/api-ref/source/samples/allocation-create-request.json0000664000175000017500000000010113652514273025340 0ustar zuulzuul00000000000000{ "name": "allocation-1", "resource_class": "bm-large" } ironic-15.0.0/api-ref/source/samples/volume-list-response.json0000664000175000017500000000112413652514273024406 0ustar zuulzuul00000000000000{ "connectors": [ { "href": "http://127.0.0.1:6385/v1/volume/connectors", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/connectors", "rel": "bookmark" } ], "links": [ { "href": "http://127.0.0.1:6385/v1/volume/", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/", "rel": "bookmark" } ], "targets": [ { "href": "http://127.0.0.1:6385/v1/volume/targets", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/targets", "rel": "bookmark" } ] } 
ironic-15.0.0/api-ref/source/samples/node-vendor-passthru-response.json0000664000175000017500000000050613652514273026220 0ustar zuulzuul00000000000000{ "bmc_reset": { "async": true, "attach": false, "description": "", "http_methods": [ "POST" ], "require_exclusive_lock": true }, "send_raw": { "async": true, "attach": false, "description": "", "http_methods": [ "POST" ], "require_exclusive_lock": true } } ironic-15.0.0/api-ref/source/samples/port-update-response.json0000664000175000017500000000145113652514273024375 0ustar zuulzuul00000000000000{ "address": "22:22:22:22:22:22", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "is_smartnic": true, "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "local_link_connection": { "port_id": "Ethernet3/1", "switch_id": "0a:1b:2c:3d:4e:5f", "switch_info": "switch1" }, "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "physical_network": "physnet1", "portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a", "pxe_enabled": true, "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ironic-15.0.0/api-ref/source/samples/conductor-list-response.json0000664000175000017500000000141113652514273025076 0ustar zuulzuul00000000000000{ "conductors": [ { "hostname": "compute1.localdomain", "conductor_group": "", "links": [ { "href": "http://127.0.0.1:6385/v1/conductors/compute1.localdomain", "rel": "self" }, { "href": "http://127.0.0.1:6385/conductors/compute1.localdomain", "rel": "bookmark" } ], "alive": false }, { "hostname": "compute2.localdomain", "conductor_group": "", "links": [ { "href": "http://127.0.0.1:6385/v1/conductors/compute2.localdomain", "rel": "self" }, { "href": "http://127.0.0.1:6385/conductors/compute2.localdomain", "rel": "bookmark" } ], "alive": true } ] 
}ironic-15.0.0/api-ref/source/samples/driver-logical-disk-properties-response.json0000664000175000017500000000307113652514273030156 0ustar zuulzuul00000000000000{ "controller": "Controller to use for this logical disk. If not specified, the driver will choose a suitable RAID controller on the bare metal node. Optional.", "disk_type": "The type of disk preferred. Valid values are 'hdd' and 'ssd'. If this is not specified, disk type will not be a selection criterion for choosing backing physical disks. Optional.", "interface_type": "The interface type of disk. Valid values are 'sata', 'scsi' and 'sas'. If this is not specified, interface type will not be a selection criterion for choosing backing physical disks. Optional.", "is_root_volume": "Specifies whether this disk is a root volume. By default, this is False. Optional.", "number_of_physical_disks": "Number of physical disks to use for this logical disk. By default, the driver uses the minimum number of disks required for that RAID level. Optional.", "physical_disks": "The physical disks to use for this logical disk. If not specified, the driver will choose suitable physical disks to use. Optional.", "raid_level": "RAID level for the logical disk. Valid values are 'JBOD', '0', '1', '2', '5', '6', '1+0', '5+0' and '6+0'. Required.", "share_physical_disks": "Specifies whether other logical disks can share physical disks with this logical disk. By default, this is False. Optional.", "size_gb": "Size in GiB (Integer) for the logical disk. Use 'MAX' as size_gb if this logical disk is supposed to use the rest of the space available. Required.", "volume_name": "Name of the volume to be created. If this is not specified, it will be auto-generated. Optional." 
} ironic-15.0.0/api-ref/source/samples/node-vif-list-response.json0000664000175000017500000000012313652514273024604 0ustar zuulzuul00000000000000{ "vifs": [ { "id": "1974dcfa-836f-41b2-b541-686c100900e5" } ] } ironic-15.0.0/api-ref/source/samples/node-set-available-state.json0000664000175000017500000000003413652514273025043 0ustar zuulzuul00000000000000{ "target": "provide" } ironic-15.0.0/api-ref/source/samples/conductor-list-details-response.json0000664000175000017500000000204413652514273026524 0ustar zuulzuul00000000000000{ "conductors": [ { "links": [ { "href": "http://127.0.0.1:6385/v1/conductors/compute1.localdomain", "rel": "self" }, { "href": "http://127.0.0.1:6385/conductors/compute1.localdomain", "rel": "bookmark" } ], "created_at": "2018-08-07T08:39:21+00:00", "hostname": "compute1.localdomain", "conductor_group": "", "updated_at": "2018-11-30T07:07:23+00:00", "alive": false, "drivers": [ "ipmi" ] }, { "links": [ { "href": "http://127.0.0.1:6385/v1/conductors/compute2.localdomain", "rel": "self" }, { "href": "http://127.0.0.1:6385/conductors/compute2.localdomain", "rel": "bookmark" } ], "created_at": "2018-12-05T07:03:19+00:00", "hostname": "compute2.localdomain", "conductor_group": "", "updated_at": "2018-12-05T07:03:21+00:00", "alive": true, "drivers": [ "ipmi" ] } ] } ironic-15.0.0/api-ref/source/samples/conductor-show-response.json0000664000175000017500000000066413652514273025114 0ustar zuulzuul00000000000000{ "links": [ { "href": "http://127.0.0.1:6385/v1/conductors/compute2.localdomain", "rel": "self" }, { "href": "http://127.0.0.1:6385/conductors/compute2.localdomain", "rel": "bookmark" } ], "created_at": "2018-12-05T07:03:19+00:00", "hostname": "compute2.localdomain", "conductor_group": "", "updated_at": "2018-12-05T07:03:21+00:00", "alive": true, "drivers": [ "ipmi" ] } ironic-15.0.0/api-ref/source/samples/chassis-create-request.json0000664000175000017500000000005013652514273024653 0ustar zuulzuul00000000000000{ "description": "Sample chassis" } 
ironic-15.0.0/api-ref/source/samples/node-volume-target-list-response.json0000664000175000017500000000107313652514273026620 0ustar zuulzuul00000000000000{ "targets": [ { "boot_index": 0, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "uuid": "bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "volume_id": "7211f7d3-3f32-4efc-b64e-9b8e92e64a8e", "volume_type": "iscsi" } ] } ironic-15.0.0/api-ref/source/samples/api-root-response.json0000664000175000017500000000113113652514273023656 0ustar zuulzuul00000000000000{ "default_version": { "id": "v1", "links": [ { "href": "http://127.0.0.1:6385/v1/", "rel": "self" } ], "min_version": "1.1", "status": "CURRENT", "version": "1.37" }, "description": "Ironic is an OpenStack project which aims to provision baremetal machines.", "name": "OpenStack Ironic API", "versions": [ { "id": "v1", "links": [ { "href": "http://127.0.0.1:6385/v1/", "rel": "self" } ], "min_version": "1.1", "status": "CURRENT", "version": "1.37" } ] } ironic-15.0.0/api-ref/source/samples/node-volume-target-detail-response.json0000664000175000017500000000132613652514273027110 0ustar zuulzuul00000000000000{ "targets": [ { "boot_index": 0, "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "properties": {}, "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "volume_id": "7211f7d3-3f32-4efc-b64e-9b8e92e64a8e", "volume_type": "iscsi" } ] } 
ironic-15.0.0/api-ref/source/samples/node-volume-connector-detail-response.json0000664000175000017500000000125613652514273027616 0ustar zuulzuul00000000000000{ "connectors": [ { "connector_id": "iqn.2017-07.org.openstack:02:10190a4153e", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "type": "iqn", "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "9bf93e01-d728-47a3-ad4b-5e66a835037c" } ] } ironic-15.0.0/api-ref/source/samples/chassis-update-response.json0000664000175000017500000000134013652514273025043 0ustar zuulzuul00000000000000{ "created_at": "2016-08-18T22:28:48.643434+11:11", "description": "Updated Chassis", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", "rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", "rel": "bookmark" } ], "nodes": [ { "href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes", "rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes", "rel": "bookmark" } ], "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1" } ironic-15.0.0/api-ref/source/samples/chassis-show-response.json0000664000175000017500000000130113652514273024536 0ustar zuulzuul00000000000000{ "created_at": "2016-08-18T22:28:48.643434+11:11", "description": "Sample chassis", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", "rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", "rel": "bookmark" } ], "nodes": [ { "href": 
"http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes", "rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes", "rel": "bookmark" } ], "updated_at": null, "uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1" } ironic-15.0.0/api-ref/source/samples/api-v1-root-response.json0000664000175000017500000000353713652514273024216 0ustar zuulzuul00000000000000{ "chassis": [ { "href": "http://127.0.0.1:6385/v1/chassis/", "rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/", "rel": "bookmark" } ], "drivers": [ { "href": "http://127.0.0.1:6385/v1/drivers/", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/", "rel": "bookmark" } ], "heartbeat": [ { "href": "http://127.0.0.1:6385/v1/heartbeat/", "rel": "self" }, { "href": "http://127.0.0.1:6385/heartbeat/", "rel": "bookmark" } ], "id": "v1", "links": [ { "href": "http://127.0.0.1:6385/v1/", "rel": "self" }, { "href": "https://docs.openstack.org/ironic/latest/contributor/webapi.html", "rel": "describedby", "type": "text/html" } ], "lookup": [ { "href": "http://127.0.0.1:6385/v1/lookup/", "rel": "self" }, { "href": "http://127.0.0.1:6385/lookup/", "rel": "bookmark" } ], "media_types": [ { "base": "application/json", "type": "application/vnd.openstack.ironic.v1+json" } ], "nodes": [ { "href": "http://127.0.0.1:6385/v1/nodes/", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/", "rel": "bookmark" } ], "portgroups": [ { "href": "http://127.0.0.1:6385/v1/portgroups/", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/", "rel": "bookmark" } ], "ports": [ { "href": "http://127.0.0.1:6385/v1/ports/", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/", "rel": "bookmark" } ], "volume": [ { "href": "http://127.0.0.1:6385/v1/volume/", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/", "rel": "bookmark" } ] } ironic-15.0.0/api-ref/source/samples/node-volume-list-response.json0000664000175000017500000000152613652514273025337 0ustar 
zuulzuul00000000000000{ "connectors": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume/connectors", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume/connectors", "rel": "bookmark" } ], "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume/", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume/", "rel": "bookmark" } ], "targets": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume/targets", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume/targets", "rel": "bookmark" } ] } ironic-15.0.0/api-ref/source/samples/portgroup-port-detail-response.json0000664000175000017500000000165613652514273026423 0ustar zuulzuul00000000000000{ "ports": [ { "address": "22:22:22:22:22:22", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "is_smartnic": true, "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "local_link_connection": { "port_id": "Ethernet3/1", "switch_id": "0a:1b:2c:3d:4e:5f", "switch_info": "switch1" }, "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "physical_network": "physnet1", "portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a", "pxe_enabled": true, "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ] } ironic-15.0.0/api-ref/source/samples/node-bios-list-response.json0000664000175000017500000000102513652514273024756 0ustar zuulzuul00000000000000{ "bios": [ { "created_at": "2016-08-18T22:28:49.653974+00:00", "updated_at": "2016-08-18T22:28:49.653974+00:00", "links": [ { "href": 
"http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/bios/virtualization", "rel": "self" }, { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/bios/virtualization", "rel": "bookmark" } ], "name": "virtualization", "value": "on" } ] } ironic-15.0.0/api-ref/source/samples/node-set-active-state.json0000664000175000017500000000014613652514273024402 0ustar zuulzuul00000000000000{ "target": "active", "configdrive": "http://127.0.0.1/images/test-node-config-drive.iso.gz" }ironic-15.0.0/api-ref/source/samples/chassis-list-response.json0000664000175000017500000000065113652514273024540 0ustar zuulzuul00000000000000{ "chassis": [ { "description": "Sample chassis", "links": [ { "href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", "rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", "rel": "bookmark" } ], "uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1" } ] } ironic-15.0.0/api-ref/source/samples/port-list-response.json0000664000175000017500000000064213652514273024067 0ustar zuulzuul00000000000000{ "ports": [ { "address": "11:11:11:11:11:11", "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ] } ironic-15.0.0/api-ref/source/samples/node-portgroup-list-response.json0000664000175000017500000000072113652514273026065 0ustar zuulzuul00000000000000{ "portgroups": [ { "address": "22:22:22:22:22:22", "links": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "bookmark" } ], "name": "test_portgroup", "uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a" } ] } 
ironic-15.0.0/api-ref/source/samples/deploy-template-update-request.json0000664000175000017500000000013513652514273026346 0ustar zuulzuul00000000000000[ { "path" : "/name", "value" : "CUSTOM_HT_ON", "op" : "replace" } ] ironic-15.0.0/api-ref/source/samples/node-set-clean-state.json0000664000175000017500000000033113652514273024205 0ustar zuulzuul00000000000000{ "target": "clean", "clean_steps": [ { "interface": "deploy", "step": "upgrade_firmware", "args": { "force": "True" } } ] } ironic-15.0.0/api-ref/source/samples/drivers-list-detail-response.json0000664000175000017500000001210613652514273026017 0ustar zuulzuul00000000000000{ "drivers": [ { "default_bios_interface": null, "default_boot_interface": null, "default_console_interface": null, "default_deploy_interface": null, "default_inspect_interface": null, "default_management_interface": null, "default_network_interface": null, "default_power_interface": null, "default_raid_interface": null, "default_rescue_interface": null, "default_storage_interface": null, "default_vendor_interface": null, "enabled_bios_interfaces": null, "enabled_boot_interfaces": null, "enabled_console_interfaces": null, "enabled_deploy_interfaces": null, "enabled_inspect_interfaces": null, "enabled_management_interfaces": null, "enabled_network_interfaces": null, "enabled_power_interfaces": null, "enabled_raid_interfaces": null, "enabled_rescue_interfaces": null, "enabled_storage_interfaces": null, "enabled_vendor_interfaces": null, "hosts": [ "897ab1dad809" ], "links": [ { "href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/agent_ipmitool", "rel": "bookmark" } ], "name": "agent_ipmitool", "properties": [ { "href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool/properties", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/agent_ipmitool/properties", "rel": "bookmark" } ], "type": "classic" }, { "default_bios_interface": null, "default_boot_interface": null, 
"default_console_interface": null, "default_deploy_interface": null, "default_inspect_interface": null, "default_management_interface": null, "default_network_interface": null, "default_power_interface": null, "default_raid_interface": null, "default_rescue_interface": null, "default_storage_interface": null, "default_vendor_interface": null, "enabled_bios_interfaces": null, "enabled_boot_interfaces": null, "enabled_console_interfaces": null, "enabled_deploy_interfaces": null, "enabled_inspect_interfaces": null, "enabled_management_interfaces": null, "enabled_network_interfaces": null, "enabled_power_interfaces": null, "enabled_raid_interfaces": null, "enabled_rescue_interfaces": null, "enabled_storage_interfaces": null, "enabled_vendor_interfaces": null, "hosts": [ "897ab1dad809" ], "links": [ { "href": "http://127.0.0.1:6385/v1/drivers/fake", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/fake", "rel": "bookmark" } ], "name": "fake", "properties": [ { "href": "http://127.0.0.1:6385/v1/drivers/fake/properties", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/fake/properties", "rel": "bookmark" } ], "type": "classic" }, { "default_bios_interface": "no-bios", "default_boot_interface": "pxe", "default_console_interface": "no-console", "default_deploy_interface": "iscsi", "default_inspect_interface": "no-inspect", "default_management_interface": "ipmitool", "default_network_interface": "flat", "default_power_interface": "ipmitool", "default_raid_interface": "no-raid", "default_rescue_interface": "no-rescue", "default_storage_interface": "noop", "default_vendor_interface": "no-vendor", "enabled_bios_interfaces": [ "no-bios" ], "enabled_boot_interfaces": [ "pxe" ], "enabled_console_interfaces": [ "no-console" ], "enabled_deploy_interfaces": [ "iscsi", "direct" ], "enabled_inspect_interfaces": [ "no-inspect" ], "enabled_management_interfaces": [ "ipmitool" ], "enabled_network_interfaces": [ "flat", "noop" ], "enabled_power_interfaces": [ "ipmitool" 
], "enabled_raid_interfaces": [ "no-raid", "agent" ], "enabled_rescue_interfaces": [ "no-rescue" ], "enabled_storage_interfaces": [ "noop" ], "enabled_vendor_interfaces": [ "no-vendor" ], "hosts": [ "897ab1dad809" ], "links": [ { "href": "http://127.0.0.1:6385/v1/drivers/ipmi", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/ipmi", "rel": "bookmark" } ], "name": "ipmi", "properties": [ { "href": "http://127.0.0.1:6385/v1/drivers/ipmi/properties", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/ipmi/properties", "rel": "bookmark" } ], "type": "dynamic" } ] } ironic-15.0.0/api-ref/source/samples/portgroup-update-request.json0000664000175000017500000000014513652514273025303 0ustar zuulzuul00000000000000[ { "path" : "/address", "value" : "22:22:22:22:22:22", "op" : "replace" } ] ironic-15.0.0/api-ref/source/samples/volume-target-update-response.json0000664000175000017500000000115313652514273026203 0ustar zuulzuul00000000000000{ "boot_index": 0, "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "properties": {}, "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "volume_id": "7211f7d3-3f32-4efc-b64e-9b8e92e64a8e", "volume_type": "iscsi" } ironic-15.0.0/api-ref/source/samples/volume-connector-update-response.json0000664000175000017500000000111013652514273026700 0ustar zuulzuul00000000000000{ "connector_id": "iqn.2017-07.org.openstack:02:10190a4153e", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "self" }, { "href": 
"http://127.0.0.1:6385/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "type": "iqn", "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "9bf93e01-d728-47a3-ad4b-5e66a835037c" } ironic-15.0.0/api-ref/source/samples/port-create-response.json0000664000175000017500000000141313652514273024354 0ustar zuulzuul00000000000000{ "address": "11:11:11:11:11:11", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "is_smartnic": true, "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "local_link_connection": { "port_id": "Ethernet3/1", "switch_id": "0a:1b:2c:3d:4e:5f", "switch_info": "switch1" }, "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "physical_network": "physnet1", "portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a", "pxe_enabled": true, "updated_at": null, "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ironic-15.0.0/api-ref/source/samples/node-create-request-classic.json0000664000175000017500000000031013652514273025561 0ustar zuulzuul00000000000000{ "name": "test_node_classic", "driver": "agent_ipmitool", "driver_info": { "ipmi_username": "ADMIN", "ipmi_password": "password" }, "resource_class": "bm-large" } ironic-15.0.0/api-ref/source/samples/node-get-supported-boot-devices-response.json0000664000175000017500000000006013652514273030232 0ustar zuulzuul00000000000000{ "supported_boot_devices": [ "pxe" ] } ironic-15.0.0/api-ref/source/samples/node-set-manage-state.json0000664000175000017500000000003313652514273024352 0ustar zuulzuul00000000000000{ "target": "manage" } ironic-15.0.0/api-ref/source/samples/driver-get-response.json0000664000175000017500000000316313652514273024203 0ustar zuulzuul00000000000000{ "default_bios_interface": "no-bios", "default_boot_interface": "pxe", 
"default_console_interface": "no-console", "default_deploy_interface": "iscsi", "default_inspect_interface": "no-inspect", "default_management_interface": "ipmitool", "default_network_interface": "flat", "default_power_interface": "ipmitool", "default_raid_interface": "no-raid", "default_rescue_interface": "no-rescue", "default_storage_interface": "noop", "default_vendor_interface": "no-vendor", "enabled_bios_interfaces": [ "no-bios" ], "enabled_boot_interfaces": [ "pxe" ], "enabled_console_interfaces": [ "no-console" ], "enabled_deploy_interfaces": [ "iscsi", "direct" ], "enabled_inspect_interfaces": [ "no-inspect" ], "enabled_management_interfaces": [ "ipmitool" ], "enabled_network_interfaces": [ "flat", "noop" ], "enabled_power_interfaces": [ "ipmitool" ], "enabled_raid_interfaces": [ "no-raid", "agent" ], "enabled_rescue_interfaces": [ "no-rescue" ], "enabled_storage_interfaces": [ "noop" ], "enabled_vendor_interfaces": [ "no-vendor" ], "hosts": [ "897ab1dad809" ], "links": [ { "href": "http://127.0.0.1:6385/v1/drivers/ipmi", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/ipmi", "rel": "bookmark" } ], "name": "ipmi", "properties": [ { "href": "http://127.0.0.1:6385/v1/drivers/ipmi/properties", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/ipmi/properties", "rel": "bookmark" } ], "type": "dynamic" } ironic-15.0.0/api-ref/source/samples/nodes-list-response.json0000664000175000017500000000204213652514273024207 0ustar zuulzuul00000000000000{ "nodes": [ { "instance_uuid": null, "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "bookmark" } ], "maintenance": false, "name": "test_node_classic", "power_state": "power off", "provision_state": "available", "uuid": "6d85703a-565d-469a-96ce-30b6de53079d" }, { "instance_uuid": null, "links": [ { "href": 
"http://127.0.0.1:6385/v1/nodes/2b045129-a906-46af-bc1a-092b294b3428", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/2b045129-a906-46af-bc1a-092b294b3428", "rel": "bookmark" } ], "maintenance": false, "name": "test_node_dynamic", "power_state": null, "provision_state": "enroll", "uuid": "2b045129-a906-46af-bc1a-092b294b3428" } ] } ironic-15.0.0/api-ref/source/samples/volume-connector-update-request.json0000664000175000017500000000020113652514273026532 0ustar zuulzuul00000000000000[ { "path" : "/connector_id", "value" : "iqn.2017-07.org.openstack:02:10190a4153e", "op" : "replace" } ] ironic-15.0.0/api-ref/source/samples/node-set-traits-request.json0000664000175000017500000000010013652514273024773 0ustar zuulzuul00000000000000{ "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX" ] } ironic-15.0.0/api-ref/source/samples/deploy-template-detail-response.json0000664000175000017500000000156713652514273026506 0ustar zuulzuul00000000000000{ "deploy_templates": [ { "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://10.60.253.180:6385/v1/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "self" }, { "href": "http://10.60.253.180:6385/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "bookmark" } ], "name": "CUSTOM_HYPERTHREADING_ON", "steps": [ { "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "interface": "bios", "priority": 150, "step": "apply_configuration" } ], "updated_at": null, "uuid": "bbb45f41-d4bc-4307-8d1d-32f95ce1e920" } ] } ironic-15.0.0/api-ref/source/samples/node-portgroup-detail-response.json0000664000175000017500000000210713652514273026354 0ustar zuulzuul00000000000000{ "portgroups": [ { "address": "22:22:22:22:22:22", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "self" }, { "href": 
"http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "bookmark" } ], "mode": "active-backup", "name": "test_portgroup", "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "ports": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "bookmark" } ], "properties": {}, "standalone_ports_supported": true, "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a" } ] } ironic-15.0.0/api-ref/source/samples/node-validate-response.json0000664000175000017500000000064113652514273024645 0ustar zuulzuul00000000000000{ "boot": { "result": true }, "console": { "result": true }, "deploy": { "result": true }, "inspect": { "result": true }, "management": { "result": true }, "network": { "result": true }, "power": { "result": true }, "raid": { "result": true }, "rescue": { "reason": "not supported", "result": null }, "storage": { "result": true } } ironic-15.0.0/api-ref/source/samples/deploy-template-update-response.json0000664000175000017500000000135113652514273026515 0ustar zuulzuul00000000000000{ "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://10.60.253.180:6385/v1/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "self" }, { "href": "http://10.60.253.180:6385/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "bookmark" } ], "name": "CUSTOM_HT_ON", "steps": [ { "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "interface": "bios", "priority": 150, "step": "apply_configuration" } ], "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "bbb45f41-d4bc-4307-8d1d-32f95ce1e920" } ironic-15.0.0/api-ref/source/samples/allocations-list-response.json0000664000175000017500000000311013652514273025404 0ustar zuulzuul00000000000000{ "allocations": [ { "candidate_nodes": [], 
"created_at": "2019-02-20T09:43:58+00:00", "extra": {}, "last_error": null, "links": [ { "href": "http://127.0.0.1:6385/v1/allocations/5344a3e2-978a-444e-990a-cbf47c62ef88", "rel": "self" }, { "href": "http://127.0.0.1:6385/allocations/5344a3e2-978a-444e-990a-cbf47c62ef88", "rel": "bookmark" } ], "name": "allocation-1", "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "owner": null, "resource_class": "bm-large", "state": "active", "traits": [], "updated_at": "2019-02-20T09:43:58+00:00", "uuid": "5344a3e2-978a-444e-990a-cbf47c62ef88" }, { "candidate_nodes": [], "created_at": "2019-02-20T09:43:58+00:00", "extra": {}, "last_error": "Failed to process allocation eff80f47-75f0-4d41-b1aa-cf07c201adac: no available nodes match the resource class bm-large.", "links": [ { "href": "http://127.0.0.1:6385/v1/allocations/eff80f47-75f0-4d41-b1aa-cf07c201adac", "rel": "self" }, { "href": "http://127.0.0.1:6385/allocations/eff80f47-75f0-4d41-b1aa-cf07c201adac", "rel": "bookmark" } ], "name": "allocation-2", "node_uuid": null, "owner": null, "resource_class": "bm-large", "state": "error", "traits": [ "CUSTOM_GOLD" ], "updated_at": "2019-02-20T09:43:58+00:00", "uuid": "eff80f47-75f0-4d41-b1aa-cf07c201adac" } ] } ironic-15.0.0/api-ref/source/samples/deploy-template-create-response.json0000664000175000017500000000132713652514273026501 0ustar zuulzuul00000000000000{ "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://10.60.253.180:6385/v1/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "self" }, { "href": "http://10.60.253.180:6385/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "bookmark" } ], "name": "CUSTOM_HYPERTHREADING_ON", "steps": [ { "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "interface": "bios", "priority": 150, "step": "apply_configuration" } ], "updated_at": null, "uuid": "bbb45f41-d4bc-4307-8d1d-32f95ce1e920" } 
ironic-15.0.0/api-ref/source/samples/deploy-template-create-request.json0000664000175000017500000000065113652514273026332 0ustar zuulzuul00000000000000{ "extra": {}, "name": "CUSTOM_HYPERTHREADING_ON", "steps": [ { "interface": "bios", "step": "apply_configuration", "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "priority": 150 } ] } ironic-15.0.0/api-ref/source/samples/allocation-create-request-2.json0000664000175000017500000000014113652514273025503 0ustar zuulzuul00000000000000{ "name": "allocation-2", "resource_class": "bm-large", "traits": ["CUSTOM_GOLD"] } ironic-15.0.0/api-ref/source/samples/volume-connector-list-response.json0000664000175000017500000000105313652514273026377 0ustar zuulzuul00000000000000{ "connectors": [ { "connector_id": "iqn.2017-07.org.openstack:01:d9a51732c3f", "links": [ { "href": "http://127.0.0.1:6385/v1/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "type": "iqn", "uuid": "9bf93e01-d728-47a3-ad4b-5e66a835037c" } ] } ironic-15.0.0/api-ref/source/samples/nodes-list-details-response.json0000664000175000017500000001500613652514273025636 0ustar zuulzuul00000000000000{ "nodes": [ { "allocation_uuid": "5344a3e2-978a-444e-990a-cbf47c62ef88", "boot_interface": null, "chassis_uuid": null, "clean_step": {}, "conductor": "compute1.localdomain", "conductor_group": "group-1", "console_enabled": false, "console_interface": null, "created_at": "2016-08-18T22:28:48.643434+11:11", "deploy_interface": null, "deploy_step": {}, "description": null, "driver": "fake", "driver_info": { "ipmi_password": "******", "ipmi_username": "ADMIN" }, "driver_internal_info": { "clean_steps": null }, "extra": {}, "inspect_interface": null, "inspection_finished_at": null, "inspection_started_at": null, "instance_info": {}, "instance_uuid": 
"5344a3e2-978a-444e-990a-cbf47c62ef88", "last_error": null, "lessee": null, "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "bookmark" } ], "maintenance": false, "maintenance_reason": null, "management_interface": null, "name": "test_node_classic", "network_interface": "flat", "owner": "john doe", "portgroups": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "bookmark" } ], "ports": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "bookmark" } ], "power_interface": null, "power_state": "power off", "properties": {}, "protected": false, "protected_reason": null, "provision_state": "available", "provision_updated_at": "2016-08-18T22:28:49.946416+00:00", "raid_config": {}, "raid_interface": null, "rescue_interface": null, "reservation": null, "resource_class": null, "retired": false, "retired_reason": null, "states": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "bookmark" } ], "storage_interface": "noop", "target_power_state": null, "target_provision_state": null, "target_raid_config": {}, "traits": [], "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "vendor_interface": null, "volume": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "bookmark" } ] }, { "allocation_uuid": 
null, "boot_interface": "pxe", "chassis_uuid": null, "clean_step": {}, "conductor": "compute1.localdomain", "conductor_group": "", "console_enabled": false, "console_interface": "no-console", "created_at": "2016-08-18T22:28:48.643434+11:11", "deploy_interface": "iscsi", "deploy_step": {}, "driver": "ipmi", "driver_info": { "ipmi_password": "******", "ipmi_username": "ADMIN" }, "driver_internal_info": {}, "extra": {}, "inspect_interface": "no-inspect", "inspection_finished_at": null, "inspection_started_at": null, "instance_info": {}, "instance_uuid": null, "last_error": null, "lessee": null, "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/2b045129-a906-46af-bc1a-092b294b3428", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/2b045129-a906-46af-bc1a-092b294b3428", "rel": "bookmark" } ], "maintenance": false, "maintenance_reason": null, "management_interface": "ipmitool", "name": "test_node_dynamic", "network_interface": "flat", "owner": "43e61ec9-8e42-4dcb-bc45-30d66aa93e5b", "portgroups": [ { "href": "http://127.0.0.1:6385/v1/nodes/2b045129-a906-46af-bc1a-092b294b3428/portgroups", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/2b045129-a906-46af-bc1a-092b294b3428/portgroups", "rel": "bookmark" } ], "ports": [ { "href": "http://127.0.0.1:6385/v1/nodes/2b045129-a906-46af-bc1a-092b294b3428/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/2b045129-a906-46af-bc1a-092b294b3428/ports", "rel": "bookmark" } ], "power_interface": "ipmitool", "power_state": null, "properties": {}, "protected": false, "protected_reason": null, "provision_state": "enroll", "provision_updated_at": null, "raid_config": {}, "raid_interface": "no-raid", "rescue_interface": "no-rescue", "reservation": null, "resource_class": null, "retired": false, "retired_reason": null, "states": [ { "href": "http://127.0.0.1:6385/v1/nodes/2b045129-a906-46af-bc1a-092b294b3428/states", "rel": "self" }, { "href": 
"http://127.0.0.1:6385/nodes/2b045129-a906-46af-bc1a-092b294b3428/states", "rel": "bookmark" } ], "storage_interface": "noop", "target_power_state": null, "target_provision_state": null, "target_raid_config": {}, "traits": [], "updated_at": null, "uuid": "2b045129-a906-46af-bc1a-092b294b3428", "vendor_interface": "no-vendor", "volume": [ { "href": "http://127.0.0.1:6385/v1/nodes/2b045129-a906-46af-bc1a-092b294b3428/volume", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/2b045129-a906-46af-bc1a-092b294b3428/volume", "rel": "bookmark" } ] } ] } ironic-15.0.0/api-ref/source/samples/node-inject-nmi.json0000664000175000017500000000000313652514273023245 0ustar zuulzuul00000000000000{} ironic-15.0.0/api-ref/source/samples/port-update-request.json0000664000175000017500000000014513652514273024226 0ustar zuulzuul00000000000000[ { "path" : "/address", "value" : "22:22:22:22:22:22", "op" : "replace" } ] ironic-15.0.0/api-ref/source/samples/deploy-template-list-response.json0000664000175000017500000000071713652514273026213 0ustar zuulzuul00000000000000{ "deploy_templates": [ { "links": [ { "href": "http://10.60.253.180:6385/v1/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "self" }, { "href": "http://10.60.253.180:6385/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "bookmark" } ], "name": "CUSTOM_HYPERTHREADING_ON", "uuid": "bbb45f41-d4bc-4307-8d1d-32f95ce1e920" } ] } ironic-15.0.0/api-ref/source/samples/volume-connector-list-detail-response.json0000664000175000017500000000122013652514273027633 0ustar zuulzuul00000000000000{ "connectors": [ { "connector_id": "iqn.2017-07.org.openstack:01:d9a51732c3f", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "bookmark" } ], "node_uuid": 
"6d85703a-565d-469a-96ce-30b6de53079d", "type": "iqn", "updated_at": null, "uuid": "9bf93e01-d728-47a3-ad4b-5e66a835037c" } ] } ironic-15.0.0/api-ref/source/samples/portgroup-list-detail-response.json0000664000175000017500000000205113652514273026400 0ustar zuulzuul00000000000000{ "portgroups": [ { "address": "11:11:11:11:11:11", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "bookmark" } ], "mode": "active-backup", "name": "test_portgroup", "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "ports": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "bookmark" } ], "properties": {}, "standalone_ports_supported": true, "updated_at": null, "uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a" } ] } ironic-15.0.0/api-ref/source/samples/node-get-boot-device-response.json0000664000175000017500000000006213652514273026026 0ustar zuulzuul00000000000000{ "boot_device": "pxe", "persistent": false } ironic-15.0.0/api-ref/source/samples/port-list-detail-response.json0000664000175000017500000000162013652514273025324 0ustar zuulzuul00000000000000{ "ports": [ { "address": "11:11:11:11:11:11", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "is_smartnic": true, "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "local_link_connection": { "port_id": "Ethernet3/1", "switch_id": "0a:1b:2c:3d:4e:5f", "switch_info": "switch1" }, "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "physical_network": 
"physnet1", "portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a", "pxe_enabled": true, "updated_at": null, "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ] } ironic-15.0.0/api-ref/source/samples/node-update-driver.json0000664000175000017500000000012713652514273023772 0ustar zuulzuul00000000000000[ { "op" : "replace", "path" : "/driver", "value" : "fake" } ] ironic-15.0.0/api-ref/source/samples/allocation-create-response.json0000664000175000017500000000111113652514273025510 0ustar zuulzuul00000000000000{ "candidate_nodes": [], "created_at": "2019-02-20T09:43:58+00:00", "extra": {}, "last_error": null, "links": [ { "href": "http://127.0.0.1:6385/v1/allocations/5344a3e2-978a-444e-990a-cbf47c62ef88", "rel": "self" }, { "href": "http://127.0.0.1:6385/allocations/5344a3e2-978a-444e-990a-cbf47c62ef88", "rel": "bookmark" } ], "name": "allocation-1", "node_uuid": null, "owner": null, "resource_class": "bm-large", "state": "allocating", "traits": [], "updated_at": null, "uuid": "5344a3e2-978a-444e-990a-cbf47c62ef88" } ironic-15.0.0/api-ref/source/samples/node-port-list-response.json0000664000175000017500000000064213652514273025012 0ustar zuulzuul00000000000000{ "ports": [ { "address": "22:22:22:22:22:22", "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ] } ironic-15.0.0/api-ref/source/samples/chassis-update-request.json0000664000175000017500000000015413652514273024677 0ustar zuulzuul00000000000000[ { "op": "replace", "path": "/description", "value": "Updated Chassis" } ] ironic-15.0.0/api-ref/source/samples/deploy-template-show-response.json0000664000175000017500000000132713652514273026216 0ustar zuulzuul00000000000000{ "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": 
"http://10.60.253.180:6385/v1/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "self" }, { "href": "http://10.60.253.180:6385/deploy_templates/bbb45f41-d4bc-4307-8d1d-32f95ce1e920", "rel": "bookmark" } ], "name": "CUSTOM_HYPERTHREADING_ON", "steps": [ { "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "interface": "bios", "priority": 150, "step": "apply_configuration" } ], "updated_at": null, "uuid": "bbb45f41-d4bc-4307-8d1d-32f95ce1e920" } ironic-15.0.0/api-ref/source/samples/volume-target-update-request.json0000664000175000017500000000017213652514273026035 0ustar zuulzuul00000000000000[ { "path" : "/volume_id", "value" : "7211f7d3-3f32-4efc-b64e-9b8e92e64a8e", "op" : "replace" } ] ironic-15.0.0/api-ref/source/samples/port-create-request.json0000664000175000017500000000055613652514273024215 0ustar zuulzuul00000000000000{ "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a", "address": "11:11:11:11:11:11", "is_smartnic": true, "local_link_connection": { "switch_id": "0a:1b:2c:3d:4e:5f", "port_id": "Ethernet3/1", "switch_info": "switch1" }, "physical_network": "physnet1" } ironic-15.0.0/api-ref/source/samples/node-get-state-response.json0000664000175000017500000000044413652514273024752 0ustar zuulzuul00000000000000{ "console_enabled": false, "last_error": null, "power_state": "power off", "provision_state": "available", "provision_updated_at": "2016-08-18T22:28:49.946416+00:00", "raid_config": {}, "target_power_state": null, "target_provision_state": null, "target_raid_config": {} } ironic-15.0.0/api-ref/source/samples/node-port-detail-response.json0000664000175000017500000000165613652514273025307 0ustar zuulzuul00000000000000{ "ports": [ { "address": "22:22:22:22:22:22", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "is_smartnic": true, "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", 
"rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "local_link_connection": { "port_id": "Ethernet3/1", "switch_id": "0a:1b:2c:3d:4e:5f", "switch_info": "switch1" }, "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "physical_network": "physnet1", "portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a", "pxe_enabled": true, "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ] } ironic-15.0.0/api-ref/source/samples/volume-connector-create-response.json0000664000175000017500000000105213652514273026666 0ustar zuulzuul00000000000000{ "connector_id": "iqn.2017-07.org.openstack:01:d9a51732c3f", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "type": "iqn", "updated_at": null, "uuid": "9bf93e01-d728-47a3-ad4b-5e66a835037c" } ironic-15.0.0/api-ref/source/samples/allocation-update-request.json0000664000175000017500000000013213652514273025363 0ustar zuulzuul00000000000000[ { "op": "add", "path": "/extra/foo", "value": "bar" } ] ironic-15.0.0/api-ref/source/samples/node-set-boot-device.json0000664000175000017500000000006613652514273024212 0ustar zuulzuul00000000000000{ "boot_device": "pxe", "persistent": false } ironic-15.0.0/api-ref/source/samples/node-set-soft-power-off.json0000664000175000017500000000006713652514273024670 0ustar zuulzuul00000000000000{ "target": "soft power off", "timeout": 300 } ironic-15.0.0/api-ref/source/samples/allocation-show-response.json0000664000175000017500000000117613652514273025240 0ustar zuulzuul00000000000000{ "candidate_nodes": [], "created_at": "2019-02-20T09:43:58+00:00", "extra": {}, "last_error": null, "links": [ { "href": 
"http://127.0.0.1:6385/v1/allocations/5344a3e2-978a-444e-990a-cbf47c62ef88", "rel": "self" }, { "href": "http://127.0.0.1:6385/allocations/5344a3e2-978a-444e-990a-cbf47c62ef88", "rel": "bookmark" } ], "name": "allocation-1", "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "owner": null, "resource_class": "bm-large", "state": "active", "traits": [], "updated_at": "2019-02-20T09:43:58+00:00", "uuid": "5344a3e2-978a-444e-990a-cbf47c62ef88" } ironic-15.0.0/api-ref/source/samples/drivers-list-response.json0000664000175000017500000000353113652514273024561 0ustar zuulzuul00000000000000{ "drivers": [ { "hosts": [ "897ab1dad809" ], "links": [ { "href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/agent_ipmitool", "rel": "bookmark" } ], "name": "agent_ipmitool", "properties": [ { "href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool/properties", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/agent_ipmitool/properties", "rel": "bookmark" } ], "type": "classic" }, { "hosts": [ "897ab1dad809" ], "links": [ { "href": "http://127.0.0.1:6385/v1/drivers/fake", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/fake", "rel": "bookmark" } ], "name": "fake", "properties": [ { "href": "http://127.0.0.1:6385/v1/drivers/fake/properties", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/fake/properties", "rel": "bookmark" } ], "type": "classic" }, { "hosts": [ "897ab1dad809" ], "links": [ { "href": "http://127.0.0.1:6385/v1/drivers/ipmi", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/ipmi", "rel": "bookmark" } ], "name": "ipmi", "properties": [ { "href": "http://127.0.0.1:6385/v1/drivers/ipmi/properties", "rel": "self" }, { "href": "http://127.0.0.1:6385/drivers/ipmi/properties", "rel": "bookmark" } ], "type": "dynamic" } ] } ironic-15.0.0/api-ref/source/samples/lookup-node-response.json0000664000175000017500000000137513652514273024372 0ustar zuulzuul00000000000000{ "config": { 
"heartbeat_timeout": 300, "metrics": { "backend": "noop", "global_prefix": null, "prepend_host": false, "prepend_host_reverse": true, "prepend_uuid": false }, "metrics_statsd": { "statsd_host": "localhost", "statsd_port": 8125 } }, "node": { "driver_internal_info": { "clean_steps": null }, "instance_info": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "bookmark" } ], "properties": {}, "uuid": "6d85703a-565d-469a-96ce-30b6de53079d" } } ironic-15.0.0/api-ref/source/samples/portgroup-create-response.json0000664000175000017500000000161313652514273025433 0ustar zuulzuul00000000000000{ "address": "11:11:11:11:11:11", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "bookmark" } ], "mode": "active-backup", "name": "test_portgroup", "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "ports": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "bookmark" } ], "properties": {}, "standalone_ports_supported": true, "updated_at": null, "uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a" } ironic-15.0.0/api-ref/source/samples/volume-connector-create-request.json0000664000175000017500000000022013652514273026514 0ustar zuulzuul00000000000000{ "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "type": "iqn", "connector_id": "iqn.2017-07.org.openstack:01:d9a51732c3f" } ironic-15.0.0/api-ref/source/samples/node-set-power-off.json0000664000175000017500000000003513652514273023712 0ustar zuulzuul00000000000000{ "target": "power off" 
}ironic-15.0.0/api-ref/source/samples/node-traits-list-response.json0000664000175000017500000000010013652514273025321 0ustar zuulzuul00000000000000{ "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX" ] } ironic-15.0.0/api-ref/source/samples/node-maintenance-request.json0000664000175000017500000000005413652514273025166 0ustar zuulzuul00000000000000{ "reason": "Replacing the hard drive" }ironic-15.0.0/api-ref/source/samples/portgroup-create-request.json0000664000175000017500000000017613652514273025270 0ustar zuulzuul00000000000000{ "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "address": "11:11:11:11:11:11", "name": "test_portgroup" } ironic-15.0.0/api-ref/source/samples/volume-target-create-request.json0000664000175000017500000000024713652514273026021 0ustar zuulzuul00000000000000{ "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "volume_type": "iscsi", "boot_index": 0, "volume_id": "04452bed-5367-4202-8bf5-de4335ac56d2" } ironic-15.0.0/api-ref/source/samples/volume-target-list-detail-response.json0000664000175000017500000000127013652514273027134 0ustar zuulzuul00000000000000{ "targets": [ { "boot_index": 0, "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "properties": {}, "updated_at": null, "uuid": "bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "volume_id": "04452bed-5367-4202-8bf5-de4335ac56d2", "volume_type": "iscsi" } ] } ironic-15.0.0/api-ref/source/samples/chassis-list-details-response.json0000664000175000017500000000150413652514273026161 0ustar zuulzuul00000000000000{ "chassis": [ { "created_at": "2016-08-18T22:28:48.643434+11:11", "description": "Sample chassis", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", 
"rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1", "rel": "bookmark" } ], "nodes": [ { "href": "http://127.0.0.1:6385/v1/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes", "rel": "self" }, { "href": "http://127.0.0.1:6385/chassis/dff29d23-1ded-43b4-8ae1-5eebb3e30de1/nodes", "rel": "bookmark" } ], "updated_at": null, "uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1" } ] } ironic-15.0.0/api-ref/source/samples/node-volume-connector-list-response.json0000664000175000017500000000105313652514273027322 0ustar zuulzuul00000000000000{ "connectors": [ { "connector_id": "iqn.2017-07.org.openstack:02:10190a4153e", "links": [ { "href": "http://127.0.0.1:6385/v1/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/connectors/9bf93e01-d728-47a3-ad4b-5e66a835037c", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "type": "iqn", "uuid": "9bf93e01-d728-47a3-ad4b-5e66a835037c" } ] } ironic-15.0.0/api-ref/source/samples/node-update-driver-info-request.json0000664000175000017500000000054413652514273026414 0ustar zuulzuul00000000000000[ { "op": "replace", "path": "/driver_info/ipmi_username", "value": "OPERATOR" }, { "op": "add", "path": "/driver_info/deploy_kernel", "value": "http://127.0.0.1/images/kernel" }, { "op": "add", "path": "/driver_info/deploy_ramdisk", "value": "http://127.0.0.1/images/ramdisk" } ] ironic-15.0.0/api-ref/source/samples/volume-target-create-response.json0000664000175000017500000000111513652514273026162 0ustar zuulzuul00000000000000{ "boot_index": 0, "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "properties": {}, "updated_at": null, 
"uuid": "bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "volume_id": "04452bed-5367-4202-8bf5-de4335ac56d2", "volume_type": "iscsi" } ironic-15.0.0/api-ref/source/samples/node-bios-detail-response.json0000664000175000017500000000076713652514273025261 0ustar zuulzuul00000000000000{ "virtualization": { "created_at": "2016-08-18T22:28:49.653974+00:00", "updated_at": "2016-08-18T22:28:49.653974+00:00", "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/bios/virtualization", "rel": "self" }, { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/bios/virtualization", "rel": "bookmark" } ], "name": "virtualization", "value": "on" } } ironic-15.0.0/api-ref/source/samples/node-set-raid-request.json0000664000175000017500000000021013652514273024406 0ustar zuulzuul00000000000000{ "logical_disks" : [ { "size_gb" : 100, "is_root_volume" : true, "raid_level" : "1" } ] } ironic-15.0.0/api-ref/source/samples/allocation-update-response.json0000664000175000017500000000120213652514273025530 0ustar zuulzuul00000000000000{ "node_uuid": null, "uuid": "241db410-7b04-4b1c-87ae-4e336435db08", "links": [ { "href": "http://10.66.169.122/v1/allocations/241db410-7b04-4b1c-87ae-4e336435db08", "rel": "self" }, { "href": "http://10.66.169.122/allocations/241db410-7b04-4b1c-87ae-4e336435db08", "rel": "bookmark" } ], "extra": { "foo": "bar" }, "last_error": null, "created_at": "2019-06-04T07:46:25+00:00", "owner": null, "resource_class": "CUSTOM_GOLD", "updated_at": "2019-06-06T03:28:19.496960+00:00", "traits": [], "state": "error", "candidate_nodes": [], "name": "test_allocation" } ironic-15.0.0/api-ref/source/samples/node-show-response.json0000664000175000017500000000550413652514273024037 0ustar zuulzuul00000000000000{ "allocation_uuid": null, "boot_interface": null, "chassis_uuid": null, "clean_step": {}, "conductor": "compute1.localdomain", "conductor_group": "group-1", "console_enabled": false, "console_interface": null, "created_at": 
"2016-08-18T22:28:48.643434+11:11", "deploy_interface": null, "deploy_step": {}, "description": null, "driver": "fake", "driver_info": { "ipmi_password": "******", "ipmi_username": "ADMIN" }, "driver_internal_info": { "clean_steps": null }, "extra": {}, "inspect_interface": null, "inspection_finished_at": null, "inspection_started_at": null, "instance_info": {}, "instance_uuid": null, "last_error": null, "lessee": null, "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "bookmark" } ], "maintenance": false, "maintenance_reason": null, "management_interface": null, "name": "test_node_classic", "network_interface": "flat", "owner": null, "portgroups": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "bookmark" } ], "ports": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "bookmark" } ], "power_interface": null, "power_state": "power off", "properties": {}, "protected": false, "protected_reason": null, "provision_state": "available", "provision_updated_at": "2016-08-18T22:28:49.946416+00:00", "raid_config": {}, "raid_interface": null, "rescue_interface": null, "reservation": null, "resource_class": "bm-large", "retired": false, "retired_reason": null, "states": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "bookmark" } ], "storage_interface": "noop", "target_power_state": null, "target_provision_state": null, "target_raid_config": {}, "traits": [], "updated_at": 
"2016-08-18T22:28:49.653974+00:00", "uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "vendor_interface": null, "volume": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "bookmark" } ] } ironic-15.0.0/api-ref/source/samples/node-update-driver-info-response.json0000664000175000017500000000573413652514273026570 0ustar zuulzuul00000000000000{ "allocation_uuid": null, "boot_interface": null, "chassis_uuid": null, "clean_step": {}, "conductor": "compute1.localdomain", "conductor_group": "group-1", "console_enabled": false, "console_interface": null, "created_at": "2016-08-18T22:28:48.643434+11:11", "deploy_interface": null, "deploy_step": {}, "driver": "fake", "driver_info": { "deploy_kernel": "http://127.0.0.1/images/kernel", "deploy_ramdisk": "http://127.0.0.1/images/ramdisk", "ipmi_password": "******", "ipmi_username": "OPERATOR" }, "driver_internal_info": { "clean_steps": null }, "extra": {}, "inspect_interface": null, "inspection_finished_at": null, "inspection_started_at": null, "instance_info": {}, "instance_uuid": null, "last_error": null, "lessee": null, "links": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "bookmark" } ], "maintenance": true, "maintenance_reason": "Replacing the hard drive", "management_interface": null, "name": "test_node_classic", "network_interface": "flat", "owner": null, "portgroups": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "bookmark" } ], "ports": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "self" }, { "href": 
"http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "bookmark" } ], "power_interface": null, "power_state": "power off", "properties": {}, "protected": false, "protected_reason": null, "provision_state": "available", "provision_updated_at": "2016-08-18T22:28:49.946416+00:00", "raid_config": {}, "raid_interface": null, "rescue_interface": null, "reservation": null, "resource_class": null, "retired": false, "retired_reason": null, "states": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "bookmark" } ], "storage_interface": "noop", "target_power_state": null, "target_provision_state": null, "target_raid_config": {}, "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX" ], "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "vendor_interface": null, "volume": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "bookmark" } ] } ironic-15.0.0/api-ref/source/samples/node-create-response.json0000664000175000017500000000530613652514273024322 0ustar zuulzuul00000000000000{ "allocation_uuid": null, "boot_interface": null, "chassis_uuid": null, "clean_step": {}, "conductor_group": "group-1", "console_enabled": false, "console_interface": null, "created_at": "2016-08-18T22:28:48.643434+11:11", "deploy_interface": null, "deploy_step": {}, "description": null, "driver": "agent_ipmitool", "driver_info": { "ipmi_password": "******", "ipmi_username": "ADMIN" }, "driver_internal_info": {}, "extra": {}, "inspect_interface": null, "inspection_finished_at": null, "inspection_started_at": null, "instance_info": {}, "instance_uuid": null, "last_error": null, "lessee": null, "links": [ { "href": 
"http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d", "rel": "bookmark" } ], "maintenance": false, "maintenance_reason": null, "management_interface": null, "name": "test_node_classic", "network_interface": "flat", "owner": null, "portgroups": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/portgroups", "rel": "bookmark" } ], "ports": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/ports", "rel": "bookmark" } ], "power_interface": null, "power_state": null, "properties": {}, "protected": false, "protected_reason": null, "provision_state": "enroll", "provision_updated_at": null, "raid_config": {}, "raid_interface": null, "rescue_interface": null, "reservation": null, "resource_class": "bm-large", "retired": false, "retired_reason": null, "states": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/states", "rel": "bookmark" } ], "storage_interface": "noop", "target_power_state": null, "target_provision_state": null, "target_raid_config": {}, "traits": [], "updated_at": null, "uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "vendor_interface": null, "volume": [ { "href": "http://127.0.0.1:6385/v1/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "self" }, { "href": "http://127.0.0.1:6385/nodes/6d85703a-565d-469a-96ce-30b6de53079d/volume", "rel": "bookmark" } ] } ironic-15.0.0/api-ref/source/samples/driver-property-response.json0000664000175000017500000000513213652514273025306 0ustar zuulzuul00000000000000{ "deploy_forces_oob_reboot": "Whether Ironic 
should force a reboot of the Node via the out-of-band channel after deployment is complete. Provides compatibility with older deploy ramdisks. Defaults to False. Optional.", "deploy_kernel": "UUID (from Glance) of the deployment kernel. Required.", "deploy_ramdisk": "UUID (from Glance) of the ramdisk that is mounted at boot time. Required.", "image_http_proxy": "URL of a proxy server for HTTP connections. Optional.", "image_https_proxy": "URL of a proxy server for HTTPS connections. Optional.", "image_no_proxy": "A comma-separated list of host names, IP addresses and domain names (with optional :port) that will be excluded from proxying. To denote a domain name, use a dot to prefix the domain name. This value will be ignored if ``image_http_proxy`` and ``image_https_proxy`` are not specified. Optional.", "ipmi_address": "IP address or hostname of the node. Required.", "ipmi_bridging": "bridging_type; default is \"no\". One of \"single\", \"dual\", \"no\". Optional.", "ipmi_disable_boot_timeout": "By default ironic will send a raw IPMI command to disable the 60 second timeout for booting. Setting this option to False will NOT send that command; default value is True. Optional.", "ipmi_force_boot_device": "Whether Ironic should specify the boot device to the BMC each time the server is turned on, eg. because the BMC is not capable of remembering the selected boot device across power cycles; default value is False. Optional.", "ipmi_local_address": "local IPMB address for bridged requests. Used only if ipmi_bridging is set to \"single\" or \"dual\". Optional.", "ipmi_password": "password. Optional.", "ipmi_port": "remote IPMI RMCP port. Optional.", "ipmi_priv_level": "privilege level; default is ADMINISTRATOR. One of ADMINISTRATOR, CALLBACK, OPERATOR, USER. Optional.", "ipmi_protocol_version": "the version of the IPMI protocol; default is \"2.0\". One of \"1.5\", \"2.0\". Optional.", "ipmi_target_address": "destination address for bridged request. 
Required only if ipmi_bridging is set to \"single\" or \"dual\".", "ipmi_target_channel": "destination channel for bridged request. Required only if ipmi_bridging is set to \"single\" or \"dual\".", "ipmi_terminal_port": "node's UDP port to connect to. Only required for console access.", "ipmi_transit_address": "transit address for bridged request. Required only if ipmi_bridging is set to \"dual\".", "ipmi_transit_channel": "transit channel for bridged request. Required only if ipmi_bridging is set to \"dual\".", "ipmi_username": "username; default is NULL user. Optional." } ironic-15.0.0/api-ref/source/samples/volume-target-list-response.json0000664000175000017500000000107313652514273025675 0ustar zuulzuul00000000000000{ "targets": [ { "boot_index": 0, "links": [ { "href": "http://127.0.0.1:6385/v1/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "self" }, { "href": "http://127.0.0.1:6385/volume/targets/bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "rel": "bookmark" } ], "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "uuid": "bd4d008c-7d31-463d-abf9-6c23d9d55f7f", "volume_id": "04452bed-5367-4202-8bf5-de4335ac56d2", "volume_type": "iscsi" } ] } ironic-15.0.0/api-ref/source/samples/node-vif-attach-request.json0000664000175000017500000000006513652514273024734 0ustar zuulzuul00000000000000{ "id": "1974dcfa-836f-41b2-b541-686c100900e5" } ironic-15.0.0/api-ref/source/samples/portgroup-port-list-response.json0000664000175000017500000000064213652514273026126 0ustar zuulzuul00000000000000{ "ports": [ { "address": "22:22:22:22:22:22", "links": [ { "href": "http://127.0.0.1:6385/v1/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "self" }, { "href": "http://127.0.0.1:6385/ports/d2b30520-907d-46c8-bfee-c5586e6fb3a1", "rel": "bookmark" } ], "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1" } ] } ironic-15.0.0/api-ref/source/samples/portgroup-update-response.json0000664000175000017500000000165113652514273025454 0ustar zuulzuul00000000000000{ "address": 
"22:22:22:22:22:22", "created_at": "2016-08-18T22:28:48.643434+11:11", "extra": {}, "internal_info": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a", "rel": "bookmark" } ], "mode": "active-backup", "name": "test_portgroup", "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "ports": [ { "href": "http://127.0.0.1:6385/v1/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/e43c722c-248e-4c6e-8ce8-0d8ff129387a/ports", "rel": "bookmark" } ], "properties": {}, "standalone_ports_supported": true, "updated_at": "2016-08-18T22:28:49.653974+00:00", "uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a" } ironic-15.0.0/api-ref/source/index.rst0000664000175000017500000000231313652514273017575 0ustar zuulzuul00000000000000:tocdepth: 2 ================ Bare Metal API ================ .. rest_expand_all:: .. include:: baremetal-api-versions.inc .. include:: baremetal-api-v1-nodes.inc .. include:: baremetal-api-v1-node-management.inc .. include:: baremetal-api-v1-node-passthru.inc .. include:: baremetal-api-v1-nodes-traits.inc .. include:: baremetal-api-v1-nodes-vifs.inc .. include:: baremetal-api-v1-portgroups.inc .. include:: baremetal-api-v1-nodes-portgroups.inc .. include:: baremetal-api-v1-ports.inc .. include:: baremetal-api-v1-nodes-ports.inc .. include:: baremetal-api-v1-portgroups-ports.inc .. include:: baremetal-api-v1-volume.inc .. include:: baremetal-api-v1-nodes-volume.inc .. include:: baremetal-api-v1-drivers.inc .. include:: baremetal-api-v1-driver-passthru.inc .. include:: baremetal-api-v1-nodes-bios.inc .. include:: baremetal-api-v1-conductors.inc .. include:: baremetal-api-v1-allocation.inc .. include:: baremetal-api-v1-node-allocation.inc .. include:: baremetal-api-v1-deploy-templates.inc .. 
NOTE(dtantsur): keep chassis close to the end since it's semi-deprecated .. include:: baremetal-api-v1-chassis.inc .. NOTE(dtantsur): keep misc last, since it covers internal API .. include:: baremetal-api-v1-misc.inc ironic-15.0.0/api-ref/regenerate-samples.sh0000775000175000017500000003021213652514273020555 0ustar zuulzuul00000000000000#!/bin/bash set -e -x if [ ! -x /usr/bin/jq ]; then echo "This script relies on 'jq' to process JSON output." echo "Please install it before continuing." exit 1 fi OS_AUTH_TOKEN=$(openstack token issue | grep ' id ' | awk '{print $4}') IRONIC_URL="http://127.0.0.1:6385" IRONIC_API_VERSION="1.55" export OS_AUTH_TOKEN IRONIC_URL DOC_BIOS_UUID="dff29d23-1ded-43b4-8ae1-5eebb3e30de1" DOC_CHASSIS_UUID="dff29d23-1ded-43b4-8ae1-5eebb3e30de1" DOC_NODE_UUID="6d85703a-565d-469a-96ce-30b6de53079d" DOC_DYNAMIC_NODE_UUID="2b045129-a906-46af-bc1a-092b294b3428" DOC_PORT_UUID="d2b30520-907d-46c8-bfee-c5586e6fb3a1" DOC_PORTGROUP_UUID="e43c722c-248e-4c6e-8ce8-0d8ff129387a" DOC_VOL_CONNECTOR_UUID="9bf93e01-d728-47a3-ad4b-5e66a835037c" DOC_VOL_TARGET_UUID="bd4d008c-7d31-463d-abf9-6c23d9d55f7f" DOC_PROVISION_UPDATED_AT="2016-08-18T22:28:49.946416+00:00" DOC_CREATED_AT="2016-08-18T22:28:48.643434+11:11" DOC_UPDATED_AT="2016-08-18T22:28:49.653974+00:00" DOC_IRONIC_CONDUCTOR_HOSTNAME="897ab1dad809" DOC_ALLOCATION_UUID="3bf138ba-6d71-44e7-b6a1-ca9cac17103e" DOC_DEPLOY_TEMPLATE_UUID="bbb45f41-d4bc-4307-8d1d-32f95ce1e920" function GET { # GET $RESOURCE curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \ -H "X-OpenStack-Ironic-API-Version: $IRONIC_API_VERSION" \ ${IRONIC_URL}/$1 | jq -S '.' } function POST { # POST $RESOURCE $FILENAME curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \ -H "X-OpenStack-Ironic-API-Version: $IRONIC_API_VERSION" \ -H "Content-Type: application/json" \ -X POST --data @$2 \ ${IRONIC_URL}/$1 | jq -S '.' 
} function PATCH { # PATCH $RESOURCE $FILENAME curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \ -H "X-OpenStack-Ironic-API-Version: $IRONIC_API_VERSION" \ -H "Content-Type: application/json" \ -X PATCH --data @$2 \ ${IRONIC_URL}/$1 | jq -S '.' } function PUT { # PUT $RESOURCE $FILENAME curl -s -H "X-Auth-Token: $OS_AUTH_TOKEN" \ -H "X-OpenStack-Ironic-API-Version: $IRONIC_API_VERSION" \ -H "Content-Type: application/json" \ -X PUT --data @$2 \ ${IRONIC_URL}/$1 } function wait_for_node_state { local node="$1" local field="$2" local target_state="$3" local attempt=10 while [[ $attempt -gt 0 ]]; do res=$(openstack baremetal node show "$node" -f value -c "$field") if [[ "$res" == "$target_state" ]]; then break fi sleep 1 attempt=$((attempt - 1)) done if [[ $attempt == 0 ]]; then echo "Failed to get node $field == $target_state after 10 attempts." exit 1 fi } pushd source/samples ########### # ROOT APIs GET '' > api-root-response.json GET 'v1' > api-v1-root-response.json ########### # DRIVER APIs GET v1/drivers > drivers-list-response.json GET v1/drivers?detail=true > drivers-list-detail-response.json GET v1/drivers/ipmi > driver-get-response.json GET v1/drivers/agent_ipmitool/properties > driver-property-response.json GET v1/drivers/agent_ipmitool/raid/logical_disk_properties > driver-logical-disk-properties-response.json ######### # CHASSIS POST v1/chassis chassis-create-request.json > chassis-show-response.json CID=$(cat chassis-show-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$CID" == "" ]; then exit 1 else echo "Chassis created. 
UUID: $CID" fi GET v1/chassis > chassis-list-response.json GET v1/chassis/detail > chassis-list-details-response.json PATCH v1/chassis/$CID chassis-update-request.json > chassis-update-response.json # skip GET /v1/chassis/$UUID because the response is same as POST ####### # NODES # Create a node with a real driver, but missing ipmi_address, # then do basic commands with it POST v1/nodes node-create-request-classic.json > node-create-response.json NID=$(cat node-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$NID" == "" ]; then exit 1 else echo "Node created. UUID: $NID" fi # Also create a node with a dynamic driver for viewing in the node list # endpoint DNID=$(POST v1/nodes node-create-request-dynamic.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$DNID" == "" ]; then exit 1 else echo "Node created. UUID: $DNID" fi # get the list of passthru methods from agent* driver GET v1/nodes/$NID/vendor_passthru/methods > node-vendor-passthru-response.json # Change to the fake driver and then move the node into the AVAILABLE # state without saving any output. 
# NOTE that these three JSON files are not included in the docs PATCH v1/nodes/$NID node-update-driver.json PUT v1/nodes/$NID/states/provision node-set-manage-state.json PUT v1/nodes/$NID/states/provision node-set-available-state.json # Wait for the node to become available wait_for_node_state $NID provision_state available GET v1/nodes/$NID/validate > node-validate-response.json PUT v1/nodes/$NID/states/power node-set-power-off.json # Wait for the node to reach the power off state wait_for_node_state $NID power_state "power off" GET v1/nodes/$NID/states > node-get-state-response.json GET v1/nodes > nodes-list-response.json GET v1/nodes/detail > nodes-list-details-response.json GET v1/nodes/$NID > node-show-response.json # Node traits PUT v1/nodes/$NID/traits node-set-traits-request.json GET v1/nodes/$NID/traits > node-traits-list-response.json ############ # ALLOCATIONS POST v1/allocations allocation-create-request.json > allocation-create-response.json AID=$(cat allocation-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$AID" == "" ]; then exit 1 else echo "Allocation created. 
UUID: $AID" fi # Create a failed allocation for listing POST v1/allocations allocation-create-request-2.json # Poor man's wait_for_allocation sleep 1 GET v1/allocations > allocations-list-response.json GET v1/allocations/$AID > allocation-show-response.json GET v1/nodes/$NID/allocation > node-allocation-show-response.json ############ # NODES - MAINTENANCE # Do this after allocation API to be able to create successful allocations PUT v1/nodes/$NID/maintenance node-maintenance-request.json ############ # PORTGROUPS # Before we can create a portgroup, we must # write NODE ID into the create request document body sed -i "s/.*node_uuid.*/ \"node_uuid\": \"$NID\",/" portgroup-create-request.json POST v1/portgroups portgroup-create-request.json > portgroup-create-response.json PGID=$(cat portgroup-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$PGID" == "" ]; then exit 1 else echo "Portgroup created. UUID: $PGID" fi GET v1/portgroups > portgroup-list-response.json GET v1/portgroups/detail > portgroup-list-detail-response.json PATCH v1/portgroups/$PGID portgroup-update-request.json > portgroup-update-response.json # skip GET $PGID because same result as POST # skip DELETE ########### # PORTS # Before we can create a port, we must # write NODE ID and PORTGROUP ID into the create request document body sed -i "s/.*node_uuid.*/ \"node_uuid\": \"$NID\",/" port-create-request.json sed -i "s/.*portgroup_uuid.*/ \"portgroup_uuid\": \"$PGID\",/" port-create-request.json POST v1/ports port-create-request.json > port-create-response.json PID=$(cat port-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$PID" == "" ]; then exit 1 else echo "Port created. 
UUID: $PID" fi GET v1/ports > port-list-response.json GET v1/ports/detail > port-list-detail-response.json PATCH v1/ports/$PID port-update-request.json > port-update-response.json # skip GET $PID because same result as POST # skip DELETE ################ # NODE PORT APIs GET v1/nodes/$NID/ports > node-port-list-response.json GET v1/nodes/$NID/ports/detail > node-port-detail-response.json ##################### # NODE PORTGROUP APIs GET v1/nodes/$NID/portgroups > node-portgroup-list-response.json GET v1/nodes/$NID/portgroups/detail > node-portgroup-detail-response.json ##################### # PORTGROUPS PORT APIs GET v1/portgroups/$PGID/ports > portgroup-port-list-response.json GET v1/portgroups/$PGID/ports/detail > portgroup-port-detail-response.json ############ # LOOKUP API GET v1/lookup?node_uuid=$NID > lookup-node-response.json ##################### # NODES MANAGEMENT API # These need to be done while the node is in maintenance mode, # and the node's driver is "fake", to avoid potential races # with internal processes that lock the Node # this corrects an intentional omission in some of the samples PATCH v1/nodes/$NID node-update-driver-info-request.json > node-update-driver-info-response.json GET v1/nodes/$NID/management/boot_device/supported > node-get-supported-boot-devices-response.json PUT v1/nodes/$NID/management/boot_device node-set-boot-device.json GET v1/nodes/$NID/management/boot_device > node-get-boot-device-response.json PUT v1/nodes/$NID/management/inject_nmi node-inject-nmi.json ############################# # NODES VIF ATTACH/DETACH API POST v1/nodes/$NID/vifs node-vif-attach-request.json GET v1/nodes/$NID/vifs > node-vif-list-response.json ############# # VOLUME APIs GET v1/volume/ > volume-list-response.json sed -i "s/.*node_uuid.*/ \"node_uuid\": \"$NID\",/" volume-connector-create-request.json POST v1/volume/connectors volume-connector-create-request.json > volume-connector-create-response.json VCID=$(cat volume-connector-create-response.json 
| grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$VCID" == "" ]; then exit 1 else echo "Volume connector created. UUID: $VCID" fi GET v1/volume/connectors > volume-connector-list-response.json GET v1/volume/connectors?detail=True > volume-connector-list-detail-response.json PATCH v1/volume/connectors/$VCID volume-connector-update-request.json > volume-connector-update-response.json sed -i "s/.*node_uuid.*/ \"node_uuid\": \"$NID\",/" volume-target-create-request.json POST v1/volume/targets volume-target-create-request.json > volume-target-create-response.json VTID=$(cat volume-target-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$VTID" == "" ]; then exit 1 else echo "Volume target created. UUID: $VTID" fi GET v1/volume/targets > volume-target-list-response.json GET v1/volume/targets?detail=True > volume-target-list-detail-response.json PATCH v1/volume/targets/$VTID volume-target-update-request.json > volume-target-update-response.json ################## # NODE VOLUME APIs GET v1/nodes/$NID/volume > node-volume-list-response.json GET v1/nodes/$NID/volume/connectors > node-volume-connector-list-response.json GET v1/nodes/$NID/volume/connectors?detail=True > node-volume-connector-detail-response.json GET v1/nodes/$NID/volume/targets > node-volume-target-list-response.json GET v1/nodes/$NID/volume/targets?detail=True > node-volume-target-detail-response.json ################## # DEPLOY TEMPLATES POST v1/deploy_templates deploy-template-create-request.json > deploy-template-create-response.json DTID=$(cat deploy-template-create-response.json | grep '"uuid"' | sed 's/.*"\([0-9a-f\-]*\)",*/\1/') if [ "$DTID" == "" ]; then exit 1 else echo "Deploy template created. 
UUID: $DTID" fi GET v1/deploy_templates > deploy-template-list-response.json GET v1/deploy_templates?detail=True > deploy-template-detail-response.json GET v1/deploy_templates/$DTID > deploy-template-show-response.json PATCH v1/deploy_templates/$DTID deploy-template-update-request.json > deploy-template-update-response.json ##################### # Replace automatically generated UUIDs by already used in documentation sed -i "s/$BID/$DOC_BIOS_UUID/" *.json sed -i "s/$CID/$DOC_CHASSIS_UUID/" *.json sed -i "s/$NID/$DOC_NODE_UUID/" *.json sed -i "s/$DNID/$DOC_DYNAMIC_NODE_UUID/" *.json sed -i "s/$PID/$DOC_PORT_UUID/" *.json sed -i "s/$PGID/$DOC_PORTGROUP_UUID/" *.json sed -i "s/$VCID/$DOC_VOL_CONNECTOR_UUID/" *.json sed -i "s/$VTID/$DOC_VOL_TARGET_UUID/" *.json sed -i "s/$AID/$DOC_ALLOCATION_UUID/" *.json sed -i "s/$DTID/$DOC_DEPLOY_TEMPLATE_UUID/" *.json sed -i "s/$(hostname)/$DOC_IRONIC_CONDUCTOR_HOSTNAME/" *.json sed -i "s/created_at\": \".*\"/created_at\": \"$DOC_CREATED_AT\"/" *.json sed -i "s/updated_at\": \".*\"/updated_at\": \"$DOC_UPDATED_AT\"/" *.json sed -i "s/provision_updated_at\": \".*\"/provision_updated_at\": \"$DOC_PROVISION_UPDATED_AT\"/" *.json ironic-15.0.0/CONTRIBUTING.rst0000664000175000017500000000076413652514273015562 0ustar zuulzuul00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps documented at: https://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored since OpenStack projects use a Gerrit instance hosted on OpenDev. https://review.opendev.org Contributor documentation for the Ironic project can be found in the OpenStack Ironic documentation. 
https://docs.openstack.org/ironic/latest/contributor/ ironic-15.0.0/bindep.txt0000664000175000017500000000710213652514273015114 0ustar zuulzuul00000000000000# these are needed to run ironic with default ipmitool and (i)PXE boot drivers ipmitool [default] ipxe [platform:dpkg default] ipxe-bootimgs [platform:rpm default] open-iscsi [platform:dpkg default] socat [default] xinetd [default] tftpd-hpa [platform:dpkg default] tftp-server [platform:rpm default] # Starting with Debian Jessie (and thus in Ubuntu Xenial too), # pxelinux package provides the pxelinux.0 boot loader, # but such package is absent from Debian Wheezy / Ubuntu Trusty. # Also, in Debian Wheezy / Ubuntu Trusty 'syslinux' depends on syslinux-common, # but only recommends it in Jessie/Xenial. # Make sure syslinux-common is installed for those distros as it provides # *.c32 modules for syslinux # TODO remove distro pinning when Wheezy / Trusty are EOLed (May 2019) # or DevStack stops supporting those. # In the mean time, new Debian-based release codenames will have to be added # as distros can not be pinned with 'if-later-than' specified. 
pxelinux [platform:ubuntu-xenial platform:debian-jessie default] syslinux [platform:rpm platform:ubuntu-trusty platform:debian-wheezy default] syslinux-common [platform:ubuntu-xenial platform:debian-jessie default] socat [default] # Grub2 files for boot loading using PXE/GRUB2 shim [platform:dpkg default] grub-efi-amd64-signed [platform:dpkg default] # these are needed to create and access VMs when testing with virtual hardware libvirt-bin [platform:dpkg devstack] libvirt [platform:rpm devstack] libvirt-dev [platform:dpkg devstack] libvirt-devel [platform:rpm devstack] qemu [platform:dpkg devstack build-image-dib] qemu-kvm [platform:dpkg devstack] qemu-utils [platform:dpkg devstack build-image-dib] sgabios [devstack] ipxe-qemu [platform:dpkg devstack] edk2-ovmf [platform:rpm devstack] ipxe-roms-qemu [platform:rpm devstack] openvswitch [platform:rpm devstack] iptables [devstack] net-tools [platform:rpm devstack] # these are needed to compile Python dependencies from sources python-dev [platform:dpkg test] python3-all-dev [platform:dpkg !platform:ubuntu-precise test] python-devel [platform:rpm test] python3-devel [platform:rpm test] build-essential [platform:dpkg test] libssl-dev [platform:dpkg test] # these are needed by infra for python-* jobs libpq-dev [platform:dpkg test] postgresql postgresql-client [platform:dpkg] # postgresql-devel [platform:rpm] postgresql-server [platform:rpm] mariadb [platform:rpm] mariadb-server [platform:rpm] # mariadb-devel [platform:rpm] dev-db/mariadb [platform:gentoo] mysql-client [platform:dpkg] mysql-server [platform:dpkg] # libmysqlclient-dev [platform:dpkg] # gettext and graphviz are needed by doc builds only. For transition, # have them in both doc and test. # TODO(jaegerandi): Remove test once infra scripts are updated. # this is needed for compiling translations gettext [test doc] # this is needed to build the FSM diagram graphviz [!platform:gentoo test doc] # librsvg2 is needed for sphinxcontrib-svg2pdfconverter in docs builds. 
librsvg2-tools [doc platform:rpm] librsvg2-bin [doc platform:dpkg] # these are needed to build a deploy ramdisk # NOTE apparmor is an undeclared dependency for docker on ubuntu, # see https://github.com/docker/docker/issues/9745 apparmor [platform:dpkg imagebuild] docker.io [platform:dpkg imagebuild] docker-io [platform:rpm imagebuild] gnupg [imagebuild] squashfs-tools [platform:dpkg platform:redhat imagebuild] squashfs [platform:suse imagebuild] libguestfs0 [platform:dpkg imagebuild] libguestfs [platform:rpm imagebuild] python-guestfs [platform:dpkg imagebuild] # for TinyIPA build wget [imagebuild] python-pip [imagebuild] unzip [imagebuild] sudo [imagebuild] gawk [imagebuild] ironic-15.0.0/ChangeLog0000664000175000017500000074632713652514442014705 0ustar zuulzuul00000000000000CHANGES ======= 15.0.0 ------ * Add ironic-python-agent-builder to grenade projects and use netboot * Update python-dracclient version * Don't break UEFI install with older IPAs * Fix supported sushy-oem-idrac version * Implements: Reactive HUAWEI ibmc driver * Fix agent\_client handling of embedded errors * In-band deploy steps: correctly wipe driver\_internal\_info * Restore missing node.save() in agent\_base.py * Add link to other Redfish parms to iDRAC doc * Log when IPA fallback occurs on bootloader install * Delay validating deploy templates until we get all steps * Support executing in-band deploy steps * Upgrade flake8-import-order version to 0.17.1 * Stop configuring install\_command in tox * Prepare release notes/docs for 15.0 release * Ironic 15.0 prelude * DRAC: Added redfish management interface issue * Fix SpanLength calculation for DRAC RAID configuration * Fix RAID configuration with idrac-wsman interface * Revert "Generalize ISO building for virtual media driver" * Add ironic 15.0 release mapping * Fixes unusable Guru meditation report * Don't use wsme test webapp for patch tests * Centralise imports of wsme types * Update iDRAC doc about soft power off timeout * Implement 
the bios-interface for idrac-wsman driver * Improve the command status checks in the agent's process\_next\_step * Change [deploy]/default\_boot\_option to local * Update iDRAC doc about vendor passthru timeout * Use trailing slash in the agent command URL * Fix missing print format in log messages * Extend timeout on CI job with automated cleaning * Fix issue where server fails to reboot * Add my new address to .mailmap * "dual stack" support for PXE/iPXE * Generalize ISO building for virtual media driver * Remove six minions * Increase VM RAM value in local.conf example * Release reservation when stoping the ironic-conductor service * Update jobs description * Change default ram value * Added node multitenancy doc * Support burning configdrive into boot ISO * [doc] Remove the device selection limitation for Software RAID * Add sushy-cli to client libraries release list * Fix AttributeError in check allowed port fields * Fix gunicorn name on Py3@CentOS7 in devstack * Add node lessee field * Software RAID: Pass the boot mode to the IPA * Refactor AgentBase.heartbeat and process\_next\_step * [doc] Images need some metadata for software RAID * Drop netaddr - use netutils.is\_valid\_ipv6() * Allow INSPECTWAIT state for lookup * Improve \`redfish\` set-boot-device behaviour * Improve \`redfish\` set-boot-mode implementation * Change multinode job to voting * Cleanup Python 2.7 support * Use auth values from neutron conf when managing Neutron ports * Fetch netmiko session log * Doc - IPv6 Provisioning * Additional IP addresses to IPv6 stateful ports * Add network\_type to port local\_link\_connection * Make oslo.i18n an optional dependency * Make oslo.reports an optional dependency * Do not autoescape all Jinja2 templates * Make deploy step failure logging indicate the error * Fix the remaining hacking issues * Bump hacking to 3.0.0 * Extend install\_bootloader command timeout * Document deploy\_boot\_mode and boot\_option for standalone deployments * Remove future 
usage * Fix enabled\_hardware\_types from idrac-wsman to idrac * Document our policies for stable branches * Retry agent get\_command\_status upon failures * Add troubleshooting on IPMI section * Default IRONIC\_RAMDISK\_TYPE to dib * Generalize clean step functions to support deploy steps * Raise human-friendly messages on attempt to use pre-deploy steps drivers * Hash the rescue\_password * DRAC: Fix a failure to create virtual disk bug * [doc] Add documentation for retirement support * Add info on how to enable ironic-tempest-plugin * Follow-up releasenote use\_secrets * Add indicators REST API endpoints * Do not use random to generate token * Signal agent token is required * Support centos 7 rootwrap data directory * Refactoring: split out wrap\_ipv6 * Refactoring: move iSCSI deploy code to iscsi\_deploy.py * Clean up nits from adding additional node update policies * Allow specifying target devices for software RAID * Documentation clarifications for software RAID * Drop rootwrap.d/ironic-lib.filters file * Expand user-image doc * Move ipmi logging to a separate option * Change readfp to read\_file * Make image\_checksum optional if other checksum is present * Remove compatibility with pre-deploy steps drivers * Extend power sync timeout for Ericsson SDI * Skip clean steps from 'fake' interfaces in the documentation * Rename ironic-tox-unit-with-driver-libs-python3 * Send our token back to the agent * Enable agent\_token for virtual media boot * Add separate policies for updating node instance\_info and extra * Follow up to console port allocation * Change force\_raw\_images to use sha256 if md5 is selected * Make reservation checks caseless * [doc] Missing --name option * Bump minimum supported ansible version to 2.7 * Set abstract for ironic-base * Refactoring: move generic agent clean step functions to agent\_base * Docs: split away user image building and highlight whole disk images * Redfish: Add root\_prefix to Sushy * Cleanup docs building * Rename 
\`create\_isolinux\_image\_for\_uefi\` function as misleading * Finalize removal of ipxe\_enabled option * Start removing ipxe support from the pxe interface * Pre-shared agent token * DRAC: Fix RAID create\_config clean step * Expose allocation owner to additional policy checks * Project Contributing updates for Goal * Refactoring: rename agent\_base\_vendor to agent\_base * Use FIPS-compatible SHA256 for comparing files * Revert "Move ironic-standalone to non-voting" * Move ironic-standalone to non-voting * Make \`redfish\_system\_id\` property optional * Lower tempest concurrency * Refactoring: finish splitting do\_node\_deploy 14.0.0 ------ * Fix up release notes for 14.0.0 * Actually use ironic-python-agent from source in source builds * Update release mappings for Ussuri * Automatic port allocation for the serial console * Remove the [pxe]ipxe\_enabled configuration option * tell reno to ignore the kilo branch * Update API version history for v1.61 * [Trivial] Remove redundant brackets * Split cleaning-related functions from manager.py into a new module * Split deployment-related functions from manager.py into a new module * Disable debug output in doc building * Fix bash comparisons for grenade multinode switch * Fix jsonpatch related tests * Fix ipxe interface to perform ipxe boot without ipxe\_enabled enabled * Fix typo in setup-network.sh script * Support node retirement * Make ironic-api compatible with WSGI containers other than mod\_wsgi * Don't require root partition when installing a whole disk image * Clean up api controller base classes * Deprecate irmc hardware type * Subclass wsme.exc.ClientSideError * Use str type instead of wsme.types.text * Use bionic job for bifrost integration * Follow up to root device hints in instance\_info * Deprecate ibmc * Fix incorrect ibmc\_address parsing on Python 3.8 * Fix entry paths for cleaning and deployment * Nodes in maintenance didn't fail, when they should have * Fix API docs for target\_power\_state 
response * Document using CentOS 8 DIB IPA images for Ussuri and newer * Lower RAM for DIB jobs to 2 GiB * Remove reference to deprecated [disk\_utils]iscsi\_verify\_attempts * Add node info and exc name when getting rootfs info from Glance * Fix fast\_track + agent\_url update fix * CI: make the metalsmith job voting and gating * devstack: install bindep for diskimage-builder * Allow reading root\_device from instance\_info * Implement managed in-band inspection boot for ilo-virtual-media * Add a missing versionadded for configdrive[vendor\_data] * Make qemu hook running with python3 * Refactor glance retry code to use retrying lib * Fix duplicated words issue like "are are placed" * devstack: switch to using CentOS 8 DIB ramdisks by default * Remove the deprecated [glance]glance\_num\_retries * Fix missing job\_id parameter in the log message * Fix get\_boot\_option logic for software raid * Allow node owners to administer associated ports * Explicitly use ipxe as boot interface for iPXE testing * Replace disk-image-create with ironic-python-agent-builder * Remove those switches for python2 * Fix invalid assertIsNone statements * Add librsvg2\* to bindep * Stop using six library * Add notes on the pxe template for aarch64 * Enforce running tox with correct python version based on env * Tell the multinode subnode and grenade to use /opt * Disable automated clean on newer jobs * Extend service timeout * Tune down multinode concurrency * Restrict ability to change owner on provisioned or allocated node * Correct power state handling for managed in-band inspection * Implement managed in-band inspection boot for redfish-virtual-media * redfish-vmedia: correctly pass ipa-debug * Add a CI job to UEFI boot over Redfish virtual media * Fix use of urlparse.urljoin * Import importlib directly * Increasing BUILD\_TIMEOUT value for multinode job * Remove deprecated ironic-agent element * Add owner to allocations and create relevant policies * CI: do not enable rescue on 
indirect jobs * Update nova os-server-external-events response logic * DRAC: Drives conversion from raid to jbod * Changed to bug fix to follow-on idrac job patch * Fixes issue with checking whether ISO is passed * docs: add a missing heading * Add a CI job to legacy boot over Redfish virtual media * Fix UEFI NVRAM collision in devstack * Remove references to 'firewall\_driver' * Make redfish CI jobs pulling sushy-tools from git * Prevent localhost from being used as ironic-inspector callback URL * Add an ironic-inspector job with managed boot * Add timeout when querying agent's command statuses * docs: update the local development quickstart to use JSON RPC * Drop python 2.7 support and testing * Remove unused migration tests * Wire in in-band inspection for PXE boot and neutron-based networking * Foundation for boot/network management for in-band inspection * Add \`instance\_info/kernel\_append\_params\` to \`redfish\` * Add indicator management to redfish hw type * Mock out the correct greenthread sleep method * Don't install syslinux-nonlinux on rhel7 * Ensure text-only console in devstack * Pass correct flags during PXE cleanup in iPXEBoot * Drop [agent]heartbeat\_timeout * Remove old online migration codes * Block ability update callback\_url * Stop supporting incompatible heartbeat interfaces * Allow node owners to administer nodes * Fix variable name in cleanup\_baremetal\_basic\_ops func * Switch legacy jobs to Py3 * Ensure \`isolinux.bin\` is present and configured in devstack * Fix \`snmp\` unit test * Backward compatibility for the ramdisk\_params change * Allow vendor\_data to be included in a configdrive dict * Improve iDrac Documentation * Correct handling of ramdisk\_params in (i)PXE boot * Software RAID: Identify the root fs via its UUID from image metadata * Change integration jobs to run under Python3 * Using loop instead of with\_X * CI: add ironic-python-agent-builder to the multinode job * Update release with information about zuul job * Add 
virtual media boot section to the docs * CI: limit rescue testing to only two jobs * Mask secrets when logging in json\_rpc * Use new shiny Devices class instead of old ugly Device * Switch to ussuri job * Do not ignore 'fields' query parameter when building next url * Update sushy library version * Minor string formatting follow-up to idrac jbod patch * Document systemd-nspawn as a nice trick for patching a ramdisk * DRAC: Drives conversion from JBOD to RAID * Setup ipa-builder before building ramdisk * Fix EFIBOOT image upload in devstack * Fix drive sensors collection in \`redfish\` mgmt interface * Add Redfish vmedia boot interface to idrac HW type * Change MTU logic to allow for lower MTUs automatically * DRAC: Fix a bug for clear\_job\_queue clean step with non-BIOS pending job * Documentation for iLO hardware type deploy steps * ironic-tempest-functional-python3 unused variables * docs: use openstackdocstheme extlink extension * grub configuration should use user kernel & ramdisk * Raising minimum version of oslo.db * DRAC: Fix a bug for delete\_config with multiple controllers * Use correct function to stop service * Fix devstack installation failure * DRAC: Fix a bug for job creation when only required * Add a CI job with a DIB-built ramdisk * Remove old online migrations and new models * Remove earliest version from releasing docs, update examples * Change log level based on node status * enable\_python3\_package should not be necessary anymore * Update doc for CI * Add versions to release notes series * Document pre-built ramdisk images (including DIB) * Run DIB with tracing enabled and increase the DHCP timeout * Improve documentation about releasing deliverables * Update master for stable/train 13.0.0 ------ * Update release mappings for Train * Release notes cleanup for 13.0.0 (mk2) * Document PXE retries * Update env. 
variables in the documentation
* Add iDRAC RAID deploy steps
* Don't resume deployment or cleaning on heartbeat when polling
* Make multinode jobs non-voting
* devstack: wait for conductor to start and register itself
* Allow retrying PXE boot if it takes too long
* Lower MTU override
* Devstack: Fix iPXE apache log location bug
* Serve virtual media boot images from ironic conductor
* Add Redfish inspect interface to idrac HW type
* Add deploy steps for iLO Management interface
* Do not log an error on heartbeat in deploying/cleaning/rescuing
* Add an option to abort cleaning and deployment if node is in maintenance
* CI: move libvirt images to /opt for standalone and multinode jobs
* Add first idrac HW type Redfish interface support
* Remove cisco references and add release note
* Add `FLOPPY` boot device constant
* Combined gate fixes
* Read in non-blocking fashion when starting console
* Release notes cleanup for 13.0.0
* CI: move the fast-track job to the experimental pipeline
* Remove support for CoreOS images
* Fix gate failure related to jsonschema
* Minor: change a misleading InvalidState error message
* Build pdf doc
* iLO driver doc update
* Use openstack cli in image creation guide
* iLO driver doc update
* devstack: save iPXE httpd logs
* Prelude for 13.0.0
* Add a release note for iscsi_verify_attempts deprecation
* Fix typo in handling of exception FailedToGetIPAddressOnPort
* Add iLO RAID deploy steps
* add table of available cleaning steps to documentation
* Prepare for deprecation of iscsi_verify_attempts in ironic-lib
* Add software raid release note to ironic
* Add ironic-specs link to readme.rst
* Fixed problem with UEFI iSCSI boot for nic adapters
* DRAC : clear_job_queue clean step to fix pending bios config jobs
* Add deploy steps for iLO BIOS interface
* Follow-up for deploy steps for Redfish BIOS interface
* Adding file uri support for ipa image location
* Adjust placement query for reserved nodes
* Add indicator management harness to ManagementInterface
* Adds dhcp-all-interfaces element
* Do not wait for console being started on timeout
* Out-of-band `erase_devices` clean step for Proliant Servers
* Pass target_raid_config field to ironic variable
* Allow deleting unbound ports on active node
* Follow up to Option to send all portgroup data
* Lower standalone concurrency to 3 from 4
* Make ironic_log Ansible callback Python 3 ready
* Remove ironic command bash completion
* devstack: Fix libvirtd/libvirt-bin detection
* Add iPXE boot interface to 'ilo' hardware type
* Move to unsafe caching
* Allow to configure additional ipmitool retriable errors
* Fix exception on provisioning with idrac hw type
* Add logic to determine Ironic node is HW or not into configure_ironic_dirs
* Install sushy if redfish is a hardware type
* Add `filename` parameter to Redfish virtual media boot URL
* Add set_boot_device hook in `redfish` boot interface
* Add Redfish Virtual Media Boot support
* Follow-up to power sync reno
* Add new method 'apply_configuration' to RAIDInterface
* Do not tear down node upon cleaning failure
* Switch non-multinode jobs to new-style neutron services
* Add deploy steps for Redfish BIOS interface
* Ansible: fix partition_configdrive for logical root_devices
* Support power state change callbacks to nova using ksa_adapter
* Docu: Fix broken link
* Fixing broken links
* DRAC : Fix issue for RAID-0 creation for multiple disks for PERC H740P
* Uses IPA-B to build in addition to CoreOS
* Asynchronous out of band deploy steps fails to execute
* Clean up RAID documentation
* Enable testing software RAID in the standalone job
* devstack: allow creating more than one volume for a VM
* Allow configuring global deploy and rescue kernel/ramdisk
* Fix missing print format error
* Update software RAID configuration documentation
* Use HTTPProxyToWSGI middleware from oslo
* RAID creation fails with 'ilo5' RAID interface
* RAID create fails if 'controller' is missing in 'target_raid_config'
* Use openstacksdk for accessing ironic-inspector
* CI Documentation
* Enable no IP address to be returned
* Change debug to error for heartbeats
* CI: stop using pyghmi from git master
* Fixes power-on failure for 'ilo' hardware type
* Creation of UEFI ISO fails with efiboot.img
* Remove deprecated Neutron authentication options
* Follow-up to the IntelIPMIHardware patch
* Ansible driver: fix deployment with serial specified as root device hint
* Enable testing adoption in the CI
* Fix serial/wwn gathering for ansible+python3
* Update api-ref location
* IPA does not boot up after cleaning reboot for 'redfish' bios interface
* Revert "Add logic to determine Ironic node is HW or not into configure_ironic_dirs"
* Filter security group list on the ID's we expect
* Clean lower-constraints.txt
* [Trivial] Fix is_fast_track parameter doc string
* Failure in get_sensor_data() of 'redfish' management interface
* Abstract away pecan.request/response
* Fix potential race condition on node power on and reboot
* iLO firmware update fails with 'update_firmware_sum' clean step
* Bump keystonauth and warlock versions
* Don't install ubuntu efi debs on cent
* Remove the PXE driver page
* Ansible module: fix deployment for private and/or shared images
* Add logic to determine Ironic node is HW or not into install_ironic
* Add logic to determine Ironic node is HW or not into configure_ironic_dirs
* Deal with iPXE boot interface incompatibility in Train
* Bump openstackdocstheme to 1.20.0
* Remove deprecated app.wsgi script
* devstack: Install arch specific debs only when deploying to that arch
* DRAC: Upgraded RAID delete_config cleaning step
* Fix invalid assert state
* CI: remove quotation marks from TEMPEST_PLUGINS variable
* Remove CIMC/UCS drivers
* Add IntelIPMIHardware
* Collect sensor data in ``redfish`` hardware type
* [Trivial] Software RAID: Documentation edits
* Software RAID: Add documentation
* Blacklist sphinx 2.1.0 (autodoc bug)
* Follow-up on UEFI/Grub2 job
* Adds bandit template and exclude some of tests
* Add documentation for IntelIPMI hardware
* Add check on get_endpoint returning None
* Option to send all portgroup data

12.2.0
------

* Replace deprecated with_lockmode with with_for_update
* Spruce up release notes for 12.2.0 release
* Update API history and release mapping for 12.2.0
* Refactoring: flatten the glance service module
* Remove the deprecated glance authentication options
* DRAC: Adding reset_idrac and known_good_state cleaning steps
* devstack: add missing variables for ironic-python-agent-builder
* Remove ipxe tags when ipx6 is in use
* Update qemu hook to facilitate Multicast
* redfish: handle missing Bios attribute
* Fix :param: in docstring
* Updates ironic for using ironic-python-agent-builder
* Do not log an exception if Allocation is deleted during handling
* Add release note updating status of smartnics
* Switch to use exception from ironic-lib
* Change constraints opendev.org to release.openstack.org
* Incorporate bandit support in CI
* Remove elilo support
* Ansible module: fix configdrive partition creation step
* Remove deprecated option [DEFAULT]enabled_drivers
* Fix regex string in the hacking check
* Add api-ref for allocation update
* Add a pxe/uefi/grub2 CI job
* Bump lower mock version to 3.0.0
* Start using importlib for Python 3.x
* Remove XML support in parsable_error middleware
* Fix binary file upload to Swift
* fix typo in code comment
* Software RAID: Trigger grub installation on the holder disks
* Move stray reno file
* Trivial: correct configuration option copy-pased from inspector
* Remove commit_required in iDRAC hardware type
* Make the multinode grenade job voting again
* devstack: configure rabbit outside of API configuration
* Blacklist python-cinderclient 4.0.0
* Publish baremetal endpoint via mdns
* Fix inaccurate url links
* Update sphinx requirements
* Allocation API: correct setting name to None
* Allocation API: backfilling allocations
* Fix GRUB config path when building EFI ISO
* Add DHCP server part to make the document more detail
* Do not try to return mock as JSON in unit tests
* Remove deprecated option [ilo]power_retry
* Add API to allow update allocation name and extra field
* Update Python 3 test runtimes for Train
* Replace hardcoded "stack" user to $STACK_USER
* Run vbmcd as stack user in devstack
* Adding enabled_boot_interface attribute in tempest config
* Add openstack commands in node deployment guide
* Add a high level vision reflection document
* Add iDRAC driver realtime RAID creation and deletion
* Correct spelling errors
* Replace git.openstack.org URLs with opendev.org URLs
* Direct bridge to be setup
* Fix pyghmi path
* OpenDev Migration Patch
* Removes `hash_distribution_replicas` configuration option
* Truncate node text fields when too long
* Add note for alternative checksums
* Make the JSON RPC server work with both IPv4 and IPv6
* Jsonschema 3.0.1: Binding the schema to draft-04
* Place upper bound on python-dracclient version
* devstack: Remove syslinux dependency
* Do not try to create temporary URLs with zero lifetime
* Ansible module: fix partition_configdrive.sh file
* Use the PUBLIC_BRIDGE for vxlan
* Move devstack emulators configs under /etc/ironic
* Uncap jsonschema in requirements
* Split ibmc power/reboot classes
* Temporarily mark grenade multinode as non-voting
* Improve VirtualBMC use in Devstack
* Run IPMI, SNMP and Redfish BMC emulators as stack
* Add UEFI firmware to Redfish emulator config
* Add systemd unit for sushy emulator in devstack
* Ansible module: fix clean error handling
* [Trivial] Fix typo in agent_base_vendor unit test
* Fix exception generation errors
* Add a request_timeout to neutron
* doc: update ibmc driver support servers document
* Ansible module fix: stream_url
* Make it possible to send sensor data for all nodes
* Slightly rephrase note in tenant networking docs
* Bump sphinxcontrib-pecanwsme to 0.10.0
* ipmi: Ignore sensor debug data
* Make 'noop' the explicit default of default_storage_interface
* Docs: correct expected host format for drac_address
* Check for deploy.deploy deploy step in heartbeat
* Workaround for sendfile size limit
* Workaround for uefi job with ubuntu bionic
* Replace openstack.org git:// URLs with https://
* Remove vbmc log file in devstack
* Add versions to release notes series
* Imported Translations from Zanata
* Update master for stable/stein

12.1.0
------

* Fix capabilities passed as string in agent prepare
* Respect $USE_PYTHON3 settings for gunicorn
* Add systemd unit for vbmcd in devstack
* Workaround for postgres job with ubuntu bionic
* Add release note on conntrack issue on bionic
* Update release-mappings and api version data for Stein release
* Pass kwargs to exception to get better formatted error message
* Advance python-dracclient version requirement
* Add prelude and update release notes for 12.1.0
* Optimize: HUAWEI iBMC driver utils
* Set boot_mode in node properties during OOB Introspection
* Fix idrac driver unit test backwards compat issue
* Deploy Templates: factor out ironic.conductor.steps
* Make metrics usable
* Kg key for IPMIv2 authentication
* Add fast-track testing
* fast tracked deployment support
* Update doc for UEFI first
* Fix lower-constraints job
* Fix idrac Job.state renamed to Job.status
* Deprecates `hash_distribution_replicas` config option
* Add Huawei iBMC driver support
* Fix misuse of assertTrue
* Allow methods to be both deploy and clean steps
* Adding ansible python interpreter as driver_info
* Return 405 for old versions in allocation and deploy template APIs
* honor ipmi_port in serial console drivers
* Follow up to available node protection
* Migrate ironic-grenade-dsvm-multinode-multitenant job to Ubuntu Bionic
* Deploy templates: conductor and API nits
* Deploy Templates: documentation
* Fixing a bash test in devstack ironic lib
* Deploy Templates: API reference
* Fix formatting issue in doc
* Update dist filter for devstack ubuntu
* Add a non-voting metalsmith job for local boot coverage
* Document building configdrive on the server side
* Check microversions before validations for allocations and deploy templates
* Add python3 unit test with drivers installed
* Fix missing print format error
* Fix typo and docstring in pxe/ipxe
* Stop requiring root_gb for whole-disk images
* driver-requirements: mark UcsSdk as Python 2 only
* Set boot_mode in node properties during Redfish introspection
* Add option to set python interpreter for ansible
* Document using a URL for image_checksum
* [docs] IPv6 support for iLO
* Temporary marking ironic-standalone non-voting
* Allow building configdrive from JSON in the API
* Allocation API: optimize check on candidate nodes
* Fix TypeError: __str__ returned non-string (type ImageRefValidationFailed)
* Deploy templates: API & notifications
* Deploy templates: conductor
* Drop installing python-libvirt system package
* Test API max version is in RELEASE_MAPPINGS
* Update the log message for ilo drivers
* Deploy templates: fix updating steps in Python 3
* Fix pysendfile requirement marker
* Add option to protect available nodes from accidental deletion
* Deploy Templates: add 'extra' field to DB & object
* Trivial: Fix error message when waiting for power state
* Allocation API: fix minor issues in the API reference
* Allocation API: reference documentation
* Adding bios_interface reference to api docs
* Set available_nodes in tempest conf
* Update the proliantutils version in documentation
* [trivial] Removing python 3.5 template jobs
* Deploy Templates: Fix DB & object nits
* Add check for object versions
* [Trivial] Fix incorrect logging in destroy_allocation
* Allocation API: taking over allocations of offline conductors
* Allocation API: resume allocations on conductor restart
* Devstack - run vbmc as sudo
* Documentation update for iLO Drivers
* Follow up - API - Implement /events endpoint
* Follow up to node description
* ensure that socat serial proxy keeps running
* Deprecate Cisco drivers
* Follow up to ISO image build patch
* API - Implement /events endpoint
* Add a requisite for metadata with BFV
* [Follow Up] Add support for Smart NICs
* Support using JSON-RPC instead of oslo.messaging
* Deploy templates: data model, DB API & objects
* [Follow Up] Expose is_smartnic in port API
* Prioritize sloppy nodes for power sync
* Expose conductors: api-ref
* Remove duplicated jobs and refactor jobs
* Allocation API: fix a small inconsistency
* Expose is_smartnic in port API
* [Trivial] Allocation API: correct syntax in API version history docs
* Allocation API: REST API implementation
* Make power sync unit test operational
* Allow case-insensitivity when setting conductor_group via API
* Optionally preserve original system boot order upon instance deployment
* Add support for Smart NICs
* Add a voting CI job running unit tests with driver-requirements
* [Refactor] Make caching BIOS settings explicit
* [docs] OOB RAID implementation for ilo5 based HPE Proliant servers
* Make iLO BIOS interface clean steps asynchronous
* Provides mount point as cinder requires it to attach volume
* Add description field to node: api-ref
* Add description field to node
* Fix test for 'force_persistent_boot_device' (i)PXE driver_info option
* Fix iPXE boot interface with ipxe_enabled=False
* Allocation API: conductor API (without HA and take over)
* Removing deprecated drac_host property
* Add is_smartnic to Port data model
* Remove uses of logger name "oslo_messaging"
* [Trivial] Fix typo in noop interface comment
* Remove duplicated fault code
* Fix listing nodes with conductor could raise
* Parallelize periodic power sync calls follow up
* Build ISO out of EFI system partition image
* Make versioned notifications topics configurable
* Build UEFI-only ISO for UEFI boot
* Parallelize periodic power sync calls
* Limit the timeout value of heartbeat_timeout
* Replace use of Q_USE_PROVIDERNET_FOR_PUBLIC
* Make ipmi_force_boot_device more user friendly
* Follow-up logging change
* Remove dsvm from zuulv3 jobs
* Allocation API: allow picking random conductor for RPC topic
* Fix updating nodes with removed or broken drivers
* Fix ironic port creation after Redfish inspection
* Allocation API: minor fixes to DB and RPC
* Allocation API: allow skipping retries in TaskManager
* Allocation API: database and RPC
* Allow missing ``local_gb`` property
* Fix typo in release note
* Fix IPv6 iPXE support
* OOB RAID implementation for ilo5 based HPE Proliant servers
* Fix SushyError namespacing in Redfish inspection
* Allow disabling TFTP image cache
* Add pxe template per node
* Fix the misspelling of "configuration"
* Switch to cirros 0.4.0
* Update tox version to 2.0
* Disable metadata_csum when creating ext4 filesystems
* Switch the default NIC driver to e1000
* Change openstack-dev to openstack-discuss
* Fix XClarity driver management defect
* Ignore newly introduced tables in pre-upgrade versions check
* Switch CI back to xenial

12.0.0
------

* Add "owner" information field
* Introduce configuration option [ipmi]ipmi_disable_timeout
* Enroll XClarity machines in Ironic's devstack setting
* spelling error
* api-ref: update node.resource_class description
* Add a note regarding IPA multidevice fix
* Allow disabling instance image cache
* Add a prelude for ironic 12.0
* Set proper version numbering
* Change multinode jobs to default to local boot
* Follow-up Retries and timeout for IPA command
* Fix "import xxx as xxx" grammar
* Kill misbehaving `ipmitool` process
* Fix OOB introspection to use pxe_enabled flag in idrac driver
* Add configurable Redfish client authentication
* Expose conductors: api
* Fix node exclusive lock not released on console start/restart
* Fix IPv6 Option Passing
* Let neutron regenerate mac on port unbind
* Slim down grenade jobs
* Extend job build timeout
* Mark several tests to not test cleaning
* Add BIOS interface to Redfish hardware type
* Avoid cpu_arch None values in iscsi deployments
* Expose conductors: db and rpc
* Fix Chinese quotes
* Add ipmi_disable_timeout to avoid problematic IPMI command
* Correct author email address
* Ensure we unbind flat network ports and clear BM mac addresses
* Retries and timeout for IPA command
* Support for protecting nodes from undeploying and rebuilding
* Add download link apache configuration with mod_wsgi
* spelling error
* Add Redfish inspect interface follow up
* Add the noop management interface to the manual-management hardware type
* Add missing ws separator between words
* Switch ironic-tempest-...-tinyipa-multinode to zuulv3
* Add a non-voting bifrost job to ironic
* Increase RAM for the ironic node in UEFI job
* Reuse Redfish sessions follow up
* Improve logs when hard linking images fails
* Don't fail when node is in CLEANFAIL state
* Fix ipv6 URL formatting for pxe/iPXE
* Fix redfish test_get_system_resource_not_found test
* Improve sushy mocks
* Recommend to set boot mode explicitly
* Add Redfish inspect interface
* Fix CPU count returned by introspection in Ironic iDRAC driver
* Add ironic-status upgrade check command framework
* Passing thread pool size to IPA for parallel erasure
* Change BFV job to use ipxe interface
* [devstack] Allow setting TFTP max blocksize
* Reuse Redfish sessions
* Migration step to update objects to latest version
* Cleanup of remaining pxe focused is_ipxe_enabled
* Remove the xclarity deprecation
* Follow-up to fix not exist deploy image of patch 592247
* Remove pywsman reference
* Fix DHCPv6 support
* Revert "Add openstack/placement as a required project for ironic-grenade*"
* Add api-ref for conductor group
* Follow-up patch for I71feefa3d0593fd185a286bec4ce38607203641d
* Fix ironic developer quickstart document
* Add note to pxe configuration doc
* Create base pxe class
* Wrap up PXE private method to pxe_utils move
* Enhanced checksum support
* Enable configuration of conversion flags for iscsi
* Document how to implement a new deploy step
* Refactor API code for checking microversions
* Allow streaming raw partition images
* Remove Vagrant
* ipxe boot interface
* Remove oneview drivers
* Completely remove support for deprecated Glance V1
* Avoid race with nova on power sync and rescue
* Log a warning for Gen8 Inspection
* Doc: Adds cinder as a service requires creds
* Fix unit test run on OS X
* Fixes a race condition in the hash ring code
* Add automated_clean field to the API
* Stop console at tearing down without unsetting console_enabled
* Add functionality for individual cleaning on nodes
* Documentation for 'ramdisk' deploy with 'ilo-virtual-media' boot
* Add documentation for soft power for ilo hardware type
* Add documentation for 'inject nmi' for ilo hardware type
* Remove unnecessary checks in periodic task methods
* Remove token expiration
* Adds support for soft power operations to 'ilo' power interface
* Add openstack/placement as a required project for ironic-grenade*
* Remove tox checkconfig
* Add admin documentation for rescue mode in iLO driver
* Correct headings in README.rst
* Minor fixes for docs on changing hardware types
* Add admin documentation for rescue interface
* pxe/ipxe: Move common calls out pxe.py
* Switch ironic-tempest-dsvm-functional-python3 to zuulv3
* Switch ironic-tempest-dsvm-functional-python2 to zuulv3
* Switch grenade nic driver to e1000
* Remove ironic experimental jobs
* Restore the nova-api redirect
* Update docs to portgroup with creating windows images
* Use templates for cover and lower-constraints
* Remove wrong install-guide-jobs in zuul setup
* Fix grenade tests
* Add a more detailed release note for Dell BOSS RAID1 fix
* Honors return value from BIOS interface cleansteps
* Reuse checksum calculation from oslo
* Adds support for 'ramdisk' deploy with 'ilo-virtual-media' boot
* Remove inspecting state support from inspect_hardware
* Adds support for 'Inject NMI' to 'ilo' management interface
* Docs for agent http provisioning
* Ensure pagination marker is always set
* Direct deploy serve HTTP images from conductor
* Fix doc builds for ironic
* Fix async keyword for Python 3.7
* Add vendor step placement suggestion
* Prevent HTML from appearing in API error messages
* Replace assertRaisesRegexp with assertRaisesRegex
* Add version discovery information to the /v1 endpoint
* Replace assertRaisesRegexp with assertRaisesRegex
* Fix provisioning failure with `ramdisk` deploy interface
* Minor fixes to contributor vision
* Add automated_clean field
* Use HostAddressOpt for opts that accept IP and hostnames
* Remove the duplicated word
* add python 3.6 unit test job
* switch documentation job to new PTI
* import zuul job settings from project-config
* Prevents deletion of ports for active nodes
* Disable periodic tasks if interval set to 0
* Reformat instructions related with various OS
* Imported Translations from Zanata
* Add conductor_group docs
* Switch ironic-tempest-dsvm-ironic-inspector too zuulv3
* Switch ironic-tempest-dsvm-bfv too zuulv3
* A minor update to documentation of `ilo` hardware type
* Imported Translations from Zanata
* Update reno for stable/rocky
* Fix not exist deploy image within irmc-virtual-media booting

11.1.0
------

* Switch the "snmp" hardware type to "noop" management
* Add "noop" management and use it in the "ipmi" hardware type
* Update docs on ironic boot mode management
* Follow-up to always link MAC address files
* Simplify subclasses for PXERamdiskDeploy
* Node gets stuck in ING state when conductor goes down
* Add notes on Redfish boot mode management
* Prepare for Rocky release
* Update the reno for the reset_interfaces feature
* Use max version of an object
* A vision
* Improve the "Ironic behind mod wsgi" documentation
* Deploy steps documentation
* Mark the ZeroMQ driver deprecated
* Remove rabbit_max_retries option
* Fix iDRAC hardware type does not work with UEFI
* Pass prep_boot_part_uuid to install_bootloader for ppc64* partition images
* Remove redundant swift vars
* Document locale requirement for local testing
* Switch ironic-tempest-dsvm-ipa-partition-pxe_ipmitool-tinyipa-python3
* Improve doc of Node serial console
* Follow-up patch to ramdisk interface
* Ramdisk deploy driver doc
* Change PXE logic to always link macs with UEFI
* Add documentation for BIOS settings
* Fix for failure of cleaning for iRMC restore_bios_config
* Refactor RAID configuration via iRMC driver
* Adds ramdisk deploy driver
* Follow-up patch for 7c5a04c1149f14900f504f32e000a7b4e69e661f
* Switch ironic-tempest-dsvm-ipa-partition-uefi-pxe_ipmitool-tinyipa
* Switch ironic-tempest-dsvm-ipa-wholedisk-bios-pxe_snmp-tinyipa
* Switch ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa
* Switch ironic-tempest-dsvm-pxe_ipmitool-postgres
* Documentation update of iLO BIOS settings
* Follow-up to improve pep8 checking with hacking
* Fix for failure in cleaning in iRMC driver
* Add deploy_step to NodePayload.SCHEMA
* Add conductor_group to node notifications
* Deprecate xclarity hardware type
* Be more precise with conductor group API tests
* Simplify hash ring tests
* Add documentation for changing node's hardware type
* Fix the list of irrelevant-files
* snmp: Keep get_next method backward-compatible
* Fix for failure in cleaning
* Expose node.conductor_group in the REST API
* Use conductor group for hash ring calculations
* Expose BIOS interface
* Ignore bashate E044
* Remove deprecated option [ipmi]retry_timeout
* iLO BIOS interface implementation
* Make pxelinux.cfg folder configurable
* Use openstack client instead of neutron client
* Replace port 35357 with 5000 for "auth_url"
* Add conductor_group field to config, node and conductor objects
* Add reset_interfaces parameter to node's PATCH
* Don't handle warnings as errors
* Follow up Add CUSTOM_CPU_FPGA Traits value to ironic inspection
* Follow-up changes to iRMC bios interface
* Minor changes for deploy_steps framework
* Caching of PDU autodiscovery
* Migrate ironic `snmp` driver to the latest pysnmp API
* Add conductor_group field to nodes and conductors tables
* Add mock object for get_bios_settings
* Fix bug to doc:configdrive
* Add notes for future job migrations
* Assert a build timeout for zuul templated CI jobs
* Fixed link to Storyboard instead of launchpad
* Update CI jobs for rescue mode
* Fix bug to doc:kernel-boot-parameters
* Deploy steps - API & notifications
* Deploy steps - conductor & drivers
* Add CUSTOM_CPU_FPGA Traits value to ironic inspection
* Implement iRMC BIOS configuration
* Deploy steps - versioned objects
* Deploy steps - DB model
* Follow-up to RAID configuration via iRMC driver patch
* Poweroff server after 10 tries
* Make the lower-constraints tox env actually use lower-constraints
* Fix typo of function naming conventions in test_deploy_utils.py
* Update the doc regarding the removal of calssic drivers
* Update boot-from-volume feature docs
* [doc] Use openstack client commands to replace neutron client
* Detect skip version upgrades from version earlier than Pike
* Update API version history with release 11.0.0
* Bump osprofiler minimum requirement to 1.5.0
* Add 11.0 to release mapping
* Add read&write SNMP community names to `snmp` driver
* Add unit tests that "remove" is acceptable on /XXX_interface node fields
* Fix 11.0 prelude formatting
* Change docs bug link to storyboard

11.0.0
------

* Support RAID configuration for BM via iRMC driver
* Fix list node vifs api error
* Remove support for creating and loading classic drivers
* Ensure we allow Ironic API traffic from baremetal network
* Add a prelude for version 11
* iDRAC RAID10 creation with greater than 16 drives
* Remove doc of classic drivers from the admin guide
* Modifying 'whole_disk_image_url' and 'whole_disk_image_checksum' variable
* Follow-up to update doc for oneview driver
* Small change of doc title for the drivers
* Fix wrong in apidoc_excluded_paths
* Switch ironic-tempest-dsvm-ipa-partition-redfish-tinyipa
* Switch ironic-dsvm-standalone to zuulv3 native
* Follow-up to update doc for ilo driver
* Add BayTech MRP27 snmp driver type
* Improve pep8 checking along with hacking
* Follow-up to update doc for irmc driver
* DevStack: Tiny changes following iRMC classic driver removal
* include all versions of Node in release_mappings
* Deprecate [inspector]enabled option
* Do not disable inspector periodic tasks if [inspector]enabled is False
* Remove the ipmitool classic drivers
* Add snmp driver auto discovery
* During cleaning, use current node.driver_internal_info
* Rename test class
* Remove the iRMC classic drivers
* Remove the OneView classic drivers
* Remove the deprecated pxe_snmp driver
* Remove the deprecated classic drivers for Cisco UCS hardware
* Remove the iDRAC classic drivers
* Separate unit tests into different classes
* Add helper method for testing node fields
* Fix conductor manager unit tests
* Remove the ilo classic drivers
* Move parse_instance_info_capabilities() to common utils.py
* Fix error when deleting a non-existent port
* BIOS Settings: update admin doc
* BIOS Settings: add bios_interface field in NodePayload
* BIOS Settings: update default BIOS setting version in db utils
* Add documentation for XClarity Driver
* Release note clean-ups for ironic release
* Move boot-related code to boot_mode_utils.py
* Raise TemporaryFailure if no conductors are online
* BIOS Settings: add sync_node_setting
* Fix for Unable to create RAID1 on Dell BOSS card
* Add an external storage interface
* fix typos
* fix typos
* Add detail=[True, False] query string to API list endpoints
* Adds enable_ata_secure_erase option
* Remove the remaining fake drivers
* Document that nova-compute attaches VIF to active nodes on start up
* Added Redfish boot mode management
* iRMC: Support ipmitool power interface with irmc hardware
* Doc: Remove -r option for running a specific unit test
* Fix stestr has no lower bound in test-requirements
* Adds boot mode support to ManagementInterface
* Modify the Ironic api-ref's parameters in parameters.yaml
* rectify 'a image ID' to 'an image ID'
* change 'a ordinary file ' to 'an ordinary file'
* Validating fault value when querying with fault field
* change 'a optional path' to 'an optional path'
* Update links in README
* Remove the fake_ipmitool, fake_ipmitool_socat and fake_snmp drivers
* Add release notes link to README
* BIOS Settings: add admin doc
* Remove deprecated [keystone] config section
* Make method public to support out-of-band cleaning
* Remove the fake_agent, fake_pxe and fake_inspector drivers
* Consolidate the setting of ironic-extra-vars
* Remove deprecated ansible driver options
* Remove dulicate uses for zuul-cloner
* Comply with PTI for Python testing
* fix tox python3 overrides
* Remove the "fake" and "fake_soft_power" classic drivers
* Completely stop using the "fake" classic driver in unit tests
* Power fault recovery follow up
* Adds more `ipmitool` errors as retryable
* Stop using pxe_ipmitool in grenade
* Fix FakeBIOS to allow tempest testing
* Power fault recovery: Notification objects
* Power fault recovery: API implementation
* Add mock to doc requirements to fix doc build
* Fix task_manager process_event docstring
* Implements baremetal inspect abort
* Add the ability to setup enabled bios interfaces in devstack
* [Doc] Scheduling needs validated 'management' interface
* Fix authentication issues along with add multi extra volumes
* Stop passing IP address to IPA by PXE
* Add Node BIOS support - REST API
* Follow up to power fault recovery db tests
* Power fault recovery: apply fault
* Reraise exception with converting node ID
* Gracefully handle NodeLocked exceptions during heartbeat
* SNMPv3 security features added to the `snmp` driver
* Allow customizing libvirt NIC driver
* Convert conductor manager unit tests to hardware types
* Remove excessive usage of mock_the_extension_manager in unit tests - part 2
* Improve exception handling in agent_base_vendor
* Check pep8 without ignoring D000
* Missing import of "_"
* Remove endpoint_type from configuration
* Power fault recovery: db and rpc implementation
* Change exception msg of BIOS caching
* Remove excessive usage of mock_the_extension_manager in unit tests - part 1
* Mark xclarity password as secret
* Fix E501 errors
* Fix tenant DeprecationWarning from oslo_context
* update "auth_url" in documents
* Fix tenant DeprecationWarning from oslo_context
* Tear down console during unprovisioning
* Fix XClarity parameters discrepancy
* Follow up to inspect wait implementation
* Silence F405 errors
* Fix W605 Errors
* Fix E305 Errors
* Fix W504 errors
* Gate fix: Cap hacking to avoid gate failure
* Preserve env when running vbmc
* Make validation failure on node deploy a 4XX code
* Install OSC during quickstart
* Ignore new errors until we're able to fix them
* BIOS Settings: Add BIOS caching
* BIOS Settings: Add BIOSInterface
* Remove ip parameter from ipxe command line
* Clarify image_source with BFV
* Update install guide to require resource classes
* Fix error thrown by logging in common/neutron.py
* Add note to oneview docs re: derprecation
* Deprecate Oneview
* Switch to the fake-hardware hardware type for API tests
* Remove the Keystone API V2.0 endpoint registration
* Move API (functional) tests to separate jobs
* Add unit test for check of glance image status
* Devstack plugin support for Redfish and Hardware
* Collect periodic tasks from all enabled hardware interfaces
* Stop verifying updated driver in creating task
* BIOS Settings: Add RPC object
* fix a typo
* Trivial: Update pypi url to new url
* Add more parameter explanation when create a node
* Fix test_get_nodeinfo_list_with_filters
* Install reno to venv for creating release note
* Stop removing root uuid in vendor interfaces
* Fix ``agent`` deploy interface to call ``boot.prepare_instance``
* Update wording used in removal of VIFs
* [devstack] Switch ironic to uWSGI
* Make ansible error message clearer
* BIOS Settings: Add DB API
* BIOS Settings: Add bios_interface db field
* BIOS Settings: Add DB model
* Clean up driver_internal_info after tear_down
* Run jobs if requirements change
* Remove vifs upon teardown
* uncap eventlet
* Update auth_uri option to www_authenticate_uri
* Resolve pep8 E402 errors and no longer ignore E402
* Remove pycodestyle version pin. Add E402 and W503 to ignore
* Pin pycodestyle to <=2.3.1
* Check for PXE-enabled ports when creating neutron ports
* Implementation of inspect wait state
* Update Launchpad references to Storyboard
* Add reno for new config [disk_utils]partprobe_attempts
* Implement a function to check the image status
* Fix callback plugin for Ansible 2.5 compatability
* Follow the new PTI for document build
* Clarify deprecation of "async" parameter
* Fix incompatible requirement in lower-constraints
* Reference architecture: small cloud with trusted tenants
* Update and replace http with https for doc links
* Assume node traits in instance trait validation
* Adding grub2 bootloader support to devstack plugin
* Describe unmasking fields in security document
* Copy port[group] VIF info from extra to internal_info
* DevStack: Enroll node with iRMC hardware
* Stop overriding tempdir in unit test
* Uniformly capitalize parameter description
* Gate: run ironic tests in the regular multinode job
* Do not use async parameter
* Remove the link to the old drivers wiki page
* add lower-constraints job
* Test driver-requirements changes on standalone job
* Updated from global requirements
* Exclude Ansible 2.5 from driver-reqs
* Fix typos There are two 'the', delete one of them
* fix typos in documentation
* Fix nits in the XClarity Driver codebase
* Validate instance_info.traits against node traits
* Prevent overwriting of last_error on cleaning failures
* Infiniband Port Configuration update[1]
* Rework Bare Metal service overview in the install guide
* Gate: stop setting IRONIC_ENABLED_INSPECT_INTEFACES=inspector
* Follow-up patch for rescue mode devstack change
* devstack: enabled fake-hardware and fake interfaces
* Updated from global requirements
* Add descriptions for config option choices
* devstack: add support for rescue mode
* Updated from global requirements
* Implements validate_rescue() for IRMCVirtualMediaBoot
* Updated from global requirements
* Update config option for collecting sensor data
* Use node traits during upgrade
* multinode, multitenant grenade votes in gate
* zuul: Remove duplicated TEMPEST_PLUGIN entry
* Use more granular mocking in test_utils
* change python-libguestfs to python-guestfs for ubuntu
* Update links in README
* Updated from global requirements
* Remove useless variable
* Don't validate local_link_connection when port has client-id
* Updated from global requirements
* Update docstring to agent client related codes
* Move execution of 'tools/check-releasenotes.py' to pep8
* reloads mutable config values on SIGHUP
* Make grenade-mulinode voting again
* tox.ini: flake8: Remove I202 from ignore list
* fix a typo in driver-property-response.json: s/doman/domain/
* Trivial: Remove the non ascii codes in tox.ini
* Register traits on nodes in devstack
* [devstack] block iPXE boot from HTTPS TempURLs
* Fix issue with double mocking of utils.execute functions
* Updates boot mode on the baremetal as per `boot_mode`
* Support nested objects and object lists in as_dict
* Revert "Don't try to lock for vif detach"
* Rework logic handling reserved orphaned nodes in the conductor
* Set 'initrd' to 'rescue_ramdisk' for rescue with iPXE
* Update iLO documentation for deprecating classical drivers
* Increase the instance_info column size to LONGTEXT on MySQL/MariaDB
* Update release instructions wrt grenade
* [ansible] use manual-mgmt hw type in unit tests
* Use oslo_db.sqlalchemy.test_fixtures
* Disable .pyc files for grenade multinode
* Add docs for ansible deploy interface
* Update comment and mock about autospec not working on staticmethods
* Build instance PXE options for unrescue
* Updated from global requirements
* Fix default object versioning for Rocky
* Allow sqalchemy filtering by id and uuid
* Fix rare HTTP 400 from port list API
* Clean nodes stuck in CLEANING state when ir-cond restarts
* Imported Translations from Zanata
* tox: stop validating locale files
* Switch contributor documentation to hardware types
* Stop using --os-baremetal-api-version in devstack by default
* Conductor version cannot be null in Rocky
* Add 'Other considerations' to security doc
* Updated from global requirements
* Implements validate_rescue() for IloVirtualMediaBoot
* Update to standalone ironic doc
* Remove too large configdrive for handling error
* Added known issue to iDRAC driver docs
* Add missing noop implementations to fake-hardware
* Stop running standalone tests for classic drivers
* Stop running non-voting jobs in gate
* Add optional healthcheck middleware
* releasing docs: document stable jobs for the tempest plugin
* Add meaningful exception in Neutron port show
* Clean up CI playbooks
* Fix broken log message
* Add validate_rescue() method to boot interface
* Empty commit to bump minor pre-detected version
* Remove test_contains_current_release_entry
* Fix grammar errors
* Clean up RPC versions and database migrations for Rocky
* Remove validate_boot_option_for_trusted_boot metric
* Update reno for stable/queens

10.1.0
------

* Add some missed test cases in node object tests
* [reno] timeout parameter worked
* Remove unnecessary lines from sample local.conf
* Stop guessing mime types based on URLs
* Clean up release notes before a release
* Don't try to lock for vif detach
* Revert grenade jobs to classic drivers
* Handle case when a glance image contains no data
* Add 10.1 and queens to the release mapping
* Do not pass credentials to the ramdisk on cleaning
* correct grammar, duplicate the found
* Update iRMC document for classic driver deprecation
* correct grammar, duplicate the found
* Correct grammar, duplicate the found
* Only set default network interface flat if enabled in config
* Fix handling of 'timeout' parameter to power methods
* Fixed some typos in test code
* Replace chinese quotes to English quotes
* Zuul: Remove project name
* Modify error quotation marks
* cleanup: Remove usage of some_dict.keys()
* Use zuul.override_checkout instead of custom branch_override var
* Add validate_rescue() method to network interface
* [docs] Firmware based boot from volume for iLO drivers
* Follow-up patch for api-ref documentation for rescue
* Remove sample policy and config files
* correct referenced url in comments
* Remove unused code in unittest
* Fix configure-networking docs
* Migrate the remaining classic drivers to hardware types
* Remove mode argument from boot.(prepare|clean_up)_ramdisk
* Do not use asserts with business logic
* Add option to specify mac adress in devstack/.../create-node.sh
* Updated from global requirements
* [api-ref] clarify what /v1/lookup returns
* Update FAQ about updates of release notes
* Add documentation for baremetal mech
* Flat networks use node.uuid when binding ports
* Add missing ilo vendor to the ilo hardware types
* Follow-up for Switch OneView driver to hpOneView and ilorest libraries
* Soft power operations for OneView hardware type
* Deprecate classic drivers
* Declare support for Python 3.5 in setup.cfg
* Add api-ref and ironic state documentation for rescue
* Mock check_dir in ansible interface tests
* Add documentation for node traits
* Fix nits found in node traits
* Follow-up for Implementation for UEFI iSCSI boot for ILO
* Explicitly mark earliest-version for release notes
* Remove unused code in common/neutron.py
* Correct link address
* Wait for ironic-neutron-agent to report state
* Devstack - use neutron segments (routed provider networks)
* Zuul: Remove project name
* Add traits field to node notifications
* Update description for config params of 'rescue' interface
* Add rescue interface field to node-related notifications
* Follow-up for API methods for rescue implementation
* Add support for preparing rescue ramdisk in iLO PXE
* Automatically migrate nodes to hardware types
* Add API methods for [un]rescue
* Fix unit tests for UEFI iSCSI boot for ILO
* Follow-up for agent rescue implementation
* iRMC:Support preparing rescue ramdisk in iRMC PXE
* Redundant alias in import statement
* Agent rescue implementation
* Allow data migrations to accept options
* Resolve race in validating neutron networks due to caching
* Update api-ref for port group create
* Implementation for UEFI iSCSI boot for ILO
* Add node traits to API reference
* Add a timeout for powering on/off a node on oneview
* Fix persistent information when getting boot device
* Remove python-oneviewclient from oneview hardware type
* API: Node Traits API
* Add RPC API and conductor manager for traits
* Be more sane about cleaning
* Fix node update with PostgreSQL
* Switch the CI to hardware types
* Migrate python-oneviewclient validations to oneview hardware type
* Updated from global requirements
* Add RPC object for traits
* Allow setting {provisioning,cleaning,rescuing}_network in driver_info
* Migrate oneview hardware type to use python-hpOneView
* remeber spelling error
* Add rescuewait timeout periodic task
* Add rescue related methods to network interface
* Add XClarity Driver
* [docs] mention new nova scheduler option
* Add a version argument to traits DB API
* Mark multinode job as non-voting
* Updated from global requirements
* Fix docs for Sphinx 1.6.6
* fix a typo in ilo.rst: s/fimware/firmware/
* Do not send sensors data for nodes in maintenance mode

10.0.0
------

* Adds RPC calls for rescue interface
* Make the Python 3 job voting
* Add additional context to contribution guide
* node_tag_exists(): raise exception if bad node
* Setup ansible interface in devstack
* Remove the deprecated "giturl" option
* Join nodes with traits
* Update links
* Node traits: Add DB API & model
* Add release 10.0 to release mappings
* Remove ironic_tempest_plugin/ directory
* Do not validate root partition size for whole disk images in iscsi deploy
* Switch non-vendor parts admin guide to hardware types
* Clean up release notes before a release
* Add Error Codes
* Remove ironic_tempest_plugin/ directory
* Fix initialization of auth token AuthProtocol
* Rework exception handling on deploy failures in conductor
* Add a provisioning target:adopt
* Devstack: install qemu-system-x86 on RHEL
* Add uWSGI support
* Fix ironic node create cli
* zuul: Update TLSPROXY based on branch
* Run in superconductor cellsv2 mode for non-grenade jobs
* Updated from global requirements
* Add documentation covering storage multi-attach
* Adds rescue_interface to base driver class
* Document the check done in "ironic-dbsync upgrade"
* zuul: Add ability to specify a 'branch_override' value
* zuul: Remove some redundancy by consolidating the 'post.yaml' files
* Use openstack port create instead of neutron port-create
* ansible: handle mount of /sys the same way IPA does it
* [ansible] add defaults to config
* Prevent changes to the ironic_tempest_plugin/ directory
* Finalize migration to keystoneauth adapters
* Updated from global requirements
* Follow up Add additional capabilities discovery for iRMC driver
* Use NamedExtensionManager for drivers
* Use the tempest plugin from openstack/ironic-tempest-plugin
* Switch emphasis to hardware types in the installation guide
* Use adapters for neutronclient
* Remove deprecated ironic.common.policy.enforce()
* Introduce hpOneView and ilorest to OneView
* Auto-detect the defaults for [glance]swift_{account,temp_url_key,endpoint_url}
* Add 'nova hypervisor-list' in example set of commands to compare the resources in Compute service and Bare Metal service
* Receive and store agent version on heartbeat
* tox: Use the default version of Python 3 for tox tests
* Remove unused methond _get_connect_string
* Update comment on location of webapi-version-history.rst
* Updated from global requirements
* Do not access dbapi attributes on dbsync import
* Fix swiftclient creation
* Update docs to include API version pinning
* Add networking-fujitsu ML2 driver to multitenacy doc
* Updated from global requirements
* 9.2.0 is the ironic version with rebuild configdrive
* Pin API version during rolling upgrade
* devstack to `git pull sushy-tools` if required
* Add spec & priorities links to contributor doc
* Fix HPE headers for oneview
* Updated from global requirements
* Fix the format command-line
* Add information about neutron ML2 drivers to multitenancy docs
* Apply pep8 check to app.wsgi
* ironic.conf.sample includes default_resource_class
* Add a configuration option for the default resource class
* Rework drivers page in the admin documentation
* Update bindep.txt for doc builds
* Don't collect logs from powered off nodes
* Add additional capabilities discovery for iRMC driver
* Use adapters for inspectorclient
* Use adapters for cinderclient
* Imported Translations from Zanata
* Followup to I07fb8115d254e877d8781207eaec203e3fdf8ad6
* Add missing gzip call to FAQ item on how to repack IPA
* Rework keystone auth for glance
* Remove setting of version/release from releasenotes
* zuul.d: Remove unneeded required-projects
* Updated from global requirements
* Add 9.2 to release mappings
* Remove provisioning network ports during tear down
* Fix image type for partition-pxe_ipmitool-tinyipa-python3 job

9.2.0
-----

* update description for Change Node Power State
* Add no-vendor interface to the idrac hardware types
* Updated from global requirements
* Fail deploy if agent returns >= 400
* Don't run multinode jobs for changes to driver-requirements.txt
* Revert "Introduce hpOneView and ilorest to OneView"
* Revert "Migrate oneview driver to use python-hpOneView"
* Revert "Fix persistent information when getting boot device"
* Revert "Add a timeout for powering on/off a node on HPE OneView Driver"
* Revert "Migrate python-oneviewclient validations to Ironic OneView drivers"
* Revert "Remove python-oneviewclient from Ironic OneView drivers"
* Revert "Get a new OneView client when needed"
* Revert "Update python-ilorest-library to hardware type OneView"
* Add missing 'autospec' to unit tests - /unit/objects/
* Add ansible deploy interface
* Clean up release notes from the upcoming release
* Fix misplaced reno note
* Make the api format correctly
* [devstack] stop setting or relying on standard properties
* Remove some deprecated glance options
* zuul.d/projects.yaml: Sort the job list
* project.yaml: Remove 'branches:' & jobs that don't run on master
* Miss node_id in devstack lib
* Update idrac hardware type documentation
* Update Zuul 'gate' job
* Rolling upgrades related dev documentation
* Update python-ilorest-library to hardware type OneView
* Add rescue_interface to node DB table
* Get a new OneView client when needed
* Run tempest jobs when update requirements
* Updated from global requirements
* Remove unused IronicObjectIndirectionAPI from ironic-api
* Add release note for fix to port 0 being valid
* Simplify the logic of validate_network_port
* Follow up Secure boot support for irmc-virtual-media driver
* devstack: Clean up some of the devstack code
* Remove python-oneviewclient from Ironic OneView drivers
* Allow to set default ifaces in DevStack
* Reword interface information in multitenancy docs
* Ensure ping actually succed
* Fix minor documentation missing dependency
* Small fixes in the common reference architecture docs
* [reno] Update ironic-dbsync's check object version
* Migrate python-oneviewclient validations to Ironic OneView drivers
* Remove unnesessary description for config parameters in cinder group
* Update ironic.sample.conf
* Fix the format issues of User guide
* Zuul: add file extension to playbook path
* Add I202 to flake ignore list
* Revise deploy process documentation
* Add a timeout for powering on/off a node on HPE OneView Driver
* ironic-dbsync: check object versions
* Update validating node information docs
* Use jinja rendering from utils module
* Add ability to provide configdrive when rebuilding
* Finish the guide on upgrading to hardware types
* Move ironic legacy jobs into the ironic tree
* Fix missing logging format error
* Add missing 'autospec' to unit tests - /unit/common/
* [bfv] Set the correct iqn for pxe
* Fix "import xx as xx" grammer
* Secure boot support for irmc-virtual-media driver
* Change pxe dhcp options name to codes
* Updated from global requirements
* [docs] describe vendor passthru in hw types
* Add bindep.txt file
* Fix some mis-formatted log messages in oneview driver
* Disallow rolling upgrade from Ocata to Queens
* Add online data migrations for conductor version
* [Devstack] Replace tap with veth
* Support SUM based firmware update as clean step for iLO drivers
* Add missing 'autospec' to unit tests - /unit/dhcp/
* Fix mis-formatted log messages
* Use oslotest for base test case
* Update tests to do not use deprecated test.services()
* Follow-up patch 'Cleanup unit tests for ipmittool'
* Makes ironic build reproducible
* Remove 'next' for GET /nodes?limit=1&instance_uuid=
* ListType preserves the order of the input
* Stop passing raw Exceptions as the reasons for ironic Image exceptions
* Update after recent removal of cred manager aliases
* ipmitool: reboot: Don't power off node if already off
* Reduce complexity of node_power_action() function
* Add default configuration files to data_files
* Documentation for 'oneview' hardware type
* Cleanup unit tests for ipmittool
* Use DocumentedRuleDefault instead of RuleDefault
* main page: add links to docs on Upgrade to HW Types
* Add documentation describing each Ironic state
* Cleanup test-requirements
* Fix API VIF tests when using flat network
* Updated from global requirements
* Migrate to stestr as unit tests runner
* [reno] update for MAC address update fix
* Revert "Change pxe dhcp options name to codes."
* Drop neutron masking exception in vif_attach
* Rework update_port_address logic
* api-ref portgroup_id should be portgroup_ident
* Document setting discover_hosts_in_cells_interval in nova.conf
* Adds more exception handling for ironic-conductor heartbeat
* Updated from global requirements
* Change pxe dhcp options name to codes
* Updated from global requirements
* Updated from global requirements
* Reference architecture: common bits
* Stop using Q_PLUGIN_EXTRA_CONF_{PATH|FILES} variables
* Put unit test file in correct directory
* Update vif_attach from NeutronVIFPortIDMixin
* Replace http with https for doc links
* flake8: Enable some off-by-default checks
* Update upgrade guide to use new pike release
* [install docs] ironic -> openstack baremetal CLI
* Using devstack configure_rootwrap to configure ironic rootwrap
* Use newer location for iso8601 UTC
* reformat REST API Version History page
* Fix persistent information when getting boot device
* Migrate oneview driver to use python-hpOneView
* [reno] Clarify fix for missing boot.prepare_instance
* [doc] Non-word updates to releasing doc
* Introduce hpOneView and ilorest to OneView
* Fix race condition in backfill_version_column()
* Switch API ref to use versionadded syntax throughout
* Replace DbMigrationError with DBMigrationError
* [reno] Clarify fix for BFV & image_source
* Fix unit test for new fields in invaid API version
* Put tests in correct location for ironic/api/controllers/v1/
* Troubleshooting docs: explain disabled compute services
* Update documentation for ``ilo`` hardware type
* Updated from global requirements
* Boot from volume fails with 'iscsi' deploy interface
* Boot from volume fails with 'iscsi' deploy interface
* [contributor docs] ironic -> OSC baremetal CLI
* Minor improvements to the resource classes documentation
* Update Nova configuration documentation
* Build docs with Python 2 for now
* [doc] add FAQ about updating release notes
* Follow-up for commit cb793d013610e6905f58c823e68580714991e2df
* [docs] Update Releasing Ironic Projects
* Add doc/source/_static to .gitignore
* Fix indentation in few of the documentation pages
* Upgrade guide for `snmp` hardware type
* tox.ini: Add 'py36' to the default envlist
* devstack: Comment variables related to multi-tenant networking
* Test ironic-dbsync online_data_migrations
* Add a comment about default devstack images
* Fix to use "." to source script files
* Add #!/bin/bash to devstack/common_settings
* Add Sem-Ver flag to increment master branch version
* conductor saves version in db
* Update Pike release title to include version range
* Updated from global requirements
* remove REST API examples from RAID doc
* [admin docs] ironic -> openstack baremetal CLI
* [doc] change absolute to relative URL
* Configuration documentation migrated
* fix a typo in agent.py: s/doman/domain/
* Documentation for irmc hardware type
* correct URLs in contributor docs & main index
* Correct URLs in install docs
* correct URLs in admin docs
* Documentation for 'snmp' hardware type
* Fix incorrect documentation urls
* Updated from global requirements
* Partially revert "Set resource class during upgrade"
* Introduce keystoneauth adapters for clients
* [doc] Replace http with https
* Follow-up to ``ilo`` hardware type documentation
* Set explicit default to enabled driver interfaces
* Set resource class during upgrade
* Fix names of capabilities for FibreChannel volume boot
* iRMC: Follow-up: volume boot for virtual media boot interface
* Do not restart n-cpu during upgrade
* Make SNMP UDP transport settings configurable
* Enable OSProfiler support in Ironic - follow-up
* Wait for cleaning is completed after base smoke tests
* Add 'hardware type' for Dell EMC iDRACs
* Fix DRAC classic driver double manage/provide
* [devstack] use resource classes by default
* Add 9.1 to release_mappings
* Imported Translations from Zanata
* Add 'force_persistent_boot_device' to pxe props
* devstack: Remove unused variable IRONIC_VM_NETWORK_RANGE
* Adds 9.0 to release_mappings
* Get rid of sourcing stackrc in grenade settings
* Update reno for stable/pike
* Revert "[reno] Add prelude for Pike release"

9.0.0
-----

* Add the new capabilities to the iLO InspectInterface
* [docs] update irmc boot-from-volume
* [releasenotes] update irmc's boot-from-volume support
* [reno] Add prelude for Pike release
* Add storage interface to enabling-drivers doc
* Add admin guide for boot from volume
* iRMC: Add documentation for remote volume boot
* Remove ensure_logs_exist check during upgrade
* Add functional API tests for volume connector and volume target
* Follow-up to rolling upgrade docs
* Update proliantutils version for Pike release
* [reno] update
* Documetation for 'ilo' hardware type
* Follow up Secure boot support for irmc-pxe driver
* Update the documentation links - code comments
* Update the documentation links - install guide
* Remove translator assignments from i18n
* Add hardware types to support Cisco UCS Servers
* Remove setting custom http_timeout in grenade
* Upgrade to hardware types: document changing interfaces for active nodes
* Update the resource classes documentation based on recent progress
* [devstack] switch to the latest API version and OSC commands
* Prevent changes of a resource class for an active node
* Guide on upgrading to hardware types
* iRMC: Support volume boot for iRMC virtual media boot interface
* Rolling upgrade procedure documentation
* Release notes clean up for the next release
* Fix missing print format error
* Secure boot support for irmc-pxe driver
* Adds hardware type for SNMP powered systems
* Add a guide for Devstack configuration for boot-from-volume
* Add a flag to always perform persistent boot on PXE interface
* Put tests in correct location for ironic/api/controllers/v1/
* [tempest] also catch BadRequest in negative tests with physical_network in old API
* Use more specific asserts in tests
* [Trivialfix]Fix typos in ironic
* Remove WARNING from pin_release_version's help
* Update ironic.conf.sample due to non-ironic code
* Add new dbsync command with first online data migration
* BFV Deploy skip minor logging, logic, and test fixes
* Add hardware type for HPE OneView
* [doc-migration] Add configuration folder for documentation
* Add storage interface to api-ref
* Add API for volume resources to api-ref
* Disable automated cleaning for single node grenade
* Optimize node locking on heartbeat
* Remove file RELEASE-NOTES
* Removed unnecessary setUp() call in unit tests
* Adds doc for restore_irmc_bios_config clean step
* Remove SSH-based driver interfaces and drivers
* [Tempest] fix negative tests on old API versions
* Remove install-guide env which is no longer effective
* Address review feedback for ipxe boot file fix
* Change ramdisk log filename template
* Remove usage of some of the deprecated methods
* Updated from global requirements
* grenade: Use test_with_retry to check if route is up
* Don't use multicell setup for ironic & increase timeout
* Tempest scenario test for boot-from-volume
* Refactor VIFPortIDMixin: factor out common methods
* Add negative attribute to negative port tests
* Rolling upgrades support for create_port RPCAPI
* Fixes hashing issues for py3.5
* Generate iPXE boot script on start up
* grenade: For multi-node grenade, do not upgrade nova
* Changes log level of a message
* Fix small issues in the installation documentation
* Removes agent mixin from oneview drivers
* Fix docstring and default value for local_group_info
* [doc] update ironic's landing page
* Adding note for ironic virt driver nova-compute changes
* Added a condition for 'ilo' hardware type
* Updated from global requirements
* py3.5:Workaround fix for forcing virtualbmc installation with pip2
* [devstack] add support for running behind tls-proxy
* Start passing portgroup information to Neutron
* Add tempest tests for physical networks
* Updated from global requirements
* Refactor VIFPortIDMixin: rename
* Doc for disk erase support in iLO drivers
* DevStack: Add configuration for boot-from-volume
* Refactor get_physnets_by_portgroup_id
* Rolling upgrades support for port.physical_network
* Allow updating interfaces on a node in available state
* replace 'interrace' with 'interface'
* Improve port update API unit tests
* Improve ports API reference
* Expose ports' physical network attribute in API
* Rename 'remove_unavail_fields' parameter
* Updated from global requirements
* Add missing parameter descriptions
* Updated from global requirements
* Generate iPXE boot script when deploying with boot from volume
* Add Driver API change in 1.33 to history
* Update URL home-page in documents according to document migration
* Using non-persistent boot in PXE interface
* Modifications for rolling upgrades
* Update comments related to ipmi & old BMCs
* Follow-up to fix for power action failure
* Fix copy/paste error in VIF attach note
* [reno] Clarify fix for inspect validation failures
* [trivial] Fix argument descriptions
* Remove _ssh drivers from dev-quickstart
* Fix broken links in tempest plugin README
* Remove future plan from portgroup document
* Enable OSProfiler support in Ironic
* Revert "Wait until iDRAC is ready before out-of-band cleaning"
* Force InnoDB engine on interfaces table
* Add storage interface field to node-related notifications
* Removed nonexistent option from quickstart snippet
* Enable cinder storage interface for generic hardware
* Mock random generator for BackoffLoopingCall in IPMI unittests
* Raise HTTP 400 rather than 500 error
* Make IP address of socat console configurable
* Set nomulticell flag for starting nova-compute in grenade
* Physical network aware VIF attachment
* Update README to point at new doc location
* Move ironic dbsync tool docs into doc/source/cli
* Move doc/source/dev to doc/source/contributor
* Move operator docs into into doc/source/admin
* Move install guide into new doc/source/install location
* Improve graceful shutdown of conductor process
* switch from oslosphinx to openstackdocstheme
* Fix quotes in documentation and schema description
* Follow-up for bugfix 1694645 patch
* Add REST API for volume connector and volume target operation
* Add node power state validation to volume resource update/deletion
* Make redfish power interface wait for the power state change
* Refactor common keystone methods
* Adds clean step 'restore_irmc_bios_config' to iRMC drivers
* Add CRUD notification objects for volume connector and volume target
* Updated from global requirements
* Don't retry power status if power action fails
* Fix VIF list for noop network interface
* Fetch Glance endpoint from Keystone if it's not provided in the configuration
* Replace the usage of 'manager' with 'os_primary'
* Logic for skipping deployment with BFV
* iPXE template support for iSCSI
* Move _abort_attach_volumes functionality to detach_volumes
* Allow to load a subset of object fields from DB
* Unit test consistency: DB base and utils prefix
* Updated from global requirements
* Updated from global requirements
* Remove unnecessary line in docstring
* Validate portgroup physical network consistency
* Wire in storage interface attach/detach operations
* Wait until iDRAC is ready before out-of-band cleaning
* Minor changes to object version-related code
* Remove times.dbm prior to test run
* Discover hosts while waiting for hypervisors to show up in devstack
* Add docs for node.resource_class and flavor creation
* Updated from global requirements
* Move port object creation to conductor
* Make default_boot_option configurable in devstack
* Trigger interface attach tests
* Support setting inbound global-request-id
* Follow-up docstring revision
* Runs the script configure_vm.py in py3.5
* Replace get_transport with get_rpc_transport
* Add version column
* Add ldlinux.c32 to boot ISO for virtual media
* Remove legacy auth loading
* Add a note for specifying octal value of permission
* Improve driver_info/redfish_verify_ca value validation
* Updated from global requirements
* Stop sending custom context values over RPC
* Replace assertTrue(isinstance()) with assertIsInstance()
* Change volume metadata not to use nested dicts
* Add physical network to port data model
* Move deploy_utils warnings to conductor start
* Remove unused methods from GlanceImageService
* [install-guide] explain the defaults calculation for hardware types
* Improve driver_info/redfish_system_id value validation
* Add guru meditation report support
* Adds parameters to run CI with hardware types
* Fix description for [cinder] action_retries option
* Deprecate elilo support
* Updated from global requirements
* Update ipmitool installation and usage documentation
* Replace test.attr with decorators.attr
* Updated from global requirements
* Replace test.attr with decorators.attr
* remove explicit directions for release notes on current branch
* Use cfg.URIOpt for URLs with required schemes
* Updated from global requirements
* Remove unneeded lookup policy check
* Add Cinder storage driver
* Add ipmitool vendor interface to the ipmi hardware type
* Replace test.attr with decorators.attr
* Fix directories permission for tftpboot
* Comment the default values in policy.json.sample
* Replace deprecated .assertRaisesRegexp()
* Updated from global requirements
* Remove remaining vendor passthru lookup/heartbeat
* Prevent tests from using utils.execute()
* Remove unit tests that test oslo_concurrency.processutils.execute
* Remove single quoted strings in json sample
* Refactor install-guide: update node enrollment
* Refactor install-guide: driver and hardware types configuration
* Minor clean up in iLO drivers unit tests
* Remove translation of log messages
* Enable getting volume targets by their volume_id
* Check if sort key is allowed in API version
* Updated from global requirements
* Remove logging translation calls from ironic.common
* [install-guide] add section on Glance+Swift config
* Fix attribute name of cinder volume
* Update reno for new ilo hardware type
* Remove log translations from ironic/drivers Part-1
* Update developer quickstart doc about required OS version
* Add 'iscsi' deploy support for 'ilo' hardware type
* Trivial fix typos while reading doc
* Fix docstrings in conductor manager
* [devstack] start virtualpdu using full path
* [Devstack] Increase default NIC numbers for VMs to 2
* Remove usage of parameter enforce_type
* Properly allow Ironic headers in REST API
* Updated from global requirements
* Fix a typo
* DevStack: Install gunicorn and sushy based on g-r constraints
* Fix keystone.py 'get_service_url' method parameter
* Add functional api tests for node resource class
* Refactor install-guide: integration with other services
* Remove references to EOLed version of Ironic from the install guide
* DevStack: Setup a Redfish environment
* Add hardware type for HPE ProLiant servers based on iLO 4
* Bring the redfish driver address parameter closer to one of other drivers
* [Grenade]: Do not run ir-api on primary node after upgrade
* Validate outlet index in SNMP driver
* [Devstack] Rework VMs connection logic
* Fix oslo.messaging log level
* Add context to IronicObject._from_db_object()
* Add release notes for 8.0.0
* [api-ref] remove reference to old lookup/heartbeat
* Follow-up patch to redfish documentation
* [devstack] use the generic function to setup logging
* Fix cleaning documents
* Remove obsolete sentence from comment
* TrivialFix: Remove logging import unused
* Remove translation of log messages from ironic/drivers/modules/irmc
* Run db_sync after upgrade
* Remove translation of log messages from ironic/drivers/modules/ucs
* Start enforcing config variables type in tests
* Add documentation for the redfish driver
* Read disk identifier after config drive setup
* Add a paragraph about image validation to Install Guide
* Make terminal timeout value configurable
* Remove nova mocks from documentation configuration
* Remove fake_ipmitool_socat driver from the documentation
* Add redfish driver
* Ensure we install latest libivrt
* Set env variables when all needed files are source
* save_and_reraise_exception() instead of raise
* Follow-up patch of 7f12be1b14e371e269464883cb7dbcb75910e16f
* VirtualPDU use libvirt group instead of libvirtd
* Fix unit tests for oslo.config 4.0
* Always set host_id when adding neutron ports
* Add /baremetal path instead of port 6385
* Add SUSE instructions to the install guide
* Remove pre-allocation model for OneView drivers
* Remove log translations from iLO drivers
* Follow-up patch of 565b31424ef4e1441cae022486fa6334a2811d21
* Setup logging in unit tests
* Remove deprecated DHCP provider methods
* Make config generator aware of 'default_log_levels' override
* [Devstack] Fix libvirt group usage
* Common cinder interface additional improvements
* Config drive support for ceph radosgw
* Improve error message for deleting node from error state
* Updated from global requirements
* Add comments re RPC versions being in sync
* Help a user to enable console redirection
* Fix some reST field lists in docstrings
* Avoid double ".img" postfix of image file path in devstack installation
* add portgroups in the task_manager docstrings
* Remove unneeded exception handling from agent driver
* Updated from global requirements
* Remove translation of log messages from ironic/dhcp and ironic/cmd
* Updated from global requirements
* Bypassing upload deploy ramdisk/kernel to glance when deploy iso is given
* Drop commented import
* Enforce releasenotes file naming
* Remove unused methods in common/paths and common/rpc
* Remove translation of log messages from ironic/api
* Fix access to CONF in dhcp_options_for_instance
* Add string comparison for 'IRONIC_DEPLOY_DRIVER'
* Modify the spelling mistakes Change explictly to explicitly

8.0.0
-----

* Revert "[Devstack] Rework VMs connection logic"
* Fix base object serialization checks
* Node should reflect what was saved
* Changes 'deploy' and 'boot' interface for 'pxe_ilo' driver
* Use standard deploy interfaces for iscsi_ilo and agent_ilo
* Refactor iLO drivers code to clean 'boot' and 'deploy' operations
* Updated from global requirements
* Add base cinder common interface
* Updates to RPC and object version pinning
* Add release note for messaging alias removal
* Remove deprecated method build_instance_info_for_deploy()
* Remove deprecated, untested ipminative driver
* [Devstack] Rework VMs connection logic
* Docs: bump tempest microversion caps after branching
* Add assertion of name to test_list_portgroups test
* Skip PortNotFound when unbinding port
* Remove unnecessary setUp function in testcase
* Remove deprecated [ilo]/clean_priority_erase_devices config
* Remove extra blank space in ClientSide error msg
* Updated from global requirements
* Convert BaseDriver.*_interfaces to tuples
* [Devstack] cleanup upgrade settings
* [doc] Update examples in devstack section
* devstack: install python-dracclient if DRAC enabled
* Call clean_up_instance() during node teardown for Agent deploy
* Don't pass sqlite_db in db_options.set_defaults()
* Fix some api field lists in docstrings
* Copy and append to static lists
* Define minimum required API ver for portgroups
* Add RPC and object version pinning
* Updated from global requirements
* Fix docstrings for creating methods in baremetal api tests
* Extend tests and checks for node VIFs
* Remove translation of log messages from ironic/conductor
* Add functional API tests for portgroups
* Revert the move of the logger setup
* [devstack] Use global requirements for virtualbmc
* Updates documentation to install PySqlite3
* Remove log translation function calls from ironic.db
* Fix local copy of scenario manager
* Add standalone tests using direct HTTP links
* devstack: When Python 3 enabled, use Python 3
* Remove old oslo.messaging transport aliases
* Fix file_has_content function for Py3
* Fix usage of various deprecated methods
* Prune local copy of tempest.scenario.manager.py
* devstack: Don't modprobe inside containers
* Include a copy of tempest.scenario.manager module
* flake8: Specify 'ironic' as name of app
* Updated from global requirements
* Fix API doc URL in GET / response
* Add ironic standlaone test with ipmi dynamic driver
* Update new proliantutils version to 2.2.1
* Add Ironic standalone tests
* Fix typos of filename in api-ref
* Updated from global requirements
* Fix the exception message in tempest plugin
* Speed up test_touch_conductor_deadlock()
* Cleanup hung iscsi session
* Refactor waiters in our tempest plugin
* Deprecate support for glance v1
* This adds a tempest test for creating a chassis with a specific UUID
* Address a shell syntax mistake
* Update ironic.conf.sample
* grenade: Only 'enable_plugin ironic' if not already in conf
* Remove overwriting the default value of db_max_retries
* Do not load credentials on import in tempest plugin clients.py
* Update the Ironic Upgrade guide
* Validation before perform node deallocation
* Add wsgi handling to ironic-api in devstack
* Fix updating node.driver to classic
* devstack: Make sentry _IRONIC_DEVSTACK_LIB a global variable
* Use Sphinx 1.5 warning-is-error
* Fixed release note for DBDeadLock handling
* Remove references to py34 from developer guide
* Delete release note to fix build
* Correct typos in doc files
* Clean up eventlet monkey patch comment and reno
* Moved fix-socat-command release note
* Allow to attach/detach VIFs to active ironic nodes
* Move eventlet monkey patch code
logger setup * [devstack] Use global requirements for virtualbmc * Updates documentation to install PySqlite3 * Remove log translation function calls from ironic.db * Fix local copy of scenario manager * Add standalone tests using direct HTTP links * devstack: When Python 3 enabled, use Python 3 * Remove old oslo.messaging transport aliases * Fix file\_has\_content function for Py3 * Fix usage of various deprecated methods * Prune local copy of tempest.scenario.manager.py * devstack: Don't modprobe inside containers * Include a copy of tempest.scenario.manager module * flake8: Specify 'ironic' as name of app * Updated from global requirements * Fix API doc URL in GET / response * Add ironic standlaone test with ipmi dynamic driver * Update new proliantutils version to 2.2.1 * Add Ironic standalone tests * Fix typos of filename in api-ref * Updated from global requirements * Fix the exception message in tempest plugin * Speed up test\_touch\_conductor\_deadlock() * Cleanup hung iscsi session * Refactor waiters in our tempest plugin * Deprecate support for glance v1 * This adds a tempest test for creating a chassis with a specific UUID * Address a shell syntax mistake * Update ironic.conf.sample * grenade: Only 'enable\_plugin ironic' if not already in conf * Remove overwriting the default value of db\_max\_retries * Do not load credentials on import in tempest plugin clients.py * Update the Ironic Upgrade guide * Validation before perform node deallocation * Add wsgi handling to ironic-api in devstack * Fix updating node.driver to classic * devstack: Make sentry \_IRONIC\_DEVSTACK\_LIB a global variable * Use Sphinx 1.5 warning-is-error * Fixed release note for DBDeadLock handling * Remove references to py34 from developer guide * Delete release note to fix build * Correct typos in doc files * Clean up eventlet monkey patch comment and reno * Moved fix-socat-command release note * Allow to attach/detach VIFs to active ironic nodes * Move eventlet monkey patch code * 
* Updated from global requirements
* doc: update FAQ for release notes
* Update test requirement
* Add tempest plugin API tests for driver
* Updated from global requirements
* Remove gettext.install() for unit tests
* Fix missing _ import in driver_factory
* Add support for DBDeadlock handling
* Fix BaseBaremetalTest._assertExpected docstring
* Updated ramdisk API docstrings
* Trivial: Change hardcoded values in tempest plugin
* Developer guide should not include Python 3.4
* Add testcases for iLO drivers
* Deduplicate _assertExpected method in tests
* Remove unused logging import
* Use specific end version since liberty is EOL
* Use flake8-import-order
* Document PXE with Spanning Tree in troubleshooting FAQ
* Skip VIF tests for standalone ironic
* Switch to new location for oslo.db test cases
* Explicitly use python 2 for the unit-with-driver-libs tox target
* Add ironic port group CRUD notifications
* Remove logging import unused
* Update release nodes for Ocata
* reno 'upgrades' should be 'upgrade'
* Updated from global requirements
* Update docs create port group

7.0.0
-----

* Clean up release notes for 7.0.0
* Add a summary release note for ocata
* Walk over all objects when doing VIF detach
* Fix unit tests with UcsSdk installed
* Mock client initializations for irmc and oneview
* Follow up patch for SNMPv3 support
* Add a tox target for unit tests with driver libraries
* Fix missed '_' import
* Change misc to test_utils for tempest test
* Source lib/ironic in grenade settings
* Update api-ref for dynamic drivers
* Switch to use test_utils.call_until_true
* Add port groups configuration documentation
* Remove most unsupported drivers
* SNMP agent support for OOB inspection for iLO Drivers
* No node interface settings for classic drivers
* Unbind tenant ports before rebuild
* Remove a py34 environment from tox
* Fix object save after refresh failure
* Pass session directly to swiftclient
* Adds network check in upgrade phase in devstack
* Fix log formating in ironic/common/neutron
* Follow-up iRMC power driver for soft reboot/poff
* Use https instead of http for git.openstack.org
* Validate the network interface before cleaning
* log if 'flat' interface and no cleaning network
* exception from driver_factory.default_interface()
* devstack: Adding a README for ironic-bm-logs directory
* [devstack] Allow using "ipmi" hardware type
* Remove trailing slash from base_url in tempest plugin
* Improve enabled_*_interfaces config help and validation
* Prepare for using standard python tests
* [Devstack] fix waiting resources on subnode
* Log an actual error message when failed to load new style credentials
* Speed up irmc power unit tests
* Add bumping sem-ver to the releasing docs
* Make _send_sensors_data concurrent
* [devstack] remove deprecated IRONIC_IPMIINFO_FILE
* Fail conductor startup if invalid defaults exist
* Add dynamic interfaces fields to base node notification
* Improve conductor driver validation at startup
* Remove iSCSI deploy support for IPA Mitaka
* Do not change admin_state for tenant port
* Use delay configoption for ssh.SSHPower drivers
* Add the timeout parameter to relevant methods in the fake power interface
* Adding clean-steps via json string examples
* Allow duplicate execution of update node DB api method
* Remove deprecated heartbeat policy check
* Add sem-ver flag so pbr generates correct version
* Fix a few docstring warnings
* Remove deprecated [deploy]erase_devices_iterations
* Remove support for driver object periodic tasks
* Log reason for hardware type registration failure
* Duplicated code in ..api.get_active_driver_dict()
* Add hardware type 'irmc' for FUJITSU PRIMERGY servers
* Allow using resource classes
* DevStack: Only install edk2-ovmf on Fedora
* [Devstack] Add stack user to libvirt group
* Add soft reboot, soft power off and power timeout to api-ref
* Add dynamic interfaces fields to nodes API
* Add dynamic driver functionality to REST API
* [Devstack] Download both disk and uec images
* [Devstack] Set DEFAULT_IMAGE_NAME variable
* Update the outdated link in user-guide
* Add Inject NMI to api-ref
* Don't override device_owner for tenant network ports
* Validate port info before assume we may use it
* Switch to decorators.idempotent_id
* Updated from global requirements
* Minor updates to multi-tenancy documentation
* Follow-up iRMC driver doc update
* Devstack: Create a "no ansi" logfile for the baremetal console logs
* Add hardware type for IPMI using ipmitool
* [Devstack] enable only pxe|agent_ipmitool by default
* Update iRMC driver doc for soft reboot and soft power off
* Fix broken link in the iLO driver docs
* DevStack: Fix cleaning up nodes with NVRAM (UEFI)
* iRMC power driver for soft reboot and soft power off
* Update proliantutils version required for Ocata release
* Fix rel note format of the new feature Inject NMI
* iRMC management driver for Inject NMI
* Revert "Revert "Remove ClusteredComputeManager""
* Use context manager for better file handling
* Updated from global requirements
* Fix typo in the metrics.rst file
* Allow to use no nova installation
* Fix api-ref warnings
* Turn NOTE into docstring
* Updated from global requirements
* Correctly cache "abortable" flag for manual clean steps
* Use global vars for storing image deploy path's
* Ipmitool management driver for Inject NMI
* Generic management I/F for Inject NMI
* Clean up driver_factory.enabled_supported_interfaces
* Add hardware types to the hash ring
* Default ironic to not use nested KVM
* Do not use user token in neutron client
* Use only Glance V2 by default (with a compatibility option)
* Enable manual-management hardware type in devstack
* Register/unregister hardware interfaces for conductors
* Validate the generated swift temp url
* Move to tooz hash ring implementation
* Add VIFs attach/detach to api-ref
* DevStack: Configure nodes/environment to boot in UEFI mode
* Add tests for Payloads with SCHEMAs
* make sure OVS_PHYSICAL_BRIDGE is up before bring up vlan interface
* Update troubleshooting docs on no valid host found error
* Expose default interface calculation from driver_factory
* Add default column to ConductorHardwareInterfaces
* Do not fail in Inspector.__init__ if [inspector]enabled is False
* Use TENANT_VIF_KEY constant everywhere
* Updated from global requirements
* Allow to attach/detach VIF to portgroup
* Refactor DRAC driver boot-device tests
* Updated from global requirements
* Remove check for UEFI + Whole disk images
* Updated from global requirements
* Update validate_ports from BaremetalBasicOps
* Ipmitool power driver for soft reboot and soft power off
* Allow to set min,max API microversion in tempest
* Skip VIF api tests for old api versions
* Fix assertEqual parmeters position in unittests
* Ensures that OneView nodes are free for use by Ironic
* Move default image logic from DevStack to Ironic
* Document HCTL for root device hints
* Removes unnecessary utf-8 encoding
* Move heartbeat processing to separate mixin class
* Add Virtual Network Interface REST APIs
* Fix logging if power interface does not support timeout
* Add lsblk to ironic-lib filters
* Fix setting persistent boot device does not work
* Updated from global requirements
* Add docs about creating release note when metrics change
* Fix take over of ACTIVE nodes in AgentDeploy
* Fix take over for ACTIVE nodes in PXEBoot
* Don't translate exceptions w/ no message
* Correct logging of loaded drivers/hardware types/interfaces
* Move baremetal tempest config setting from devstack
* Change object parameter of swift functions
* Remove greenlet useless requirement
* Fixes grammar in the hash_partition_exponent description
* Revert "Disable placement-api by default"
* Remove service argument from tempest plugin client manager
* Fix the comma's wrong locations
* Remove netaddr useless requirement
* Generic power interface for soft reboot and soft power off
* Create a table to track loaded interfaces
* Remove trailing backtick
* Updated from global requirements
* Remove 'fork' option from socat command
* Add Virtual Network Interface RPC APIs
* Catch unknown exceptions in validate driver ifaces
* Disable placement-api by default
* Update regenerate-samples.sh api-ref script
* Updated from global requirements
* Add Virtual Network Interface Driver APIs
* 'updated_at' field value after node is updated
* Add node console notifications
* Add node maintenance notifications
* Add ironic resources CRUD notifications
* Auto-set nullable notification payload fields when needed
* Update dev-quickstart: interval value cannot be -1
* Fix wrong exception message when deploy failed
* Add storage_interface to base driver class
* Update multi-tenancy documentation
* Add storage_interface to node DB table
* Add API reference for portgroup's mode and properties
* Set access_policy for messaging's dispatcher
* Add a NodePayload test
* Add test to ensure policy is always authorized
* Fix bashate warning in devstack plugin
* Forbid removing portgroup mode
* Configure tempest for multitenancy/flat network
* Wrap iscsi portal in []'s if IPv6
* Fix policy dict checkers
* Updated from global requirements
* Introduce generic hardware types
* Remove grenade config workaround
* Add portgroup configuration fields
* Onetime boot when set_boot_device isn't persistent
* Revert "Change liberty's reno page to use the tag"
* Update multitenancy docs
* Use oslo_serialization.base64 to follow OpenStack Python3
* Updated from global requirements
* Support defining and loading hardware types
* Change liberty's reno page to use the tag
* DevStack: Make $IRONIC_IMAGE_NAME less dependent of the name in DevStack
* Fix error when system uses /usr/bin/qemu-kvm, as in CentOS 7.2
* Adds another validation step when using dynamic allocation
* Fix return values in OneView deploy interface
* Clarify the comment about the object hashes
* Reusing oneview_client when possible
* Enhance wait_for_bm_node_status waiter
* Use polling in set_console_mode tempest test
* Make CONF.debug also reflect on IPA
* Fail ironic startup if no protocol prefix in ironic api address
* Remove agent vendor passthru completely
* Remove iBoot, WoL and AMT drivers
* Remove agent vendor passthru from OneView drivers
* Move CONF.service_available.ironic to our plugin
* devstack: add vnc listen address
* Autospec ironic-lib mocks, fix test error string
* Remove deprecation of snmp drivers
* Allow setting dhcp_provider in devstack
* Fix default value of "ignore_req_list" config option
* Add unit test for create_node RPC call
* Documentation for Security Groups for baremetal servers
* Remove agent vendor passthru from iLO drvers
* Updated from global requirements
* Add release names & numbers to API version history
* Remove the VALID_ROOT_DEVICE_HINTS list
* Make "enabled_drivers" config option more resilient to failures
* Fix double dots at the end of a message to single dot
* Clean up object code
* Use IronicObject._from_db_object_list method
* Update help for 'provisioning_network' option
* Updated from global requirements
* Add virtualpdu to ironic devstack plugin
* Auto enable the deploy driver
* Add volume_connectors and volume_targets to task
* Renaming audit map conf sample file
* Support names for {cleaning,provisioning}_network
* Allow use *_ipmitool with vbmc on multinode
* Add RPCs to support volume target operations
* Fix import method to follow community guideline
* Add VolumeTarget object
* Unneeded testing in DB migration of volume connector
* Add volume_targets table to database
* Cleanup adding Ironic to cluster on upgrade case
* Move interface validation from API to conductor side
* Update the links in iLO documentation
* Turn off tempest's multitenant network tests
* Make all IronicExceptions RPC-serializable
* Do not source old/localrc twise in grenade
* Fix docs error about OOB RAID support
* Remove agent vendor passthru from most drivers
* Follow-up for volume connector db_id
* Remove file prefix parameter from lockutils methods
* Install syslinux package only for Wheezy / Trusty
* Show team and repo badges on README
* Drac: Deprecate drac_host property
* Update keystone_authtoken configuration sample in the install guide
* Add RPCs to support volume connector operation
* Add VolumeConnector object
* Add volume_connectors table to save connector information
* Minor changes to neutron security groups code
* Drop bad skip check in tempest plugin
* Correct DB Interface migration test
* Updated from global requirements
* Add support for Security Groups for baremetal servers
* mask private keys for the ssh power driver
* Remove deprecated Neutron DHCP provider methods
* Add notification documentation to install guide
* Fix the message in the set_raid_config method
* Convert iPXE boot script to Jinja template
* Fix PXE setup for fresh Ubuntu Xenial
* Add node (database and objects) fields for all interfaces
* Move `deploy_forces_oob_reboot` to deploy drivers
* Add route to Neutron private network
* Rely on portgroup standalone_ports_supported
* Add node provision state change notification
* Update the alembic migration section in the developer FAQ
* Add notification documentation to administrator's guide
* Revert "Remove ClusteredComputeManager"
* Remove ClusteredComputeManager
* Followup to 0335e81a8787
* Update iptables rules and services IPs for multinode
* Add devstack setup_vxlan_network()
* Skip some steps for multinode case
* Timing metrics: iRMC drivers
* Use function is_valid_mac from oslo.utils
* Docs: Document using operators with root device hints
* Add portgroup to api-ref
* Updated from global requirements
* Add user and project domains to ironic context
* Bring configurations from tempest to ironic_tempest_plugin
* Do not pass ipa-driver-name as kernel parameter
* Timing metrics: OneView drivers
* Add unit test for microversion validator
* Update ironic node names for multinode case
* Update devstack provision net config for multihost
* Add CI documentation outline
* Add possibility to remove chassis_uuid from a node
* Create dummy interfaces for use with hardware types
* [install-guide] describe service clients auth
* Simplify base interfaces in ironic.drivers.base
* Integrate portgroups with ports to support LAG
* Updated from global requirements
* Increase verbosity of devstack/lib/ironic
* Update to hacking 0.12.0 and use new checks
* Add PS4 for better logfile information of devstack runs
* Update guide section for messaging setup
* Updated from global requirements
* Replaces uuid.uuid4 with uuidutils.generate_uuid()
* Enable PXE for systems using petitboot
* Fix typo of 'authenticaiton'
* Add a unit test for microversion validation V1.22
* Clean up unit test of API root test
* DevStack: Fix standard PXE on Ubuntu Xenial
* Skip db configuration on subnodes
* Ignore required_services for multinode topology
* Add PortGroups API
* DevStack: Support for creating UEFI VMs
* Updated from global requirements
* Clarify ironic governance requirements and process
* API: lookup() ignore malformed MAC addresses
* TrivialFix: Fix typo in config file
* DRAC get_bios_config() passthru causes exception
* Fix exception handling in iscsi_deploy.continue_deploy
* Log currently known iSCSI devices when we retry waiting for iSCSI target
* Use kvm for ironic VMs when possible
* Correct log the node UUID on failure
* Updated from global requirements
* Change 'writeable' to 'writable'
* Add the way to get the deploy ram disks
* Remove use of 'vconfig' command in devstack ironic script
* Imported Translations from Zanata
* Updated from global requirements
* Revert "Set SUBNETPOOL_PREFIX_V4 to FIXED_RANGE"
* Fix typo in release note filename
* Use function import_versioned_module from oslo.utils
* Updated from global requirements
* Remove "dhcp" command from the iPXE template
* Fixes a small documentation typo in snmp
* IPMI command should depend on console type
* Trivial fix of notifications doc
* Mock ironic-lib properly in test_deploy_utils
* Remove ..agent.build_instance_info_for_deploy() in Pike
* Trivial: fix typo in docstring
* Add a missing error check in ipmitool driver's reboot
* Adding Timing metrics for DRAC drivers
* Remove 'agent_last_heartbeat' from node.driver_internal_info
* Add power state change notifications
* Skip create_ovs_taps() for multitenancy case
* Remove unnecessary '.' before ':' in ironic rst
* Updated from global requirements
* Imported Translations from Zanata
* Replace parse_root_device_hints with the ironic-lib version one
* Fixes parameters validation in SSH power manager
* Fix API docs to include API version history
* fix a typo in document
* Updated from global requirements
* Update guide for PXE multi-architecture setup
* Remove "agent_last_heartbeat" internal field from agent drivers
* No need to clear "target_provision_state" again from conductor
* Trivial: fix warning message formatting
* Updated from global requirements
* Fix some typos
* Add docs about releasing ironic projects
* Fix unit tests failing with ironic-lib 2.1.1
* Do not hide unexpected exceptions in inspection code
* Avoid name errors in oneview periodics
* A few fixes in Multitenancy document
* Introduce default_boot_option configuration option
* Fix broken xenial job
* Fix setting custom IRONIC_VM_NETWORK_BRIDGE
* Update configure_tenant_networks
* Remove wrong check from conductor periodic task
* Remove reservation from sync power states db filter
* Fix a typo in deploy.py
* Updated from global requirements
* Fix some PEP8 issues and Openstack Licensing
* Clarify when oneview node can be managed by ironic
* Add tense guide to release note FAQ
* Refactor _test_build_pxe_config_options tests
* Imported Translations from Zanata
* OneView driver docs explaining hardware inspection
* Enable release notes translation
* Clean up provision ports when reattempting deploy
* Remove unnecessary option from plugin settings
* Cleanup unused (i)PXE kernel parameters
* Set SUBNETPOOL_PREFIX_V4 to FIXED_RANGE
* Enable DeprecationWarning in test environments
* Fix _lookup() method for node API routing
* Log node state transitions at INFO level
* Update ironic config docs for keystone v3
* Clean exceptions handling in conductor manager
* Move build_instance_info_for_deploy to deploy_utils
* Fix undisplayed notes in Quick-Start
* Keep numbering of list in Install Guide
* Add description for vendor passthru methods
* [install-guide] describe pxe.ipxe_swift_tempurl
* Fix docstrings in tempest plugin baremetal json client
* Add entry_point for oslo policy scripts
* Remove unneeded exception handling from conductor
* Remove unused methods in common/utils.py
* Do not use mutable object as func default param
* Trivial: Fix some typos in comments and docstring
* doc: Add oslo.i18n usage link
* Replace assertTrue(isinstance()) with assertIsInstance()
* Fix typo: remove redundant 'the'
* Support multi arch deployment
* Updated from global requirements
* Use method delete_if_exists from oslo.utils
* Use assertRaises() instead of fail()
* Cleanup get_ilo_license()
* Fix grenade jobs
* Add a missing whitespace to an error message
* Invalid URL and Typo in enrollment.rst
* Update configuration reference link to latest draft
* Update external links to developer documentation
* Fail test if excepted error was not raised
* Add inspection feature for the OneView drivers
* Use correct option value for standalone install
* Move flavor create under 'VIRT_DRIVER == ironic'
* Change links to point to new install guide
* Fix inexact config option name in multitenancy.rst
* Fix typos in docstring/comments
* Have bashate run for entire project
* Change 'decom' to clean/cleaning
* Fix docstring typo in test_common.py
* Fix invalid git url in devstack/local.conf sample
* Fix absolute links to install-guide.rst in developer docs
* Update developer's guide "Installation Guide" link
* Add link to new guide in old install guide
* Fixing Typo
* [install-guide] Import "Setup the drivers for the Bare Metal service"
* [install-guide] Import "Trusted boot with partition image"
* [install-guide] Import "Building or downloading a deploy ramdisk image"
* [install-guide] Import "Appending kernel parameters to boot instances"
* [install-guide] Import configdrive
* [install-guide] Import HTTPS, standalone and root device hints
* [install-guide] Import "Enrollment" and "Troubleshooting" sections
* [install-guide] Import "Local boot with partition images"
* [install-guide] Import "Flavor creation"
* [install-guide] Import "Image requirements"
* [install-guide] Import "integration with other OpenStack components"
* [install-guide] Import Install and configure sections
* [install-guide] Import "Bare Metal service overview"
* Remove unused method is_valid_ipv6_cidr
* Support https in devstack plugin
* Use six.StringIO instead of six.moves.StringIO
* Remove unneeded try..except in heartbeat
* Fix a typo in helper.py
* Add more details to MIGRATIONS_TIMEOUT note
* Fixes wrong steps to perform migration of nodes
* Increase timeout for migration-related tests
* Update reno index for Newton
* Add i18n _() to string
* Change the logic of selecting image for tests
* Always return chassis UUID in node's API representation
* Updated from global requirements
* Fix iLO drivers to not clear local_gb if its not detected

6.2.0
-----

* Clean up release notes for 6.2.0
* Fix DRAC passthru 'list_unfinished_jobs' desc
* DevStack: Use Jinja2 for templating when creating new VMs
* DRAC: list unfinished jobs
* Fix broken unit tests for get_ilo_object
* Sync ironic-lib.filters from ironic-lib
* Documentation change for feature updates in iLO drivers
* Remove websockify from requirements
* Add a note about security groups in install guide
* Remove unnecessary setUp
* Adds a missing space in a help string
* Remove duplicated line wrt configdrive
* Notification event types have status 'error'
* Refactor common checks when instantiating the ipmitool classes
* Grub2 by default for PXE + UEFI
* Support configdrive in iscsi deploy for whole disk images
* Remove NotificationEventTypeError as not needed
* Mark untested drivers as unsupported
* [trivial] Fix typo in docstring
* Replace "phase" with "status" in notification base
* Updated from global requirements
* Fix test syntax error in devstack/lib/ironic
* Separate WSGIService from RPCService
* Fix link from doc index to user guide
* Update proliantutils version required for Newton release
* Remove unused argument in Tempest Plugin
* Fix docstrings in Tempest Plugin REST client for Ironic API
* Fix docstrings to match with method arguments
* Remove cyclic import between rpcapi and objects.base
* Fix nits on DRAC OOB inspection patch
* Fix DRAC failure during automated cleaning
* Replace six iteration methods with standard ones
* Timing metrics: iLO drivers
* Use assertEqual() instead of assertDictEqual()
* Configure clean network to provision network
* Updated from global requirements
* __ne__() unit tests & have special methods use (self, other)
* Add metrics to administrator guide
* Add __ne__() function for API Version object
* Update unit tests for neutron interface
* Update ironic/ironic.conf.sample
* Allow using TempURLs for deploy images
* Log a warning for unsupported drivers and interfaces
* Add a basic install guide
* [api-ref] Remove temporary block in conf.py
* Deny chassis with too long description
* Update the string format
* [api-ref] Correcting type of r_addresses parameter
* Remove unused file: safe_utils.py
* DRAC OOB inspection
* Remove neutron client workarounds
* Update driver requirement for iRMC
* Refresh fsm in task when a shared lock is upgraded
* Updated from global requirements
* Fix exception handling in NodesController._lookup
* Remove unused LOG and CONF
* Fix updating port.portgroup_uuid for node
* Add a newline at the end of release note files
* Replace DOS line endings with Unix
* Fix ironic-multitenant-network job
* Update test_update_portgroup_address_no_vif_id test
* Use assertIsInstance/assertNotIsInstance in tests
* Add standalone_ports_supported to portgroup - DB
* Config logABug feature for Ironic api-ref
* DevStack: Configure retrieving logs from the deploy ramdisk
* DRAC RAID configuration
* Metrics for ConductorManager
* Option to enroll nodes with drac driver
* Allow suppressing ramdisk logs collection
* Fix pep8 on Python3.5
* Fix incorrect order of params of assertEqual()
* Updated from global requirements
* Fix for check if dynamic allocation model is enabled
* Add multi-tenancy section to security doc
* Fix formatting strings in LOG.error
* Mask instance secrets in API responses
* Update documentation for keystone policy support
* Fix typo in policy.json.sample
* Add node serial console documentation
* Prevent URL collisions with sub-controllers: nodes/ports
* Centralize Config Options - patch merge, cleanup
* Update the webapi version history reference
* Fix fall back to newer keystonemiddleware options
* OneView test nodes to use dynamic allocation
* Updated from global requirements
* Fix issues in dev-quickstart and index
* Updated from global requirements
* Add notification base classes and docs
* Update hacking test-requirement
* Documentation update
* Removed unneeded vlan settings from neutron config
* iLO drivers documentation update
* Move console documentation to separate file
* Switch Inspector interface to pass keystoneauth sessions
* Adds instructions to perform nodes migration
* Replace DB API call to object's method in iLO drivers
* Move "server_profile_template_uri" to REQUIRED_ON_PROPERTIES
* Using assertIsNone() is preferred over assertEqual()
* Updated from global requirements
* Update api-ref for v1.22
* Updated from global requirements
* Pass swiftclient header values as strings
* Get ready for os-api-ref sphinx theme change
* Log node uuid rather than id when acquiring node lock
* Allow changing lock purpose on lock upgrade
* Fix typo: interations -> iterations
* Update code to use Pike as the code name
* Operator documentation for multitenancy
* Always set DEFAULT/host in devstack
* Fix AgentDeploy take_over() docstring
* Clean imports in code
* Copy iPXE script over only when needed
* Fix incorrect order of params of assertEqual()
* Fix iLO drivers inconsistent boot mode default value
* Update readme file
* Bring upgrade documentation up to date
* Fix test_find_node_by_macs test
* Use memory mode for sqlite in db test
* Fix key word argument interface_type -> interface
* Use upper-constraints for all tox targets
* Add nova scheduler_host_subset_size option to docs
* Fix the description of inspection time fields
* DevStack: No need to change the ramdisk filesystem type
* Fix incorrect order of params of assertEqual() in test_objects.py
* Fix assertEqual(10, 10) in unit/api/v1/test_utils.py
* Adding InfiniBand Support
* Doc: Recommend users to update their systems
* Centralize config options - [iscsi]
* Centralize config options - [pxe]
* Add "erase_devices_metadata_priority" config option
* Updated from global requirements
* Update renos for fix to ipmi's set-boot-device
* Remove unused [pxe]disk_devices option
* IPMINative: Check the boot mode when setting the boot device
* IPMITool: Check the boot mode when setting the boot device
* Fix ssh credential validation message
* Remove CONF.import_opt() from api/controllers/v1/node.py
* Document retrieving logs from the deploy ramdisk
* Fix updating port MAC address for active nodes
* Remove incorrect CONF.import_opt() from test_ipmitool.py

6.1.0
-----

* Rename some variables in test_ipminative.py
* Update proliantutils version required for Newton release
* Refactor OneView dynamic allocation release notes
* Clean up release notes for 6.1.0
* Refactor multitenant networking release notes
* DevStack guide: Bump IRONIC_VM_SPECS_RAM to 1280
* Deprecate ClusteredComputeManager
* 'As of' in documentation is incorrect
* Updated Dev quickstart for viewing doc changes
* Remove duplicate parameters from local.conf example
* Check keyword arguments
* Deprecate putting periodic tasks on a driver object
* Updated from global requirements
* Add metrics for the ipminative driver
* test_console_utils: using mock_open for builtin open()
* Update devstack configure_ironic_ssh_keypair
* Trivial: Remove useless function call in glance service test
* Simplify code by using mask_dict_password (again)
* Officially deprecate agent passthru classes and API
* Timing metrics: pxe boot and iscsi deploy driver
* Fix the mistakes in Installation Guide doc
* Use devstack test-config phase
* Rename BaseApiTest.config to app_config
* Documentation fixes for iLO SSL Certificate feature
* Metrics for agent client
* Simplify code by using mask_dict_password
* OneView driver docs explaining Dynamic Allocation
* Docs: Run py34 tox test before py27
* Collect deployment logs from IPA
* Fix typo
* Remove oslo-incubator references
* Promote agent vendor passthru to core API
* Update add nova user to baremetal_admin behaviour
* Fix typo in Install-guide.rst file
* Replacing generic OneViewError w/ InvalidNodeParameter
* Add Dynamic Allocation feature for the OneView drivers
* Fix __all__ module attributes
* Fix tempest realted exceptions during docs build
* Add keystone policy support to Ironic
* Follow up to keystoneauth patch
* Add a data migration to fill node.network_interface
* Test that network_interface is explicitly set on POST/PATCH
* Updated from global requirements
* Create a custom StringField that can process functions
* Revert "Devstack should use a prebuilt ramdisk by default"
* Fix for "db type could not be determined" error message
* Update devstack plugin with new auth options
* Migrate to using keystoneauth Sessions
* Updating dev quickstart to include compatiblity for newest distros
* Update nova scheduler_host_manager config docs
* Extend the "configuring ironic-api behind mod_wsgi" guide
* Add metrics for the ipmitool driver
* Timing metrics for agent deploy classes
* Pass agent metrics config via conductor
* Minor docstring and unittests fixes for IPMIConsole
* Move default network_interface logic in node object
* Updated from global requirements
* Devstack should use a prebuilt ramdisk by default
* Updated tests for db migration scripts
* Centralize config options - [agent]
* Log full config only once in conductor
* Add node.resource_class field
* Add api-ref for new port fields
* Add support for the audit middleware
* Change comment regarding network_interface
* Fix rendering for version 1.14
* Use 'UUID', not 'uuid' in exception strings
* IPMITool: add IPMISocatConsole and IPMIConsole class
* Use assertEqual() instead of assertDictEqual()
* Remove unused code when failing to start console
* Trivial: Fix a trivial flake8 error
* Centralize config options - [deploy]
* Centralize config options - [api]
* Added note to local.conf addressing firewall/proxy blocking Git protocol
* Bug fixes and doc updates for adoption
* Do the VM setup only when requested
* Remove unused import
* Remove duplicate copyright
* Add build-essential to required packages for development
* Implement new heartbeat for AgentDeploy
* Add Python 3.5 tox venv
* Updated from global requirements
* Doc update for in-band cleaning support on more drivers
* Updated from global requirements
* Support to validate iLO SSL certificate in iLO drivers
* Update {configure|cleanup}ironic_provision_network
* Add test to verify ironic multitenancy
* Add multitenancy devstack configuration examples
* Following the hacking rule for string interpolation at logging
* Centralize config options - [DEFAULT]
* Add py35 to tox environments
* Metric chassis, driver, node, and port API calls
* Fix fake.FakeBoot.prepare_ramdisk() signature
* Follow-up to 317392
* Follow-up patch of 0fcf2e8b51e7dbbcde6d4480b8a7b9c807651546
* Updated from global requirements
* Expose node's network_interface field in API
* Update devstack section of quickstart to use agent_ipmitool
* Grammar fix in code contribution guide
* Deprecate [ilo]/clean_priority_erase_devices config
* Add configure_provision_network function
* Update Ironic VM network connection
* Centralize config options - [neutron]
* Follow-up fixes to 206244
* Nova-compatible serial console: socat console_utils
* Updated from global requirements
* Add multitenancy-related fields to port API object
* Update the deploy drivers with network flipping logic
* Add 'neutron' network interface
* Fix docstring warnings
* Add and document the "rotational" root device hint
* Add network interface to base driver class
* Increase devstack BM VM RAM for coreos to boot
* Config variable to configure [glance] section
* Add support for building ISO for deploy ramdisk
* Add a doc about appending kernel parameters to boot instances
* Trivial grammar fixes to the upgrade guide
* Remove unused expected_filter in the unit test
* Updated from global requirements
* Remove white space between print and ()
* Remove IBootOperationError exception
* Delete bios_wsman_mock.py from DRAC driver
* Correct reraising of exception
* Allow to enroll nodes with oneview driver
* Add internal_info field to ports and portgroups
* Centralize config options - [glance]
* Document API max_limit configuration option
* Fix two types in ironic.conf.sample
* Remove unused LOG
* Remove iterated form of side effects
* Improve the readability of configuration drive doc part
* Drop IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA from documentation
* Allow to use network interfaces in devstack
* Updated from global requirements
* Centralize config options - [virtualbox]
* Centralize config options - [swift]
* Centralize config options - [ssh]
* Centralize config options - [snmp]
* Add Ironic specs process to the code contribution guide
* Add network_interface node field to DB and object
* Fix typo in inspection.rst
* Add missing translation marker to clear_node_target_power_state
* Throwing an exception when creating a node with tags
* Follow-up patch of 9a1aeb76da2ed53e042a94ead8640af9374a10bf
* Fix releasenotes formatting error
* Improve tests for driver's parse_driver_info()
* Centralize config options - [seamicro]
* Centralize config options - [oneview]
* Centralize config options - [keystone]
* Centralize config options - [irmc]
* Centralize config options - [ipmi]
* Centralize config options - [inspector]
* Centralize config options - [ilo]
* Introduce new driver call and RPC for heartbeat
* Remove unnecessary calls to dict.keys()
* Fail early if ramdisk type is dib, and not building
* Add dbapi and objects functions to get a node by associated MAC addresses
* Drop references to RPC calls from user-visible errors
* Centralize config options - [iboot]
* Updated from global requirements
* Replace dict.get(key) in api & conductor tests
* Use PRIVATE_NETWORK_NAME for devstack plugin
* Create common neutron module
* Updated from global requirements
* Properly set ephemeral size in agent drivers
* Add validation of 'ilo_deploy_iso' in deploy.validate()
* Restore diskimage-builder install

6.0.0
-----

* Updated from global requirements
* Mask password on agent lookup according to policy
* Clear target_power_state on conductor startup
* Replace assertRaisesRegexp with assertRaisesRegex
* Fix test in test_agent_client.py
* Replace dict.get(key) in drivers unit tests
* Docs: Fix some typos in the documentation
* Removes the use of mutables as default args
* Follow-up to Active Node Creation
* Fix parameter create-node.sh
* Replace dict.get(key) in drivers/modules/*/ tests
* Change port used for Ironic static http to 3928
* Centralize config options - [dhcp]
* Centralize config options - [database]
Centralize config options - [conductor] * Centralize config options - [cisco\_ucs] * Centralize config options - [cimc] * Centralize config options - [console] * No need for 'default=None' in config variable * Fix typo in agent driver * Use assertIn and assertNotIn * Document testing an in-review patch with devstack * Replace vif\_portgroup\_id with vif\_port\_id * Use assert\_called\_once\_with in test\_cleanup\_cleanwait\_timeout * Trivial comments fix * Add Link-Local-Connection info to ironic port * Remove workaround for nova removing instance\_uuid during cleaning * Document support for APC AP7921 * Updated from global requirements * Add cleanwait timeout cleanup process * Add restrictions for changing portgroup-node association * Imported Translations from Zanata * Support for APC AP7922 * fix sed strings in developer doc * Replace dict.get(key) with dict[key] in unit tests * Fix JSON error in documentation * Remove support for the old ramdisk (DIB deploy-ironic element) * Updated from global requirements * Document packing and unpacking the deploy ramdisk * Fix nits related to Ports api-ref * Gracefully degrade start\_iscsi\_target for Mitaka ramdisk * Update the api-ref documentation for Drivers * Update comment from NOTE to TODO * Active Node Creation via adopt state * Update resources subnet CIDR * remove neutron stuff from devstack deb packages * Keep original error message when cleaning tear down fails * Add config option for ATA erase fallback in agent * Fix markup in documentation * Imported Translations from Zanata * Updated from global requirements * Add debug environment to tox * Correct RAID documentation JSON * Added ironic-ui horizon dashboard plugin to ironic docs * Updated from global requirements * Disable disk\_config compute-feature-enabled in tempest * Make sure create\_ovs\_taps creates unique taps * NOTIFICATION\_TRANSPORT should be global * Remove links to github for OpenStack things * Update the api-ref documentation for Ports * Add 
one use case for configdrive * Updated from global requirements * Remove hard-coded keystone version from setup * Use a single uuid parameter in api-ref * Use correct iscsi portal port in continue\_deploy * Fix raises to raise an instance of a class * Fix formatting of a release note * Remove support for 'hexraw' iPXE type * Use messaging notifications transport instead of default * Updated from global requirements * tempest: start using get\_configured\_admin\_credentials * Fix signature for request method * Remove backward compatibility code for agent url * Add 'How to get a decision on something' to FAQ * Follow-up patch of 8e5e69869df476788b3ccf7e5ba6c2210a98fc8a * Introduce provision states: AVAILABLE, ENROLL * minor changes to security documentation * Add support for API microversions in Tempest tests * Make use of oslo-config-generator * Mention RFEs in README * Make the ssh driver work on headless VirtualBox machines * Allow to specify node arch * Remove unused is\_valid\_cidr method * Updated from global requirements * Restart n-cpu after Ironic install * Move all cleanups to cleanup\_ironic * Keep backward compatibility for openstack port create * Revert "Run smoke tests after upgrade" * Add some docs about firmware security * Change HTTP\_SERVER's default value to TFTPSERVER\_IP * Update the api-ref documentation for Root and Nodes * Read the Sphinx html\_last\_updated\_fmt option correctly in py3 * devstack: Configure console device name * Updated from global requirements * Replace project clients calls with openstack client * Stop unit-testing processutils internals * Fix start order for Ironic during upgrade * Run smoke tests after upgrade * Add ironic to enabled\_services * Remove link to Liberty configs * Updated from global requirements * Fix shutdown.sh & upgrade.sh for grenade * add mitaka configuration reference link to the index page * Remove "periodic\_interval" config option * Remove verbose option * Updated from global requirements * 
Eliminate warnings about rm in api-ref build * Remove deprecated driver\_periodic\_task * Remove backward compat for Liberty cleaning * Remove [conductor]/clean\_nodes config option * Remove "message" attribute support from IronicException * Setup for using the Grenade 'early\_create' phase * Add support for dib based agent ramdisk in lib/ironic * Remove deprecated [pxe]/http\_\* options * Remove [agent]/manage\_tftp option * Remove "discoverd" configuration group * Regenerate sample config * Doc: Replace nova image-list * Migrate to os-api-ref library * Add require\_exclusive\_lock decorators to conductor methods * Fix syntax error in devstack create-node script * Updated from global requirements * Fix formatting error in releasenotes * Allow vendor drivers to acquire shared locks * Modify doc for RAID clean steps in manual cleaning * Make iPXE + TinyIPA the defaults for devstack * Only install DIB if going to use DIB * Add some docs/comments to devstack/plugin.sh * devstack: Fetch tarball images via https * DevStack: Support to install virtualbmc from source * Regenerate sample configuration * Allow configuring shred's final overwrite with zeros * Updated from global requirements * Deployment vmedia operations to run when cleaning * Extend IRONIC\_RAMDISK\_TYPE to support 'dib' * Cleanup unused conf variables * Adds RAID interface for 'iscsi\_ilo' * Pass environment through to create-node.sh * DevStack: Support to install pyghmi from source * RAID interface to support JBOD volumes * Remove ClusteredComputeManager docs * API: Check for reserved words when naming a node * File download fails with swift pseudo folder * Migrate api-ref into our tree * Updating dev-quickstart.rst file links * Devstack: allow extra PXE params * Updated from global requirements * Update resources only for specific node during deletion * Fix tox cover command * Fix VirtualBox cannot set boot device when powered on * Set root hints for disks less than 4Gb and IPA * Use Ironic node name 
for VM * Allow to sepecify VM disk format * Update compute\_driver in documentation * Replace logging constants with oslo.log * iscsi: wipe the disk before deployment * Joined 'tags' column while getting node * FIX: IPMI bmc\_reset() always executed as "warm" * Fix API node name updates * DevStack: Parametrize automated\_clean * Very important single character typo fix * Remove two DEPRECATED config options from [agent] * Allow to set Neutron port setup delay from config * Update ironic.config.sample * Fix usage of rest\_client expected\_success() in tests * Fixed nits in the new inspection doc page * Imported Translations from Zanata * Updated from global requirements * Document how to run the tempest tests * Update the inspection documentation * ipxe: retry on failure * Add note on prerequisite of 'rpm' file extraction * Follow-up patch of 0607226fc4b4bc3c9e1738dc3f78ed99e5d4f13d * Devstack: Change to use 'ovs-vsctl get port tag' * Restart consoles on conductor startup * Remove backwards compat for CLEANING * Make sure Cisco drivers are documented on IRONIC\_DEPLOY\_DRIVER * Remove two deprecated config option names from [agent] section * Updated from global requirements * Add support for Cisco drivers in Ironic devstack * Updated from global requirements * [docstring] Update ironic/api/controllers/v1/\_\_init\_\_.py comment * add new portal\_port option for iscsi module * Fix tinyipa initrd tarballs.openstack.org file name * Remove description of 'downgrade' for ironic-dbsync * In node\_power\_action() add node.UUID to log message * Rename juno name state modification method * Prepare for transition to oslo-config-generator * Updated from global requirements * Reduce amount of unhelpful debug logging in the API service * Correct api version check conditional for node.name * Updated from global requirements * Enable download of tinyipa prebuilt image * Follow-up to I244c3f31d0ad26194887cfb9b79f96b5111296c6 * Use get\_admin\_context() to create the context object 
* Updated from global requirements * Don't power off non-deploying iLO nodes in takeover * deployment vmedia ops should not be run when not deploying * Fix NamedTemporaryFile() OSError Exception * Updated from global requirements * Fix \_do\_next\_clean\_step\_fail\_in\_tear\_down\_cleaning() * Make tox respect upper-constraints.txt * Adopt Ironic's own context * Allow fetching IPA ramdisk with branch name * Tune interval for node provision state check * Fix typo in devstack script * Note on ilo firmware update swift url scheme * Force iRMC vmedia boot from remotely connected CD/DVD * Normalize MAC OctetString to fix InvalidMAC exception * Enable Grenade usage as a plugin * Readability fixes for cleaning\_reboot code * Support reboot\_requested bool on agent clean\_steps * Update tempest compute flavor\_ref/flavor\_ref\_alt * Move testcases related to parse\_instance\_info() * Improve check for ssh-key to include public and private files * Assign valid values to UUIDFields in unit tests * Fix typos in some source files * Follow up patch of 843ce0a16160f2e2710ef0901028453cd9a0357c * Clean up test node post data * Fix: Duplicated driver causes conductor to fail * Use trueorfalse function instead of specific value * Update reno for stable/mitaka * Doc update to enable HTTPS in Glance and Ironic comm * Fix race in hash ring refresh unit test * Addressing nits on I2984cd9d469622a65201fd9d50f964b144cce625 * Config to stop powering off nodes on failure 5.1.0 ----- * Documentation update for partition image support * Delete bridge "brbm" in devstack/unstack.sh * Remove unneeded use of task.release\_resources() * [Devstack]Add ability to enable shellinabox SSL certificate * Append 'Openstack-Request-Id' header to the response * Add disk\_label and node\_uuid for agent drivers * Fix sphinx docs build * Update authorized\_keys with new key only * Agent: Out-of-band power off on deploy * Document partition image support with agent\_ilo * Add support for partition images in 
agent drivers * Update the text in user guide of ironic * Translate requests exception to IronicException * Extend the Conductor RPC object * Make sure target state is cleared on stable states * Removes redundant "to" * Install apparmor b/c Docker.io has undeclared dep * Don't depend on existing file perm for qemu hook * Move \_normalize\_mac to driver utils * Devstack: add check of chassis creating * Allow user to specify cleaning network * Update ironic\_ssh\_check method * Adds doc - firmware update(iLO) manual clean step * Add ensure\_thread\_contain\_context() to task\_manager * [devstack] Do not die if neutron is disabled * Follow-up of firmware update(iLO) as manual cleaning step * Updating driver docs with DL hardwares requirements * Remove unneeded 'wait=False' to be more clean and consistent * Pass region\_name to SwiftAPI * Uses jsonschema library to verify clean steps * Fix important typo in the ipmitool documentation * DevStack: Allow configuring the authentication strategy * Add documentation for RAID 5.0.0 ----- * Add documentation about the disk\_label capability * SSH driver: Remove pipes from virsh's list\_{all, running} * Add documentation for the IPMITool driver * Fix error in cleaning docs * Replace depricated tempest-lib with tempest.lib * Add new 'disk\_label' capability * Fix JSON string in example of starting manual cleaning * Remove 'grub2' option in creating whole-disk-images * Update iRMC driver doc for inspection * Don't use token for glance & check for some unset vars * Use 'baremetal' flavor in devstack * [devstack] Fix IPA source build on Fedora * DevStack: Enable VirtualBMC logs * Support for passing CA certificate in Ironic Glance Communication * Updated from global requirements * Firmware update(iLO) as manual cleaning step * Updated from global requirements * Remove code duplication * Update iLO documentation for clean step 'reset\_ilo' * Refactor the management verbs check to utils * Updated from global requirements * Remove 
duplicate doc in ironic.conf.sample * Prep for 5.0 release * Fix unittests after new releases of libraries * Updating docs with support for DL class servers * Update CIMC driver docs to install ImcSdk from PyPi * Add returns to send\_raw() ipmitool function * Add function for dump SDR to ipmitool driver * Add clean step in iLO drivers to activate iLO license * Update proliantutils version to 2.1.7 for Mitaka release * ipxe: add --timeout parameter to kernel and initrd * Updated iLO driver documentation to recommend ipmitool version * Refactor driver loading to load a driver instance per node * Clean up driver loading in init\_host * add wipefs to ironic-lib.filters * Updated from global requirements * Use assertEqual/Greater/Less/IsNone * Follow up nits of 3429e3824c060071e59a117c19c95659c78e4c8b * API to list nodes using the same driver * [devstack] set ipa-debug=1 for greater debugability * Loose python-oneviewclient version requirement * Set node last\_error in TaskManager * Add possible values for config options * Follow up nits of irmc oob inspection * Enable removing name when updating node * Make some agent functions require exclusive lock * Add db api layer for CRUD operations on node tags * Update proliantutils version required for Mitaka release * Add deprecated\_for\_removal config info in ironic.conf.sample * Update ironic.conf.sample * Tolerate roles in context.RequestContext * Switch to Futurist library for asynchronous execution and periodic tasks * Move \_from\_db\_object() into base class * Add ironic\_tempest\_plugin to the list of packages in setup.cfg * Fix gate broken by sudden remove of SERVICE\_TENANT\_NAME variable * Add manual cleaning to documentation * Import host option in base test module * Fixes automated cleaning failure in iLO drivers * Updated from global requirements * DevStack: Add support for deploying nodes with pxe\_ipmitool * Change the libvirt NIC driver to virtio * DevStack: Support to install diskimage-builder from source * 
[Devstack]Add ability to enable ironic node pty console * Use 'node' directly in update\_port() * Add links to the standalone configdrive documentation * DevStack: Install squashfs-tools * [DevStack] fix restart of nova compute * Use http\_{root, url} config from "deploy" instead of "pxe" * During cleaning, store clean step index * Use oslo\_config.fixture in unit tests * Introduce driver\_internal\_info in code-contribution-guide * Updated from global requirements * Correct instance parameter description * Add node.uuid to InstanceDeploy error message * Set existing ports pxe\_enabled=True when adding pxe\_enabled column * Augmenting the hashing strategy * Add hardware inspection module for iRMC driver * Document possible access problems with custom IRONIC\_VM\_LOG\_DIR path * Add documentation for proxies usage with IPA * Updated from global requirements * Devstack: create endpoint in catalog unconditionally * Comment out test options that already exists on tempest's tree * Replace config 'clean\_nodes' with 'automated\_clean' * Remove 'zapping' from code * Cache agent clean steps on node * API to manually clean nodes * Replace ifconfig with ip * Updated iLO documentation for boot mode capability * Agent vendor handles manual cleaning * Remove downgrade support from migrations * Enable tinyipa for devstack Ironic * Disable clean step 'reset\_ilo' for iLO drivers by default * Add proxy related parameters to agent driver * Update ironic.conf.samle * Fix genconfig "tempdir" inconsistency * Update the home page * Follow-up on dracclient refactor * Log warning if ipmi\_username/ipmi\_password missing * Add portgroups to support LAG interfaces - net * Add portgroups to support LAG interfaces - RPC * Add portgroups to support LAG interfaces - objs * Add portgroups to support LAG interfaces - DB * Fix missing lookup() vendor method error for pxe\_drac * Refresh ssh verification mechanism * Refactor install-guide to configure API/Conductor seperately * Enable Ironic 
Inspector for Cisco Drivers * Fix doc8's "duplicated target names" (D000) error * Remove conditional checking the auth\_strategy values * Extend root device hints to support device name * Fix spawn error hook in "continue\_node\_clean" RPC method * Enable doc8 style checker for \*.rst files * Updated from global requirements * Show transitions initiated by API requests * Remove hard-coded DEPLOYWAIT timeout from Baremetal Scenario * Fix tiny format issue with install\_guide * Add priority to manual clean step example * Use node uuid in some exception log * Fix error message in devstack * Updated from global requirements * [devstack] Restart nova compute before checking hypervisor stats * Imported Translations from Zanata * Fix minor typo * DRAC: cleanup after switch to python-dracclient * API service logs access requests again * Updated from global requirements * Correct port\_id parameter description * Remove duplicate words in API version history * Remove unneeded enable\_service in dev-quickstart.rst * Clarify that size in root device hints and local\_gb are often different * Update ImcSdk requirement to use PyPi * Clean up 'no\_proxy' unit tests * Add more unit tests for NO\_PROXY validation * Add ability to cache swift temporary URLs * DRAC: switch to python-dracclient on vendor-passthru * Migrate Tempest tests into Ironic tree * Use Tempest plugin interface * Fix issues with uefi-ipxe booting * Update links to OpenStack manuals * Fix issue where system hostname can impact genconfig * Add choices option to several options * Add xinetd and its TFTP configuration in Install Guide * Reorganize the developer's main page * Document backwards compat for passthru methods * Drop MANIFEST.in - it's not needed pbr * Clean up unneeded deprecated\_group * Devstack: replace 'http' with SERVICE\_PROTOCOL * Clarify rejected status in RFE contribution docs * Bring UP baremetal bridge * Adjust ipminative.\_reboot to comply with pyghmi contract * Document the process of 
proposing new features * Updated from global requirements * Use assertTrue/False instead of assertEqual(T/F) * devstack 'cleanup-node' script should delete OVS bridges * Change default IRONIC\_VM\_SPECS\_RAM to 1024 * Remove release differences from flavor creation docs * Add documentation for standalone ilo drivers * Devstack: Make sure libvirt's hooks directory exists * Update the ironic.conf.sample file * Follow-up on refactor DRAC management interface * Allow user to set arch for the baremetal flavor and ironic node * tox: make it possible to run pep8 on current patch only * Devstack: Use [deploy] erase\_devices\_priority config option * Remove bashate from envlist * Use ironic-lib's util methods * Refactor objects into a magic registry * Don't return tracebacks in API response in debug mode * Updated from global requirements * Change assertTrue(isinstance()) by optimal assert * Remove \*/openstack/common\* in tox * Remove vim headers in source files * Trival: Remove unused logging import * Use ironic-lib's qemu\_img\_info() & convert\_image() * Update "Developer Quick-Start" guide for Fedora 23+ * Enable ironic devstack plugin in local.conf sample * Correct a tiny issue in install-guide * Install 'shellinabox' package for Ironic * Fix translations in driver base * Run flake8 against the python scripts under tools/ and devstack/tools * Add UEFI support for iPXE * Add console feature to ssh driver * Conductor handles manual cleaning * Add extensions to the scripts at devstack/tools/ironic/scripts * Fix "No closing quotation" error when building with tox * Devstack: Remove QEMU hook at ./unstack * Run bashate as part of the pep8 command * Fix bashate errors in grenade plugin * Fix syntax errors in the shell scripts under devstack/tools * Use the apache-ironic.template from our tree * Fix typo in ironic/conductor/manager.py * genconfig: Debug info for unknown config types * Keep the console logs for all boots * Use imageutils from oslo.utils * Add documentation 
for user inputs as HTTPS URLs * Add bashate tox command * Updated from global requirements * Add documentation for swiftless intermediate images * DRAC: switch to python-dracclient on management interface * DRAC: switch to python-dracclient on power interface * Follow up nits of Exception to str type conversion * Clean up variables in plugin.sh * Replace assertEqual(None, \*) with assertIsNone in tests * Add utility function to validate NO\_PROXY * Add bifrost as an option projects in Service overview * Sequence diagrams for iLo driver documentation * Refactor ilo documentation for duplicate information * Update swift HTTPs information in ilo documentation * Updated from global requirements * Deprecated tox -downloadcache option removed * Remove override-defaults * Use 'service\_type' of 'network'. Not 'neutron' * Update ironic.conf.sample by applying the bug fix #1522841 * Add grenade plugin * Follow up patch to correct code-contribute-guide * Fix iPXE template for whole disk image * Add devstack plugin * Copy devstack code to ironic tree * Add FSM.is\_stable() method * Explicitly depend on WebTest>=2.0 * Always pass keystone credentials to neutronclient * Remove extra space in 'host' config comment * Add oslo\_config.Opt support in Ironic config generator * Refactor disk partitioner code from ironic and use ironic-lib * Simplifies exception message assurance for oneview.common tests * Use node.uuid directly in stop\_console() * Correct NotImplemented to NotImplementedError in rpcapi.py * Adding oneview.common tests for some method not well tested * Add port option support for ipmitool * Numerous debug messages due to iso8601 log level * Handle deprecated opts' group correctly * Updated from global requirements * Clarify what changes need a release note * Remove wsgi reset\_pool\_size\_to\_default test * Add Mitaka release notes page * Update python-scciclient version number * Add release notes from Icehouse to Liberty * Add Code Contribution Guide for Ironic * 
Replace HTTP 'magic numbers' with constants * Documentation points to official release notes 4.3.0 ----- * Fix awake AMT unit test * Fix bug where clean steps do not run * Add reno for AMT wakeup patch * Updating OneView driver requirements and docs * Correct the db connection string in dev-quickstart * Split BaseConductorManager from ConductorManager * Validate arguments to clean\_step() decorator * test: Remove \_BaseTestCase * Wake up AMT interface before send request * Fall back to old boot.ipxe behaviour if inc command is not found * Only mention IPA in the quick start and user guides for DevStack * Improve options help for image caching * Add troubleshooting docs for "no valid host found" * change mysql url in dev-quickstart doc * Extend FAQ with answer of how to create a new release note * Sync ironic.conf sample * Comment spelling error in ironic-images.filters file * Updated from global requirements * Add a developer FAQ * Add tests for RequestContextSerializer * Add a test to enforce object version bump correctly * force releasenotes warnings to be treated as errors * Avoid RequestContextSerializer from oslo.messaging * Follow up patch for the first commit of iRMC new boot I/F * Move iso8601 as a test dependency only * Catch up release notes for Mitaka * Move common code from ironic.conductor.manager to ironic.conductor.utils * Add deprecated config info in ironic.conf.sample * Add switch to enable/disable streaming raw images for IPA * SwiftAPI constructor should read CONF variables at runtime * Take over console session if enabled * Drop some outdated information from our quick start guide * Refactor IRMCVirtualMediaAgentDeploy by applying new BootInterface * Refactor IRMCVirtualMediaIscsiDeploy by applying new BootInterface * Updated from global requirements * Fix: Next cleaning hangs if the previous cleaning was aborted * Add clean up method for the DHCP factory * Add missing packages to dev-quickstart * Support arguments for clean step methods * 
Validate all tcp/udp port numbers * Add manual cleaning to state machine * Specifying target provision states in fsm * Use server\_profile\_template\_uri at scheduling * Check shellinabox started successfully or not * Add SSL support to the Ironic API * Updated from global requirements * Use wsgi from oslo.service for Ironic API * Remove duplicated unit tests in test\_manager * Get mandatory patch attrs from WSME properties * Add and document two new root device hints: wwn\_{with, vendor}\_extension * Sort root device hints when parsing * add "unreleased" release notes page * Follow up patch for 39e40ef12b016a1aeb37a3fe755b9978d3f9934f * Document 'erase\_devices\_iterations' config option * Update iLO documentation * Adds test case for the iscsi\_ilo recreate boot iso * Refactor agent\_ilo driver to use new boot interface * Updated from global requirements * Refactor iLO driver console interface into new module * Add reno for release notes management * Add choices to temp\_url\_endpoint\_type config option * Fix oslo namespace in default log level * Remove \_\_name\_\_ attribute from WSME user types * refine the ironic installation guide * Revert "Add Pillow to test-requirements.txt" * Update etc/ironic/ironic.conf.sample * Make task parameter mandatory in get\_supported\_boot\_devices * Follow up patch for Ib8968418a1835a4131f2f22fb3e4df5ecb9b0dc5 * Check shellinabox process during stopping console * Add whole disk image creation command to Installation Guide * Fix docker.io bug in the Install Guide * Updated from global requirements * Node's last\_error to show the actual error from sync\_power\_state * Updated from global requirements * Rename test\_conductor\_utils.py to test\_utils.py * Follow up patch for 8c3e102fc5736bfcf98525ebab59b6598a69b428 * Add agent\_iboot entrypoint * Validate console port number in a valid range * iboot: add wait loop for pstate to activate * Don't reraise the exception in \_set\_console\_mode * Check seamicro terminal port as long 
as it specified * Add missing unit tests for some PXE drivers * Validate the input of properties of nodes * Add documentation for Ceph Object Gateway support * Refactor iscsi\_ilo driver to use new boot interface * Fix comments on DRAC BIOS vendor\_passthru * cautiously fail on unhandled heartbeat exception * Add "agent\_wol" (AgentAndWakeOnLanDriver) * Added unit tests for CORS middleware * Use oslo\_config new type PortOpt for port options * Fix markup error in deploy/drivers.rst * Update the Configuration Reference to Liberty in doc * Updated from global requirements * Use self.\_\_class\_\_.X instead of self.X * Rename utils.py to mgr\_utils.py to avoid namespace collision * XenAPI: Add support for XenServer VMs * Add PortOpt to config generator * Imported Translations from Zanata * Move hash\_ring refresh logic out of sync\_local\_state * Move ironic.tests.unit.base to ironic.tests.base * Change required version of ImcSdk to 0.7.2 * Add an iboot reboot\_delay setting * iPXE document about the existence of prebuilt images * Fix a typo * Switched order of CORS middleware * DRAC BIOS vendor\_passthru: enable rebooting the node * Replace deprecated LOG.warn with warning * Add db migration and model for tags table * Add OneView driver documentation * Fix snmp property descriptions * Updated from global requirements * Slightly reword README * Remove unused functions from agent driver * mocking syscalls to make the tests run on OS X * Enable cmd/api & cmd/conductor to be launched directly * Add reboot\_delay option to snmp driver * Add self.raid for iSCSI based drivers * Move test\_pxe.py inside unit/drivers/modules directory * Move pxe.\_parse\_instance\_info() to deploy\_utils * Add note about driver API breakage * Fix a missing detail in install guide * Enable radosgw support in ironic * Updated from global requirements * Add agent\_amt docs * Add release notes for 4.2.1 * Convert set() to list in ListType * remove lxml requirement * Update python-oneviewclient 
version * Fix an annoying detail in the developer quick-start * Updated from global requirements * Expose versioning information on GET / endpoint * Fixes logging of failure in deletion of swift temporary object * ucs\_hostname changed to ucs\_address * Updated from global requirements * Remove functions: \_cleanse\_dict & format\_message * Move FakeOneViewDriver to the fake.py module * Add testresources and testscenarios used by oslo.db fixture * Add agent\_amt driver * Imported Translations from Zanata * Stop adding translation function to builtins * Fix tests giving erroneous output during os-testr run * OneView Driver for Ironic * Fix agent\_ilo to remove temporary images * Updated from global requirements * iPXE: Fix assumption that ${mac} is the MAC of the NIC it's booting * Prevent iRMC unit test from potential failure at the gate * Add secret=True to password option * Fix a bug error by passwords only includes numbers * Add support for in-band cleaning in ISCSIDeploy * Fix typo in document * Remove unused import of oslo\_log * Use power manager to reboot in agent deployments * Add retries to ssh.\_get\_hosts\_name\_for\_node * Refactor deploy\_utils methods * Fix irmc driver unit test * PXE: Support Extra DHCP Options for IPv6 * Use standard locale when executing 'parted' command * Updated from global requirements * To run a specific unit test with ostestr use -r * Add .eggs to gitignore * Fix log formatting issue in agent base * Add notes to functions which are in ironic-lib * Allow empty password for ipmitool console * Update help string on tftp\_root option * Updated from global requirements * Fix conductor deregistration on non init conductor * Imported Translations from Zanata * Add Pillow to test-requirements.txt * Add agent inspection support for IPMI and SSH drivers * Python 3.4 unit tests fail with LANG=C * Fix ubuntu install command in install guide * Move unit tests to correct directory * Add 'whitelist\_externals = bash' for two testenvs * 
* Rename 'message' attribute to '_msg_fmt' in IronicException
* Follow up for: Prepare for functional testing patch
* Fix documentation for installing mariaDB
* Update help strings for DRAC configs
* Switch tox unit test command to use ostestr
* Use standard locale when executing 'dd' command
* Imported Translations from Zanata
* Fix typo: add a missing white space
* Prepare for functional testing
* Fix some iBoot strings
* Replace six.iteritems() with .items()
* Make generation of ironic.conf.sample deterministic
* Cached file should not be deleted if time equal to master

4.2.0
-----

* Cleanup of Translations
* Update architecture docs to mention new driver interfaces
* Add 4.2.0 release notes
* Update docs for Fedora 22
* Add i18n _ import to cimc common
* Update proliantutils version required for L release
* Use of 'the Bare Metal service' in guide
* Update install guide to reflect latest code
* Implement indirection_api
* Add 'abort' to state machine diagram
* Unit test environment setup clarification
* Make end-points discoverable via Ironic API
* Updated from global requirements
* Allow unsetting node.target_raid_config
* Allow abort for CLEANWAIT states
* Clean up CIMC driver docs and comments
* Add Cisco IMC PXE Driver
* Fix final comments in RAID commits
* Refactor agent {prepare,tear_down}_cleaning into deploy_utils
* Handle unquoted node names from virt types
* Fix iRMC vmedia deploy failure due to already attached image
* Implement take_over for iscsi_ilo driver
* Fix typo in vendor method dev documentation
* Fix incorrect urls
* Check image size before provisioning for agent driver
* Help patch authors to remember to update version docs
* Add constraint target to tox.ini
* Add IPMINative vendor methods to *IPMINative drivers
* Fix string formatting issues
* Remove DictMatches custom matcher from unit tests
* Imported Translations from Zanata
* Remove unused object function
* Use oslo.versionedobjects remotable decorators
* Base IronicObject on VersionedObject
* Update descriptions in RAID config schema
* Document GET ...raid/logical_disk_properties
* Convert functools.wraps() usage to six.wraps()
* Remove comment about exception decorator
* Replace metaclass registry with explicit opt-in registry from oslo
* Add config option to override url for links
* Fix iBoot test__switch_retries test to not waste time sleeping
* Allow tftpd usage of '--secure' by using symlinks
* Add support for inband raid configuration agent ramdisk
* Agent supports post-clean-step operations
* Update 'Installation Guide' for RHEL7/CentOS7/Fedora
* Fix docs about --is-public parameter for glance image-create
* Fix indentation of the console docs
* Fix heading levels in the install-guide
* Cache the description of RAID properties
* Remove the hard dependency of swift from ilo drivers
* Fix mistakes in comments
* Updated from global requirements
* Fix object field type calling conventions
* Add version info for pyghmi in driver-requirements.txt

4.1.0
-----

* Add 4.1.0 release notes
* Try to standardize retrieval of an Exception's description
* Add description how to restart ironic services in Fedora/RHEL7/CentOS7
* Improve the ability to resolve capability value
* Add supported environment 'VMware' to comments
* Updated from global requirements
* Remove policy 'admin' rule support
* Handle missing is_whole_disk_image in pxe._build_pxe_config_options
* Raise InvalidPrameterValue when ipmi_terminal_port is ''
* Fix doc typo
* Remove executable permission from irmc.py
* Add APIs for RAID configuration
* agent_ilo fails to bring up instance
* Updated from global requirements
* Remove 'is_valid_event' method
* Set boot device in PXE Boot interface method prepare_instance()
* Revert "Do not overwrite the iPXE boot script on every deployment"
* Add vendor interface to ipminative driver
* When boot option is not persisted, set boot on next power on
* Document nodes in enroll state, in install guide
* Added CORS support middleware to Ironic
* Refactor map_color()
* Removes unused posix-ipc requirement
* Add retry options to iBoot power driver
* Trusted boot doc
* Prevent ilo drivers powering off active nodes during take over
* Add release notes for 4.0.0
* Clean up cleaning error handling on heartbeats
* Use vendor mixin in IPMITool drivers
* Use oslo.messaging serializers
* Add RPC APIs for RAID configuration
* Add new method validate_raid_config to RAIDInterface
* Fix docker package name in Ubuntu 14.04 in Install Guide
* Updated from global requirements
* Do not overwrite the iPXE boot script on every deployment
* Reset tempdir config option after NestedTempfile fixture applied
* Remove unused dep discover from test reqs
* Add deprecation warning to periodic tasks with parallel=False
* Use six.text_type in parse_image_ref
* Ensure that pass_deploy_info() always calls boot.prepare_instance()
* Add minimum and maximum on port option
* Update ironic.conf.sample with tox -egenconfig
* Update documentation to install grub2 when creating the user image
* Fix logging and exceptions messages in ipminative driver
* Fix minor spelling/grammar errors
* Put py34 first in the env order of tox
* format links in the readme to work with the release notes tools
* Periodically checks for nodes being cleaned
* Add links for UEFI secure boot support to iLO driver documentation
* Add cleanup in console utils tests
* Follow up the nits in iRMC vmedia driver merged patch
* Refactor agent driver with pxe boot interface
* Update tests to reflect WSME 0.8 fixes
* Remove ObjectListBase
* Remove broken workaround code for old mock
* Create a versions.py file
* Improve comparison operators for api/controllers/base.py
* Switch to post-versioning

4.0.0
-----

* Fix improper exception catching
* Fix nits from 'HTTP constants' patch
* Use JsonEncoded{Dict,List} from oslo_db
* Move tests into correct directories
* Fix logging levels in do_node_deploy
* Fix misspelling from "applicatin" to "application"
* Updated from global requirements
* Remove unneeded module variable '__all__'
* Updated from global requirements
* Change and edit of Ironic Installation Guide
* Remove the --autofree option from boot.ipxe
* Switch from deprecated timeutils.isotime
* Fix "tox -egenconfig" by avoiding the MODULEPATH env variable
* Improve logging for agent driver
* Refactor the essential prop list of inspect driver
* Reset clean_step if error occurs in CLEANWAIT
* Fix bug sending sensor data for drivers w/o management
* Replace HTTP 'magic numbers' with constants
* Address final comments on update image cache based on update time
* 'updated_at' field shows old value after resource is saved
* Increase size of nodes.driver column
* Add better dbapi support for querying reservation
* Allow digits in IPA driver names
* Updated from global requirements
* Add documentation for iRMC virtual media driver
* Add copyright notice to iRMC driver source code
* Remove CONF.agent.agent_pxe_bootfile_name
* Update single letter release names to full names
* Enforce flake8 E711
* Update docstring for agent deploy's take_over
* Update cached images based on update time
* Updated from global requirements
* Add RAIDInterface for RAID configuration
* get_supported_boot_devices() returns static device list
* add ironic client and ironic inspector projects into contribution list
* Updated from global requirements
* Use the oslo_utils.timeutils 'StopWatch' class
* Update the documentation to use IPA as deploy ramdisk
* Inspector inspection fails due to node locked error
* Prevent power actions when the node is in CLENWAIT state
* Imported Translations from Transifex
* Remove unnecessary trailing backslash in Installation Guide
* Refactor some minor issues to improve code readability
* Fix misspelling in comment
* Make app.wsgi more like ironic.cmd.api
* Migrate IronicObjectSerializer to subclass from oslo
* Updated from global requirements
* Fix warnings on doc builds
* Change vagrant.yml to vagrant.yaml
* Developer quickstart documentation fixes
* Document configuring ironic-api behind mod_wsgi
* Updated from global requirements
* Add deprecation messages on the bash ramdisk endpoints
* Document API versioning
* Log configuration values as DEBUG, not INFO
* Update ironic.conf.sample
* Update ironic.conf.sample
* Add information 'node_uuid' in debug logs to facilitate the reader's life
* Clean up instance_uuid as part of the node's tear down
* Fix a trusted boot test bug
* Add more info level log to deploy_utils.work_on_disk() method
* Fix broken agent virtual media drivers
* Updated from global requirements
* Fix apache wsgi import
* Add raises docstring tag into object.Ports methods
* Only take exclusive lock in sync_power_state if node is updated
* Secure boot support for pxe_ilo driver
* UCS: node-get-boot-device is failing for Cisco servers
* grub2 bootloader support for uefi boot mode
* Add Nova scheduler_tracks_instance_changes config to docs
* Use automaton's converters/pydot
* enroll/verify/cleanwait in state machine diagram
* Save and re-raise exception
* Cache Keystone client instance
* Refactor pxe - New PXEBoot and ISCSIDeploy interfaces
* Don't prevent updates if power transition is in progress
* Follow-on to b6ed09e297 to fix docstrings/comments
* Make inspector driver test correctly
* Allow inspector driver to work in standalone mode
* Remove outdated TODO.rst file
* Updated from global requirements
* Introduce support for APC MasterSwitchPlus and Rack PDU
* Allow agent lookup to directly accept node UUID
* Add CLEANWAIT state
* Allow updates in VERIFYING state
* Allow deleting nodes in ENROLL state
* Updated from global requirements
* Fixes a testcase related to trusted boot in UEFI boot mode
* Clarify inspection upgrade guide
* Refactor refresh method in objects for reuse
* Imported Translations from Transifex
* Use utils.mkfs directly in deploy_utils
* Updated from global requirements
* Migrate ObjectListBase to subclass from the Oslo one
* Clean up tftp files if agent deployed disk image
* Don't do a premature reservation check in the provision API
* Move the http_url and http_root to deploy config
* Allow upgrading shared lock to an exclusive one
* Fix the DEPLOYWAIT check for agent_* drivers
* Add a missing comma in Vendor Methods of Developer Guide
* Replacing dict.iteritems() with dict.items()
* Updated from global requirements
* db: use new EngineFacade feature of oslo.db
* Address minor comments on the ENROLL patch
* Remove requirements.txt from tox.ini deps
* Updated from global requirements
* Replace common.fileutils with oslo_utils.fileutils
* Updated from global requirements
* Switch to the oslo_utils.fileutils
* Start using new ENROLL state
* Add .idea to .gitignore
* Periodically checks the status of nodes in DEPLOYING state
* Add IPA support for iscsi_irmc driver
* Updated from global requirements
* Vagrant configuration generation now uses pymysql
* Remove deprecated code for driver vendor passthru
* Add DRAC BIOS config vendor passthru API
* Use DEPLOYWAIT while waiting for agent to write image
* Fix unittests due mock 1.1.0 release
* Migrate RPC objects to oslo.versionedobjects Fields
* Imported Translations from Transifex
* Updated from global requirements
* Mock the file creation for the GetConfigdriveTestCase tests
* Address follow-up comments
* Clear ilo_boot_iso before deploy for glance images
* Enable translation for config option help messages
* Replace is_hostname_safe with a better check
* Initial oslo.versionedobjects conversion
* Add whole disk image support for iscsi_irmc driver
* Add localboot support for iscsi_irmc driver
* Add iRMC Virtual Media Deploy module for iRMC Driver
* add python-scciclient version number requirement
* Remove db connection string env variable from tox.ini
* Make use of tempdir configuration
* Updated from global requirements
* Fix failing unit tests under py34
* Allow vendor methods to serve static files
* Allow updates when node is on ERROR provision state
* Add sequence diagrams for pxe_ipmi driver
* Fix logging for soft power off failures
* Mute ipmi debug log output
* Validate IPMI protocol version for IPMIShellinaboxConsole
* Image service should not be set in ImageCache constructor
* Clean nodes stuck in DEPLOYING state when ir-cond restarts
* Add ability to filter nodes by provision_state via API
* Refactor check_allow_management_verbs
* Add node fields for raid configuration
* Switch to oslo.service
* Fix "boot_mode_support" hyper link in Installation Guide
* Log configuration options on ironic-conductor startup
* Allow deleting even associated and active node in maintenance mode
* Use oslo_log
* Replace self.assertEqual(None,*) to self.assertIsNone()
* Improve warning message in conductor.utils.node_power_action()
* Add a new boot section 'trusted_boot' for PXE
* use versionutils from oslo_utils
* Make task_manager logging more helpful
* Add IPMI 1.5 support for the ipmitool power driver
* Add iBoot driver documentation
* Updated from global requirements
* Add unit test for ilo_deploy _configure_vmedia_boot()
* Do not use "private" attribute in AuthTokenMiddleware
* API: Get a subset of fields from Ports and Chassis
* Save disk layout information when deploying
* Add ENROLL and related states to the state machine
* Refactor method to add or update capability string
* Use LOGDIR instead of SCREEN_LOGDIR in docs
* Always allow removing instance_uuid from node in maintenance mode
* API: Get a subset of fields from Nodes
* Switch from MySQL-python to PyMySQL
* Updated from global requirements
* copy editing of ironic deploy docs
* Transition state machine to use automaton oslo lib
* Finish switch to inspector and inspector-client
* Rename ilo_power._attach_boot_iso to improve readability
* Expose current clean step in the API
* Fix broken ACL tests
* Add option to configure passes in erase_devices
* Refactor node's and driver's vendor passthru to a common place
* Change return value of [driver_]vendor_passthru to dict
* Add Wake-On-Lan driver documentation
* Fixes a bug on the iLO driver tutorial
* Address follow-up comments on ucs drivers
* Added documentation to Vagrantfile
* Updated from global requirements
* Addresses UcsSdk install issue
* Don't raise exception from set_failed_state()
* Add disk layout check on re-provisioning
* Add boot interface in Ironic
* Fix Cisco UCS slow tests
* Validate capability in properties and instance_info
* Pass environment variables of proxy to tox
* DRAC: fix set/get boot device for 11g
* Enable flake8 checking of ironic/nova/*
* Remove tools/flakes.py
* Wake-On-Lan Power interface
* IPA: Do a soft power off at the end of deployment
* Remove unnecessary validation in PXE
* Add additional logging around cleaning
* remove unneeded sqlalchemy-migrate requirement
* Add vendor-passthru to attach and boot an ISO
* Updated from global requirements
* Sync with latest oslo-incubator
* Add pxe_ucs and agent_ucs drivers to manage Cisco UCS servers
* Doc: Use --notest for creating venv
* Updated from global requirements
* Fix DRAC driver job completion detection
* Add additional required RPMs to dev instructions
* Update docs for usage of python-ironicclient
* Install guide reflects changes on master branch
* Remove auth token saving from iLO driver
* Don't support deprecated drivers' vendor_passthru
* Updated from global requirements
* Enforce flake8 E123/6/7/8 in ironic
* Change driver_info to driver_internal_info in conductor
* Use svg as it looks better/scales better than png
* Updated from global requirements
* Use oslo config import methods for Keystone options
* Add documentation for getting a node's console
* fix node-get-console returns url always start with http
* Update the config drive doc to replace deprecated value
* Updated from global requirements
* Remove bogus conditional from node_update
* Prevent node delete based on provision, not power, state
* Revert "Add simplegeneric to py34 requirements"
* Do not save auth token on TFTP server in PXE driver
* Updated from global requirements
* Update iLO documentation for UEFI secure boot
* ironic-discoverd is being renamed to ironic-inspector
* Update doc "install from packages" section to include Red Hat
* Improve strictness of iLO test cases error checking
* Remove deprecated pxe_deploy_{kernel, ramdisk}
* Get admin auth token for Glance client in image_service
* Fix: iSCSI iqn name RFC violation
* Update documentation index.rst
* Update AMT Driver doc
* Refactor ilo.common._prepare_floppy_image()
* Do not add auth token in context for noauth API mode
* DRAC: config options for retry values
* Disable meaningless sort keys in list command
* Update pyremotevbox documentation
* Fix drac implementation of set_boot_device
* Update to hacking 0.10.x
* Prepare for hacking 0.10.x
* Rename gendocs tox environment
* Add simplegeneric to py34 requirements
* Reduce AMT Driver's dependence on new release of Openwsman
* Fixes some docstring warnings
* Slight changes to Vagrant developer configs
* Delete neutron ports when the node cleaning fails
* Update docstring DHCPNotFound -> DHCPLoadError
* Wrap all DHCP provider load errors
* Add partition number to list_partitions() output fields
* Added vagrant VM for developer use
* Execute "parted" from root in list_partitions()
* Remove unused CONF variable in test_ipminative.py
* Ironic doesn't use cacert while talking to Swift
* Fix chainloading iPXE (undionly.kpxe)
* Updated from global requirements
* Improve root partition size check in deploy_partition_image
* ironic/tests/drivers: Add autospec=True and spec_set=
* Fix and enhance "Exercising the Services Locally" docs
* Fix typos in Ironic docs
* Fix spelling error in docstring
* Remove deprecated exceptions
* Check temp dir is usable for ipmitool driver
* Improve strictness of AMT test cases error checking
* Improve strictness of iRMC test cases error checking
* Fix Python 3.4 test failure
* Remove unneeded usage of '# noqa'
* Drop use of 'oslo' namespace package
* Updated from global requirements
* Specify environment variables needed for a standalone usage
* Adds OCS Power and Management interfaces
* Run tests in py34 environment
* Adds docstrings to some functions in ironic/conductor/manager.py
* Add section header to state machines page
* Update config generator to use oslo released libs
* Use oslo_log lib
* Include graphviz in install prerequisites
* Link to config reference in our docs
* Adopt config generator
* Remove cleanfail->cleaning from state diagram
* Imported Translations from Transifex
* Return HTTP 400 for invalid sort_key
* Update the Vendor Passthru documentation
* Add maintenance mode example with reason
* Add logical name example to install-guide
* Improve strictness of DRAC test cases error checking
* Add a venv that can generate/write/update the states diagram
* Log attempts while trying to sync power state
* Disable clean_step if config option is set to 0
* Improve iSCSI deployment logs
* supports alembic migration for db2
* Updated from global requirements
* Update iLO documentation for capabilities

2015.1.0
--------

* ironic/tests/drivers/amt: Add autospec=True to mocks
* ironic/tests/drivers/irmc: Add spec_set & autospec=True
* Updated from global requirements
* ironic/tests/drivers/drac: Add spec_set= or autospec=True
* Create a 3rd party mock specs file
* Release Import of Translations from Transifex
* Document how to configure Neutron with iPXE
* Remove state transition: CLEANFAIL -> CLEANING
* Remove scripts for migrating nova baremetal
* Add a missing comma and correct some typos
* Remove API reboot from cleaning docs
* Remove scripts for migrating nova baremetal
* Fixed is_glance_image(image_href) predicate logic
* Rearrange some code in PXEDeploy.prepare
* Fixes typo in ironic/api/hooks.py and removes unnecessary parenthesis
* update .gitreview for stable/kilo
* Add cleaning network docs
* Remove ironic compute driver and sched manager
* ironic/tests/drivers/ilo: Add spec= & autospec=True to mocks
* Replace 'metrics' with 'meters' in option
* Update some config option's help strings
* document "scheduler_use_baremetal_filters" option in nova.conf
* Fix heartbeat when clean step in progress
* Fix heartbeat when clean step in progress
* Update ilo drivers documentation for inspection
* Open Liberty development

2015.1.0rc1
-----------

* Local boot note about updated deploy ramdisk
* Convert internal RPC continue_node_cleaning to a "cast"
* iLO driver documentation for node cleaning
* Fix typos in vendor-passthru.rst
* Add Ceilometer to Ironic's Conceptual Architecture
* Improve AMT driver doc
* iLO driver documentation for UEFI secure boot
* Fix for automated boot iso issue with IPA ramdisk
* Update session headers during initialization of AgentClient
* Agent driver fails without Ironic-managed TFTP
* Add notes about upgrading juno->kilo to docs
* Address comments on I5cc41932acd75cf5e9e5b626285331f97126932e
* Use mock patch decorator for eventlet.greenthread.sleep
* Cleanup DHCPFactory._dhcp_provider after tests
* Follow-up to "Add retry logic to _exec_ipmitool"
* Nit fixes for boot_mode being overwritten
* Update installation service overview
* Don't pass boot_option: local for whole disk images
* Fixup post-merge comments on cleaning document
* Use hexhyp instead of hexraw iPXE type
* Fix exception handling in Glance image service
* Update proliantutils version required for K release
* Fix type of value in error middleware response header
* Imported Translations from Transifex
* Fix mocks not being stopped as intended
* Add maintenance check before call do_node_deploy
* Fix VM stuck when deploying with pxe_ssh + local boot
* Fix bad quoting in quickstart guide
* Set hash seed to 0 in gendocs environment
* boot_mode is overwritten in node properties
* Add retry logic to _exec_ipmitool
* Check status of bootloader installation for DIB ramdisk
* Add missing mock for test_create_cleaning_ports_fail
* Shorten time for unittest test_download_with_retries
* Disable XML now that we have WSME/Pecan support
* tests/db: Add autospec=True to mocks
* Sync with oslo.incubator
* Enable cleaning by default
* Improve error handling when JSON is not returned by agent
* Fix help string for glance auth_strategy option
* Document ports creating configuration for in-band inspection
* Remove DB tests workarounds
* Fix formatting issue in install guide
* Add missing test for DB migration 2fb93ffd2af1
* Regenerate states diagram after addition of CLEANING
* Fix UnicodeEncodeError issue when the language is not en_US
* pxe deploy fails for whole disk images in UEFI
* Remove setting language to en_US for 'venv'
* Add config drive documentation
* Refactor test code to reduce duplication
* Mock time.sleep() for two unittests
* Clarify message for power action during cleaning
* Add display-name option to example apache2 configuration
* New field 'name' not supported in port REST API
* Update doc for test database migrations
* Add PXE-AMT driver's support of IPA ramdisk
* Fix cleaning nits
* Update docs: No power actions during cleaning
* Prevent power actions on node in cleaning
* Followup to comments on Cleaning Docs
* Remove inspect_ports from ilo inspection
* Removed hardcoded IDs from "chassis" test resources
* Fix is_hostname_safe for RFC compliance
* Enable pxe_amt driver with localboot
* Improve backwards compat on API behaviour
* Use node UUID in logs instead of node ID
* Add IPA to enable drivers doc's page
* Top level unit tests: Use autospec=True for mocks
* DRAC: power on during reboot if powered off
* Update pythonseamicroclient package version
* A wrong variable format used in msg of ilo:
* Add documentation for Cleaning
* Explictly state that reboot is expected to work with powered off nodes
* Prevent updating the node's driver if console is enabled
* Agent driver: no-op heartbeat for maintenanced node
* Deploys post whole disk image deploy fails
* Allow node.instance_uuid to be removed during cleaning
* Attach ilo_boot_iso only if node is active
* Ensure configdrive isn't mounted for ilo drivers
* Ensure configdrive isn't mounted for ipxe/elilo
* Correct update_dhcp_opts methods
* Fix broken unittests usage of sort()
* Add root device hints documentation
* Ensure configdrive isn't mounted in CoreOS ramdisks
* Add local boot with partition images documentation
* Add a return after saving node power state
* Fix formatting error in states_to_dot
* pxe partition image deploy fails in UEFI boot mode
* Updated from global requirements
* Fix common misspellings
* Ilo drivers sets capabilities:boot_mode in node
* Add whole disk image support for iscsi_ilo using agent ramdisk
* Fixed nits for secure boot support for iLO Drivers
* Fix typos in ironic/ironic/drivers/modules
* fix invalid asserts in tests
* Fail deploy if root uuid or disk id isn't available
* Hide new fields via single method
* Update "Ironic as a standalone service" documentation
* DRAC: add retry capability to wsman client operations
* Secure boot support for agent_ilo driver
* Secure boot support for iscsi_ilo driver
* Changes for secure boot support for iLO drivers

2015.1.0b3
----------

* follow up patch for ilo capabilities
* Support agent_ilo driver to perform cleaning
* Implement cleaning/zapping for the agent driver
* Add Cleaning Operations for iLO drivers
* Automate uefi boot iso creation for iscsi_ilo driver
* Generate keystone_authtoken options in sample config file
* Use task.spawn_after to maintain lock during cleaning
* is_whole_disk_image might not exist for previous instances
* Hide inspection_*_at fields if version < 1.6
* Disable cleaning by default
* Suppress urllib3.connection INFO level logging
* Allow periods (".") in hostnames
* iscsi_ilo driver do not validate boot_option
* Sync from oslo.incubator
* Common changes for secure boot support
* Add pxe_irmc to the sending IPMI sensor data driver list
* iLO driver updates node capabilities during inspection
* iLO implementation for hardware inspection
* Address nits in uefi agent iscsi deploy commit
* Raise exception for Agent Deploy driver when using partition images
* Add uefi support for agent iscsi deploy
* Enable agent_ilo for uefi-bios switching
* Fixup log message for discoverd
* Update unittests and use NamedTemporaryFile
* Rename _continue_deploy() to pass_deploy_info()
* Write documentation for hardware inspection
* Start using in-band inspection
* Log message is missing a blank space
* Address comments on cleaning commit
* IPA: Add support for root device hints
* Use Mock.patch decorator to handle patching amt management module
* iscsi_ilo driver to support agent ramdisk
* Enhance AMT driver documentation, pt 2
* Implement execute clean steps
* Add missing exceptions to destroy_node docstrings
* Force LANGUAGE=en_US in test runs
* Add validations for root device hints
* Add localboot support for uefi boot mode
* ironic port deletion fails even if node is locked by same process
* Add whole disk image support in iscsi_ilo driver
* Enhance AMT driver documentation
* Use oslo_policy package
* Use oslo_context package
* Adds support for deploying whole disk images
* Add AMT-PXE driver doc
* Fix two typos
* Add node UUID to deprecated log message
* Fix wrong chown command in deployment guide
* PXE driver: Deprecate pxe_deploy_{ramdisk, kernel}
* Add label to virtual floppy image
* Make sure we don't log the full content of the config drive
* Update API doc to reflect node uuid or name
* Fix typo agaist->against
* Use strutils from oslo_utils
* Updated from global requirements
* Add AMT-PXE-Driver Power&Management&Vendor Interface
* Fix wrong log output in ironic/ironic/conductor/manager.py
* Refactor agent iscsi deploy out of pxe driver
* Tiny improvement of efficient
* Make try block shorter for _make_password_file
* Add module for in-band inspection using ironic-discoverd
* Fix take over for agent driver
* Add server-supported min and max API version to HTTPNotAcceptable(406)
* Updated from global requirements
* Add tftp mapfile configuration in install-guide
* Fix nits in cleaning
* Fix nits for supporting non-glance images
* Follow-up patch for generic node inspection
* Add a note to dev-quickstart
* Add iter_nodes() helper to the conductor manager
* Implement Cleaning in DriverInterfaces
* Update install-guide for Ubuntu 14.10 package changes
* Use mock instead of fixtures when appropriate
* Generic changes for Node Inspection
* Fix typo in "Enabling Drivers"
* Support for non-Glance image references
* Create new config for pecan debug mode
* Local boot support for IPA
* PXE drivers support for IPA
* Update documentation on VirtualBox drivers
* Add localboot support for iscsi_ilo driver
* Improve last_error for async exceptions
* Fix IPMI support documentation
* Root partition should be bootable for localboot
* Updated from global requirements
* Add iRMC Management module for iRMC Driver
* Spelling error in Comment
* Remove unused code from agent vendor lookup()
* Add documentation for VirtualBox drivers
* Implement Cleaning States
* Missing mock causing long tests
* Add support for 'latest' in microversion header
* Add tests for ilo_deploy driver
* Fix reboot logic of iRMC Power Driver
* Update the states generator and regenerate the image
* Ensure state values are 15 characters or less
* Minor changes to InspectInterface
* INSPECTFAIL value is more readable
* Disable n-novnc, heat, cinder and horizon on devstack
* Return required properties for agent deploy driver
* Remove unused modules from ironic/openstack/common
* Use functions from oslo.utils
* Update Ilo drivers to use REST API interface to iLO
* Add dhcp-all-interfaces to get IP to NIC other than eth0
* Log exception on tear_down failure
* Fix PEP8 E124 & E125 errors
* Mock sleep function for OtherFunctionTestCase
* Log node UUID rather than node object
* Updated from global requirements
* Add InspectInterface for node-introspection
* Correctly rebuild the PXE file during takeover of ACTIVE nodes
* Fix PEP8 E121 & E122 errors
* Add documentation for the IPMI retry timeout option
* Use oslo_utils replace oslo.utils
* Avoid deregistering conductor following SIGUSR1
* Add states required for node-inspection
* For flake8 check, make the 'E12' ignore be more granular
* add retry logic to is_block_device function
* Imported Translations from Transifex
* Move oslo.config references to oslo_config
* Add AMT-PXE-Driver Common Library
* Fix typos in documentation: Capabilities
* Removed unused image file
* Address final comments of a4cf7149fb
* Add concept of stable states to the state machine
* Fix ml2_conf.ini settings
* Vendorpassthru doesn't get correct 'self'
* Remove docs in proprietary formats
* Fix file permissions in project
* Imported Translations from Transifex
* Updated from global requirements
* Remove deploy_is_done() from AgentClient
* AgentVendorInterface: Move to a common place
* Stop console at first if console is enabled when destroy node
* fixed typos from eligable to eligible and delition to deletion
* Add logical name support to Ironic
* Add support for local boot
* Fix chown invalid option -- 'p'
* ipmitool drivers fail with integer passwords
* Add the subnet creation step to the install guide

2015.1.0b2
----------

* improve iSCSI connection check
* Remove min and max from base.Version
* Add list of python driver packages
* Add policy show_password to mask passwords in driver_info
* Conductor errors if enabled_drivers are not found
* Add MANAGEABLE state and associated transitions
* Raise minimum API version to 1.1
* Correct typo in agent_client
* Fix argument value for work_on_disk() in unit test
* Documentation: Describe the 'spacing' argument
* update docstring for driver_periodic_task's parallel param
* Use prolianutils module for ilo driver tests
* Add documentation on parallel argument for driver periodic tasks
* Rename provision_state to power_state in test_manager.py
* Refactor ilo.deploy._get_single_nic_with_vif_port_id()
* Update agent driver with new field driver_internal_info
* Updated from global requirements
* Add support for driver-specific periodic tasks
* Partial revert of 4606716 until we debug further
* Clean driver_internal_info when changes nodes' driver
* Add Node.driver_internal_info
* Move oslo.config references to oslo_config
* Move oslo.db references to oslo_db
* Revert "Do not pass PXE net config from bootloader to ramdisk"
* Bump oslo.rootwrap to 1.5.0
* Drop deprecated namespace for oslo.rootwrap
* Add VirtualBox drivers and its modules
* region missing in endpoint selection
* Add :raises: for Version constructor docstring
* Improve testing of the Node's REST API
* Rename NOSTATE to AVAILABLE
* Add support for API microversions
* Address final comments of edf532db91
* Add missing exceptions into function docstring
* Fix typos in commit I68c9f9f86f5f113bb111c0f4fd83216ae0659d36
* Add logic to store the config drive passed by Nova
* Do not POST conductor_affinity in tests
* Add 'irmc_' prefix to optional properties
* Actively check iSCSI connection after login
* Updated from global requirements
* Add iRMC Driver and its iRMC Power module
* Fix drivers.rst doc format error
* Improve test assertion for get_glance_image_properties
* Do not pass PXE net config from bootloader to ramdisk
* Adds get_glance_image_properties
* Fix filter_query in drac/power interface
* Updated from global requirements
* Simplify policy.json
* Replace DIB installation step from git clone to pip
* Add a TODO file
* Updated from global requirements
* Fix function docstring of _get_boot_iso_object_name()
* Improve ironic-dbsync help strings
* Clear locks on conductor startup
* Remove argparse from requirements
* Use oslo_serialization replace oslo.serialization
* Agent driver fails with Swift Multiple Containers
* Add ipmitool to quickstart guide for Ubuntu
* Allow operations on DEPLOYFAIL'd nodes
* Allow associate an instance independent of the node power state
* Improve docstrings about TaskManager's spawning feature
* DracClient to handle ReturnValue validation
* Fix instance_info parameters clearing
* DRAC: Fix wsman host verification
* Updated from global requirements
* Clean up ilo's parse_driver_info()
* Fix ssh _get_power_status as it returned status for wrong node
* Fix RPCService and Ironic Conductor so they shut down gracefully
* Remove jsonutils from openstack.common
* Remove lockfile from dependencies
* Remove IloPXEDeploy.validate()
* Force glance recheck for kernel/ramdisk on rebuild
* iboot power driver: unbound variable error
* Remove unused state transitions
* PXE: Add configdrive support
* Rename localrc for local.conf
* DracClient to handle ClientOptions creation
* Ensure we don't have stale power state in database after power action
* Remove links autogenerated from module names
* Make DD block size adjustable
* Improve testing of state transitions
* Convert drivers to use process_event()
* Update service.py to support graceful Service shutdown
* Ensure that image link points to the correct image
* Raise SSH failure messages to the error level
* Make 'method' explicit for VendorInterface.validate()
* Updated from global requirements
* Provided backward compat for enforcing admin policy
* Allow configuration of neutronclient retries
* Convert check_deploy_timeout to use process_event
* Add requests to requirements.txt
* Enable async callbacks from task.process_event()
* Document dependency on `fuser` for pxe driver
* Distinguish between prepare + deploy errors
* Avoid querying the power state twice
* Add state machine to documentation
* Updated from global requirements
* Adjust the help strings to better reflect usage
* Updated from global requirements
* Updated
from global requirements * Update etc/ironic/ironic.conf.sample * Fix policy enforcement to properly detect admin * Minor changes to state model * Add documentation to create in RegionOne * Delete unnecessary document files * Updated from global requirements * display error logging should be improved * Refactor async helper methods in conductor/manager.py * Hide oslo.messaging DEBUG logs by default * add comments for NodeStates fields * Stop conductor if no drivers were loaded * Fix typo in install-guide.rst * Reuse methods from netutils * Use get\_my\_ipv4 from oslo.utils * improve the neutron configuration in install-guide * Refactoring for Ironic policy * PXE: Pass root device hints via kernel cmdline * Extend API multivalue fields * Add a fsm state -> dot diagram generator * Updated from global requirements * Update command options in the Installation Guide 2015.1.0b1 ---------- * Improve Agent deploy driver validation * Add new enrollment and troubleshooting doc sections * Begin using the state machine for node deploy/teardown * Add base state machine * Updated from global requirements * Get rid of set\_failed\_state duplication * Remove Python 2.6 from setup.cfg * Updated from global requirements * Update dev quick-start for devstack * Updated from global requirements * Correct vmware ssh power manager * rename oslo.concurrency to oslo\_concurrency * Remove duplicate dependencies from dev-quickstart docs * Do not strip 'glance://' prefix from image hrefs * Updated from global requirements * Fix image\_info passed to IPA for image download * Use Literal Blocks to write code sample in docstring * Workflow documentation is now in infra-manual * Add tests to iscsi\_deploy.build\_deploy\_ramdisk\_options * Fix for broken deploy of iscsi\_ilo driver * Updated from global requirements * Add info on creating a tftp map file * Add documentation for SeaMicro driver * Fixed typo in Drac management driver test * boot\_devices.PXE value should match with pyghmi define * 
Add decorator that requires a lock for Drac management driver * Remove useless deprecation warning for node-update maintenance * Ilo tests refactoring * Change some exceptions from invalid to missing * Add decorator that requires a lock for Drac power driver * Change methods from classmethod to staticmethod * iLO Management Interface * Improve docs for running IPA in Devstack * Update 'Introduction to Ironic' document * Avoid calling \_parse\_driver\_info in every test * Updated from global requirements * Correct link in user guide * Minor fix to install guide for associating k&r to nodes * Add serial console feature to seamicro driver * Support configdrive in agent driver * Add driver\_validate() * Update drivers VendorInterface validate() method * Adds help for installing prerequisites on RHEL * Add documentation about Vendor Methods * Make vendor methods discoverable via the Ironic API * Fix PXEDeploy class docstring * Updated from global requirements * Vendor endpoints to support different HTTP methods * Add ipmitool as dependency on RHEL/Fedora systems * dev-quickstart.rst update to add required packages * Add gendocs tox job for generating the documentation * Add gettext to packages needed in dev quickstart * Convert qcow2 image to raw format when deploy * Update iLO driver documentation * Disable IPMI timeout before setting boot device * Updated from global requirements * ConductorManager catches Exceptions * Remove unused variable in agent.\_get\_interfaces() * Enable hacking rule E265 * Add sync and async support for passthru methods * Fix documentation on Standard driver interfaces * Add a mechanism to route vendor methods * Remove redundant FunctionalTest usage in API tests * Use wsme.Unset as default value for API objects * Fix traceback on rare agent error case * Make \_send\_sensor\_data more cooperative * Updated from global requirements * Add logging to driver vendor\_passthru functions * Support ipxe with Dnsmasq * Correct "returns" line in PXE 
deploy method * Remove all redundant setUp() methods * Update install guide to install tftp * Remove duplicated \_fetch\_images function * Change the force\_raw\_image config usage * Clear maintenance\_reason when setting maintenance=False * Removed hardcoded IDs from "port" test resources * Switch to oslo.concurrency * Updated from global requirements * Use docstrings for attributes in api/controllers * Put nodes-related API in same section * Fix get\_test\_node attributes set incorrectly * Get new auth token for ramdisk if old will expire soon * Delete unused 'use\_ipv6' config option * Updated from global requirements * Add maintenance to RESTful web API documentation * Updated from global requirements * Iterate over glance API servers * Add API endpoint to set/unset the node maintenance mode * Removed hardcoded IDs from "node" test resources * Add maintenance\_reason when setting maintenance mode * Add Node.maintenance\_reason * Fix F811 error in pep8 * Improve hash ring value conversion * Add SNMP driver for Aten PDU's * Update node-validate error messages * Store image disk\_format and container\_format * Continue heartbeating after DB connection failure * TestAgentVendor to use the fake\_agent driver * Put a cap on our cyclomatic complexity * More helpful failure for tests on noexec /tmp * Update doc headers at end of Juno * Fix E131 PEP8 errors 2014.2 ------ * Add the PXE VendorPassthru interface to PXEDracDriver * Add documentation for iLO driver(s) * Enable E111 PEP8 check * Updated from global requirements * Fix F812 PEP8 error * Enable H305 PEP8 check * Enable H307 PEP8 check * Updated from global requirements * Enable H405 PEP8 check * Enable H702 PEP8 check * Enable H904 PEP8 check * Migration to oslo.serialization * Add the PXE VendorPassthru interface to PXEDracDriver * Adds instructions for deploying instances on real hardware * Fix pep8 test * Add missing attributes to sample API objects * Fix markup-related issues in documentation * Add 
documentation for PXE UEFI setup 2014.2.rc2 ---------- * Clear hash ring cache in get\_topic\_for\* * Fix exceptions names and messages for Keystone errors * Remove unused change\_node\_maintenance\_mode from rpcapi * Imported Translations from Transifex * Clear hash ring cache in get\_topic\_for\* * Move database fixture to a separate test case * KeyError from AgentVendorInterface.\_heartbeat() * Validate the power interface before deployment * Cleans up some Sphinx rST warnings in Ironic * Remove kombu as a dependency for Ironic 2014.2.rc1 ---------- * Make hash ring mapping be more consistent * Add periodic task to rebuild conductor local state * Open Kilo development * Add "affinity" tracking to nodes and conductors * ilo\* drivers to use only ilo credentials * Update hacking version in test requirements * Add a call to management.validate(task) * Replace custom lazy loading by stevedore * Updated from global requirements * Remove useless variable in migration * Use DbTestCase as test base when context needed * For convention rename the first classmethod parameter to cls * Always reset target\_power\_state in node\_power\_action * Imported Translations from Transifex * Stop running check\_uptodate in the pep8 testenv * Add HashRingManager to wrap hash ring singleton * Fix typo in agent validation code * Conductor changes target\_power\_state before starting work * Adds openSUSE support for developer documentation * Updated from global requirements * Remove untranslated PO files * Update ironic.conf.sample * Remove unneeded context initialization in tests * Force the SSH commands to use their default language * Add parameter to override locale to utils.execute * Refactor PXE clean up tests * Updated from global requirements * Don't reraise Exceptions from agent driver * Add documentation for ironic-dbsync command * Do not return 'id' in REST API error messages * Separate the agent driver config from the base localrc config * pxe\_ilo driver to call iLO 
set\_boot\_device * Remove redundant context parameter * Update docs with new dbsync command * Update devstack docs, require Ubuntu 14.04 * Do not use the context parameter on refresh() * Pass ipa-driver-name to agent ramdisk * Do not set the context twice when forming RPC objects * Make context mandatory when instantiating a RPC object * Neutron DHCP implementation to raise exception if no ports have VIF * Do not cache auth token in Neutron DHCP provider * Imported Translations from Transifex * add\_node\_capability and rm\_node\_capability unable to save changes to db * Updated from global requirements * Handle SNMP exception error.PySnmpError * Use standard locale in list\_partitions * node\_uuid should not be used to create test port * Revert "Revert "Search line with awk itself and avoid grep"" * Fix code error in pxe\_ilo driver * Add unit tests for SNMPClient * Check whether specified FS is supported * Sync the doc with latest code * Add a doc note about the vendor\_passthru endpoint * Remove 'incubated' documentation theme * Import modules for fake IPMINative/iBoot drivers * Allow clean\_up with missing image ref * mock.called\_once\_with() is not a valid method * Fix Devstack docs for zsh users * Fix timestamp column migration * Update ironic states and documentation * Stop using intersphinx * Updated from global requirements * Remove the objectify decorator * Add reserve() and release() to Node object * Add uefi boot mode support in IloVirtualMediaIscsiDeploy * Don't write python bytecode while testing * Support for setting boot mode in pxe\_ilo driver * Remove bypassing of H302 for gettextutils markers * Revert "Search line with awk itself and avoid grep" * Search line with awk itself and avoid grep * Add list\_by\_node\_id() to Port object * Remove unused modules from openstack-common.conf * Sync the document with the current implementation * Unify the sensor data format * Updated from global requirements * Deprecate Ironic compute driver and sched 
manager * Log ERROR power state in node\_power\_action() * Fix compute\_driver and scheduler\_host\_manager in install-guide * Use oslo.utils instead of ironic.openstack.common * Use expected, actual order for PXE template test * Fix agent PXE template * Translator functions cleanup part 3 * Translator functions cleanup part 2 * Imported Translations from Transifex * Updated from global requirements * Remove XML from api doc samples * Update ironic.conf.sample * Fix race conditions running pxe\_utils tests in parallel * Switch to "incubating" doc theme * Minor fixes for ipminative console support * Translator functions cleanup part 4 * Translator functions cleanup part 1 * Remove unnecessary mapping from Agent drivers * mock.assert\_called\_once() is not valid method * Use models.TimestampMixin from oslo.db * Updated from global requirements 2014.2.b3 --------- * Driver merge review comments from 111425 * Nova review updates for \_node\_resource * Ignore backup files * IloVirtualMediaAgent deploy driver * IloVirtualMediaIscsi deploy driver * Unbreak debugging via testr * Interactive console support for ipminative driver * Add UEFI based deployment support in Ironic * Adds SNMP power driver * Control extra space for images conversion in image\_cache * Use metadata.create\_all() to initialise DB schema * Fix minor issues in the DRAC driver * Add send-data-to-ceilometer support for pxe\_ipminative driver * Reduce redundancy in conductor manager docstrings * Fix typo in PXE driver docstrings * Update installation guide for syslinux 6 * Updated from global requirements * Imported Translations from Transifex * Avoid deadlock when logging network\_info * Implements the DRAC ManagementInterface for get/set boot device * Rewrite images tests with mock * Add boot\_device support for vbox * Remove gettextutils \_ injection * Make DHCP provider pluggable * DRAC wsman\_{enumerate, invoke}() to return an ElementTree object * Remove futures from requirements * Script to migrate 
Nova BM data to Ironic * Imported Translations from Transifex * Updated from global requirements * Fix unit tests with keystoneclient master * Add support for interacting with swift * properly format user guide in RST * Updated from global requirements * Fix typo in user-guide.rst * Add console interface to agent\_ipmitool driver * Add support for creating vfat and iso images * Check ERROR state from driver in \_do\_sync\_power\_state * Set PYTHONHASHSEED for venv tox environment * Add iPXE Installation Guide documentation * Add management interface for agent drivers * Add driver name on driver load exception * Take iSCSI deploy out of pxe driver * Set ssh\_virt\_type to vmware * Update nova driver's power\_off() parameters * return power state ERROR instead of an exception * handle invalid seamicro\_api\_version * Imported Translations from Transifex * Nova ironic driver review update requests to p4 * Allow rebuild of node in ERROR and DEPLOYFAIL state * Use cache in node\_is\_available() * Query full node details and cache * Add in text for text mode on trusty * Add Parallels virtualisation type * IPMI double bridging functionality * Add DracDriver and its DracPower module * use MissingParameterValue exception in iboot * Update compute driver macs\_for\_instance per docs * Update DevStack guide when querying the image UUID * Updated from global requirements * Fix py3k-unsafe code in test\_get\_properties() * Fix tear\_down a node with missing info * Remove d\_info param from \_destroy\_images * Add docs for agent driver with devstack * Removes get\_port\_by\_vif * Update API document with BootDevice * Replace incomplete "ilo" driver with pxe\_ilo and fake\_ilo * Handle all exceptions from \_exec\_ipmitool * Remove objectify decorator from dbapi's {get, register}\_conductor() * Improve exception handling in console code * Use valid exception in start\_shellinabox\_console * Remove objectify decorator from dbapi.update\_\* methods * Add list() to Chassis, Node, 
Port objects * Raise MissingParameterValue when validating glance info * Mechanism to cleanup all ImageCaches * Driver merge review comments from 111425-2-3 * Raise MissingParameterValue instead of Invalid * Import fixes from the Nova driver reviews * Imported Translations from Transifex * Use auth\_token from keystonemiddleware * Make swift tempurl key secret * Add method for deallocating networks on reschedule * Reduce running time of test\_different\_sizes * Remove direct calls to dbapi's get\_node\_by\_instance * Add create() and destroy() to Port object * Correct \`op.drop\_constraint\` parameters * Use timeutils from one place * Add create() and destroy() to Chassis object * Add iPXE support for Ironic * Imported Translations from Transifex * Add posix\_ipc to requirements * backport reviewer comments on nova.virt.ironic.patcher * Move the 'instance\_info' fields to GenericDriverFields * Migration to oslo.utils library * Fix self.fields on API Port object * Fix self.fields on API Chassis object * Sync oslo.incubator modules * Updated from global requirements * Expose {set,get}\_boot\_device in the API * Check if boot device is persistent on ipminative * Sync oslo imageutils, strutils to Ironic * Add charset and engine settings to every table * Imported Translations from Transifex * Remove dbapi calls from agent driver * Fix not attribute '\_periodic\_last\_run' * Implements send-data-to-ceilometer * Port iBoot PDU driver from Nova * Log exception with translation * Add ironic-python-agent deploy driver * Updated from global requirements * Imported Translations from Transifex * Clean up calls to get\_port() * Clean up calls to get\_chassis() * Do not rely on hash ordering in tests * Update\_port should expect MACAlreadyExists * Imported Translations from Transifex * Adding swift temp url support * Push the image cache ttl way up * Imported Translations from Transifex * SSH virsh to use the new ManagementInterface * Split test case in 
ironic.tests.conductor.test\_manager * Tune down node\_locked\_retry\_{attempts,interval} config for tests * Add RPC version to test\_get\_driver\_properties 2014.2.b2 --------- * Import fixes from the Nova driver reviews * Generalize exception handling in Nova driver * Fix nodes left in an incosistent state if no workers * IPMINative to use the new ManagementInterface * Backporting nova host manager changes into ironic * Catch oslo.db error instead of sqlalchemy error * Add a test case for DB schema comparison * remove ironic-manage-ipmi.filters * Implement API to get driver properties * Add drivers.base.BaseDriver.get\_properties() * Implement retry on NodeLocked exceptions * SeaMicro to use the new ManagementInterface * Import fixes from Nova scheduler reviews * Rename/update common/tftp.py to common/pxe\_utils.py * Imported Translations from Transifex * Factor out deploy info from PXE driver * IPMITool to use the new ManagementInterface * Use mock.assert\_called\_once\_with() * Add missing docstrings * Raise appropriate errors on duplicate Node, Port and Chassis creation * Add IloDriver and its IloPower module * Add methods to ipmitool driver * Use opportunistic approach for migration testing * Use oslo.db library * oslo.i18n migration * Import a few more fixes from the Nova driver * Set a more generous default image cache size * Fix wrong test fixture for Node.properties * Make ComputeCapabilitiesFilter work with Ironic * Add more INFO logging to ironic/common/service.py * Clean up nova virt driver test code * Fix node to chassis and port to node association * Allow Ironic URL from config file * Imported Translations from Transifex * Update webapi doc with link and console * REST API 'limit' parameter to only accept positive values * Update docstring for api...node.validate * Document 'POST /v1/.../vendor\_passthru' * ManagementInterface {set, get}\_boot\_device() to support 'persistent' * Use my\_ip for neutron URL * Updated from global requirements * Add 
more INFO logging to ironic/conductor * Specify rootfstype=ramfs deploy kernel parameter * Add set\_spawn\_error\_hook to TaskManager * Imported Translations from Transifex * Updates the Ironic on Devstack dev documentation * Simplify error handling * Add gettextutils.\_L\* to import\_exceptions * Fix workaround for the "device is busy" problem * Allow noauth for Neutron * Minor cleanups to nova virt driver and tests * Update nova rebuild to account for new image * Updated from global requirements * pep8 cleanup of Nova code * PEP fixes for the Nova driver * Fix glance endpoint tests * Update Nova's available resources at termination * Fix the section name in CONTRIBUTING.rst * Add/Update docstrings in the Nova Ironic Driver * Update Nova Ironic Driver destroy() method * Nova Ironic driver get\_info() to return memory stats in KBytes * Updates Ironic Guide with deployment information * Add the remaining unittests to the ClientWrapper class * Wait for Neutron port updates when using SSHPower * Fix 'fake' driver unable to finish a deploy * Update "Exercising the Services Locally" doc * Fixing hardcoded glance protocol * Remove from\_chassis/from\_nodes from the API doc * Prevent updating UUID of Node, Port and Chassis on DB API level * Imported Translations from Transifex * Do not delete pxe\_deploy\_{kernel, ramdisk} on tear down * Implement security groups and firewall filtering methods * Add genconfig tox job for sample config file generation * Mock pyghmi lib in unit tests if not present * PXE to pass hints to ImageCache on how much space to reclaim * Add some real-world testing on DiskPartitioner * Eliminate races in Conductor \_check\_deploy\_timeouts * Use temporary dir for image conversion * Updated from global requirements * Move PXE instance level parameters to instance\_info * Clarify doc: API is admin only * Mock time.sleep for the IPMI tests * Destroy instance to clear node state on failure * Add 'context' parameter to get\_console\_output() * Cleanup 
virt driver tests and verify final spawn * Test fake console driver * Allow overriding the log level for ironicclient * Virt driver logging improvements * ipmitool driver raises DriverLoadError * VendorPassthru.validate()s call \_parse\_driver\_info * Enforce a minimum time between all IPMI commands * Remove 'node' parameter from the validate() methods * Test for membership should be 'not in' * Replace mknod() with chmod() * Factoring out PXE and TFTP functions * Let ipmitool natively retry commands * Sync processutils from oslo code * Driver interface's validate should return nothing * Use .png instead of .gif images * Fix utils.execute() for consistency with Oslo code * remove default=None for config options 2014.2.b1 --------- * Stop ipmitool.validate from touching the BMC * Set instance default\_ephemeral\_device * Add unique constraint to instance\_uuid * Add node id to DEBUG messages in impitool * Remove 'node' parameter from the Console and Rescue interfaces * TaskManager: Only support single node locking * Allow more time for API requests to be completed * Add retry logic to iscsiadm commands * Wipe any metadata from a nodes disk * Rework make\_partitions logic when preserve\_ephemeral is set * Fix host manager node detection logic * Add missing stats to IronicNodeState * Update IronicHostManager tests to better match how code works * Update Nova driver's list\_instance\_uuids() * Remove 'fake' and 'ssh' drivers from default enabled list * Work around iscsiadm delete failures * Mock seamicroclient lib in unit tests if not present * Cleanup mock patch without \`with\` part 2 * Add \_\_init\_\_.py for nova scheduler filters * Skip migrations test\_walk\_versions instead of pass * Improving unit tests for \_do\_sync\_power\_state * Fix AttributeError when calling create\_engine() * Reuse validate\_instance\_and\_node() Nova ironic Driver * Fix the logging message to identify node by uuid * Fix concurrent deletes in virt driver * Log exceptions from deploy and 
tear\_down * PXE driver to validate the requested image in Glance * Return the HTTP Location for accepted requestes * Return the HTTP Location for newly created resources * Fix tests with new keystoneclient * list\_instances() to return a list of instances names * Pass kwargs to ClientWrapper's call() method * Remove 'node' parameter from the Power interface * Set the correct target versions for the RPC methods * Consider free disk space before downloading images into cache * Change NodeLocked status code to a client-side error * Remove "node" parameter from methods handling power state in docs * Add parallel\_image\_downloads option * Synced jsonutils from oslo-incubator * Fix chassis bookmark link url * Remove 'node' parameter from the Deploy interface * Imported Translations from Transifex * Remove all mostly untranslated PO files * Cleanup images after deployment * Fix wrong usage of mock methods * Using system call for downloading files * Run keepalive in a dedicated thread * Don't translate debug level logs * Update dev quickstart guide for ephemeral testing * Speed up Nova Ironic driver tests * Renaming ironicclient exceptions in nova driver * Fix bad Mock calls to assert\_called\_once() * Cleanup mock patch without \`with\` part 1 * Corrects a typo in RESTful Web API (v1) document * Updated from global requirements * Clean up openstack-common.conf * Remove non-existent 'pxe\_default\_format' parameter from patcher * Remove explicit dependency on amqplib * Pin RPC client version min == max * Check requested image size * Fix 'pxe\_preserve\_ephemeral' parameter leakage * RPC\_API\_VERSION out of sync * Simplify calls to ImageCache in PXE module * Implement the reboot command on the Ironic Driver * Place root partition last so that it can always be expanded * Stop creating a swap partition when none was specified * Virt driver change to use API retry config value * Implement more robust caching for master images * Decouple state inspection and availability 
check * Updated from global requirements * Fix ironic node state comparison * Add create() and destroy() to Node * Fix typo in rpcapi.driver\_vendor\_passthru * Support serial console access * Remove 'node' parameter from the VendorPassthru interface * Updated from global requirements * Synced jsonutils from oslo-incubator * Fix chassis-node relationship * Implement instance rebuild in nova.virt.driver * Sync oslo logging * Add ManagementInterface * Clean oslo dependencies files * Return error immediately if set\_console\_mode is not supported * Fix bypassed reference to node state values * Updated from global requirements * Port to oslo.messaging * Drivers may expose a top-level passthru API * Overwrite instance\_exists in Nova Ironic Driver * Update Ironic User Guide post landing for 41af7d6b * Spawn support for TaskManager and 2 locking fixes * Document ClusteredComputeManager * Clean up calls to get\_node() * nova.virt.ironic passes ephemeral\_gb to ironic * Implement list\_instance\_uuids() in Nova driver * Modify the get console API * Complete wrapping ironic client calls * Add worker threads limit to \_check\_deploy\_timeouts task * Use DiskPartitioner * Better handling of missing drivers * Remove hardcoded node id value * cleanup docstring for drivers.utils.get\_node\_mac\_addresses * Update ironic.conf.sample * Make sync\_power\_states yield * Refactor sync\_power\_states tests to not use DB * Add DiskPartitioner * Some minor clean up of various doc pages * Fix message preventing overwrite the instance\_uuid * Install guide for Ironic * Refactor the driver fields mapping * Imported Translations from Transifex * Fix conductor.manager test assertion order * Overwriting node\_is\_available in IronicDriver * Sync oslo/common/excutils * Sync oslo/config/generator * Cherry pick oslo rpc HA fixes * Add Ironic User Guide * Remove a DB query for get\_ports\_by\_node() * Fix missed stopping of conductor service * Encapsulate Ironic client retry logic * Do not sync 
power state for new invalidated nodes * Make tests use Node object instead of dict * Sync object list stuff from Nova * Fix Node object version * Cleanup running conductor services in tests * Factor hash ring management out of the conductor * Replace sfdisk with parted * Handling validation in conductor consistently * JsonPatch add operation on existing property * Updated from global requirements * Remove usage of Glance from PXE clean\_up() * Fix hosts mapping for conductor's periodic tasks * Supports filtering port by address * Fix seamicro power.validate() method definition * Update tox.ini to also run nova tests * Updated from global requirements * Fix messages formatting for \_sync\_power\_states * Refactor nova.virt.ironic.driver get\_host\_stats * Use xargs -0 instead of --null * Change admin\_url help in ironic driver * Sync base object code with Nova's * Add Node.instance\_info field * Fix self.fields on API Node object * Show maintenance field in GET /nodes * Move duplicated \_get\_node(s)\_mac\_addresses() * Fix grammar in error string in pxe driver * Reduce logging output from non-Ironic libraries * Open Juno development 2014.1.rc1 ---------- * Fix spelling error in conductor/manager * Improved coverage for ironic API * Manually update all translated strings * Check that all po/pot files are valid * If no swap is specified default to 1MB * Fix Nova rescheduling tear down problem * Remove obsolete po entries - they break translation jobs * Add note to ssh about impact on ci testing * Adds exact match filters to nova scheduler * Clean up IronicNodeStates.update\_from\_compute\_node * ironic\_host\_manager was missing two stats * Imported Translations from Transifex * Fix seamicro validate() method definition * Remove some obsolete settings from DevStack doc * Raise unexpected exceptions during destroy() * Start using oslosphinx theme for docs * Provide a new ComputeManager for Ironic * Nova Ironic driver to set pxe\_swap\_mb in Ironic * Fix strings post 
landing for c63e1d9f6
* Run periodic_task in a with a dynamic timer
* Update SeaMicro to use MixinVendorInterface
* Run ipmi power status less aggressively
* Avoid API root controller dependency on v1 dir
* Update Neutron if mac address of the port changed
* Replace fixtures with mock in test_keystone.py
* Decrease running time of SeaMicro driver tests
* Remove logging of exceptions from controller's methods
* Imported Translations from Transifex
* Fix missed exception raise in _add_driver_fields
* Speed up ironic tests
* Pass no arguments to _wait_for_provision_state()
* Adds max retry limit to sync_power_state task
* Updated from global requirements
* Imported Translations from Transifex
* Stop incorrectly returning rescue: supported
* Correct version.py and update current version string
* Documentation for deploying DevStack /w Ironic
* Hide rescue interface from validate() output
* Change set_console_mode() to use greenthreads
* Fix help string for a glance option
* Expose API for fetching a single driver
* Change JsonEncodedType.impl to TEXT
* Fix traceback hook for avoid duplicate traces
* Fix 'spacing' parameters for periodic tasks
* Permit passing SSH keys into the Ironic API
* Better instance-not-found handling within IronicDriver
* Make sure auth_url exists and is not versionless
* Conductor de-registers on shutdown
* Change deploy validation exception handling
* Suppress conductor logging of expected exceptions
* Remove unused method from timeutils
* Add admin_auth_token option for nova driver
* Remove redundant nova virt driver test
* Process public API list as regular expressions
* Enable pep8 tests for the Nova Ironic Driver
* Fix typo tenet -> tenant
* Stop logging paramiko's DEBUG and INFO messages
* Set boot device to PXE when deploying
* Driver utils should raise unsupported method
* Delete node while waiting for deploy
* Check BMC availability in ipmitool 'validate' method
* SeaMicro use device parameter for set_boot_device
* Make the Nova Ironic driver to wait for ACTIVE
* Fix misspelled impi to ipmi
* Do not use __builtin__ in python3
* Use range instead xrange to keep python 3.X compatibility
* Set the database.connection option default value
* PXE validate() to fail if no Ironic API URL
* Improve Ironic Conductor threading & locks
* Generic MixinVendorInterface using static mapping
* Conductor logs better error if seamicroclient missing
* Add TaskManager lock on change port data
* Nova ironic driver to retry on HTTP 503
* Mark hash_replicas as experimental
* do_node_deploy() to use greenthreads
* Move v1 API tests to separate v1 directory
* Pin iso8601 logging to WARN
* Only fetch node once for vif actions
* Fix how nova ironic driver gets flavor information
* Imported Translations from Transifex
* API: Add sample() method to remaining models
* Import Nova "ironic" driver
* Remove errors from API documentation
* Add libffi-dev(el) dependency to quickstart
* Updated from global requirements
* Remove redundant default value None for dict.get

2014.1.b3
---------

* Refactor vendor_passthru to use conductor async workers
* Fix wrong exception raised by conductor for node
* Fix params order in assertEqual
* Sync the log_handler from oslo
* Fix SeaMicro driver post landing for ba207b4aa0
* Implements SeaMicro VendorPassThru functionality
* Implement the SeaMicro Power driver
* Fix provision_updated_at deserialization
* Remove jsonutils from test_rpcapi
* Do not delete a Node which is not powered off
* Add provision_updated_at to node's resource
* Prevent a node in maintenance from being deployed
* Allow clients to mark a node as in maintenance
* Support preserve_ephemeral
* Updated from global requirements
* API: Expose a way to start/stop the console
* Add option to sync node power state from DB
* Make the PXE driver understand ephemeral disks
* Log deploy_utils.deploy() erros in the PXE driver
* Removing get_node_power_state, bumping RPC version
* Add timeout for waiting callback from deploy ramdisk
* Prevent GET /v1/nodes returning maintenance field
* Suggested improvements to _set_boot_device
* Move ipminative _set_boot_device to VendorPassthru
* Sync common db code from Oslo
* PXE clean_up() to remove the pxe_deploy_key parameter
* Add support for custom libvirt uri
* Python 3: replace "im_self" by "__self__"
* Fix race condition when deleting a node
* Remove extraneous vim configuration comments for ironic
* Do not allow POST ports and chassis internal attributes
* Do not allow POST node's internal attributes
* Unused 'pxe_key_data' & 'pxe_instance_name' info
* Add provision_updated_at field to nodes table
* Exclude nodes in DEPLOYWAIT state from _sync_power_states
* Sync common config module from Oslo
* Get rid object model `dict` methods part 4
* Sync Oslo rpc module to Ironic
* Clarify and fix the dev-quickstart doc some more
* Do not use CONF as a default parameter value
* Simplify locking around acquiring Node resources
* Improve help strings
* Remove shebang lines from code
* Use six.moves.urllib.parse instead of urlparse
* Add string representation method to MultiType
* Fix test migrations for alembic
* Sync Oslo gettextutils module to Ironic
* NodeLocked returns 503 error status
* Supports OPERATOR priv level for ipmitool driver
* Correct assertEqual order from patch e69e41c99fb
* PXE and SSH validate() method to check for a port
* Task object as paramater to validate() methods
* Fix dev-quick-start.rst post landing for 9d81333fd0
* API validates driver name for both POST and PATCH
* Sync Oslo service module to Ironic
* Move ipmitool _set_boot_device to VendorPassthru
* Use six.StringIO/BytesIO instead of StringIO.StringIO
* Add JSONEncodedType with enforced type checking
* Correct PXEPrivateMethodsTestCase.setUp
* Don't raise MySQL 2013 'Lost connection' errors
* Use the custom wsme BooleanType on the nodes api
* Add wsme custom BooleanType type
* Fix task_manager acquire post landing for c4f2f26ed
* Add common.service config options to sample
* Removes use of timeutils.set_time_override
* Replace assertEqual(None, *) with assertIsNone in tests
* Replace nonexistent mock assert methods with real ones
* Log IPMI power on/off timeouts
* Remove None as default value for dict get()
* Fix autodoc formatting in pxe.py
* Fix race condition when changing node states
* Use StringType from WSME
* Add testing and doc sections to docs/dev-quickstart
* Implement _update_neutron in PXE driver
* Remove _load_one_plugin fallback
* SSHPower driver support VMware ESXi
* Make ironic-api not single threaded
* Remove POST calls in tests for resource creation
* Add topic to the change_node_maintenance_mode() RPC method
* Fix API inconsistence when changing node's states
* Add samples to serve API through Apache mod_wsgi
* Add git dependency to quickstart docs
* Add get_console() method
* Remove unnecessary json dumps/loads from tests
* Add parameter for filtering nodes by maintenance mode
* Rename and update ironic-deploy-helper rootwrap
* Remove tox locale overrides
* Updated from global requirements
* Move eventlent monkeypatch out of cmd/
* Fix misspellings in ironic
* Ensure parameter order of assertEqual correct
* Return correct HTTP response codes for create ops
* Fix broken doc links on the index page
* Allow to tear-down a node waiting to be deployed
* Improve NodeLocked exception message
* Expose 'reservation' field of a node via API
* Implement a multiplexed VendorPassthru example
* Fix log and test for NeutronAPI.update_port_dhcp_opts
* Fix 'run_as_root' parameter check in utils
* Handle multiple exceptions raised by jsonpatch
* API tests to check for the return codes
* Imported Translations from Transifex
* Move test__get_nodes_mac_addresses
* Removed duplicated function to create a swap fs
* Updated from global requirements
* Add futures to requirements
* Fix missing keystone option in ironic.conf.sample
* Adds Neutron support to Ironic
* Replace CONF.set_default with self.config
* Fix ssh_port type in _parse_driver_info() from ssh.py
* Improve handling of invalid input in HashRing class
* Sync db.sqlalchemy code from Oslo
* Add lockfile>=0.8 to requirements.txt
* Remove net_config_template options
* Remove deploy kernel and ramdisk global config
* Update docstrings in ssh.py
* SSHPower driver raises IronicExceptions
* mock's return value for processutils.ssh_execute
* API: Add sample() method on Node
* Update method doc strings in pxe.py
* Minor documentation update
* Removed unused exceptions
* Bump version of sphinxcontrib-pecanwsme
* Add missing parameter in call to _load_one_plugin
* Docstrings for ipmitool
* alembic with initial migration and tests
* Update RPC version post-landing for 9bc5f92fb
* ipmitool's _power_status raises IPMIFailure

2014.1.b2
---------

* Add [keystone_authtoken] to ironic.conf.sample
* Updated from global requirements
* Add comment about node.instance_uuid
* Run mkfs as root
* Remove the absolute paths from ironic-deploy-helper.filters
* PXE instance_name is no longer mandatory
* Remove unused config option - pxe_deploy_timeout
* Delete the iscsi target
* Imported Translations from Transifex
* Fix non-unique tftp dir instance_uuid
* Fix non-unique pxe driver 'instance_name'
* Add missing "Filters" section to the ironic-images.filters
* Use oslo.rootwrap library instead of local copy
* Replace assertTrue with explicit assertIsInstance
* Disallow new provision for nodes in maintenance
* Add RPC method for node maintenance mode
* Fix keystone get_service_url filtering
* Use same MANAGER_TOPIC variable
* Implement consistent hashing of nodes to conductors
* PXEAndSSH driver lacked vendor_passthru
* Use correct auth context inside pxe driver
* sync_power_states handles missing driver info
* Enable $pybasedir value in pxe.py
* Correct SSHPowerDriver validate() exceptions
* API to check the requested power state
* Improve the node driver interfaces validation output
* Remove copyright from empty files
* Make param descriptions more consistent in API
* Imported Translations from Transifex
* Fix wrong message of pxe validator
* Remove unused dict BYTE_MULTIPLIERS
* Implement API for provisioning
* API to validate UUID parameters
* Make chassis_uuid field of nodes optional
* Add unit tests for get_nodeinfo_list
* Improve error handling in PXE _continue_deploy
* Make param names more consistent in API
* Sync config module from oslo
* Fix wrong message of MACAlreadyExists
* Avoid a race when associating instance_uuid
* Move and rename ValidTypes
* Convert trycmd() to oslo's processutils
* Improve error handling in validate_vendor_action
* Passing nodes more consistently
* Add 'next' link when GET maximum number of items
* Check connectivity in SSH driver 'validate' method
* GET /drivers to show a list of active conductors
* Improve method to get list of active conductors
* Refactor /node//state
* Reworks Chassis validations
* Reworks Node validations
* Developer doc index page points to correct API docs
* Fix auto-generated REST API formatting
* Method to generate PXE options for Neutron ports
* Strip '/' from api_url string for PXE driver
* Add driver interfaces validation
* Command call should log the stdout and stderr
* Add prepare, clean_up, take_over methods to deploy
* PEP8-ify imports in test_ipmitool
* API: Add sample() method on Port and PortCollection
* API: Validate and normalize address
* Handle DBDuplicateEntry on Ports with same address
* Imported Translations from Transifex
* removed wrap_exception method from ironic/common/exception.py
* Rework patch validation on Ports
* Add JsonPatchType class
* Change default API auth to keystone-based
* Clean up duplicated change-building code in objects
* Add -U to pip install command in tox.ini
* Updated from global requirements
* Add config option for # of conductor replicas
* Port StringType class from WSME trunk
* Add tools/conf/check_uptodate to tox.ini

2014.1.b1
---------

* Correct error with unicode mac address
* Expose created_at/updated_at properties in the REST API
* Import heartbeat_interval opt in API
* Add power control to PXE driver
* Implement sync_power_state periodic task
* Set the provision_state to DEPLOYFAIL
* Save PKI token in a file for PXE deploy ramdisk
* API ports update for WSME 0.5b6 compliance
* Add heartbeat_interval to new 'conductor' cfg group
* Add missing hash_partition_exponent config option
* If no block devices abort deployment
* Add missing link for drivers resource
* Apply comments to 58558/4 post-landing
* Replace removed xrange in Python3
* Imported Translations from Transifex
* Use addCleanup() in test_deploy_utils
* Allow Pecan to use 'debuginfo' response field
* Do not allow API to expose error stacktrace
* Add port address unique constraint for sqlite
* Implement consistent hashing common methods
* Sync some db changes from Oslo
* Bump required version of sqlalchemy-migrate
* Update ironic.conf.sample
* Import uuidutils unit tests from oslo
* Allow FakePower to return node objects power_state
* Adds doc strings to API FunctionalTest class
* Use oslo's execute() and ssh_execute() methods
* Remove openstack.common.uuidutils
* Sync common.context changes from olso
* Remove oslo uuidutils.is_uuid_like call
* Remove oslo uuidutils.generate_uuid() call
* Add troubleshoot option to PXE template
* Imported Translations from Transifex
* Add tftp_server pattern in ironic.conf
* Import HasLength object
* ipmitool SHOULD accept empty username/password
* Imported Translations from Transifex
* Add missing ConfigNotFound exception
* Imported Translations from Transifex
* Add hooks to auto-generate REST API docs
* Imported Translations from Transifex
* Redefined default value of allowed_rpc_exception_modules
* Add last_error usage to deploy and teardown methods
* Support building wheels (PEP-427)
* Import missing gettext _ to fix Sphinx error
* sync common.service from oslo
* sync common.periodic_task from oslo
* sync common.notifier.* from oslo
* sync common.log from oslo
* sync common.local from oslo
* Sync common utils from Oslo
* Rename parameters
* Accessing a subresource that parent does not exist
* Imported Translations from Transifex
* Changes power_state and adds last_error field
* Update openstack/common/lockutils
* sync common.context from oslo
* sync common.config.generator from oslo
* Remove sqlalchemy-migrate 0.7.3 patching
* Fix integer division compatibility in middleware
* Fix node lock in PXE driver
* Imported Translations from Transifex
* Register API options under the 'api' group
* Supporting both Python 2 and Python 3 with six
* Supports get node by instance uuid in API
* Imported Translations from Transifex
* Check invalid uuid for get-by-instance db api
* Fix error handling in ssh driver
* Replace __metaclass__
* Supporting both Python 2 and Python 3 with six
* Pass Ironic API url to deploy ramdisk in PXE driver
* Remove 'basestring' from objects utils
* Allows unicode description for chassis
* Fix a typo in the name of logger method exception
* Don't use deprecated module commands
* Comply with new hacking requirements
* Improve the API doc spec for chassis
* Improve the API doc spec for node
* Updated from global requirements
* Fix i18N compliance
* Add wrapper for keystone service catalog
* Fix test node manager
* Expose /drivers on the API
* Update mailmap for Joe Gordon
* Add mailmap file
* Implement /nodes/UUID/vendor_passthru in the API
* Add context to TaskManager
* Regenerate the sample config file
* Conductors maintan driver list in the DB
* Group and unify ipmi configurations
* Fix a few missing i18n
* Fix status codes in node controller
* Fix exceptions handling in controllers
* Updated from global requirements
* Support uniform MAC address with colons
* Remove redundant test stubs from conductor/manager
* Remove several old TODO messages
* Supports paginate query for two get nodes DB APIs
* Remove _driver_factory class attribute
* Fixes RootController to allow URL without version tag
* Don't allow deletion of associated node
* Remove duplicated db_api.get_instance() from tests
* Updated from global requirements
* Do not use string concatenation for localized strings
* Remove the NULL state
* Add DriverFactory
* Adjust native ipmi default wait time
* Be more patient with IPMI and BMC
* Implement db get_[un]associated_nodes
* Remove unused nova specific files
* Removes unwanted mox and fixture files
* Removes stubs from unit tests
* Remove unused class/file
* Remove driver validation on node update
* Consolidates TestCase and BaseTestCase
* Fix policies
* Improve error message for ssh
* Fix datetime format in FakeCache
* Fix power_state set to python object repr
* Updated from global requirements
* Replaces mox with mock for test_deploy_utils
* Replaces mox with mock in api's unit tests
* Replaces mox with mock in objects' unit tests
* Replaces mox with mock for conductor unit tests
* fix ssh driver exec command issues
* Fix exceptions error codes
* Remove obsolete redhat-eventlet.patch
* Replaces mox with mock for test_utils
* Replaces mox with mock for ssh driver unit tests
* Remove nested 'ipmi' dict from driver_info
* Replace tearDown with addCleanup in unit tests
* Remove nested 'ssh' dict from driver_info
* Remove nested 'pxe' dict from driver_info
* Save and validate deployment key in PXE driver
* Implement deploy and tear_down conductor methods
* Use mock to do unit tests for pxe driver
* Code clean in node controller
* Use mock to do unit tests for ipminative driver
* Replaces mox with mock for ipmitool driver unit tests
* Fix parameter name in wsexpose
* Rename start_power_state_change to change_node_power_state
* Mount iSCSI target and 'dd' in PXE driver
* Add tests for api/utils.py
* Check for required fields on ports
* Replace Cheetah with Jinja2
* Update from global requirements
* Upgrade tox to 1.6
* Add API uuid <-> id mapping
* Doc string and minor clean up for 41976
* Update error return code to match new Pecan release
* Add vendor_passthru method to RPC API
* Integer types support in api
* Add native ipmi driver
* API GET to return only minimal data
* Fix broken links
* Collection named based on resource type
* Remove nova specific tests
* Changes documentation hyperlinks to be relative
* Replace OpenStack LLC with OpenStack Foundation
* Force textmode consoles
* Implemented start_power_state_change In Conductor
* Updates documentation for tox use
* Drop setuptools_git dependency
* Fix tests return codes
* Fix misused assertTrue in unit tests
* Prevent updates while state change is in progress
* Use localisation where user visible strings are used
* Update only the changed fields
* Improve parameters validate in PXE driver
* Rename ipmi driver to ipmitool
* Remove jsonutils from PXE driver
* Expose the vendor_passthru resource
* Driver's validation during node update process implemented
* Public API
* Remove references for the 'task_state' property
* Use 'provision_state' in PXE driver
* Updating resources with PATCH
* Add missing unique constraint
* Fix docstring typo
* Removed templates directory in api config
* Added upper version boundry for six
* Sync models with migrations
* Optimization reserve and release nodes db api methods
* Add missing foreign key
* Porting nova pxe driver to ironic
* API Nodes states
* Fix driver loading
* Move glance image service client from nova and cinder into ironic
* Implement the root and v1 entry points of the API
* Expose subresources for Chassis and Node
* Add checks locked nodes to db api
* Update the dev docs with driver interface description
* Add missing tests for chassis API
* Delete controller to make code easy to read and understood
* Disable deleting a chassis that contains nodes
* Update API documentation
* Add Pagination of collections across the API
* Fix typo in conductor manager
* Remove wsme validate decorator from API
* Add missing tests for ports API
* Modify is_valid_mac() for support unicode strings
* Add DB and RPC method doc strings to hook.py
* Delete unused templates
* Use fixture from Oslo
* Move "opportunistic" db migrations tests from Nova
* Build unittests for nodes api
* make api test code more readable
* Add links to API Objects
* Delete Ironic context
* Add tests for existing db migrations
* Add common code from Oslo for db migrations test
* Remove extra pep8/flake8/pyflakes requirements
* Sync requirements with OpenStack/requirements
* Fix up API tests before updating hacking checks
* Add RPC methods for updating nodes
* Run extract_messages
* Keystone authentiation
* Add serializer param to RPC service
* Import serialization and nesting from Nova Objects
* Implement chassis api actions
* update requires to prevent version cap
* Change validate() to raise instead of returning T/F
* Add helpers for single-node tasks
* Implement port api action
* Modify gitignore to ignore sqlite
* Update resource manager for fixed stevedore issue
* Add dbapi functions
* Remove suds requirement
* Sync install_venv_common from oslo
* Move mysql_engine option to [database] group
* Re-define 'extra' as dict_or_none
* Added Python-2.6 to the classifier
* Rename "manager" to "conductor"
* Port from nova: Fix local variable 'root_uuid' ref
* Created a package for API controllers V1
* Sync requirements with OpenStack/requirements
* Remove unused APICoverage class
* Sync fileutils from oslo-incubator
* Sync strutils from oslo-incubator
* Add license header
* Update get_by_uuid function doc in chassis
* Fix various Python 2.x->3.x compat issues
* Improve unit tests for API
* Add Chassis object
* Add Chassis DB model and DB-API
* Delete associated ports after deleting a node
* Virtual power driver is superceded by ssh driver
* Add conf file generator
* Refactored query filters
* Add troubleshoot to baremetal PXE template
* Add err_msg param to baremetal_deploy_helper
* Retry the sfdisk command up to 3 times
* Updated API Spec for new Drivers
* Improve IPMI's _make_password_file method
* Remove spurious print statement from update_node
* Port middleware error handler from ceilometer API
* Add support for GET /v1/nodes to return a list
* Add object support to API service
* Remove the unused plugin framework
* Improve tests for Node and Port DB objects
* SSH driver doesn't need to query database
* Create Port object
* Add uuid to Port DB model
* Delete Flask Dependence
* Writing Error: nodess to nodes
* Create the Node object
* Restructuring driver API and inheritance
* Remove explicit distribute depend
* Bump version of PBR
* Remove deleted[_at] from base object
* Make object actions pass positional arguments
* Fix relative links in architecture doc
* Reword architecture driver description
* Remove duplication from README, add link to docs
* Port base object from Nova
* Fix ironic-rootwrap capability
* Add ssh power manager
* Prevent IPMI actions from colliding
* Add TaskManager tests and fix decorator
* Mocked NodeManager can load and mock real drivers
* Add docs for task_manager and tests/manager/utils
* Fix one typo in index.rst
* Add missing 'extra' field to models.nodes
* More doc updates
* Remove the old README
* More doc updates
* Minor fixes to sphinx docs
* Added API v1 Specification
* Add initial sphinx docs, based on README
* Initial skeleton for an RPC layer
* Log configuration values on API startup
* Don't use pecan to configure logging
* Move database.backend option import
* Remove unused authentication CLI options
* Rename TestCase.flags() to TestCase.config()
* Copy the RHEL6 eventlet workaround from Oslo
* Sync new database config group from oslo-incubator
* Minor doc change for manager and resorce_manager
* Add support for Sphinx Docs
* Update IPMI driver to work with resource manager
* Add validate_driver_info to driver classes
* Implement Task and Resource managers
* Update [reserve|release]_nodes to accept a tag
* More updates to the README
* Reimplement reserve_nodes and release_nodes
* Rename the 'ifaces' table to 'ports'
* Change 'nodes' to use more driver-specific JSON
* Update driver names and base class
* Stop creating a new db IMPL for every request
* Fix double "host" option
* Sync safe changes from oslo-incubator
* Sync rpc changes from oslo-incubator
* Sync log changes from oslo-incubator
* Sync a rootwrap KillFilter fix from oslo-incubator
* Sync oslo-incubator python3 changes
* Add steps to README.rst
* Fix fake bmc driver
* move ironic docs to top level for ease of discovery
* Update the README file development section
* Add some API definitions to the README
* Update the distribute dependency version
* Add information to the project README
* Fixes test_update_node by testing updated node
* Fix pep8 errors and make it pass Jenkins tests
* Update IPMI driver for new base class
* Add new base and fake driver classes
* Delete old base and fake classes
* Add a few fixes for the API
* Move strong nova depenencies into temporary dir
* Update IPMI for new DB schema
* Add unit tests for DB API
* Remove tests for old DB
* Add tests for ironic-dbsync
* Remove ironic_manage
* Implement GET /node/ifaces/ in API
* Update exception.py
* Update db models and API
* Implement skeleton for a new DB backend
* Remove the old db implementation
* Implement initial skeleton of a manager service
* Implement initial draft of a Pecan-based API
* Fix IPMI tests
* Move common things to ironic.common
* Fix failing db and deploy_helper tests
* un-split the db backend
* Rename files and fix things
* Import add'l files from Nova
* update openstack-common.conf and import from oslo
* Added .testr.conf
* Renamed nova to ironic
* Fixed hacking, pep8 and pyflakes errors
* Added project infrastructure needs
* Fix baremetal get_available_nodes
* Improve Python 3.x compatibility
* Import and convert to oslo loopingcall
* baremetal: VirtualPowerDriver uses mac addresses in bm_interfaces
* baremetal: Change input for sfdisk
* baremetal: Change node api related to prov_mac_address
* Remove "undefined name" pyflake errors
* Remove unnecessary LOG initialisation
* Define LOG globally in baremetal_deploy_helper
* Only call getLogger after configuring logging
* baremetal: Integrate provisioning and non-provisioning interfaces
* Move console scripts to entrypoints
* baremetal: Drop unused columns in bm_nodes
* Remove print statements
* Delete tests.baremetal.util.new_bm_deployment()
* Adds Tilera back-end for baremetal
* Change type of ssh_port option from Str to Int
* Virtual Power Driver list running vms quoting error
* xenapi: Fix reboot with hung volumes
* Make bm model's deleted column match database
* Correct substring matching of baremetal VPD node names
* Read baremetal images from extra_specs namespace
* Compute manager should remove dead resources
* Add ssh port and key based auth to VPD
* Add instance_type_get() to virt api
* Don't blindly skip first migration
* BM Migration 004: Actually drop column
* Update OpenStack LLC to Foundation
* Sync nova with oslo DB exception cleanup
* Fix exception handling in baremetal API
* BM Migrations 2 & 3: Fix drop_column statements
* Remove function redefinitions
* Move some context checking code from sqlalchemy
* Baremetal driver returns accurate list of instance
* Identify baremetal nodes by UUID
* Improve performance of baremetal list_instances
* Better error handling in baremetal spawn & destroy
* Wait for baremetal deploy inside driver.spawn
* Add better status to baremetal deployments
* Use oslo-config-2013.1b4
* Delete baremetal interfaces when their parent node is deleted
* VirtualPowerDriver catches ProcessExecutionError
* Don't modify injected_files inside PXE driver
* Remove nova.db call from baremetal PXE driver
* Add a virtual PowerDriver for Baremetal testing
* Recache or rebuild missing images on hard_reboot
* Use oslo database code
* Fixes 'not in' operator usage
* Make sure there are no unused import
* Enable N302: Import modules only
* Correct a format string in virt/baremetal/ipmi.py
* Add REST api to manage bare-metal nodes
* Baremetal/utils should not log certain exceptions
* PXE driver should rmtree directories it created
* Add support for Option Groups in LazyPluggable
* Remove obsolete baremetal override of MAC addresses
* PXE driver should not accept empty kernel UUID
* Correcting improper use of the word 'an'
* Export the MAC addresses of nodes for bare-metal
* Break out a helper function for working with bare metal nodes
* Keep self and context out of error notification payload
* Tests for PXE bare-metal provisioning helper server
* Change ComputerDriver.legacy_nwinfo to raise by default
* fix new N402 errors
* Remove unused baremetal PXE options
* Move global service networking opts to new module
* Fix N402 for nova/virt
* Cope better with out of sync bm data
* Fix baremetal VIFDriver
* CLI for bare-metal database sync
* attach/detach_volume() take instance as a parameter
* Convert short doc strings to be on one line
* Check admin context in bm_interface_get_all()
* Provide a PXE NodeDriver for the Baremetal driver
* Refactor periodic tasks
* Add helper methods to nova.paths
* Move global path opts in nova.paths
* Removes unused imports
* Improve baremetal driver error handling
* baremetal power driver takes **kwargs
* Implement IPMI sub-driver for baremetal compute
* Fix tests/baremetal/test_driver.py
* Move baremetal options to [BAREMETAL] OptGroup
* Remove session.flush() and session.query() monkey patching
* Remove unused imports
* Removed unused imports
* Parameterize database connection in test.py
* Baremetal VIF and Volume sub-drivers
* New Baremetal provisioning framework
* Move baremetal database tests to fixtures
* Add exceptions to baremetal/db/api
* Add blank nova/virt/baremetal/__init__.py
* Move sql options to nova.db.sqlalchemy.session
* Use CONF.import_opt() for nova.config opts
* Remove nova.config.CONF
* remove old baremetal driver
* Remove nova.flags
* Fix a couple uses of FLAGS
* Added separate bare-metal MySQL DB
* Switch from FLAGS to CONF in tests
* Updated scheduler and compute for multiple capabilities
* Switch from FLAGS to CONF in nova.virt
* Make ComputeDrivers send hypervisor_hostname
* Introduce VirtAPI to nova/virt
* Migrate to fileutils and lockutils
* Remove ComputeDriver.update_host_status()
* Rename imagebackend arguments
* Move ensure_tree to utils
* Keep the ComputeNode model updated with usage
* Don't stuff non-db data into instance dict
* Making security group refresh more specific
* Use dict style access for image_ref
* Remove unused InstanceInfo class
* Remove list_instances_detail from compute drivers
* maint: remove an unused import in libvirt.driver
* Fixes bare-metal spawn error
* Refactoring required for blueprint xenapi-live-migration
* refactor baremetal/proxy => baremetal/driver
* Switch to common logging
* Make libvirt LoopingCalls actually wait()
* Imports cleanup
* Unused imports cleanup (folsom-2)
* convert virt drivers to fully dynamic loading
* cleanup power state (partially implements bp task-management)
* clean-up of the bare-metal framework
* Added a instance state update notification
* Update pep8 dependency to v1.1
* Alphabetize imports in nova/tests/
* Make use of openstack.common.jsonutils
* Alphabetize imports in nova/virt/
* Replaces exceptions.Error with NovaException
* Log instance information for baremetal
* Improved localization testing
* remove unused flag: baremetal_injected_network_template baremetal_uri baremetal_allow_project_net_traffic
* Add periodic_fuzzy_delay option
* HACKING fixes, TODO authors
* Add pybasedir and bindir options
* Only raw string literals should be used with _()
* Remove unnecessary setting up and down of mox and stubout
* Remove unnecessary variables from tests
* Move
get_info to taking an instance
* Exception cleanup
* Backslash continuations (nova.tests)
* Replace ApiError with new exceptions
* Standardize logging delaration and use
* remove unused and buggy function from baremetal proxy
* Backslash continuations (nova.virt.baremetal)
* Remove the last of the gflags shim layer
* Implements blueprint heterogeneous-tilera-architecture-support
* Deleting test dir from a pull from trunk
* Updated to remove built docs
* initial commit

ironic-15.0.0/driver-requirements.txt

# This file lists all python libraries which are utilized by drivers,
# but not listed in global-requirements.
# It is intended to help package maintainers to discover additional
# python projects they should package as optional dependencies for Ironic.

# These are available on pypi
proliantutils>=2.9.1
pysnmp>=4.3.0,<5.0.0
python-scciclient>=0.8.0
python-dracclient>=3.1.0,<5.0.0
python-xclarityclient>=0.1.6

# The Redfish hardware type uses the Sushy library
sushy>=3.2.0

# Ansible-deploy interface
ansible>=2.7

# HUAWEI iBMC hardware type uses the python-ibmcclient library
python-ibmcclient>=0.1.0

# Dell EMC iDRAC sushy OEM extension
sushy-oem-idrac<=1.0.0

ironic-15.0.0/setup.py

# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT

import setuptools

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)

ironic-15.0.0/AUTHORS

119Vik Abhishek Kekane Adam Gandelman Adam Kimball Aeva Black Aija Jaunteva Akhila Kishore Akilan Pughazhendi Alberto Planas Alessandro Pilotti Alex Meade Alexander Gordeev Alexandra Settle Alexandra Settle Alexey Galkin Alexis Lee Aline Bousquet Ana Krivokapic Andrea Frittoli Andreas Jaeger Andreas Jaeger Andrew Bogott Andrey Kurilin Andrey Shestakov Angus Thomas Anh Tran Anita Kuno Ankit Kumar Anne Gentle Annie Lezil Anshul Jain Anson Y.W Anton Arefiev Anup Navare Anusha Ramineni Anusha Ramineni Aparna Arata Notsu Armando Migliaccio Arne Wiebalck Arne Wiebalck Artem Rozumenko Arun S A G Atsushi SAKAI Bernard Van De Walle Bertrand Lallau Bharath kumar Bill Dodd Bob Ball Bob Fournier Boris Pavlovic Brian Elliott Brian Waldon Bruno Cornec Béla Vancsics Caio Oliveira Cameron.C Cao Shufeng Cao Xuan Hoang Carmelo Ragusa Carol Bouchard Chang Bo Guo ChangBo Guo(gcb) Charlle Daniel Charlle Dias Chris Behrens Chris Dearborn Chris Jones Chris Krelle Chris Krelle Chris Krelle Chris St.
Pierre Christian Berendt Christopher Dearborn Christopher Dearborn Chuck Short Chuck Short Clark Boylan Claudiu Belu Clenimar Filemon Clif Houck Clint Byrum Colleen Murphy Corey Bryant Cuong Nguyen D G Lee Dan Prince Dan Smith Dan Smith Daniel Abad Dao Cong Tien Daryl Walleck Davanum Srinivas Davanum Srinivas David Edery David Hewson David Kang David McNally David Shrewsbury Davide Guerri Debayan Ray Derek Higgins Devananda van der Veen Dima Shulyak Dirk Mueller Dmitry Galkin Dmitry Nikishov Dmitry Tantsur Dmitry Tantsur Dmitry Tantsur DongCan Dongcan Ye Dongdong Zhou Doug Hellmann Edan David Edwin Zhai Eli Qiao Elizabeth Elwell Ellen Hui Emilien Macchi Erhan Ekici Eric Fried Eric Guo Eric Windisch Faizan Barmawer Fang Jinxing Fellype Cavalcante Fengqian Gao Flavio Percoco Félix Bouliane Gabriel Assis Bezerra Galyna Zholtkevych Gary Kotton Gaëtan Trellu Ghe Rivero Ghe Rivero Ghe Rivero Gleb Stepanov Gonéri Le Bouder Graham Hayes Gregory Haynes Grzegorz Grasza Gábor Antal Ha Van Tu Hadi Bannazadeh Hamdy Khader Hans Lindgren Haomeng, Wang Harald Jensas Harald Jensås Harshada Mangesh Kakad He Yongli Hieu LE Hironori Shiina Hoang Trung Hieu Honza Pokorny Hugo Nicodemos Hugo Nicodemos IWAMOTO Toshihiro Ian Wienand Igor Kalnitsky Ihar Hrachyshka Ilya Etingof Ilya Pekelny Imre Farkas Ionut Balutoiu Iury Gregory Melo Ferreira Iury Gregory Melo Ferreira Jacek Tomasiak Jakub Libosvar James E. Blair James E. Blair James Slagle Jan Gutter Jan Horstmann Jason Kölker Javier Pena Jay Faulkner Jens Harbott Jeremy Stanley Jerry Jesse Andrews Jesse Pretorius Jim Rollenhagen Jing Sun Joanna Taryma Joe Gordon Johannes Erdfelt John Garbutt John Garbutt John L. Villalovos John L. 
Villalovos John Trowbridge Jonathan Provost Josh Gachnang Joshua Harlow Joshua Harlow Juan Antonio Osorio Robles Julia Kreger Julian Edwards Julien Danjou Junya Akahira KATO Tomoyuki Kaifeng Wang Kan Ken Igarashi Ken'ichi Ohmichi Kobi Samoray Kun Huang Kurt Taylor Kurt Taylor Kyle Stevenson Kyrylo Romanenko Lance Bragstad Lars Kellogg-Stedman Laura Moore Lenny Verkhovsky LiYucai Lilia Lilia Sampaio Lin Tan Lin Tan Lokesh S Lucas Alvares Gomes Luong Anh Tuan M V P Nitesh Madhuri Kumari Madhuri Kumari Manuel Buil MaoyangLiu Marc Methot Marcin Juszkiewicz Marco Morais Marcus Rafael Mario Villaplana Mark Atwood Mark Beierl Mark Goddard Mark Goddard Mark McLoughlin Mark Silence Martin Kletzander Martin Roy Martyn Taylor Mathieu Gagné Mathieu Mitchell Matt Joyce Matt Keeann Matt Riedemann Matt Riedemann Matt Wagner Matthew Gilliard Matthew Thode Matthew Treinish Mauro S. M. Rodrigues Max Lobur Max Lobur Michael Davies Michael Kerrin Michael Krotscheck Michael Still Michael Tupitsyn Michael Turek Michael Turek Michal Arbet Michey Mehta michey.mehta@hp.com Mike Bayer Mike Turek MikeG451 Mikhail Durnosvistov Mikyung Kang Miles Gould Mitsuhiro SHIGEMATSU Mitsuhiro SHIGEMATSU Monty Taylor Moshe Levi Motohiro OTSUKA Motohiro Otsuka Nam Nguyen Hoai Naohiro Tamura Ngo Quoc Cuong Nguyen Hai Nguyen Hung Phuong Nguyen Phuong An Nguyen Van Duc Nguyen Van Trung Nikolay Fedotov Nisha Agarwal Nisha Brahmankar Noam Angel OctopusZhang Oleksiy Petrenko Om Kumar Ondřej Nový OpenStack Release Bot Pablo Fernando Cargnelutti Paul Belanger Pavlo Shchelokovskyy Pavlo Shchelokovskyy Peeyush Gupta Peng Yong Peter Kendall Phil Day Philippe Godin Pierre Riteau PollyZ Pradip Kadam Pádraig Brady Qian Min Chen Qianbiao NG Qianbiao.NG R-Vaishnavi Rachit7194 Rafi Khardalian Rakesh H S Ramakrishnan G Ramamani Yeleswarapu Raphael Glon Raphael Glon Ricardo Araújo Santos Riccardo Pittau Richard Pioso Rick Harris Robert Collins Robert Collins Rohan Kanade Rohan Kanade Roman Bogorodskiy Roman Dashevsky Roman 
Podoliaka Roman Prykhodchenko Roman Prykhodchenko Ruby Loo Ruby Loo Ruby Loo Ruby Loo Ruby Loo Rushil Chugh Russell Bryant Russell Haering Ryan Bridges SHIGEMATSU Mitsuhiro Sam Betts Sana Khan Sandhya Balakrishnan Sandy Walsh Sanjay Kumar Singh Sascha Peilicke Sascha Peilicke Sasha Chuzhoy Satoru Moriya Sean Dague Sean Dague Sean McGinnis Serge Kovaleff Sergey Lukjanov Sergey Lupersolsky Sergey Lupersolsky Sergey Nikitin Sergey Vilgelm Sergii Golovatiuk Shane Wang Shilla Saebi Shinn'ya Hoshino Shivanand Tendulker Shivanand Tendulker Shuangtai Tian Shuichiro MAKIGAKI Shuquan Huang Sinval Vieira Sirushti Murugesan SofiiaAndriichenko Solio Sarabia Srinivasa Acharya Stanislaw Pitucha Stenio Araujo Stephen Finucane Steve Baker Steven Dake Steven Hardy Stig Telfer Sukhdev Kapur Sukhdev Kapur Surya Seetharaman Takashi NATSUME Tan Lin Tang Chen Tao Li Thiago Paiva Thierry Carrez Thomas Bechtold Thomas Goirand Thomas Herve TienDC Tim Burke Tom Fifield Tony Breeds Tran Ha Tuyen Tuan Do Anh TuanLAF Tushar Kalra Tzu-Mainn Chen Vadim Hmyrov Vanou Ishii Varsha Varun Gadiraju Vasyl Saienko Vic Howard Victor Lowther Victor Sergeyev Vikas Jain Vinay B S Vincent S. 
Cojot Vishvananda Ishaya Vladyslav Drok Vu Cong Tuan Wang Jerry Wang Wei Wanghua Wei Du Will Szumski Xavier Xian Dong, Meng Xian Dong, Meng Xiaobin Qu XiaojueGuan XieYingYun Yaguo Zhou Yatin Kumbhare Yibo Cai Yolanda Robla Yolanda Robla Mota Yuiko Takada Yuiko Takada Mori Yuiko Takada Mori Yun Mao Yuriy Taraday Yuriy Yekovenko Yuriy Zveryanskyy Yushiro FURUKAWA Zachary Zane Bitter Zenghui Shi Zhang Yang Zhao Lei Zhenguo Niu Zhenguo Niu Zhenzan Zhou ZhiQiang Fan ZhiQiang Fan ZhongShengping Zhongyue Luo Zhongyue Luo akhiljain23 anascko ankit baiwenteng baiyuan bin yu blue55 brandonzhao caoyuan chao liu chenaidong1 chenghang chenglch chenjiao chenxiangui chenxing daz dekehn digambar divakar-padiyar-nandavar dparalen ericxiett fpxie gaoxiaoyong gaozx gecong1973 gengchc2 ghanshyam ghanshyam ghanshyam houming-wang huang.zhiping jiang wei jiangfei jiangwt100 jiapei jinxingfang jinxingfang junbo jxiaobin kesper kesper klyang lei-zhang-99cloud licanwei lijunjie lin shengrong linggao liumk liusheng liushuobj lukasz lvdongbing maelk mallikarjuna.kolagatla max_lobur melissaml michaeltchapman mkumari mpardhi23 mvpnitesh nishagbkar noor_muhammad_dell paresh-sao pawnesh.kumar pengyuesheng poojajadhav pradeepcsekar rabi rajinir rajinir ricolin root ryo.kurahashi saripurigopi shangxiaobj shenjiatong shenxindi shuangyang.qian sjing sonu.kumar spranjali srobert stephane suichangyin sunqingliang6 takanorimiyagishi tanlin tianhui tiendc tonybrad vishal mahajan vmud213 vsaienko wangdequn wanghao wanghongtaozz wangkf wangkf wangqi wangxiyuan wangzhengwei weizhao whaom whitekid whoami-rajat wu.chunyang wudong xgwang5843 xiexs yangxurong yatin yuan liang yufei yuhui_inspur yunhong jiang yushangbin yuyafei zackchen zhang.lei zhangbailin zhangdebo zhangjl zhangyanxian zhangyanxian zhangyanying zhu.fanglei zhufl zhurong zouyee zshi 翟小君 ironic-15.0.0/setup.cfg0000664000175000017500000001734313652514443014742 0ustar zuulzuul00000000000000[metadata] name = ironic summary = OpenStack Bare Metal 
Provisioning description-file = README.rst author = OpenStack author-email = openstack-discuss@lists.openstack.org home-page = https://docs.openstack.org/ironic/latest/ python-requires = >=3.6 classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 3 :: Only Programming Language :: Python :: 3 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 [files] data_files = etc/ironic = etc/ironic/rootwrap.conf etc/ironic/rootwrap.d = etc/ironic/rootwrap.d/* packages = ironic [entry_points] oslo.config.opts = ironic = ironic.conf.opts:list_opts oslo.config.opts.defaults = ironic = ironic.conf.opts:update_opt_defaults oslo.policy.enforcer = ironic = ironic.common.policy:get_oslo_policy_enforcer oslo.policy.policies = ironic.api = ironic.common.policy:list_policies console_scripts = ironic-api = ironic.cmd.api:main ironic-dbsync = ironic.cmd.dbsync:main ironic-conductor = ironic.cmd.conductor:main ironic-rootwrap = oslo_rootwrap.cmd:main ironic-status = ironic.cmd.status:main wsgi_scripts = ironic-api-wsgi = ironic.api.wsgi:initialize_wsgi_app ironic.dhcp = neutron = ironic.dhcp.neutron:NeutronDHCPApi none = ironic.dhcp.none:NoneDHCPApi ironic.hardware.interfaces.bios = fake = ironic.drivers.modules.fake:FakeBIOS idrac-wsman = ironic.drivers.modules.drac.bios:DracWSManBIOS ilo = ironic.drivers.modules.ilo.bios:IloBIOS irmc = ironic.drivers.modules.irmc.bios:IRMCBIOS no-bios = ironic.drivers.modules.noop:NoBIOS redfish = ironic.drivers.modules.redfish.bios:RedfishBIOS ironic.hardware.interfaces.boot = fake = ironic.drivers.modules.fake:FakeBoot idrac-redfish-virtual-media = ironic.drivers.modules.drac.boot:DracRedfishVirtualMediaBoot ilo-pxe = ironic.drivers.modules.ilo.boot:IloPXEBoot ilo-ipxe = 
ironic.drivers.modules.ilo.boot:IloiPXEBoot ilo-virtual-media = ironic.drivers.modules.ilo.boot:IloVirtualMediaBoot ipxe = ironic.drivers.modules.ipxe:iPXEBoot irmc-pxe = ironic.drivers.modules.irmc.boot:IRMCPXEBoot irmc-virtual-media = ironic.drivers.modules.irmc.boot:IRMCVirtualMediaBoot pxe = ironic.drivers.modules.pxe:PXEBoot redfish-virtual-media = ironic.drivers.modules.redfish.boot:RedfishVirtualMediaBoot ironic.hardware.interfaces.console = fake = ironic.drivers.modules.fake:FakeConsole ilo = ironic.drivers.modules.ilo.console:IloConsoleInterface ipmitool-shellinabox = ironic.drivers.modules.ipmitool:IPMIShellinaboxConsole ipmitool-socat = ironic.drivers.modules.ipmitool:IPMISocatConsole no-console = ironic.drivers.modules.noop:NoConsole ironic.hardware.interfaces.deploy = ansible = ironic.drivers.modules.ansible.deploy:AnsibleDeploy direct = ironic.drivers.modules.agent:AgentDeploy fake = ironic.drivers.modules.fake:FakeDeploy iscsi = ironic.drivers.modules.iscsi_deploy:ISCSIDeploy ramdisk = ironic.drivers.modules.pxe:PXERamdiskDeploy ironic.hardware.interfaces.inspect = fake = ironic.drivers.modules.fake:FakeInspect idrac = ironic.drivers.modules.drac.inspect:DracInspect idrac-redfish = ironic.drivers.modules.drac.inspect:DracRedfishInspect idrac-wsman = ironic.drivers.modules.drac.inspect:DracWSManInspect ilo = ironic.drivers.modules.ilo.inspect:IloInspect inspector = ironic.drivers.modules.inspector:Inspector irmc = ironic.drivers.modules.irmc.inspect:IRMCInspect no-inspect = ironic.drivers.modules.noop:NoInspect redfish = ironic.drivers.modules.redfish.inspect:RedfishInspect ironic.hardware.interfaces.management = fake = ironic.drivers.modules.fake:FakeManagement ibmc = ironic.drivers.modules.ibmc.management:IBMCManagement idrac = ironic.drivers.modules.drac.management:DracManagement idrac-redfish = ironic.drivers.modules.drac.management:DracRedfishManagement idrac-wsman = ironic.drivers.modules.drac.management:DracWSManManagement ilo = 
ironic.drivers.modules.ilo.management:IloManagement ilo5 = ironic.drivers.modules.ilo.management:Ilo5Management intel-ipmitool = ironic.drivers.modules.intel_ipmi.management:IntelIPMIManagement ipmitool = ironic.drivers.modules.ipmitool:IPMIManagement irmc = ironic.drivers.modules.irmc.management:IRMCManagement noop = ironic.drivers.modules.noop_mgmt:NoopManagement redfish = ironic.drivers.modules.redfish.management:RedfishManagement xclarity = ironic.drivers.modules.xclarity.management:XClarityManagement ironic.hardware.interfaces.network = flat = ironic.drivers.modules.network.flat:FlatNetwork neutron = ironic.drivers.modules.network.neutron:NeutronNetwork noop = ironic.drivers.modules.network.noop:NoopNetwork ironic.hardware.interfaces.power = fake = ironic.drivers.modules.fake:FakePower ibmc = ironic.drivers.modules.ibmc.power:IBMCPower idrac = ironic.drivers.modules.drac.power:DracPower idrac-redfish = ironic.drivers.modules.drac.power:DracRedfishPower idrac-wsman = ironic.drivers.modules.drac.power:DracWSManPower ilo = ironic.drivers.modules.ilo.power:IloPower ipmitool = ironic.drivers.modules.ipmitool:IPMIPower irmc = ironic.drivers.modules.irmc.power:IRMCPower redfish = ironic.drivers.modules.redfish.power:RedfishPower snmp = ironic.drivers.modules.snmp:SNMPPower xclarity = ironic.drivers.modules.xclarity.power:XClarityPower ironic.hardware.interfaces.raid = agent = ironic.drivers.modules.agent:AgentRAID fake = ironic.drivers.modules.fake:FakeRAID idrac = ironic.drivers.modules.drac.raid:DracRAID idrac-wsman = ironic.drivers.modules.drac.raid:DracWSManRAID ilo5 = ironic.drivers.modules.ilo.raid:Ilo5RAID irmc = ironic.drivers.modules.irmc.raid:IRMCRAID no-raid = ironic.drivers.modules.noop:NoRAID ironic.hardware.interfaces.rescue = agent = ironic.drivers.modules.agent:AgentRescue fake = ironic.drivers.modules.fake:FakeRescue no-rescue = ironic.drivers.modules.noop:NoRescue ironic.hardware.interfaces.storage = fake = ironic.drivers.modules.fake:FakeStorage 
noop = ironic.drivers.modules.storage.noop:NoopStorage cinder = ironic.drivers.modules.storage.cinder:CinderStorage external = ironic.drivers.modules.storage.external:ExternalStorage ironic.hardware.interfaces.vendor = fake = ironic.drivers.modules.fake:FakeVendorB ibmc = ironic.drivers.modules.ibmc.vendor:IBMCVendor idrac = ironic.drivers.modules.drac.vendor_passthru:DracVendorPassthru idrac-wsman = ironic.drivers.modules.drac.vendor_passthru:DracWSManVendorPassthru ilo = ironic.drivers.modules.ilo.vendor:VendorPassthru ipmitool = ironic.drivers.modules.ipmitool:VendorPassthru no-vendor = ironic.drivers.modules.noop:NoVendor ironic.hardware.types = fake-hardware = ironic.drivers.fake_hardware:FakeHardware ibmc = ironic.drivers.ibmc:IBMCHardware idrac = ironic.drivers.drac:IDRACHardware ilo = ironic.drivers.ilo:IloHardware ilo5 = ironic.drivers.ilo:Ilo5Hardware intel-ipmi = ironic.drivers.intel_ipmi:IntelIPMIHardware ipmi = ironic.drivers.ipmi:IPMIHardware irmc = ironic.drivers.irmc:IRMCHardware manual-management = ironic.drivers.generic:ManualManagementHardware redfish = ironic.drivers.redfish:RedfishHardware snmp = ironic.drivers.snmp:SNMPHardware xclarity = ironic.drivers.xclarity:XClarityHardware ironic.database.migration_backend = sqlalchemy = ironic.db.sqlalchemy.migration [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [compile_catalog] directory = ironic/locale domain = ironic [update_catalog] domain = ironic output_dir = ironic/locale input_file = ironic/locale/ironic.pot [extract_messages] keywords = _ gettext ngettext l_ lazy_gettext mapping_file = babel.cfg output_file = ironic/locale/ironic.pot [extras] guru_meditation_reports = oslo.reports>=1.18.0 # Apache-2.0 i18n = oslo.i18n>=3.15.3 # Apache-2.0 ironic-15.0.0/lower-constraints.txt0000664000175000017500000000273313652514273017355 0ustar zuulzuul00000000000000alembic==0.9.6 automaton==1.9.0 Babel==2.3.4 bandit==1.1.0 bashate==0.5.1 coverage==4.0 ddt==1.0.1 doc8==0.6.0 eventlet==0.18.2 
fixtures==3.0.0 flake8-import-order==0.17.1 futurist==1.2.0 hacking==3.0.0 ironic-lib==2.17.1 iso8601==0.1.11 Jinja2==2.10 jsonpatch==1.16 jsonschema==2.6.0 keystoneauth1==3.18.0 keystonemiddleware==4.17.0 mock==3.0.0 openstackdocstheme==1.31.2 openstacksdk==0.37.0 os-api-ref==1.4.0 os-traits==0.4.0 oslo.concurrency==3.26.0 oslo.config==5.2.0 oslo.context==2.19.2 oslo.db==4.40.0 oslo.i18n==3.15.3 oslo.log==3.36.0 oslo.messaging==5.29.0 oslo.middleware==3.31.0 oslo.policy==1.30.0 oslo.reports==1.18.0 oslo.rootwrap==5.8.0 oslo.serialization==2.18.0 oslo.service==1.24.0 oslo.upgradecheck==0.1.0 oslo.utils==3.38.0 oslo.versionedobjects==1.31.2 oslotest==3.2.0 osprofiler==1.5.0 pbr==2.0.0 pecan==1.0.0 pika==0.10.0 psutil==3.2.2 psycopg2==2.7.3 Pygments==2.2.0 PyMySQL==0.7.6 pysendfile==2.0.0 python-cinderclient==3.3.0 python-glanceclient==2.8.0 python-neutronclient==6.7.0 python-swiftclient==3.2.0 pytz==2013.6 reno==2.5.0 requests==2.14.2 requestsexceptions==1.4.0 retrying==1.2.3 rfc3986==0.3.1 Sphinx==1.6.2 sphinxcontrib-httpdomain==1.6.1 sphinxcontrib-pecanwsme==0.10.0 sphinxcontrib-seqdiag==0.8.4 sphinxcontrib-svg2pdfconverter==0.1.0 sphinxcontrib-websupport==1.0.1 SQLAlchemy==1.0.10 sqlalchemy-migrate==0.11.0 stestr==1.0.0 stevedore==1.20.0 testresources==2.0.0 testscenarios==0.4 testtools==2.2.0 tooz==1.58.0 WebOb==1.7.1 WebTest==2.0.27 WSME==0.9.3 ironic-15.0.0/ironic.egg-info/0000775000175000017500000000000013652514443016066 5ustar zuulzuul00000000000000ironic-15.0.0/ironic.egg-info/pbr.json0000664000175000017500000000006013652514442017537 0ustar zuulzuul00000000000000{"git_version": "39bcb00f3", "is_release": true}ironic-15.0.0/ironic.egg-info/top_level.txt0000664000175000017500000000000713652514442020614 0ustar zuulzuul00000000000000ironic ironic-15.0.0/ironic.egg-info/entry_points.txt0000664000175000017500000001460113652514442021365 0ustar zuulzuul00000000000000[console_scripts] ironic-api = ironic.cmd.api:main ironic-conductor = ironic.cmd.conductor:main 
ironic-dbsync = ironic.cmd.dbsync:main ironic-rootwrap = oslo_rootwrap.cmd:main ironic-status = ironic.cmd.status:main [ironic.database.migration_backend] sqlalchemy = ironic.db.sqlalchemy.migration [ironic.dhcp] neutron = ironic.dhcp.neutron:NeutronDHCPApi none = ironic.dhcp.none:NoneDHCPApi [ironic.hardware.interfaces.bios] fake = ironic.drivers.modules.fake:FakeBIOS idrac-wsman = ironic.drivers.modules.drac.bios:DracWSManBIOS ilo = ironic.drivers.modules.ilo.bios:IloBIOS irmc = ironic.drivers.modules.irmc.bios:IRMCBIOS no-bios = ironic.drivers.modules.noop:NoBIOS redfish = ironic.drivers.modules.redfish.bios:RedfishBIOS [ironic.hardware.interfaces.boot] fake = ironic.drivers.modules.fake:FakeBoot idrac-redfish-virtual-media = ironic.drivers.modules.drac.boot:DracRedfishVirtualMediaBoot ilo-ipxe = ironic.drivers.modules.ilo.boot:IloiPXEBoot ilo-pxe = ironic.drivers.modules.ilo.boot:IloPXEBoot ilo-virtual-media = ironic.drivers.modules.ilo.boot:IloVirtualMediaBoot ipxe = ironic.drivers.modules.ipxe:iPXEBoot irmc-pxe = ironic.drivers.modules.irmc.boot:IRMCPXEBoot irmc-virtual-media = ironic.drivers.modules.irmc.boot:IRMCVirtualMediaBoot pxe = ironic.drivers.modules.pxe:PXEBoot redfish-virtual-media = ironic.drivers.modules.redfish.boot:RedfishVirtualMediaBoot [ironic.hardware.interfaces.console] fake = ironic.drivers.modules.fake:FakeConsole ilo = ironic.drivers.modules.ilo.console:IloConsoleInterface ipmitool-shellinabox = ironic.drivers.modules.ipmitool:IPMIShellinaboxConsole ipmitool-socat = ironic.drivers.modules.ipmitool:IPMISocatConsole no-console = ironic.drivers.modules.noop:NoConsole [ironic.hardware.interfaces.deploy] ansible = ironic.drivers.modules.ansible.deploy:AnsibleDeploy direct = ironic.drivers.modules.agent:AgentDeploy fake = ironic.drivers.modules.fake:FakeDeploy iscsi = ironic.drivers.modules.iscsi_deploy:ISCSIDeploy ramdisk = ironic.drivers.modules.pxe:PXERamdiskDeploy [ironic.hardware.interfaces.inspect] fake = 
ironic.drivers.modules.fake:FakeInspect idrac = ironic.drivers.modules.drac.inspect:DracInspect idrac-redfish = ironic.drivers.modules.drac.inspect:DracRedfishInspect idrac-wsman = ironic.drivers.modules.drac.inspect:DracWSManInspect ilo = ironic.drivers.modules.ilo.inspect:IloInspect inspector = ironic.drivers.modules.inspector:Inspector irmc = ironic.drivers.modules.irmc.inspect:IRMCInspect no-inspect = ironic.drivers.modules.noop:NoInspect redfish = ironic.drivers.modules.redfish.inspect:RedfishInspect [ironic.hardware.interfaces.management] fake = ironic.drivers.modules.fake:FakeManagement ibmc = ironic.drivers.modules.ibmc.management:IBMCManagement idrac = ironic.drivers.modules.drac.management:DracManagement idrac-redfish = ironic.drivers.modules.drac.management:DracRedfishManagement idrac-wsman = ironic.drivers.modules.drac.management:DracWSManManagement ilo = ironic.drivers.modules.ilo.management:IloManagement ilo5 = ironic.drivers.modules.ilo.management:Ilo5Management intel-ipmitool = ironic.drivers.modules.intel_ipmi.management:IntelIPMIManagement ipmitool = ironic.drivers.modules.ipmitool:IPMIManagement irmc = ironic.drivers.modules.irmc.management:IRMCManagement noop = ironic.drivers.modules.noop_mgmt:NoopManagement redfish = ironic.drivers.modules.redfish.management:RedfishManagement xclarity = ironic.drivers.modules.xclarity.management:XClarityManagement [ironic.hardware.interfaces.network] flat = ironic.drivers.modules.network.flat:FlatNetwork neutron = ironic.drivers.modules.network.neutron:NeutronNetwork noop = ironic.drivers.modules.network.noop:NoopNetwork [ironic.hardware.interfaces.power] fake = ironic.drivers.modules.fake:FakePower ibmc = ironic.drivers.modules.ibmc.power:IBMCPower idrac = ironic.drivers.modules.drac.power:DracPower idrac-redfish = ironic.drivers.modules.drac.power:DracRedfishPower idrac-wsman = ironic.drivers.modules.drac.power:DracWSManPower ilo = ironic.drivers.modules.ilo.power:IloPower ipmitool = 
ironic.drivers.modules.ipmitool:IPMIPower irmc = ironic.drivers.modules.irmc.power:IRMCPower redfish = ironic.drivers.modules.redfish.power:RedfishPower snmp = ironic.drivers.modules.snmp:SNMPPower xclarity = ironic.drivers.modules.xclarity.power:XClarityPower [ironic.hardware.interfaces.raid] agent = ironic.drivers.modules.agent:AgentRAID fake = ironic.drivers.modules.fake:FakeRAID idrac = ironic.drivers.modules.drac.raid:DracRAID idrac-wsman = ironic.drivers.modules.drac.raid:DracWSManRAID ilo5 = ironic.drivers.modules.ilo.raid:Ilo5RAID irmc = ironic.drivers.modules.irmc.raid:IRMCRAID no-raid = ironic.drivers.modules.noop:NoRAID [ironic.hardware.interfaces.rescue] agent = ironic.drivers.modules.agent:AgentRescue fake = ironic.drivers.modules.fake:FakeRescue no-rescue = ironic.drivers.modules.noop:NoRescue [ironic.hardware.interfaces.storage] cinder = ironic.drivers.modules.storage.cinder:CinderStorage external = ironic.drivers.modules.storage.external:ExternalStorage fake = ironic.drivers.modules.fake:FakeStorage noop = ironic.drivers.modules.storage.noop:NoopStorage [ironic.hardware.interfaces.vendor] fake = ironic.drivers.modules.fake:FakeVendorB ibmc = ironic.drivers.modules.ibmc.vendor:IBMCVendor idrac = ironic.drivers.modules.drac.vendor_passthru:DracVendorPassthru idrac-wsman = ironic.drivers.modules.drac.vendor_passthru:DracWSManVendorPassthru ilo = ironic.drivers.modules.ilo.vendor:VendorPassthru ipmitool = ironic.drivers.modules.ipmitool:VendorPassthru no-vendor = ironic.drivers.modules.noop:NoVendor [ironic.hardware.types] fake-hardware = ironic.drivers.fake_hardware:FakeHardware ibmc = ironic.drivers.ibmc:IBMCHardware idrac = ironic.drivers.drac:IDRACHardware ilo = ironic.drivers.ilo:IloHardware ilo5 = ironic.drivers.ilo:Ilo5Hardware intel-ipmi = ironic.drivers.intel_ipmi:IntelIPMIHardware ipmi = ironic.drivers.ipmi:IPMIHardware irmc = ironic.drivers.irmc:IRMCHardware manual-management = ironic.drivers.generic:ManualManagementHardware redfish = 
ironic.drivers.redfish:RedfishHardware snmp = ironic.drivers.snmp:SNMPHardware xclarity = ironic.drivers.xclarity:XClarityHardware [oslo.config.opts] ironic = ironic.conf.opts:list_opts [oslo.config.opts.defaults] ironic = ironic.conf.opts:update_opt_defaults [oslo.policy.enforcer] ironic = ironic.common.policy:get_oslo_policy_enforcer [oslo.policy.policies] ironic.api = ironic.common.policy:list_policies [wsgi_scripts] ironic-api-wsgi = ironic.api.wsgi:initialize_wsgi_app ironic-15.0.0/ironic.egg-info/PKG-INFO0000664000175000017500000000517113652514442017166 0ustar zuulzuul00000000000000Metadata-Version: 2.1 Name: ironic Version: 15.0.0 Summary: OpenStack Bare Metal Provisioning Home-page: https://docs.openstack.org/ironic/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ====== Ironic ====== Team and repository tags ------------------------ .. image:: https://governance.openstack.org/tc/badges/ironic.svg :target: https://governance.openstack.org/tc/reference/tags/index.html Overview -------- Ironic consists of an API and plug-ins for managing and provisioning physical machines in a security-aware and fault-tolerant manner. It can be used with nova as a hypervisor driver, or standalone service using bifrost. By default, it will use PXE and IPMI to interact with bare metal machines. Ironic also supports vendor-specific plug-ins which may implement additional functionality. Ironic is distributed under the terms of the Apache License, Version 2.0. The full terms and conditions of this license are detailed in the LICENSE file. 
Project resources ~~~~~~~~~~~~~~~~~ * Documentation: https://docs.openstack.org/ironic/latest * Source: https://opendev.org/openstack/ironic * Bugs: https://storyboard.openstack.org/#!/project/943 * Wiki: https://wiki.openstack.org/wiki/Ironic * APIs: https://docs.openstack.org/api-ref/baremetal/index.html * Release Notes: https://docs.openstack.org/releasenotes/ironic/ * Design Specifications: https://specs.openstack.org/openstack/ironic-specs/ Project status, bugs, and requests for feature enhancements (RFEs) are tracked in StoryBoard: https://storyboard.openstack.org/#!/project/943 For information on how to contribute to ironic, see https://docs.openstack.org/ironic/latest/contributor Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Requires-Python: >=3.6 Provides-Extra: guru_meditation_reports Provides-Extra: i18n Provides-Extra: test ironic-15.0.0/ironic.egg-info/SOURCES.txt0000664000175000017500000025601713652514443017765 0ustar zuulzuul00000000000000.mailmap .stestr.conf AUTHORS CONTRIBUTING.rst ChangeLog LICENSE README.rst babel.cfg bindep.txt driver-requirements.txt lower-constraints.txt reno.yaml requirements.txt setup.cfg setup.py test-requirements.txt tox.ini api-ref/regenerate-samples.sh api-ref/source/baremetal-api-v1-allocation.inc api-ref/source/baremetal-api-v1-chassis.inc api-ref/source/baremetal-api-v1-conductors.inc api-ref/source/baremetal-api-v1-deploy-templates.inc api-ref/source/baremetal-api-v1-driver-passthru.inc api-ref/source/baremetal-api-v1-drivers.inc 
api-ref/source/baremetal-api-v1-misc.inc api-ref/source/baremetal-api-v1-node-allocation.inc api-ref/source/baremetal-api-v1-node-management.inc api-ref/source/baremetal-api-v1-node-passthru.inc api-ref/source/baremetal-api-v1-nodes-bios.inc api-ref/source/baremetal-api-v1-nodes-portgroups.inc api-ref/source/baremetal-api-v1-nodes-ports.inc api-ref/source/baremetal-api-v1-nodes-traits.inc api-ref/source/baremetal-api-v1-nodes-vifs.inc api-ref/source/baremetal-api-v1-nodes-volume.inc api-ref/source/baremetal-api-v1-nodes.inc api-ref/source/baremetal-api-v1-portgroups-ports.inc api-ref/source/baremetal-api-v1-portgroups.inc api-ref/source/baremetal-api-v1-ports.inc api-ref/source/baremetal-api-v1-volume.inc api-ref/source/baremetal-api-versions.inc api-ref/source/conf.py api-ref/source/index.rst api-ref/source/parameters.yaml api-ref/source/samples/allocation-create-request-2.json api-ref/source/samples/allocation-create-request.json api-ref/source/samples/allocation-create-response.json api-ref/source/samples/allocation-show-response.json api-ref/source/samples/allocation-update-request.json api-ref/source/samples/allocation-update-response.json api-ref/source/samples/allocations-list-response.json api-ref/source/samples/api-root-response.json api-ref/source/samples/api-v1-root-response.json api-ref/source/samples/chassis-create-request.json api-ref/source/samples/chassis-list-details-response.json api-ref/source/samples/chassis-list-response.json api-ref/source/samples/chassis-show-response.json api-ref/source/samples/chassis-update-request.json api-ref/source/samples/chassis-update-response.json api-ref/source/samples/conductor-list-details-response.json api-ref/source/samples/conductor-list-response.json api-ref/source/samples/conductor-show-response.json api-ref/source/samples/deploy-template-create-request.json api-ref/source/samples/deploy-template-create-response.json api-ref/source/samples/deploy-template-detail-response.json 
api-ref/source/samples/deploy-template-list-response.json api-ref/source/samples/deploy-template-show-response.json api-ref/source/samples/deploy-template-update-request.json api-ref/source/samples/deploy-template-update-response.json api-ref/source/samples/driver-get-response.json api-ref/source/samples/driver-logical-disk-properties-response.json api-ref/source/samples/driver-property-response.json api-ref/source/samples/drivers-list-detail-response.json api-ref/source/samples/drivers-list-response.json api-ref/source/samples/lookup-node-response.json api-ref/source/samples/node-bios-detail-response.json api-ref/source/samples/node-bios-list-response.json api-ref/source/samples/node-create-request-classic.json api-ref/source/samples/node-create-request-dynamic.json api-ref/source/samples/node-create-response.json api-ref/source/samples/node-get-boot-device-response.json api-ref/source/samples/node-get-state-response.json api-ref/source/samples/node-get-supported-boot-devices-response.json api-ref/source/samples/node-inject-nmi.json api-ref/source/samples/node-maintenance-request.json api-ref/source/samples/node-port-detail-response.json api-ref/source/samples/node-port-list-response.json api-ref/source/samples/node-portgroup-detail-response.json api-ref/source/samples/node-portgroup-list-response.json api-ref/source/samples/node-set-active-state.json api-ref/source/samples/node-set-available-state.json api-ref/source/samples/node-set-boot-device.json api-ref/source/samples/node-set-clean-state.json api-ref/source/samples/node-set-manage-state.json api-ref/source/samples/node-set-power-off.json api-ref/source/samples/node-set-raid-request.json api-ref/source/samples/node-set-soft-power-off.json api-ref/source/samples/node-set-traits-request.json api-ref/source/samples/node-show-response.json api-ref/source/samples/node-traits-list-response.json api-ref/source/samples/node-update-driver-info-request.json api-ref/source/samples/node-update-driver-info-response.json 
api-ref/source/samples/node-update-driver.json api-ref/source/samples/node-validate-response.json api-ref/source/samples/node-vendor-passthru-response.json api-ref/source/samples/node-vif-attach-request.json api-ref/source/samples/node-vif-list-response.json api-ref/source/samples/node-volume-connector-detail-response.json api-ref/source/samples/node-volume-connector-list-response.json api-ref/source/samples/node-volume-list-response.json api-ref/source/samples/node-volume-target-detail-response.json api-ref/source/samples/node-volume-target-list-response.json api-ref/source/samples/nodes-list-details-response.json api-ref/source/samples/nodes-list-response.json api-ref/source/samples/port-create-request.json api-ref/source/samples/port-create-response.json api-ref/source/samples/port-list-detail-response.json api-ref/source/samples/port-list-response.json api-ref/source/samples/port-update-request.json api-ref/source/samples/port-update-response.json api-ref/source/samples/portgroup-create-request.json api-ref/source/samples/portgroup-create-response.json api-ref/source/samples/portgroup-list-detail-response.json api-ref/source/samples/portgroup-list-response.json api-ref/source/samples/portgroup-port-detail-response.json api-ref/source/samples/portgroup-port-list-response.json api-ref/source/samples/portgroup-update-request.json api-ref/source/samples/portgroup-update-response.json api-ref/source/samples/volume-connector-create-request.json api-ref/source/samples/volume-connector-create-response.json api-ref/source/samples/volume-connector-list-detail-response.json api-ref/source/samples/volume-connector-list-response.json api-ref/source/samples/volume-connector-update-request.json api-ref/source/samples/volume-connector-update-response.json api-ref/source/samples/volume-list-response.json api-ref/source/samples/volume-target-create-request.json api-ref/source/samples/volume-target-create-response.json 
api-ref/source/samples/volume-target-list-detail-response.json api-ref/source/samples/volume-target-list-response.json api-ref/source/samples/volume-target-update-request.json api-ref/source/samples/volume-target-update-response.json devstack/common_settings devstack/plugin.sh devstack/settings devstack/files/apache-ipxe-ironic.template devstack/files/apache-ironic-api-redirect.template devstack/files/debs/ironic devstack/files/hooks/qemu.py devstack/files/rpms/ironic devstack/lib/ironic devstack/tools/ironic/scripts/cleanup-node.sh devstack/tools/ironic/scripts/configure-vm.py devstack/tools/ironic/scripts/create-node.sh devstack/tools/ironic/scripts/setup-network.sh devstack/tools/ironic/templates/brbm.xml devstack/tools/ironic/templates/tftpd-xinetd.template devstack/tools/ironic/templates/vm.xml devstack/upgrade/resources.sh devstack/upgrade/settings devstack/upgrade/shutdown.sh devstack/upgrade/upgrade.sh devstack/upgrade/from-queens/upgrade-ironic doc/requirements.txt doc/source/conf.py doc/source/index.rst doc/source/_exts/automated_steps.py doc/source/admin/adoption.rst doc/source/admin/agent-token.rst doc/source/admin/api-audit-support.rst doc/source/admin/bios.rst doc/source/admin/boot-from-volume.rst doc/source/admin/building-windows-images.rst doc/source/admin/cleaning.rst doc/source/admin/conductor-groups.rst doc/source/admin/console.rst doc/source/admin/deploy-steps.rst doc/source/admin/drivers.rst doc/source/admin/gmr.rst doc/source/admin/index.rst doc/source/admin/inspection.rst doc/source/admin/metrics.rst doc/source/admin/multitenancy.rst doc/source/admin/node-deployment.rst doc/source/admin/node-multitenancy.rst doc/source/admin/notifications.rst doc/source/admin/portgroups.rst doc/source/admin/power-sync.rst doc/source/admin/radosgw.rst doc/source/admin/raid.rst doc/source/admin/report.txt doc/source/admin/rescue.rst doc/source/admin/retirement.rst doc/source/admin/security.rst doc/source/admin/troubleshooting.rst 
doc/source/admin/upgrade-guide.rst doc/source/admin/upgrade-to-hardware-types.rst doc/source/admin/drivers/ansible.rst doc/source/admin/drivers/ibmc.rst doc/source/admin/drivers/idrac.rst doc/source/admin/drivers/ilo.rst doc/source/admin/drivers/intel-ipmi.rst doc/source/admin/drivers/ipa.rst doc/source/admin/drivers/ipmitool.rst doc/source/admin/drivers/irmc.rst doc/source/admin/drivers/redfish.rst doc/source/admin/drivers/snmp.rst doc/source/admin/drivers/xclarity.rst doc/source/admin/interfaces/boot.rst doc/source/admin/interfaces/deploy.rst doc/source/cli/index.rst doc/source/cli/ironic-dbsync.rst doc/source/cli/ironic-status.rst doc/source/configuration/config.rst doc/source/configuration/index.rst doc/source/configuration/policy.rst doc/source/configuration/sample-config.rst doc/source/configuration/sample-policy.rst doc/source/contributor/adding-new-job.rst doc/source/contributor/architecture.rst doc/source/contributor/bios_develop.rst doc/source/contributor/contributing.rst doc/source/contributor/debug-ci-failures.rst doc/source/contributor/deploy-steps.rst doc/source/contributor/dev-quickstart.rst doc/source/contributor/drivers.rst doc/source/contributor/faq.rst doc/source/contributor/governance.rst doc/source/contributor/index.rst doc/source/contributor/ironic-boot-from-volume.rst doc/source/contributor/ironic-multitenant-networking.rst doc/source/contributor/jobs-description.rst doc/source/contributor/notifications.rst doc/source/contributor/osprofiler-support.rst doc/source/contributor/releasing.rst doc/source/contributor/rolling-upgrades.rst doc/source/contributor/states.rst doc/source/contributor/third-party-ci.rst doc/source/contributor/vendor-passthru.rst doc/source/contributor/vision-reflection.rst doc/source/contributor/vision.rst doc/source/contributor/webapi-version-history.rst doc/source/contributor/webapi.rst doc/source/images/conceptual_architecture.png doc/source/images/deployment_architecture_2.png 
doc/source/images/ironic_standalone_with_ibmc_driver.svg doc/source/images/logical_architecture.png doc/source/images/sample_trace.svg doc/source/images/sample_trace_details.svg doc/source/images/states.svg doc/source/install/advanced.rst doc/source/install/configdrive.rst doc/source/install/configure-cleaning.rst doc/source/install/configure-compute.rst doc/source/install/configure-glance-images.rst doc/source/install/configure-glance-swift.rst doc/source/install/configure-identity.rst doc/source/install/configure-integration.rst doc/source/install/configure-ipmi.rst doc/source/install/configure-ipv6-networking.rst doc/source/install/configure-iscsi.rst doc/source/install/configure-networking.rst doc/source/install/configure-nova-flavors.rst doc/source/install/configure-pxe.rst doc/source/install/configure-tenant-networks.rst doc/source/install/creating-images.rst doc/source/install/deploy-ramdisk.rst doc/source/install/enabling-drivers.rst doc/source/install/enabling-https.rst doc/source/install/enrollment.rst doc/source/install/get_started.rst doc/source/install/index.rst doc/source/install/install-obs.rst doc/source/install/install-rdo.rst doc/source/install/install-ubuntu.rst doc/source/install/install.rst doc/source/install/next-steps.rst doc/source/install/setup-drivers.rst doc/source/install/standalone.rst doc/source/install/troubleshooting.rst doc/source/install/include/boot-mode.inc doc/source/install/include/common-configure.inc doc/source/install/include/common-prerequisites.inc doc/source/install/include/configure-ironic-api-mod_wsgi.inc doc/source/install/include/configure-ironic-api.inc doc/source/install/include/configure-ironic-conductor.inc doc/source/install/include/console.inc doc/source/install/include/disk-label.inc doc/source/install/include/kernel-boot-parameters.inc doc/source/install/include/local-boot-partition-images.inc doc/source/install/include/notifications.inc doc/source/install/include/root-device-hints.inc 
doc/source/install/include/trusted-boot.inc doc/source/install/refarch/common.rst doc/source/install/refarch/index.rst doc/source/install/refarch/small-cloud-trusted-tenants.rst doc/source/user/index.rst etc/apache2/ironic etc/ironic/README-ironic.conf.txt etc/ironic/README-policy.yaml.txt etc/ironic/api_audit_map.conf.sample etc/ironic/rootwrap.conf etc/ironic/rootwrap.d/ironic-images.filters etc/ironic/rootwrap.d/ironic-utils.filters ironic/__init__.py ironic/version.py ironic.egg-info/PKG-INFO ironic.egg-info/SOURCES.txt ironic.egg-info/dependency_links.txt ironic.egg-info/entry_points.txt ironic.egg-info/not-zip-safe ironic.egg-info/pbr.json ironic.egg-info/requires.txt ironic.egg-info/top_level.txt ironic/api/__init__.py ironic/api/app.py ironic/api/config.py ironic/api/expose.py ironic/api/hooks.py ironic/api/types.py ironic/api/wsgi.py ironic/api/controllers/__init__.py ironic/api/controllers/base.py ironic/api/controllers/link.py ironic/api/controllers/root.py ironic/api/controllers/version.py ironic/api/controllers/v1/__init__.py ironic/api/controllers/v1/allocation.py ironic/api/controllers/v1/bios.py ironic/api/controllers/v1/chassis.py ironic/api/controllers/v1/collection.py ironic/api/controllers/v1/conductor.py ironic/api/controllers/v1/deploy_template.py ironic/api/controllers/v1/driver.py ironic/api/controllers/v1/event.py ironic/api/controllers/v1/node.py ironic/api/controllers/v1/notification_utils.py ironic/api/controllers/v1/port.py ironic/api/controllers/v1/portgroup.py ironic/api/controllers/v1/ramdisk.py ironic/api/controllers/v1/state.py ironic/api/controllers/v1/types.py ironic/api/controllers/v1/utils.py ironic/api/controllers/v1/versions.py ironic/api/controllers/v1/volume.py ironic/api/controllers/v1/volume_connector.py ironic/api/controllers/v1/volume_target.py ironic/api/middleware/__init__.py ironic/api/middleware/auth_token.py ironic/api/middleware/json_ext.py ironic/api/middleware/parsable_error.py ironic/cmd/__init__.py 
ironic/cmd/api.py ironic/cmd/conductor.py ironic/cmd/dbsync.py ironic/cmd/status.py ironic/common/__init__.py ironic/common/boot_devices.py ironic/common/boot_modes.py ironic/common/cinder.py ironic/common/components.py ironic/common/config.py ironic/common/context.py ironic/common/dhcp_factory.py ironic/common/driver_factory.py ironic/common/exception.py ironic/common/faults.py ironic/common/fsm.py ironic/common/grub_conf.template ironic/common/hash_ring.py ironic/common/i18n.py ironic/common/image_service.py ironic/common/images.py ironic/common/indicator_states.py ironic/common/isolinux_config.template ironic/common/keystone.py ironic/common/network.py ironic/common/neutron.py ironic/common/nova.py ironic/common/policy.py ironic/common/profiler.py ironic/common/pxe_utils.py ironic/common/raid.py ironic/common/release_mappings.py ironic/common/rpc.py ironic/common/rpc_service.py ironic/common/service.py ironic/common/states.py ironic/common/swift.py ironic/common/utils.py ironic/common/wsgi_service.py ironic/common/glance_service/__init__.py ironic/common/glance_service/image_service.py ironic/common/glance_service/service_utils.py ironic/common/json_rpc/__init__.py ironic/common/json_rpc/client.py ironic/common/json_rpc/server.py ironic/conductor/__init__.py ironic/conductor/allocations.py ironic/conductor/base_manager.py ironic/conductor/cleaning.py ironic/conductor/deployments.py ironic/conductor/manager.py ironic/conductor/notification_utils.py ironic/conductor/rpcapi.py ironic/conductor/steps.py ironic/conductor/task_manager.py ironic/conductor/utils.py ironic/conf/__init__.py ironic/conf/agent.py ironic/conf/ansible.py ironic/conf/api.py ironic/conf/audit.py ironic/conf/auth.py ironic/conf/cinder.py ironic/conf/conductor.py ironic/conf/console.py ironic/conf/database.py ironic/conf/default.py ironic/conf/deploy.py ironic/conf/dhcp.py ironic/conf/drac.py ironic/conf/glance.py ironic/conf/healthcheck.py ironic/conf/ibmc.py ironic/conf/ilo.py 
ironic/conf/inspector.py ironic/conf/ipmi.py ironic/conf/irmc.py ironic/conf/iscsi.py ironic/conf/json_rpc.py ironic/conf/metrics.py ironic/conf/metrics_statsd.py ironic/conf/neutron.py ironic/conf/nova.py ironic/conf/opts.py ironic/conf/pxe.py ironic/conf/redfish.py ironic/conf/service_catalog.py ironic/conf/snmp.py ironic/conf/swift.py ironic/conf/xclarity.py ironic/db/__init__.py ironic/db/api.py ironic/db/migration.py ironic/db/sqlalchemy/__init__.py ironic/db/sqlalchemy/alembic.ini ironic/db/sqlalchemy/api.py ironic/db/sqlalchemy/migration.py ironic/db/sqlalchemy/models.py ironic/db/sqlalchemy/alembic/README ironic/db/sqlalchemy/alembic/env.py ironic/db/sqlalchemy/alembic/script.py.mako ironic/db/sqlalchemy/alembic/versions/10b163d4481e_add_port_portgroup_internal_info.py ironic/db/sqlalchemy/alembic/versions/1a59178ebdf6_add_volume_targets_table.py ironic/db/sqlalchemy/alembic/versions/1d6951876d68_add_storage_interface_db_field_and_.py ironic/db/sqlalchemy/alembic/versions/1e15e7122cc9_add_extra_column_to_deploy_templates.py ironic/db/sqlalchemy/alembic/versions/1e1d5ace7dc6_add_inspection_started_at_and_.py ironic/db/sqlalchemy/alembic/versions/21b331f883ef_add_provision_updated_at.py ironic/db/sqlalchemy/alembic/versions/2353895ecfae_add_conductor_hardware_interfaces_table.py ironic/db/sqlalchemy/alembic/versions/242cc6a923b3_add_node_maintenance_reason.py ironic/db/sqlalchemy/alembic/versions/2581ebaf0cb2_initial_migration.py ironic/db/sqlalchemy/alembic/versions/28c44432c9c3_add_node_description.py ironic/db/sqlalchemy/alembic/versions/2aac7e0872f6_add_deploy_templates.py ironic/db/sqlalchemy/alembic/versions/2d13bc3d6bba_add_bios_config_and_interface.py ironic/db/sqlalchemy/alembic/versions/2fb93ffd2af1_increase_node_name_length.py ironic/db/sqlalchemy/alembic/versions/31baaf680d2b_add_node_instance_info.py ironic/db/sqlalchemy/alembic/versions/3ae36a5f5131_add_logical_name.py 
ironic/db/sqlalchemy/alembic/versions/3bea56f25597_add_unique_constraint_to_instance_uuid.py ironic/db/sqlalchemy/alembic/versions/3cb628139ea4_nodes_add_console_enabled.py ironic/db/sqlalchemy/alembic/versions/3d86a077a3f2_add_port_physical_network.py ironic/db/sqlalchemy/alembic/versions/405cfe08f18d_add_rescue_interface_to_node.py ironic/db/sqlalchemy/alembic/versions/487deb87cc9d_add_conductor_affinity_and_online.py ironic/db/sqlalchemy/alembic/versions/48d6c242bb9b_add_node_tags.py ironic/db/sqlalchemy/alembic/versions/493d8f27f235_add_portgroup_configuration_fields.py ironic/db/sqlalchemy/alembic/versions/4f399b21ae71_add_node_clean_step.py ironic/db/sqlalchemy/alembic/versions/516faf1bb9b1_resizing_column_nodes_driver.py ironic/db/sqlalchemy/alembic/versions/5674c57409b9_replace_nostate_with_available.py ironic/db/sqlalchemy/alembic/versions/5ea1b0d310e_added_port_group_table_and_altered_ports.py ironic/db/sqlalchemy/alembic/versions/60cf717201bc_add_standalone_ports_supported.py ironic/db/sqlalchemy/alembic/versions/664f85c2f622_add_conductor_group_to_nodes_conductors.py ironic/db/sqlalchemy/alembic/versions/789acc877671_add_raid_config.py ironic/db/sqlalchemy/alembic/versions/82c315d60161_add_bios_settings.py ironic/db/sqlalchemy/alembic/versions/868cb606a74a_add_version_field_in_base_class.py ironic/db/sqlalchemy/alembic/versions/93706939026c_add_node_protected_field.py ironic/db/sqlalchemy/alembic/versions/9cbeefa3763f_add_port_is_smartnic.py ironic/db/sqlalchemy/alembic/versions/b2ad35726bb0_add_node_lessee.py ironic/db/sqlalchemy/alembic/versions/b4130a7fc904_create_nodetraits_table.py ironic/db/sqlalchemy/alembic/versions/b9117ac17882_add_node_deploy_step.py ironic/db/sqlalchemy/alembic/versions/bb59b63f55a_add_node_driver_internal_info.py ironic/db/sqlalchemy/alembic/versions/bcdd431ba0bf_add_fields_for_all_interfaces.py ironic/db/sqlalchemy/alembic/versions/c14cef6dfedf_populate_node_network_interface.py 
ironic/db/sqlalchemy/alembic/versions/cd2c80feb331_add_node_retired_field.py ironic/db/sqlalchemy/alembic/versions/ce6c4b3cf5a2_add_allocation_owner.py ironic/db/sqlalchemy/alembic/versions/d2b036ae9378_add_automated_clean_field.py ironic/db/sqlalchemy/alembic/versions/daa1ba02d98_add_volume_connectors_table.py ironic/db/sqlalchemy/alembic/versions/dbefd6bdaa2c_add_default_column_to_.py ironic/db/sqlalchemy/alembic/versions/dd34e1f1303b_add_resource_class_to_node.py ironic/db/sqlalchemy/alembic/versions/dd67b91a1981_add_allocations_table.py ironic/db/sqlalchemy/alembic/versions/e294876e8028_add_node_network_interface.py ironic/db/sqlalchemy/alembic/versions/e918ff30eb42_resize_column_nodes_instance_info.py ironic/db/sqlalchemy/alembic/versions/f190f9d00a11_add_node_owner.py ironic/db/sqlalchemy/alembic/versions/f6fdb920c182_set_pxe_enabled_true.py ironic/db/sqlalchemy/alembic/versions/fb3f10dd262e_add_fault_to_node_table.py ironic/dhcp/__init__.py ironic/dhcp/base.py ironic/dhcp/neutron.py ironic/dhcp/none.py ironic/drivers/__init__.py ironic/drivers/base.py ironic/drivers/drac.py ironic/drivers/fake_hardware.py ironic/drivers/generic.py ironic/drivers/hardware_type.py ironic/drivers/ibmc.py ironic/drivers/ilo.py ironic/drivers/intel_ipmi.py ironic/drivers/ipmi.py ironic/drivers/irmc.py ironic/drivers/raid_config_schema.json ironic/drivers/redfish.py ironic/drivers/snmp.py ironic/drivers/utils.py ironic/drivers/xclarity.py ironic/drivers/modules/__init__.py ironic/drivers/modules/agent.py ironic/drivers/modules/agent_base.py ironic/drivers/modules/agent_client.py ironic/drivers/modules/agent_config.template ironic/drivers/modules/boot.ipxe ironic/drivers/modules/boot_mode_utils.py ironic/drivers/modules/console_utils.py ironic/drivers/modules/deploy_utils.py ironic/drivers/modules/fake.py ironic/drivers/modules/image_cache.py ironic/drivers/modules/inspect_utils.py ironic/drivers/modules/inspector.py ironic/drivers/modules/ipmitool.py ironic/drivers/modules/ipxe.py 
ironic/drivers/modules/ipxe_config.template ironic/drivers/modules/iscsi_deploy.py ironic/drivers/modules/master_grub_cfg.txt ironic/drivers/modules/noop.py ironic/drivers/modules/noop_mgmt.py ironic/drivers/modules/pxe.py ironic/drivers/modules/pxe_base.py ironic/drivers/modules/pxe_config.template ironic/drivers/modules/pxe_grub_config.template ironic/drivers/modules/snmp.py ironic/drivers/modules/ansible/__init__.py ironic/drivers/modules/ansible/deploy.py ironic/drivers/modules/ansible/playbooks/add-ironic-nodes.yaml ironic/drivers/modules/ansible/playbooks/ansible.cfg ironic/drivers/modules/ansible/playbooks/clean.yaml ironic/drivers/modules/ansible/playbooks/clean_steps.yaml ironic/drivers/modules/ansible/playbooks/deploy.yaml ironic/drivers/modules/ansible/playbooks/inventory ironic/drivers/modules/ansible/playbooks/shutdown.yaml ironic/drivers/modules/ansible/playbooks/callback_plugins/ironic_log.ini ironic/drivers/modules/ansible/playbooks/callback_plugins/ironic_log.py ironic/drivers/modules/ansible/playbooks/library/facts_wwn.py ironic/drivers/modules/ansible/playbooks/library/root_hints.py ironic/drivers/modules/ansible/playbooks/library/stream_url.py ironic/drivers/modules/ansible/playbooks/roles/clean/defaults/main.yaml ironic/drivers/modules/ansible/playbooks/roles/clean/tasks/main.yaml ironic/drivers/modules/ansible/playbooks/roles/clean/tasks/shred.yaml ironic/drivers/modules/ansible/playbooks/roles/clean/tasks/wipe.yaml ironic/drivers/modules/ansible/playbooks/roles/clean/tasks/zap.yaml ironic/drivers/modules/ansible/playbooks/roles/configure/defaults/main.yaml ironic/drivers/modules/ansible/playbooks/roles/configure/tasks/grub.yaml ironic/drivers/modules/ansible/playbooks/roles/configure/tasks/main.yaml ironic/drivers/modules/ansible/playbooks/roles/configure/tasks/mounts.yaml ironic/drivers/modules/ansible/playbooks/roles/deploy/files/partition_configdrive.sh ironic/drivers/modules/ansible/playbooks/roles/deploy/tasks/configdrive.yaml 
ironic/drivers/modules/ansible/playbooks/roles/deploy/tasks/download.yaml ironic/drivers/modules/ansible/playbooks/roles/deploy/tasks/main.yaml ironic/drivers/modules/ansible/playbooks/roles/deploy/tasks/write.yaml ironic/drivers/modules/ansible/playbooks/roles/discover/tasks/main.yaml ironic/drivers/modules/ansible/playbooks/roles/discover/tasks/roothints.yaml ironic/drivers/modules/ansible/playbooks/roles/prepare/tasks/main.yaml ironic/drivers/modules/ansible/playbooks/roles/prepare/tasks/parted.yaml ironic/drivers/modules/ansible/playbooks/roles/shutdown/tasks/main.yaml ironic/drivers/modules/drac/__init__.py ironic/drivers/modules/drac/bios.py ironic/drivers/modules/drac/boot.py ironic/drivers/modules/drac/common.py ironic/drivers/modules/drac/inspect.py ironic/drivers/modules/drac/job.py ironic/drivers/modules/drac/management.py ironic/drivers/modules/drac/power.py ironic/drivers/modules/drac/raid.py ironic/drivers/modules/drac/vendor_passthru.py ironic/drivers/modules/ibmc/__init__.py ironic/drivers/modules/ibmc/management.py ironic/drivers/modules/ibmc/mappings.py ironic/drivers/modules/ibmc/power.py ironic/drivers/modules/ibmc/utils.py ironic/drivers/modules/ibmc/vendor.py ironic/drivers/modules/ilo/__init__.py ironic/drivers/modules/ilo/bios.py ironic/drivers/modules/ilo/boot.py ironic/drivers/modules/ilo/common.py ironic/drivers/modules/ilo/console.py ironic/drivers/modules/ilo/firmware_processor.py ironic/drivers/modules/ilo/inspect.py ironic/drivers/modules/ilo/management.py ironic/drivers/modules/ilo/power.py ironic/drivers/modules/ilo/raid.py ironic/drivers/modules/ilo/vendor.py ironic/drivers/modules/intel_ipmi/__init__.py ironic/drivers/modules/intel_ipmi/management.py ironic/drivers/modules/irmc/__init__.py ironic/drivers/modules/irmc/bios.py ironic/drivers/modules/irmc/boot.py ironic/drivers/modules/irmc/common.py ironic/drivers/modules/irmc/inspect.py ironic/drivers/modules/irmc/management.py ironic/drivers/modules/irmc/power.py 
ironic/drivers/modules/irmc/raid.py ironic/drivers/modules/network/__init__.py ironic/drivers/modules/network/common.py ironic/drivers/modules/network/flat.py ironic/drivers/modules/network/neutron.py ironic/drivers/modules/network/noop.py ironic/drivers/modules/redfish/__init__.py ironic/drivers/modules/redfish/bios.py ironic/drivers/modules/redfish/boot.py ironic/drivers/modules/redfish/inspect.py ironic/drivers/modules/redfish/management.py ironic/drivers/modules/redfish/power.py ironic/drivers/modules/redfish/utils.py ironic/drivers/modules/storage/__init__.py ironic/drivers/modules/storage/cinder.py ironic/drivers/modules/storage/external.py ironic/drivers/modules/storage/noop.py ironic/drivers/modules/xclarity/__init__.py ironic/drivers/modules/xclarity/common.py ironic/drivers/modules/xclarity/management.py ironic/drivers/modules/xclarity/power.py ironic/hacking/__init__.py ironic/hacking/checks.py ironic/objects/__init__.py ironic/objects/allocation.py ironic/objects/base.py ironic/objects/bios.py ironic/objects/chassis.py ironic/objects/conductor.py ironic/objects/deploy_template.py ironic/objects/fields.py ironic/objects/indirection.py ironic/objects/node.py ironic/objects/notification.py ironic/objects/port.py ironic/objects/portgroup.py ironic/objects/trait.py ironic/objects/volume_connector.py ironic/objects/volume_target.py ironic/tests/__init__.py ironic/tests/base.py ironic/tests/functional/__init__.py ironic/tests/unit/__init__.py ironic/tests/unit/policy_fixture.py ironic/tests/unit/raid_constants.py ironic/tests/unit/stubs.py ironic/tests/unit/test_base.py ironic/tests/unit/api/__init__.py ironic/tests/unit/api/base.py ironic/tests/unit/api/test_acl.py ironic/tests/unit/api/test_audit.py ironic/tests/unit/api/test_healthcheck.py ironic/tests/unit/api/test_hooks.py ironic/tests/unit/api/test_middleware.py ironic/tests/unit/api/test_ospmiddleware.py ironic/tests/unit/api/test_proxy_middleware.py ironic/tests/unit/api/test_root.py 
ironic/tests/unit/api/utils.py ironic/tests/unit/api/controllers/__init__.py ironic/tests/unit/api/controllers/test_base.py ironic/tests/unit/api/controllers/v1/__init__.py ironic/tests/unit/api/controllers/v1/test_allocation.py ironic/tests/unit/api/controllers/v1/test_chassis.py ironic/tests/unit/api/controllers/v1/test_conductor.py ironic/tests/unit/api/controllers/v1/test_deploy_template.py ironic/tests/unit/api/controllers/v1/test_driver.py ironic/tests/unit/api/controllers/v1/test_event.py ironic/tests/unit/api/controllers/v1/test_expose.py ironic/tests/unit/api/controllers/v1/test_node.py ironic/tests/unit/api/controllers/v1/test_notification_utils.py ironic/tests/unit/api/controllers/v1/test_port.py ironic/tests/unit/api/controllers/v1/test_portgroup.py ironic/tests/unit/api/controllers/v1/test_ramdisk.py ironic/tests/unit/api/controllers/v1/test_root.py ironic/tests/unit/api/controllers/v1/test_types.py ironic/tests/unit/api/controllers/v1/test_utils.py ironic/tests/unit/api/controllers/v1/test_versions.py ironic/tests/unit/api/controllers/v1/test_volume.py ironic/tests/unit/api/controllers/v1/test_volume_connector.py ironic/tests/unit/api/controllers/v1/test_volume_target.py ironic/tests/unit/cmd/__init__.py ironic/tests/unit/cmd/test_conductor.py ironic/tests/unit/cmd/test_dbsync.py ironic/tests/unit/cmd/test_status.py ironic/tests/unit/common/__init__.py ironic/tests/unit/common/test_cinder.py ironic/tests/unit/common/test_context.py ironic/tests/unit/common/test_driver_factory.py ironic/tests/unit/common/test_fsm.py ironic/tests/unit/common/test_glance_service.py ironic/tests/unit/common/test_hash_ring.py ironic/tests/unit/common/test_image_service.py ironic/tests/unit/common/test_images.py ironic/tests/unit/common/test_json_rpc.py ironic/tests/unit/common/test_keystone.py ironic/tests/unit/common/test_network.py ironic/tests/unit/common/test_neutron.py ironic/tests/unit/common/test_nova.py ironic/tests/unit/common/test_policy.py 
ironic/tests/unit/common/test_pxe_utils.py ironic/tests/unit/common/test_raid.py ironic/tests/unit/common/test_release_mappings.py ironic/tests/unit/common/test_rpc.py ironic/tests/unit/common/test_rpc_service.py ironic/tests/unit/common/test_states.py ironic/tests/unit/common/test_swift.py ironic/tests/unit/common/test_utils.py ironic/tests/unit/common/test_wsgi_service.py ironic/tests/unit/conductor/__init__.py ironic/tests/unit/conductor/mgr_utils.py ironic/tests/unit/conductor/test_allocations.py ironic/tests/unit/conductor/test_base_manager.py ironic/tests/unit/conductor/test_cleaning.py ironic/tests/unit/conductor/test_deployments.py ironic/tests/unit/conductor/test_manager.py ironic/tests/unit/conductor/test_notification_utils.py ironic/tests/unit/conductor/test_rpcapi.py ironic/tests/unit/conductor/test_steps.py ironic/tests/unit/conductor/test_task_manager.py ironic/tests/unit/conductor/test_utils.py ironic/tests/unit/conf/__init__.py ironic/tests/unit/conf/test_auth.py ironic/tests/unit/db/__init__.py ironic/tests/unit/db/base.py ironic/tests/unit/db/test_allocations.py ironic/tests/unit/db/test_api.py ironic/tests/unit/db/test_bios_settings.py ironic/tests/unit/db/test_chassis.py ironic/tests/unit/db/test_conductor.py ironic/tests/unit/db/test_deploy_templates.py ironic/tests/unit/db/test_node_tags.py ironic/tests/unit/db/test_node_traits.py ironic/tests/unit/db/test_nodes.py ironic/tests/unit/db/test_portgroups.py ironic/tests/unit/db/test_ports.py ironic/tests/unit/db/test_volume_connectors.py ironic/tests/unit/db/test_volume_targets.py ironic/tests/unit/db/utils.py ironic/tests/unit/db/sqlalchemy/__init__.py ironic/tests/unit/db/sqlalchemy/test_api.py ironic/tests/unit/db/sqlalchemy/test_migrations.py ironic/tests/unit/db/sqlalchemy/test_models.py ironic/tests/unit/db/sqlalchemy/test_types.py ironic/tests/unit/dhcp/__init__.py ironic/tests/unit/dhcp/test_factory.py ironic/tests/unit/dhcp/test_neutron.py ironic/tests/unit/drivers/__init__.py 
ironic/tests/unit/drivers/boot.ipxe ironic/tests/unit/drivers/ipxe_config.template ironic/tests/unit/drivers/ipxe_config_boot_from_volume_extra_volume.template ironic/tests/unit/drivers/ipxe_config_boot_from_volume_no_extra_volumes.template ironic/tests/unit/drivers/ipxe_config_timeout.template ironic/tests/unit/drivers/pxe_config.template ironic/tests/unit/drivers/pxe_grub_config.template ironic/tests/unit/drivers/test_base.py ironic/tests/unit/drivers/test_drac.py ironic/tests/unit/drivers/test_fake_hardware.py ironic/tests/unit/drivers/test_generic.py ironic/tests/unit/drivers/test_ibmc.py ironic/tests/unit/drivers/test_ilo.py ironic/tests/unit/drivers/test_ipmi.py ironic/tests/unit/drivers/test_irmc.py ironic/tests/unit/drivers/test_redfish.py ironic/tests/unit/drivers/test_snmp.py ironic/tests/unit/drivers/test_utils.py ironic/tests/unit/drivers/test_xclarity.py ironic/tests/unit/drivers/third_party_driver_mock_specs.py ironic/tests/unit/drivers/third_party_driver_mocks.py ironic/tests/unit/drivers/modules/__init__.py ironic/tests/unit/drivers/modules/test_agent.py ironic/tests/unit/drivers/modules/test_agent_base.py ironic/tests/unit/drivers/modules/test_agent_client.py ironic/tests/unit/drivers/modules/test_boot_mode_utils.py ironic/tests/unit/drivers/modules/test_console_utils.py ironic/tests/unit/drivers/modules/test_deploy_utils.py ironic/tests/unit/drivers/modules/test_image_cache.py ironic/tests/unit/drivers/modules/test_inspect_utils.py ironic/tests/unit/drivers/modules/test_inspector.py ironic/tests/unit/drivers/modules/test_ipmitool.py ironic/tests/unit/drivers/modules/test_ipxe.py ironic/tests/unit/drivers/modules/test_iscsi_deploy.py ironic/tests/unit/drivers/modules/test_noop.py ironic/tests/unit/drivers/modules/test_noop_mgmt.py ironic/tests/unit/drivers/modules/test_pxe.py ironic/tests/unit/drivers/modules/test_snmp.py ironic/tests/unit/drivers/modules/ansible/__init__.py ironic/tests/unit/drivers/modules/ansible/test_deploy.py 
ironic/tests/unit/drivers/modules/drac/__init__.py ironic/tests/unit/drivers/modules/drac/test_bios.py ironic/tests/unit/drivers/modules/drac/test_boot.py ironic/tests/unit/drivers/modules/drac/test_common.py ironic/tests/unit/drivers/modules/drac/test_inspect.py ironic/tests/unit/drivers/modules/drac/test_job.py ironic/tests/unit/drivers/modules/drac/test_management.py ironic/tests/unit/drivers/modules/drac/test_periodic_task.py ironic/tests/unit/drivers/modules/drac/test_power.py ironic/tests/unit/drivers/modules/drac/test_raid.py ironic/tests/unit/drivers/modules/drac/utils.py ironic/tests/unit/drivers/modules/ibmc/__init__.py ironic/tests/unit/drivers/modules/ibmc/base.py ironic/tests/unit/drivers/modules/ibmc/test_management.py ironic/tests/unit/drivers/modules/ibmc/test_power.py ironic/tests/unit/drivers/modules/ibmc/test_utils.py ironic/tests/unit/drivers/modules/ibmc/test_vendor.py ironic/tests/unit/drivers/modules/ilo/__init__.py ironic/tests/unit/drivers/modules/ilo/test_bios.py ironic/tests/unit/drivers/modules/ilo/test_boot.py ironic/tests/unit/drivers/modules/ilo/test_common.py ironic/tests/unit/drivers/modules/ilo/test_console.py ironic/tests/unit/drivers/modules/ilo/test_firmware_processor.py ironic/tests/unit/drivers/modules/ilo/test_inspect.py ironic/tests/unit/drivers/modules/ilo/test_management.py ironic/tests/unit/drivers/modules/ilo/test_power.py ironic/tests/unit/drivers/modules/ilo/test_raid.py ironic/tests/unit/drivers/modules/ilo/test_vendor.py ironic/tests/unit/drivers/modules/intel_ipmi/__init__.py ironic/tests/unit/drivers/modules/intel_ipmi/base.py ironic/tests/unit/drivers/modules/intel_ipmi/test_intel_ipmi.py ironic/tests/unit/drivers/modules/intel_ipmi/test_management.py ironic/tests/unit/drivers/modules/irmc/__init__.py ironic/tests/unit/drivers/modules/irmc/fake_sensors_data_ng.xml ironic/tests/unit/drivers/modules/irmc/fake_sensors_data_ok.xml ironic/tests/unit/drivers/modules/irmc/test_bios.py 
ironic/tests/unit/drivers/modules/irmc/test_boot.py ironic/tests/unit/drivers/modules/irmc/test_common.py ironic/tests/unit/drivers/modules/irmc/test_inspect.py ironic/tests/unit/drivers/modules/irmc/test_management.py ironic/tests/unit/drivers/modules/irmc/test_periodic_task.py ironic/tests/unit/drivers/modules/irmc/test_power.py ironic/tests/unit/drivers/modules/irmc/test_raid.py ironic/tests/unit/drivers/modules/network/__init__.py ironic/tests/unit/drivers/modules/network/test_common.py ironic/tests/unit/drivers/modules/network/test_flat.py ironic/tests/unit/drivers/modules/network/test_neutron.py ironic/tests/unit/drivers/modules/network/test_noop.py ironic/tests/unit/drivers/modules/redfish/__init__.py ironic/tests/unit/drivers/modules/redfish/test_bios.py ironic/tests/unit/drivers/modules/redfish/test_boot.py ironic/tests/unit/drivers/modules/redfish/test_inspect.py ironic/tests/unit/drivers/modules/redfish/test_management.py ironic/tests/unit/drivers/modules/redfish/test_power.py ironic/tests/unit/drivers/modules/redfish/test_utils.py ironic/tests/unit/drivers/modules/storage/__init__.py ironic/tests/unit/drivers/modules/storage/test_cinder.py ironic/tests/unit/drivers/modules/storage/test_external.py ironic/tests/unit/drivers/modules/xclarity/__init__.py ironic/tests/unit/drivers/modules/xclarity/test_common.py ironic/tests/unit/drivers/modules/xclarity/test_management.py ironic/tests/unit/drivers/modules/xclarity/test_power.py ironic/tests/unit/objects/__init__.py ironic/tests/unit/objects/test_allocation.py ironic/tests/unit/objects/test_bios.py ironic/tests/unit/objects/test_chassis.py ironic/tests/unit/objects/test_conductor.py ironic/tests/unit/objects/test_deploy_template.py ironic/tests/unit/objects/test_fields.py ironic/tests/unit/objects/test_node.py ironic/tests/unit/objects/test_notification.py ironic/tests/unit/objects/test_objects.py ironic/tests/unit/objects/test_port.py ironic/tests/unit/objects/test_portgroup.py 
ironic/tests/unit/objects/test_trait.py ironic/tests/unit/objects/test_volume_connector.py ironic/tests/unit/objects/test_volume_target.py ironic/tests/unit/objects/utils.py playbooks/ci-workarounds/etc-neutron.yaml playbooks/ci-workarounds/pre.yaml playbooks/legacy/grenade-dsvm-ironic/run.yaml playbooks/legacy/grenade-dsvm-ironic-multinode-multitenant/run.yaml playbooks/legacy/ironic-dsvm-base/post.yaml playbooks/legacy/ironic-dsvm-base/pre.yaml playbooks/legacy/ironic-dsvm-base-multinode/post.yaml playbooks/legacy/ironic-dsvm-base-multinode/pre.yaml releasenotes/notes/.placeholder releasenotes/notes/5.0-release-afb1fbbe595b6bc8.yaml releasenotes/notes/Add-port-option-support-to-ipmitool-e125d07fe13c53e7.yaml releasenotes/notes/active-node-creation-a41c9869c966c82b.yaml releasenotes/notes/add-agent-api-error-77ec6c272390c488.yaml releasenotes/notes/add-agent-erase-fallback-b07613a7042fe236.yaml releasenotes/notes/add-agent-iboot-0a4b5471c6ace461.yaml releasenotes/notes/add-agent-proxy-support-790e629634ca2eb7.yaml releasenotes/notes/add-ansible-python-interpreter-2035e0f23d407aaf.yaml releasenotes/notes/add-boot-from-volume-support-9f64208f083d0691.yaml releasenotes/notes/add-boot-mode-redfish-inspect-48e2b27ef022932a.yaml releasenotes/notes/add-chassis_uuid-removal-possibility-8b06341a91f7c676.yaml releasenotes/notes/add-choice-to-some-options-9fb327c48e6bfda1.yaml releasenotes/notes/add-cisco-ucs-hardware-types-ee597ff0416f158f.yaml releasenotes/notes/add-configurable-ipmi-retriables-b6056f722f6ed3b0.yaml releasenotes/notes/add-db-deadlock-handling-6bc10076537f3727.yaml releasenotes/notes/add-deploy-steps-drac-raid-interface-7023c03a96996265.yaml releasenotes/notes/add-deploy-steps-ilo-bios-interface-c73152269701ef80.yaml releasenotes/notes/add-deploy-steps-ilo-management-interface-9d0f45954eda643a.yaml releasenotes/notes/add-deploy-steps-ilo-raid-interface-732314cea19fe8ac.yaml releasenotes/notes/add-deploy-steps-redfish-bios-interface-f5e5415108f87598.yaml 
releasenotes/notes/add-dynamic-allocation-feature-2fd6b4df7943f178.yaml releasenotes/notes/add-error-check-ipmitool-reboot-ca7823202c5ab71d.yaml releasenotes/notes/add-gmr-3c9278d5d785895f.yaml releasenotes/notes/add-healthcheck-middleware-86120fa07a7c8151.yaml releasenotes/notes/add-id-and-uuid-filtering-to-sqalchemy-api.yaml releasenotes/notes/add-indicator-api-8c816b3828e6b43b.yaml releasenotes/notes/add-inspect-wait-state-948f83dfe342897b.yaml releasenotes/notes/add-inspection-abort-a187e6e5c1f6311d.yaml releasenotes/notes/add-ipv6-pxe-support-8fb51c355cc977c4.yaml releasenotes/notes/add-iscsi-portal-port-option-bde3b386f44f2a90.yaml releasenotes/notes/add-kernel-params-redfish-72b87075465c87f6.yaml releasenotes/notes/add-more-retryable-ipmitool-errors-1c9351a89ff0ec1a.yaml releasenotes/notes/add-neutron-request-timeout-1f7372af81f14ddd.yaml releasenotes/notes/add-node-bios-9c1c3d442e8acdac.yaml releasenotes/notes/add-node-boot-mode-control-9761d4bcbd8c3a0d.yaml releasenotes/notes/add-node-description-790097704f45af91.yaml releasenotes/notes/add-node-resource-class-c31e26df4196293e.yaml releasenotes/notes/add-notifications-97b6c79c18b48073.yaml releasenotes/notes/add-oneview-driver-96088bf470b16c34.yaml releasenotes/notes/add-option-persistent-boot-device-139cf280fb66f4f7.yaml releasenotes/notes/add-owner-information-52e153faf570747e.yaml releasenotes/notes/add-parallel-power-syncs-b099d66e80aab616.yaml releasenotes/notes/add-port-advanced-net-fields-55465091f019d962.yaml releasenotes/notes/add-port-internal-info-b7e02889416570f7.yaml releasenotes/notes/add-port-is-smartnic-4ce6974c8fe2732d.yaml releasenotes/notes/add-prep-partition-support-d808849795906e64.yaml releasenotes/notes/add-protection-for-available-nodes-25f163d69782ef63.yaml releasenotes/notes/add-pxe-per-node-526fd79df17efda8.yaml releasenotes/notes/add-pxe-support-for-petitboot-50d1fe4e7da4bfba.yaml releasenotes/notes/add-realtime-support-d814d5917836e9e2.yaml 
releasenotes/notes/add-redfish-auth-type-5fe78071b528e53b.yaml releasenotes/notes/add-redfish-boot-interface-e7e05bdd2c894d80.yaml releasenotes/notes/add-redfish-boot-mode-support-2f1a2568e71c65d0.yaml releasenotes/notes/add-redfish-inspect-interface-1577e70167f24ae4.yaml releasenotes/notes/add-redfish-sensors-4e2f7e3f8a7c6d5b.yaml releasenotes/notes/add-secure-boot-suport-irmc-2c1f09271f96424d.yaml releasenotes/notes/add-secure-boot-suport-irmc-9509f3735df2aa5d.yaml releasenotes/notes/add-snmp-inspection-support-e68fd6d57cb33846.yaml releasenotes/notes/add-snmp-pdu-driver-type-baytech-mrp27-5007d1d7e0a52162.yaml releasenotes/notes/add-snmp-pdu-driver-type-discovery-1f280b7f06fd1ca5.yaml releasenotes/notes/add-snmp-read-write-community-names-7589a8d1899c142c.yaml releasenotes/notes/add-snmpv3-security-features-bbefb8b844813a53.yaml releasenotes/notes/add-socat-console-ipmitool-ab4402ec976c5c96.yaml releasenotes/notes/add-ssl-support-4547801eedba5942.yaml releasenotes/notes/add-storage-interface-d4e64224804207fc.yaml releasenotes/notes/add-support-for-no-poweroff-on-failure-86e43b3e39043990.yaml releasenotes/notes/add-support-for-smart-nic-0fc5b10ba6772f7f.yaml releasenotes/notes/add-target-raid-config-ansible-deploy-c9ae81d9d25c62fe.yaml releasenotes/notes/add-timeout-parameter-to-power-methods-5f632c936497685e.yaml releasenotes/notes/add-tooz-dep-85c56c74733a222d.yaml releasenotes/notes/add-validate-rescue-2202e8ce9a174ece.yaml releasenotes/notes/add-validate-rescue-to-boot-interface-bd74aff9e250334b.yaml releasenotes/notes/add-vif-attach-detach-support-99eca43eea6e5a30.yaml releasenotes/notes/add_automated_clean_field-b3e7d56f4aeaf512.yaml releasenotes/notes/add_clean_step_clear_job_queue-7b774d8d0e36d1b2.yaml releasenotes/notes/add_clean_step_reset_idrac_and_known_good_state-cdbebf97d7b87fe7.yaml releasenotes/notes/add_conversion_flags_iscsi-d7f846803a647573.yaml releasenotes/notes/add_cpu_fpga_trait_for_irmc_inspection-2b63941b064f7936.yaml 
releasenotes/notes/add_detail_true_api_query-cb6944847830cd1a.yaml releasenotes/notes/add_infiniband_support-f497767f77277a1a.yaml releasenotes/notes/add_portgroup_support-7d5c6663bb00684a.yaml releasenotes/notes/add_retirement_support-23c5fed7ce8f97d4.yaml releasenotes/notes/add_standalone_ports_supported_field-4c59702a052acf38.yaml releasenotes/notes/added-redfish-driver-00ff5e3f7e9d6ee8.yaml releasenotes/notes/adding-audit-middleware-b95f2a00baed9750.yaml releasenotes/notes/adds-external-storage-interface-9b7c0a0a2afd3176.yaml releasenotes/notes/adds-ilo-ipxe-boot-interface-4fc75292122db80d.yaml releasenotes/notes/adds-ramdisk-deploy-interface-39fc61bc77b57beb.yaml releasenotes/notes/adds-ramdisk-deploy-interface-support-to-ilo-vmedia-1a7228a834465633.yaml releasenotes/notes/adds-secure-erase-switch-23f449c86b3648a4.yaml releasenotes/notes/adopt-ironic-context-5e75540dc2b2f009.yaml releasenotes/notes/adopt-oslo-config-generator-15afd2e7c2f008b4.yaml releasenotes/notes/adoption-feature-update-d2160954a2c36b0a.yaml releasenotes/notes/agent-api-bf9f18d8d38075e4.yaml releasenotes/notes/agent-can-request-reboot-6238e13e2e898f68.yaml releasenotes/notes/agent-command-status-retry-f9b6f53a823c6b01.yaml releasenotes/notes/agent-http-provisioning-d116b3ff36669d16.yaml releasenotes/notes/agent-takeover-60f27cef21ebfb48.yaml releasenotes/notes/agent-token-support-0a5b5aa1585dfbb5.yaml releasenotes/notes/agent-wol-driver-4116f64907d0db9c.yaml releasenotes/notes/agent_partition_image-48a03700f41a3980.yaml releasenotes/notes/allocation-added-owner-policy-c650074e68d03289.yaml releasenotes/notes/allocation-api-6ac2d262689f5f59.yaml releasenotes/notes/allocation-backfill-c31e84c5fcf24216.yaml releasenotes/notes/allocation-owner-policy-162c43b3abb91c76.yaml releasenotes/notes/allow-allocation-update-94d862c3da454be2.yaml releasenotes/notes/allow-deleting-unbound-ports-fa78069b52f099ac.yaml releasenotes/notes/allow-pxelinux-config-folder-to-be-defined-da0ddd397d58dcc8.yaml 
releasenotes/notes/allow-set-interface-to-node-in-available-bd6f695620c2d77f.yaml releasenotes/notes/allow-to-attach-vif-to-active-node-55963be2ec269043.yaml releasenotes/notes/always-return-chassis-uuid-4eecbc8da2170cb1.yaml releasenotes/notes/amt-driver-wake-up-0880ed85476968be.yaml releasenotes/notes/ansible-deploy-15da234580ca0c30.yaml releasenotes/notes/ansible-loops-de0eef0d5b79a9ff.yaml releasenotes/notes/any-wsgi-8d6ccb0590104146.yaml releasenotes/notes/apache-multiple-workers-11d4ba52c89a13e3.yaml releasenotes/notes/api-none-cdb95e58b69a5c50.yaml releasenotes/notes/async-deprecate-b3d81d7968ea47e5.yaml releasenotes/notes/async_bios_clean_step-7348efff3f6d02c1.yaml releasenotes/notes/automated_clean_config-0170c95ae210f953.yaml releasenotes/notes/backfill_version_column_db_race_condition-713fa05832b93ca5.yaml releasenotes/notes/better-handle-skip-upgrade-3b6f06ac24937aa4.yaml releasenotes/notes/bfv-pxe-boot-3375d331ee2f04f2.yaml releasenotes/notes/bmc_reset-warm-9396ac444cafd734.yaml releasenotes/notes/boot-from-url-98d21670e726c518.yaml releasenotes/notes/boot-ipxe-inc-workaround-548e10d1d6616752.yaml releasenotes/notes/bp-nova-support-instance-power-update-49c531ef13982e62.yaml releasenotes/notes/broken-driver-update-fc5303340080ef04.yaml releasenotes/notes/bug-1506657-3bcb4ef46623124d.yaml releasenotes/notes/bug-1518374-decd73fd82c2eb94.yaml releasenotes/notes/bug-1548086-ed88646061b88faf.yaml releasenotes/notes/bug-1570283-6cdc62e4ef43cb02.yaml releasenotes/notes/bug-1579635-cffd990b51bcb5ab.yaml releasenotes/notes/bug-1592335-7c5835868fe364ea.yaml releasenotes/notes/bug-1596421-0cb8f59073f56240.yaml releasenotes/notes/bug-1607527-75885e145db62d69.yaml releasenotes/notes/bug-1611555-de1ec64ba46982ec.yaml releasenotes/notes/bug-1611556-92cbfde5ee7f44d6.yaml releasenotes/notes/bug-1626453-e8df46aa5db6dd5a.yaml releasenotes/notes/bug-1648387-92db52cbe007fabd.yaml releasenotes/notes/bug-1672457-563d5354b41b060e.yaml 
releasenotes/notes/bug-1694645-57289200e35bd883.yaml releasenotes/notes/bug-1696296-a972c8d879b98940.yaml releasenotes/notes/bug-1702158-79bf57bd4d8087b6.yaml releasenotes/notes/bug-1749433-363b747d2db67df6.yaml releasenotes/notes/bug-1749860-457292cf62e18a0e.yaml releasenotes/notes/bug-2001832-62e244dc48c1f79e.yaml releasenotes/notes/bug-2002062-959b865ced05b746.yaml releasenotes/notes/bug-2002093-9fcb3613d2daeced.yaml releasenotes/notes/bug-2003972-dae9b7d0f6180339.yaml releasenotes/notes/bug-2004265-cd9056868295f374.yaml releasenotes/notes/bug-2004947-e5f27e11b8f9c96d.yaml releasenotes/notes/bug-2005377-5c63357681a465ec.yaml releasenotes/notes/bug-2005764-15f45e11b9f9c96d.yaml releasenotes/notes/bug-2006266-85da234583ca0c32.yaml releasenotes/notes/bug-2006275-a5ca234683ca4c32.yaml releasenotes/notes/bug-2006334-0cd8f59073f56241.yaml releasenotes/notes/bug-2007567-wsman-raid-48483affdd9f9894.yaml releasenotes/notes/bug-30315-e46eafe5b575f3da.yaml releasenotes/notes/bug-30316-8c53358681e464eb.yaml releasenotes/notes/bug-30317-a972c8d879c98941.yaml releasenotes/notes/bug-35702-25da234580ca0c31.yaml releasenotes/notes/build-configdrive-5b3b9095824faf4e.yaml releasenotes/notes/build-iso-from-esp-d156036aa8ef85fb.yaml releasenotes/notes/build-uefi-only-iso-ce6bcb0da578d1d6.yaml releasenotes/notes/build_instance_info-c7e3f12426b48965.yaml releasenotes/notes/bump-min-ansible-ver-a78e7885c0e9d361.yaml releasenotes/notes/caseless-conductor-restart-check-f70005fbf65f6bb6.yaml releasenotes/notes/catch-third-party-driver-validate-exceptions-94ed2a91c50d2d8e.yaml releasenotes/notes/change-default-boot-option-to-local-8c326077770ab672.yaml releasenotes/notes/change-ramdisk-log-filename-142b10d0b02a5ca6.yaml releasenotes/notes/change-updated-at-object-field-a74466f7c4541072.yaml releasenotes/notes/check-dynamic-allocation-enabled-e94f3b8963b114d0.yaml releasenotes/notes/check-for-whole-disk-image-uefi-3bf2146588de2423.yaml 
releasenotes/notes/check_obj_versions-e86d897df673e833.yaml releasenotes/notes/check_protocol_for_ironic_api-32f35c93a140d3ae.yaml releasenotes/notes/cisco-drivers-deleted-5a42a8c508704c64.yaml releasenotes/notes/classic-drivers-deprecation-de464065187d4c14.yaml releasenotes/notes/clean-nodes-stuck-in-cleaning-on-startup-443823ea4f937965.yaml releasenotes/notes/cleaning-bios-d74a4947d2525b80.yaml releasenotes/notes/cleaning-maintenance-7ae83b1e4ff992b0.yaml releasenotes/notes/cleaning-retry-fix-89a5d0e65920a064.yaml releasenotes/notes/cleanup-ipxe-f1349e2ac9ec2825.yaml releasenotes/notes/cleanup-provision-ports-before-retry-ec3c89c193766d70.yaml releasenotes/notes/cleanwait_timeout_fail-4323ba7d4d4da3e6.yaml releasenotes/notes/clear-hung-iscsi-sessions-d3b55c4c65fa4c8b.yaml releasenotes/notes/clear-node-target-power-state-de1f25be46d3e6d7.yaml releasenotes/notes/clear-target-stable-states-4545602d7aed9898.yaml releasenotes/notes/collect-deployment-logs-2ec1634847c3f6a5.yaml releasenotes/notes/conductor-groups-c22c17e276e63bed.yaml releasenotes/notes/conductor-power-sync-timeout-extension-fa5e7b5fdd679d84.yaml releasenotes/notes/conductor-version-backfill-9d06f2ad81aebec3.yaml releasenotes/notes/conductor_early_import-fd29fa8b89089977.yaml releasenotes/notes/conf-debug-ipa-1d75e2283ca83395.yaml releasenotes/notes/conf-deploy-image-5adb6c1963b149ae.yaml releasenotes/notes/config-drive-support-for-whole-disk-images-in-iscsi-deploy-0193c5222a7cd129.yaml releasenotes/notes/configdrive-support-using-ceph-radosgw-8c6f7b8bede2077c.yaml releasenotes/notes/configdrive-vendordata-122049bd7c6e1b67.yaml releasenotes/notes/configure-notifications-72824356e7d8832a.yaml releasenotes/notes/consider_embedded_ipa_error_codes-c8fdfaa9e6a1ed06.yaml releasenotes/notes/console-port-allocation-bb07c43e3890c54c.yaml releasenotes/notes/context-domain-id-name-deprecation-ae6e40718273be8d.yaml releasenotes/notes/continue-node-deploy-state-63d9dc9cdcf8e37a.yaml 
releasenotes/notes/correct-api-version-check-conditional-for-nodename-439bebc02fb5493d.yaml releasenotes/notes/create-on-conductor-c1c52a1f022c4048.yaml releasenotes/notes/create-port-on-conductor-b921738b4b2a5def.yaml releasenotes/notes/dbsync-check-version-c71d5f4fd89ed117.yaml releasenotes/notes/dbsync-online_data_migration-edcf0b1cc3667582.yaml releasenotes/notes/debug-no-api-tracebacks-a8a0caddc9676b06.yaml releasenotes/notes/debug-sensor-data-fix-for-ipmitool-eb13e80ccdd984db.yaml releasenotes/notes/decouple-boot-params-2b05806435ad21e5.yaml releasenotes/notes/default-resource-class-e11bacfb01d6841b.yaml releasenotes/notes/default-swift_account-b008d08e85bdf154.yaml releasenotes/notes/default_boot_option-f22c01f976bc2de7.yaml releasenotes/notes/dell-boss-raid1-ec33e5b9c59d4021.yaml releasenotes/notes/deny-too-long-chassis-description-0690d6f67ed002d5.yaml releasenotes/notes/deploy-step-error-d343e8cb7d1b2305.yaml releasenotes/notes/deploy-steps-required-aa72cdf1c0ec0e84.yaml releasenotes/notes/deploy-templates-5df3368df862631c.yaml releasenotes/notes/deploy_steps-243b341cf742f7cc.yaml releasenotes/notes/deployment-cleaning-polling-flag-be13a866a7c302d7.yaml releasenotes/notes/deprecate-agent-passthru-67d1e2cf25b30a30.yaml releasenotes/notes/deprecate-cisco-drivers-3ae79a24b76ff963.yaml releasenotes/notes/deprecate-clustered-compute-manager-3dd68557446bcc5c.yaml releasenotes/notes/deprecate-dhcp-update-mac-address-f12a4959432c8e20.yaml releasenotes/notes/deprecate-elilo-2beca4800f475426.yaml releasenotes/notes/deprecate-glance-url-scheme-ceff3008cf9cf590.yaml releasenotes/notes/deprecate-global-region-4dbea91de71ebf59.yaml releasenotes/notes/deprecate-hash-distribution-replicas-ef0626ccc592b70e.yaml releasenotes/notes/deprecate-ibmc-9106cc3a81171738.yaml releasenotes/notes/deprecate-inspector-enabled-901fd9c9426046c7.yaml releasenotes/notes/deprecate-irmc-031f55c3bb1fb863.yaml releasenotes/notes/deprecate-oneview-drivers-5a487e1940bcbbc6.yaml 
releasenotes/notes/deprecate-support-for-glance-v1-8b194e6b20cbfebb.yaml releasenotes/notes/deprecate-xclarity-config-af9b753f96779f42.yaml releasenotes/notes/deprecate-xclarity-d687571fb65ad099.yaml releasenotes/notes/deprecated-cinder-opts-e10c153768285cab.yaml releasenotes/notes/deprecated-glance-opts-4825f000d20c2932.yaml releasenotes/notes/deprecated-inspector-opts-0520b08dbcd10681.yaml releasenotes/notes/deprecated-inspector-opts-b19a08339712cfd7.yaml releasenotes/notes/deprecated-neutron-ops-79abab5b013b7939.yaml releasenotes/notes/deprecated-neutron-opts-2e1d9e65f00301d3.yaml releasenotes/notes/dhcp-provider-clean-dhcp-9352717903d6047e.yaml releasenotes/notes/dhcpv6-stateful-address-count-0f94ac6a55bd9e51.yaml releasenotes/notes/disable-clean-step-reset-ilo-1869a6e08f39901c.yaml releasenotes/notes/disable_periodic_tasks-0ea39fa7a8a108c6.yaml releasenotes/notes/disk-label-capability-d36d126e0ad36dca.yaml releasenotes/notes/disk-label-fix-7580de913835ff44.yaml releasenotes/notes/dont-validate-local_link_connection-when-port-has-client-id-8e584586dc4fca50.yaml releasenotes/notes/drac-fix-double-manage-provide-cycle-6ac8a427068f87fe.yaml releasenotes/notes/drac-fix-get_bios_config-vendor-passthru-causes-exception-1e1dbeeb3e924f29.yaml releasenotes/notes/drac-fix-oob-cleaning-b4b717895e243c9b.yaml releasenotes/notes/drac-fix-power-on-reboot-race-condition-fe712aa9c79ee252.yaml releasenotes/notes/drac-fix-prepare-cleaning-d74ba45135d84531.yaml releasenotes/notes/drac-fix-raid10-greater-than-16-drives-a4cb107e34371a51.yaml releasenotes/notes/drac-inspection-interface-b0abbad98fec1c2e.yaml releasenotes/notes/drac-list-unfinished-jobs-10400419b6bc3c6e.yaml releasenotes/notes/drac-migrate-to-dracclient-2bd8a6d1dd3fdc69.yaml releasenotes/notes/drac-missing-lookup-3ad98e918e1a852a.yaml releasenotes/notes/drac-raid-interface-f4c02b1c4fb37e2d.yaml releasenotes/notes/drac_host-deprecated-b181149246eecb47.yaml 
releasenotes/notes/drop-ironic-lib-rootwrap-filters-f9224173289c1e30.yaml releasenotes/notes/drop-py-2-7-5140cb76e321cdd1.yaml releasenotes/notes/dual-stack-ironic-493ebc7b71263aaa.yaml releasenotes/notes/duplicated-driver-entry-775370ad84736206.yaml releasenotes/notes/dynamic-allocation-spt-has-physical-mac-8967a1d926ed9301.yaml releasenotes/notes/dynamic-driver-list-show-apis-235e9fca26fc580d.yaml releasenotes/notes/emit-metrics-for-api-calls-69f18fd1b9d54b05.yaml releasenotes/notes/enable-osprofiler-support-e3839b0fa90d3831.yaml releasenotes/notes/enhanced-checksum-f5a2b7aa8632b88f.yaml releasenotes/notes/ensure-unbind-flat-vifs-and-clear-macs-34eec149618e5964.yaml releasenotes/notes/erase-devices-metadata-config-f39b6ca415a87757.yaml releasenotes/notes/error-resilient-enabled_drivers-4e9c864ed6eaddd1.yaml releasenotes/notes/expose-conductor-d13c9c4ef9d9de86.yaml releasenotes/notes/extends-install-bootloader-timeout-8fce9590bf405cdf.yaml releasenotes/notes/fail-when-vif-port-id-is-missing-7640669f9d9e705d.yaml releasenotes/notes/fake-noop-bebc43983eb801d1.yaml releasenotes/notes/fake_soft_power-32683a848a989fc2.yaml releasenotes/notes/fast-track-deployment-f09a8b921b3aae36.yaml releasenotes/notes/fifteen-0da3cca48dceab8b.yaml releasenotes/notes/fips-hashlib-bca9beacc2b48fe7.yaml releasenotes/notes/fix-agent-clean-up-9a25deb85bc53d9b.yaml releasenotes/notes/fix-agent-ilo-temp-image-cleanup-711429d0e67807ae.yaml releasenotes/notes/fix-api-access-logs-68b9ca4f411f339c.yaml releasenotes/notes/fix-api-node-name-updates-f3813295472795be.yaml releasenotes/notes/fix-baremetal-admin-user-not-neutron-admin-f163df90ab520dad.yaml releasenotes/notes/fix-boot-from-volume-for-iscsi-deploy-60bc0790ada62b26.yaml releasenotes/notes/fix-boot-from-volume-for-iscsi-deploy-71c1f2905498c50d.yaml releasenotes/notes/fix-boot-url-for-v6-802abde9de8ba455.yaml releasenotes/notes/fix-bug-1675529-479357c217819420.yaml releasenotes/notes/fix-capabilities-as-string-agent-7c5c7975560ce280.yaml 
releasenotes/notes/fix-clean-steps-not-running-0d065cb022bc0419.yaml releasenotes/notes/fix-cleaning-spawn-error-60b60281f3be51c2.yaml releasenotes/notes/fix-cleaning-with-traits-3a54faa70d594fd0.yaml releasenotes/notes/fix-commit-to-controller-d26f083ac388a65e.yaml releasenotes/notes/fix-conductor-list-raise-131ac76719b74032.yaml releasenotes/notes/fix-cpu-count-8904a4e1a24456f4.yaml releasenotes/notes/fix-create-configuration-0e000392d9d7f23b.yaml releasenotes/notes/fix-cve-2016-4985-b62abae577025365.yaml releasenotes/notes/fix-delete_configuration-with-multiple-controllers-06fc3fca94ba870f.yaml releasenotes/notes/fix-dir-permissions-bc56e83a651bbdb0.yaml releasenotes/notes/fix-disk-identifier-overwrite-42b33a5a0f7742d8.yaml releasenotes/notes/fix-do-not-tear-down-nodes-upon-cleaning-failure-a9cda6ae71ed2540.yaml releasenotes/notes/fix-drac-job-state-8c5422bbeaf15226.yaml releasenotes/notes/fix-drives-conversion-before-raid-creation-ea1f7eb425f79f2f.yaml releasenotes/notes/fix-esp-grub-path-9e5532993dccc07a.yaml releasenotes/notes/fix-fast-track-entry-path-467c20f97aeb2f4b.yaml releasenotes/notes/fix-fields-missing-from-next-url-fd9fddf8e70b65ea.yaml releasenotes/notes/fix-get-boot-device-not-persistent-de6159d8d2b60656.yaml releasenotes/notes/fix-get-deploy-info-port.yaml releasenotes/notes/fix-gmr-37332a12065c09dc.yaml releasenotes/notes/fix-ilo-drivers-log-message-c3c64c1ca0a0bca8.yaml releasenotes/notes/fix-ilo-firmware-update-swift-path-with-pseudo-folder-0660345510ec0bb4.yaml releasenotes/notes/fix-instance-master-path-config-fa524c907a7888e5.yaml releasenotes/notes/fix-ipa-ephemeral-partition-1f1e020727a49078.yaml releasenotes/notes/fix-ipmi-numeric-password-75e080aa8bdfb9a2.yaml releasenotes/notes/fix-ipmitool-console-empty-password-a8edc5e2a1a7daf6.yaml releasenotes/notes/fix-ipv6-option6-tag-549093681dcf940c.yaml releasenotes/notes/fix-ipxe-interface-without-opt-enabled-4fa2f83975295e20.yaml releasenotes/notes/fix-ipxe-macro-4ae8bc4fe82e8f19.yaml 
releasenotes/notes/fix-ipxe-template-for-whole-disk-image-943da0311ca7aeb5.yaml releasenotes/notes/fix-keystone-parameters-cdb93576d7e7885b.yaml releasenotes/notes/fix-mac-address-48060f9e2847a38c.yaml releasenotes/notes/fix-mac-address-update-with-contrail-b1e1b725cc0829c2.yaml releasenotes/notes/fix-mitaka-ipa-iscsi.yaml releasenotes/notes/fix-multi-attached-volumes-092ffedbdcf0feac.yaml releasenotes/notes/fix-net-ifaces-rebuild-1cc03df5d37f38dd.yaml releasenotes/notes/fix-noop-net-vif-list-a3d8ecee29097662.yaml releasenotes/notes/fix-not-exist-deploy-image-for-irmc-cb82c6e0b52b8a9a.yaml releasenotes/notes/fix-oneview-deallocate-server-8256e279af837e5d.yaml releasenotes/notes/fix-oneview-deploy-return-values-ab2ec6ae568d95a5.yaml releasenotes/notes/fix-oneview-periodics-0f535fe7a0ad83cd.yaml releasenotes/notes/fix-pagination-marker-with-custom-field-query-65ca29001a03e036.yaml releasenotes/notes/fix-path-a3a0cfd2c135ace9.yaml releasenotes/notes/fix-policy-checkers-1a08203e3c2cf859.yaml releasenotes/notes/fix-prepare-instance-for-agent-interface-56753bdf04dd581f.yaml releasenotes/notes/fix-provisioning-port-cleanup-79ee7930ca206c42.yaml releasenotes/notes/fix-reboot-log-collection-c3e22fc166135e61.yaml releasenotes/notes/fix-rpc-exceptions-12c70eb6ba177e39.yaml releasenotes/notes/fix-security-group-list-add-query-filters-f72cfcefa1e093d2.yaml releasenotes/notes/fix-sendfile-size-cap-d9966a96e2d7db51.yaml releasenotes/notes/fix-sensors-storage-ed5d5bbda9b46645.yaml releasenotes/notes/fix-shellinabox-console-subprocess-timeout-d3eccfe0440013d7.yaml releasenotes/notes/fix-shellinabox-pipe-not-ready-f860c4b7a1ef71a8.yaml releasenotes/notes/fix-socat-command-afc840284446870a.yaml releasenotes/notes/fix-swift-binary-upload-bf9471fca29290e1.yaml releasenotes/notes/fix-swift-ssl-options-d93d653dcd404960.yaml releasenotes/notes/fix-sync-power-state-last-error-65fa42bad8e38c3b.yaml releasenotes/notes/fix-tftp-master-path-config-77face94f5db9af7.yaml 
releasenotes/notes/fix-updating-node-driver-to-classic-16b0d5ba47e74d10.yaml releasenotes/notes/fix-url-collisions-43abfc8364ca34e7.yaml releasenotes/notes/fix-vif-detach-fca221f1a1c0e9fa.yaml releasenotes/notes/fix-virtualbox-localboot-not-working-558a3dec72b5116b.yaml releasenotes/notes/fix-xclarity-management-defect-ec5af0cc6d1045d9.yaml releasenotes/notes/fix_deploy_validation_resp_code-ed93627d1b0dfa94.yaml releasenotes/notes/fix_pending_non_bios_job_execution-4b22e168ac915f4f.yaml releasenotes/notes/fix_raid0_creation_for_multiple_disks-f47957754fca0312.yaml releasenotes/notes/fixes-deployment-failure-with-fasttrack-f1fe05598fbdbe4a.yaml releasenotes/notes/fixes-execution-of-out-of-band-deploy-steps-1f5967e7bfcabbf9.yaml releasenotes/notes/fixes-get-boot-option-for-software-raid-baa2cffd95e1f624.yaml releasenotes/notes/fixes-noop-network-with-grub-8fd99a73b593ddba.yaml releasenotes/notes/flag_always_reboot-62468a7058b58823.yaml releasenotes/notes/force-out-hung-ipmitool-process-519c7567bcbaa882.yaml releasenotes/notes/futurist-e9c55699f479f97a.yaml releasenotes/notes/get-commands-status-timeout-ecbac91ea149e755.yaml releasenotes/notes/get-supported-boot-devices-manadatory-task-0462fc072d6ea517.yaml releasenotes/notes/glance-deprecations-21e7014b72a1bcef.yaml releasenotes/notes/glance-keystone-dd30b884f07f83fb.yaml releasenotes/notes/glance-v2-83b04fec247cd22f.yaml releasenotes/notes/grub-default-change-to-mac-1e301a96c49acec4.yaml releasenotes/notes/hash-ring-race-da0d584de1f46788.yaml releasenotes/notes/hctl-root-device-hints-0cab86673bc4a924.yaml releasenotes/notes/heartbeat-locked-6e53b68337d5a258.yaml releasenotes/notes/heartbeat_agent_version-70f4e64b19b51d87.yaml releasenotes/notes/hexraw-support-removed-8e8fa07595a629f4.yaml releasenotes/notes/html-errors-27579342e7e8183b.yaml releasenotes/notes/hw-ifaces-periodics-af8c9b93ecca9fcd.yaml releasenotes/notes/ibmc-38-169438974508f62e.yaml releasenotes/notes/ibmc-driver-45fcf9f50ebf0193.yaml 
releasenotes/notes/idrac-add-initial-redfish-support-27f27f18f3c1cd91.yaml releasenotes/notes/idrac-add-redfish-boot-support-036396b48d3f71f4.yaml releasenotes/notes/idrac-add-redfish-inspect-support-ce74bd3d4a97b588.yaml releasenotes/notes/idrac-advance-python-dracclient-version-01c6ef671670ffb3.yaml releasenotes/notes/idrac-drives-conversion-jbod-to-raid-1a229627708e10b9.yaml releasenotes/notes/idrac-drives-conversion-raid-to-jbod-de10755d1ec094ea.yaml releasenotes/notes/idrac-fix-reboot-failure-c740e765ff41bcf0.yaml releasenotes/notes/idrac-hardware-type-54383960af3459d0.yaml releasenotes/notes/idrac-no-vendor-911904dd69457826.yaml releasenotes/notes/idrac-remove-commit_required-d9ea849e8f5e78e2.yaml releasenotes/notes/idrac-uefi-boot-mode-86f4694b4247a1ca.yaml releasenotes/notes/idrac-wsman-bios-interface-b39a51828f61eff6.yaml releasenotes/notes/ilo-async-bios-clean-steps-15e49545ba818997.yaml releasenotes/notes/ilo-automated-cleaning-fails-14ee438de3dd8690.yaml releasenotes/notes/ilo-bios-settings-bc91524c459a4fd9.yaml releasenotes/notes/ilo-boot-from-iscsi-volume-41e8d510979c5037.yaml releasenotes/notes/ilo-boot-interface-92831b78c5614733.yaml releasenotes/notes/ilo-do-not-power-off-non-deploying-nodes-0a3aed7c8ea3940a.yaml releasenotes/notes/ilo-erase-device-priority-config-509661955a11c28e.yaml releasenotes/notes/ilo-firmware-update-manual-clean-step-e6763dc6dc0d441b.yaml releasenotes/notes/ilo-fix-inspection-b169ad0a22aea2ff.yaml releasenotes/notes/ilo-fix-uefi-iscsi-boot-702ced18e28c5c61.yaml releasenotes/notes/ilo-hardware-type-48fd1c8bccd70659.yaml releasenotes/notes/ilo-inconsistent-default-boot-mode-ef5a7c56372f89f1.yaml releasenotes/notes/ilo-inject-nmi-f487db8c3bfd08ea.yaml releasenotes/notes/ilo-license-activate-manual-clean-step-84d335998d708b49.yaml releasenotes/notes/ilo-managed-inspection-8b549c003224e011.yaml releasenotes/notes/ilo-remove-deprecated-power-retry-ba29a21f03fe8dbb.yaml 
releasenotes/notes/ilo-soft-power-operations-eaef33a3ff56b047.yaml releasenotes/notes/ilo-update-proliantutils-version-fd41a7c2a27be735.yaml releasenotes/notes/ilo-vendor-e8d299ae13388184.yaml releasenotes/notes/ilo5-oob-raid-a0eac60f7d77a4fc.yaml releasenotes/notes/ilo5-oob-sanitize-disk-erase-cc76ea66eb5fe6df.yaml releasenotes/notes/image-checksum-recalculation-sha256-fd3d5b4b0b757e86.yaml releasenotes/notes/image-no-data-c281f638d3dedfb2.yaml releasenotes/notes/image_checksum_optional-381acf9e441d2a58.yaml releasenotes/notes/implement-policy-in-code-cbb0216ef5f8224f.yaml releasenotes/notes/improve-conductor-shutdown-42687d8b9dac4054.yaml releasenotes/notes/improve-redfish-set-boot-device-e38e9e9442ab5750.yaml releasenotes/notes/inject-nmi-dacd692b1f259a30.yaml releasenotes/notes/inspection-agent-drivers-cad619ec8a4874b1.yaml releasenotes/notes/inspection-boot-network-59fd23ca62b09e81.yaml releasenotes/notes/inspection-logging-e1172f549ef80b04.yaml releasenotes/notes/inspector-enabled-f8a643f03e1e0360.yaml releasenotes/notes/inspector-for-cisco-bffe1d1af7aec677.yaml releasenotes/notes/inspector-periodics-34449c9d77830b3c.yaml releasenotes/notes/inspector-pxe-boot-9ab9fede5671097e.yaml releasenotes/notes/inspector-session-179f83cbb0dc169b.yaml releasenotes/notes/instance-info-root-device-0a5190240fcc8fd8.yaml releasenotes/notes/intel-ipmi-hardware-30aaa65cdbcb779a.yaml releasenotes/notes/invalid_cross_device_link-7ecf3543a8ada09f.yaml releasenotes/notes/ipa-command-retries-and-timeout-29b0be3f2c21328c.yaml releasenotes/notes/ipa-streams-raw-images-1010327b0dad763c.yaml releasenotes/notes/ipmi-cmd-for-ipmi-consoles-2e1104f22df3efcd.yaml releasenotes/notes/ipmi-console-port-ec6348df4eee6746.yaml releasenotes/notes/ipmi-debug-1c7e090c6cc71903.yaml releasenotes/notes/ipmi-disable-timeout-option-e730362007f9bedd.yaml releasenotes/notes/ipmi-noop-mgmt-8fad89dc2b4665b8.yaml releasenotes/notes/ipmi_hex_kg_key-8f6caabe5b7d7a9b.yaml 
releasenotes/notes/ipminative-bootdev-uefi-954a0dd825bcef97.yaml releasenotes/notes/ipmitool-bootdev-persistent-uefi-b1181a3c82343c8f.yaml releasenotes/notes/ipmitool-vendor-3f0f52240ebbe489.yaml releasenotes/notes/ipv6-provision-67bd9c1dbcc48c97.yaml releasenotes/notes/ipxe-and-uefi-7722bd5db71df02c.yaml releasenotes/notes/ipxe-boot-interface-addition-faacb344a72389f2.yaml releasenotes/notes/ipxe-command-line-ip-argument-4e92cf8bb912f62d.yaml releasenotes/notes/ipxe-dhcp-b799bc326cd2529a.yaml releasenotes/notes/ipxe-uefi-f5be11c7b0606a84.yaml releasenotes/notes/ipxe-use-swift-5ccf490daab809cc.yaml releasenotes/notes/ipxe-with-dhcpv6-2bc7bd7f53a70f51.yaml releasenotes/notes/ipxe_retry_on_failure-e71fc6b3e9a5be3b.yaml releasenotes/notes/ipxe_timeout_parameter-03fc3c76c520fac2.yaml releasenotes/notes/irmc-add-clean-step-reset-bios-config-a8bed625670b7fdf.yaml releasenotes/notes/irmc-additional-capabilities-4fd72ba50d05676c.yaml releasenotes/notes/irmc-boot-from-volume-4bc5d20a0a780669.yaml releasenotes/notes/irmc-boot-interface-8c2e26affd1ebfc4.yaml releasenotes/notes/irmc-dealing-with-ipxe-boot-interface-incompatibility-7d0b2bdb8f9deb46.yaml releasenotes/notes/irmc-manual-clean-bios-configuration-1ad24831501456d5.yaml releasenotes/notes/irmc-manual-clean-create-raid-configuration-bccef8496520bf8c.yaml releasenotes/notes/irmc-oob-inspection-6d072c60f6c88ecb.yaml releasenotes/notes/irmc-support-ipmitool-power-a3480a70753948e5.yaml releasenotes/notes/ironic-11-prelude-6dae469633823f8d.yaml releasenotes/notes/ironic-11.1-prelude-b5ba8134953db4c2.yaml releasenotes/notes/ironic-12.0-prelude-9dd8e80a1a3e8f60.yaml releasenotes/notes/ironic-cfg-defaults-4708eed8adeee609.yaml releasenotes/notes/ironic-python-agent-multidevice-fix-3daa0760696b46b7.yaml releasenotes/notes/ironic-status-upgrade-check-framework-9cd216ddf3afb271.yaml releasenotes/notes/iscsi-inband-cleaning-bff87aac16e5d488.yaml releasenotes/notes/iscsi-optional-cpu-arch-ebf6a90dde34172c.yaml 
releasenotes/notes/iscsi-verify-attempts-28b1d00b13ba365a.yaml releasenotes/notes/iscsi-whole-disk-cd464d589d029b01.yaml releasenotes/notes/issue-conntrack-bionic-7483671771cf2e82.yaml releasenotes/notes/json-rpc-0edc429696aca6f9.yaml releasenotes/notes/json-rpc-bind-a0348cc6f5efe812.yaml releasenotes/notes/jsonrpc-logging-21670015bb845182.yaml releasenotes/notes/jsonschema_draft04-1cb5fc4a3852f9ae.yaml releasenotes/notes/keystone-auth-3155762c524e44df.yaml releasenotes/notes/keystoneauth-adapter-opts-ca4f68f568e6cf6f.yaml releasenotes/notes/keystoneauth-config-1baa45a0a2dd93b4.yaml releasenotes/notes/kill-old-ramdisk-6fa7a16269ff11b0.yaml releasenotes/notes/list-nodes-by-driver-a1ab9f2b73f652f8.yaml releasenotes/notes/logging-keystoneauth-9db7e56c54c2473d.yaml releasenotes/notes/lookup-heartbeat-f9772521d12a0549.yaml releasenotes/notes/lookup-ignore-malformed-macs-09e7e909f3a134a3.yaml releasenotes/notes/make-terminal-session-timeout-configurable-b2365b7699b0f98b.yaml releasenotes/notes/make-versioned-notifications-topics-configurable-18d70d573c27809e.yaml releasenotes/notes/manual-abort-d3d8985a5de7376a.yaml releasenotes/notes/manual-clean-4cc2437be1aea69a.yaml releasenotes/notes/mask-configdrive-contents-77fc557d6bc63b2b.yaml releasenotes/notes/mask-ssh-creds-54ab7b2656578d2e.yaml releasenotes/notes/mdns-a5f4034257139e31.yaml releasenotes/notes/messaging-log-level-5f870ea69db53d26.yaml releasenotes/notes/metrics-notifier-information-17858c8e27c795d7.yaml releasenotes/notes/migrate-to-pysnmp-hlapi-477075b5e69cc5bc.yaml releasenotes/notes/migrate_to_hardware_types-0c85c6707c4f296d.yaml releasenotes/notes/migrate_vif_port_id-5e1496638240933d.yaml releasenotes/notes/min-sushy-version-change-3b697530e0c05dee.yaml releasenotes/notes/multi-arch-deploy-bcf840107fc94bef.yaml releasenotes/notes/multiple-workers-for-send-sensor-data-89d29c12da30ec54.yaml releasenotes/notes/multitenant-networking-0a13c4aba252573e.yaml 
releasenotes/notes/name-root-device-hints-a1484ea01e399065.yaml releasenotes/notes/name-suffix-47aea2d265fa75ae.yaml releasenotes/notes/needs-agent-version-in-heartbeat-4e6806b679c53ec5.yaml releasenotes/notes/net-names-b8a36aa30659ce2f.yaml releasenotes/notes/network-flat-use-node-uuid-for-binding-hostid-afb43097e7204b99.yaml releasenotes/notes/neutron-port-timeout-cbd82e1d09c6a46c.yaml releasenotes/notes/neutron-port-update-598183909d44396c.yaml releasenotes/notes/new_capabilities-5241619c4b46a460.yaml releasenotes/notes/newton-driver-deprecations-e40369be37203057.yaml releasenotes/notes/next-link-for-instance-uuid-f46eafe5b575f3de.yaml releasenotes/notes/no-classic-drivers-e68d8527491314c3.yaml releasenotes/notes/no-classic-idrac-4fbf1ba66c35fb4a.yaml releasenotes/notes/no-classic-ilo-7822af6821d2f1cc.yaml releasenotes/notes/no-classic-ipmi-7ec52a7b01e40536.yaml releasenotes/notes/no-classic-irmc-3a606045e87119b7.yaml releasenotes/notes/no-classic-oneview-e46ee2838d2b1d37.yaml releasenotes/notes/no-classic-snmp-b77d267b535da216.yaml releasenotes/notes/no-classic-ucs-cimc-7c62bb189ffbe0dd.yaml releasenotes/notes/no-coreos-f8717f9bb6a64627.yaml releasenotes/notes/no-downward-sql-migration-52279e875cd8b7a3.yaml releasenotes/notes/no-fake-308b50d4ab83ca7a.yaml releasenotes/notes/no-glance-v1-d249e8079f46f40c.yaml releasenotes/notes/no-instance-uuid-workaround-fc458deb168c7a8b.yaml releasenotes/notes/no-last-error-overwrite-b90aac3303eb992e.yaml releasenotes/notes/no-more-legacy-auth-eeb32f907d0ab5de.yaml releasenotes/notes/no-root-device-as-kernel-param-5e5326acae7b77a4.yaml releasenotes/notes/no-sensors-in-maintenance-7a0ecf418336d105.yaml releasenotes/notes/no-ssh-drivers-6ee5ff4c3ecdd3fb.yaml releasenotes/notes/node-credentials-cleaning-b1903f49ffeba029.yaml releasenotes/notes/node-deletion-update-resources-53862e48ab658f77.yaml releasenotes/notes/node-fault-8c59c0ecb94ba562.yaml releasenotes/notes/node-in-maintenance-fail-afd0eace24fa28be.yaml 
releasenotes/notes/node-lessee-4fb320a597192742.yaml releasenotes/notes/node-name-remove-720aa8007f2f8b75.yaml releasenotes/notes/node-owner-policy-d7168976bba70566.yaml releasenotes/notes/node-owner-policy-ports-1d3193fd897feaa6.yaml releasenotes/notes/node-owner-provision-fix-ee2348b5922f7648.yaml releasenotes/notes/node-save-internal-info-c5cc8f56f1d0dab0.yaml releasenotes/notes/node-storage-interface-api-1d6e217303bd53ff.yaml releasenotes/notes/node-stuck-when-conductor-down-3aa41a3abed9daf5.yaml releasenotes/notes/node-traits-2d950b62eea24491.yaml releasenotes/notes/node-update-instance-info-extra-policies-862b2a70b941cf39.yaml releasenotes/notes/nodes-classic-drivers-cannot-set-interfaces-620b37c4e5c88b80.yaml releasenotes/notes/non-persistent-boot-5e3a0cd78e9dc91b.yaml releasenotes/notes/noop-mgmt-a4b1a248492c7638.yaml releasenotes/notes/notify-node-storage-interface-7fd07ee7ee71cd22.yaml releasenotes/notes/notimplementederror-misspell-276a181afd652cf6.yaml releasenotes/notes/ocata-summary-a70f995cb3b18e18.yaml releasenotes/notes/oneview-agent-mixin-removal-b7277e8f20df5ef2.yaml releasenotes/notes/oneview-hardware-type-69bbb79da434871f.yaml releasenotes/notes/oneview-inspection-interface-c2d6902bbeca0501.yaml releasenotes/notes/oneview-node-free-for-ironic-61b05fee827664cb.yaml releasenotes/notes/oneview-onetime-boot-64a68e135a45f5e2.yaml releasenotes/notes/oneview-timeout-power-db5125e05831d925.yaml releasenotes/notes/oneview-timing-metrics-0b6c1b54e80eb683.yaml releasenotes/notes/online_data_migration_update_versions-ea03aff12d9c036f.yaml releasenotes/notes/only_default_flat_network_if_enabled-b5c6ea415239a53c.yaml releasenotes/notes/oob-power-off-7bbdf5947ed24bf8.yaml releasenotes/notes/opentack-baremetal-request-id-daa72b785eaaaa8d.yaml releasenotes/notes/optional-redfish-system-id-3f6e8b0ac989cb9b.yaml releasenotes/notes/orphan-nodes-389cb6d90c2917ec.yaml releasenotes/notes/oslo-i18n-optional-76bab4d2697c6f94.yaml 
releasenotes/notes/oslo-proxy-headers-middleware-22188a2976f8f460.yaml releasenotes/notes/oslo-reports-optional-59469955eaffdf1d.yaml releasenotes/notes/oslopolicy-scripts-bdcaeaf7dd9ce2ac.yaml releasenotes/notes/osprofiler-61a330800abe4ee6.yaml releasenotes/notes/parallel-erasure-1943da9b53a2095d.yaml releasenotes/notes/partprobe-retries-e69e9d20f3a3c2d3.yaml releasenotes/notes/pass-metrics-config-to-agent-on-lookup-6db9ae187c4e8151.yaml releasenotes/notes/pass-region-to-swiftclient-c8c8bf1020f62ebc.yaml releasenotes/notes/pass_portgroup_settings_to_neutron-a6aec830a82c38a3.yaml releasenotes/notes/periodic-tasks-drivers-ae9cddab88b546c6.yaml releasenotes/notes/persist-redfish-sessions-d521a0846fa45c40.yaml releasenotes/notes/pin-api-version-029748f7d3be68d1.yaml releasenotes/notes/port-0-is-valid-d7188af3be6f3ecb.yaml releasenotes/notes/port-list-bad-request-078512862c22118e.yaml releasenotes/notes/port-local-link-connection-network-type-71103d919e27fc5d.yaml releasenotes/notes/port-physical-network-a7009dc514353796.yaml releasenotes/notes/port_delete-6628b736a1b556f6.yaml releasenotes/notes/portgroup-crud-notifications-91204635528972b2.yaml releasenotes/notes/power-fault-recovery-6e22f0114ceee203.yaml releasenotes/notes/poweroff-after-10-tries-c592506f02c167c0.yaml releasenotes/notes/prelude-to-the-stein-f25b6073b6d1c598.yaml releasenotes/notes/prevent-callback-url-from-being-updated-41d50b20fb236e82.yaml releasenotes/notes/proliantutils_version_update-b6e5ff0e496215a5.yaml releasenotes/notes/protected-650acb2c8a387e17.yaml releasenotes/notes/provide_mountpoint-58cfd25b6dd4cfde.yaml releasenotes/notes/pxe-enabled-ports-check-c1736215dce76e97.yaml releasenotes/notes/pxe-retry-762a00ba1089bd75.yaml releasenotes/notes/pxe-snmp-driver-supported-9c559c6182c6ec4b.yaml releasenotes/notes/pxe-takeover-d8f14bcb60e5b121.yaml releasenotes/notes/queens-prelude-61fb897e96ed64c5.yaml releasenotes/notes/radosgw-temp-url-b04aac50698b4461.yaml 
releasenotes/notes/raid-dell-boss-e9c5da9ddceedd67.yaml releasenotes/notes/raid-hints-c27097ded0137f7c.yaml releasenotes/notes/raid-to-support-jbod-568f88207b9216e2.yaml releasenotes/notes/raise-bad-request-exception-on-validating-inspection-failure-57d7fd2999cf4ecf.yaml releasenotes/notes/ramdisk-boot-fails-4e8286e6a4e0dfb6.yaml releasenotes/notes/ramdisk-grub-use-user-kernel-ramdisk-7d572fe130932605.yaml releasenotes/notes/ramdisk-params-6083bfaa7ffa9dfe.yaml releasenotes/notes/reactive-ibmc-driver-d2149ca81a198090.yaml releasenotes/notes/reboot-do-not-power-off-if-already-1452256167d40009.yaml releasenotes/notes/rebuild-configdrive-f52479fd55b0f5ce.yaml releasenotes/notes/redfish-add-root-prefix-03b5f31ec6bbd146.yaml releasenotes/notes/redfish-bios-interface-a1acd8122c896a38.yaml releasenotes/notes/redfish-managed-inspection-936341ffa8e1f22a.yaml releasenotes/notes/refactor-ironic-lib-22939896d8d46a77.yaml releasenotes/notes/release-4.3.0-cc531ab7190f8a00.yaml releasenotes/notes/release-reservation-on-conductor-stop-6ebbcdf92da57ca6.yaml releasenotes/notes/rely-on-standalone-ports-supported-8153e1135787828b.yaml releasenotes/notes/removal-pre-allocation-for-oneview-09310a215b3aaf3c.yaml releasenotes/notes/remove-DEPRECATED-options-from-[agent]-7b6cce21b5f52022.yaml releasenotes/notes/remove-agent-heartbeat-timeout-abf8787b8477bae7.yaml releasenotes/notes/remove-agent-passthru-432b18e6c430cee6.yaml releasenotes/notes/remove-agent-passthru-complete-a6b2df65b95889d5.yaml releasenotes/notes/remove-agent_last_heartbeat-65a9fe02f20465c5.yaml releasenotes/notes/remove-ansible_deploy-driver-options-a28dc2f36110a67a.yaml releasenotes/notes/remove-app-wsgi-d5887ca28e4b9f00.yaml releasenotes/notes/remove-clean-nodes-38cfa633ca518f99.yaml releasenotes/notes/remove-clustered-compute-manager-6b45ed3803be53d1.yaml releasenotes/notes/remove-deprecated-build-instance-info-for-deploy-2fe165fc018010e4.yaml 
releasenotes/notes/remove-deprecated-deploy-erase-devices-iterations-55680ab95cbce3e9.yaml releasenotes/notes/remove-deprecated-dhcp-provider-method-89926a8f0f4793a4.yaml releasenotes/notes/remove-deprecated-dhcp-provider-methods-582742f3000be3c7.yaml releasenotes/notes/remove-deprecated-drac_host-865be09c6e8fcb90.yaml releasenotes/notes/remove-deprecated-hash_distribution_replicas-08351358eba4c9e1.yaml releasenotes/notes/remove-deprecated-ilo-clean-priority-erase-devices-bb3073da562ed41d.yaml releasenotes/notes/remove-deprecated-option-names-6d5d53cc70dd2d49.yaml releasenotes/notes/remove-discoverd-group-03eaf75e9f94d7be.yaml releasenotes/notes/remove-driver-object-periodic-tasks-1357a1cd3589becf.yaml releasenotes/notes/remove-driver-periodic-task-f5e513b06b601ce4.yaml releasenotes/notes/remove-elilo-support-7fc1227f66e59084.yaml releasenotes/notes/remove-enabled-drivers-5afcd77b53da1499.yaml releasenotes/notes/remove-exception-message-92100debeb40d4c7.yaml releasenotes/notes/remove-glance-num-retries-24898fc9230d9497.yaml releasenotes/notes/remove-inspecting-state-support-10325bdcdd182079.yaml releasenotes/notes/remove-ipmi-retry-timeout-c1b2cf7df6771a43.yaml releasenotes/notes/remove-ipminative-driver-3367d25bbcc41fdc.yaml releasenotes/notes/remove-ipxe-enabled-opt-61d106f01c46acab.yaml releasenotes/notes/remove-ipxe-tags-with-ipv6-cf4b7937c27590d6.yaml releasenotes/notes/remove-iscsi-deploy-ipa-mitaka-c0efa0d5c31933b6.yaml releasenotes/notes/remove-iscsi-verify-attempts-ede5b56b0545da08.yaml releasenotes/notes/remove-manage-tftp-0c2f4f417b92b1ee.yaml releasenotes/notes/remove-messaging-aliases-0a6ba1ed392b1fed.yaml releasenotes/notes/remove-metric-pxe-boot-option-1aec41aebecc1ce9.yaml releasenotes/notes/remove-most-unsupported-049f3401c2554a3c.yaml releasenotes/notes/remove-neutron-client-workarounds-996c59623684929b.yaml releasenotes/notes/remove-oneview-9315c7b926fd4aa2.yaml releasenotes/notes/remove-periodic-interval-45f57ebad9aaa14e.yaml 
releasenotes/notes/remove-policy-json-be92ffdba7bda951.yaml releasenotes/notes/remove-pxe-http-5a05c54f57747bfe.yaml releasenotes/notes/remove-python-oneviewclient-b1d345ef861e156e.yaml releasenotes/notes/remove-radosgw-config-b664f3023dc8403c.yaml releasenotes/notes/remove-ssh-power-port-delay-7ae6e5eb893439cd.yaml releasenotes/notes/remove-verbose-option-261f1b9e24212ee2.yaml releasenotes/notes/remove-vifs-on-teardown-707c8e40c46b6e64.yaml releasenotes/notes/remove_vagrant-4472cedd0284557c.yaml releasenotes/notes/removed-glance-host-port-protocol-dc6e682097ba398f.yaml releasenotes/notes/removed-keystone-section-1ec46442fb332c29.yaml releasenotes/notes/rename-iso-builder-func-46694ed6ded84f4a.yaml releasenotes/notes/rescue-interface-for-ilo-hardware-type-2392989d0fef8849.yaml releasenotes/notes/rescue-interface-for-irmc-hardware-type-17e38197849748e0.yaml releasenotes/notes/rescue-node-87e3b673c61ef628.yaml releasenotes/notes/reserved-node-names-67a08012ed1131ae.yaml releasenotes/notes/reset-interface-e62036ac76b87486.yaml releasenotes/notes/resource-class-change-563797d5a3c35683.yaml releasenotes/notes/resource-classes-1bf903547236a473.yaml releasenotes/notes/resources-crud-notifications-70cba9f761da3afe.yaml releasenotes/notes/restart-console-on-conductor-startup-5cff6128c325b18e.yaml releasenotes/notes/resume-cleaning-post-oob-reboot-b76c23f98219a8d2.yaml releasenotes/notes/reusing-oneview-client-6a3936fb8f113c10.yaml releasenotes/notes/rolling-upgrades-ccad5159ca3cedbe.yaml releasenotes/notes/root-api-version-info-9dd6cadd3d3d4bbe.yaml releasenotes/notes/root-device-hints-rotational-c21f02130394e1d4.yaml releasenotes/notes/scciclient-0.4.0-6f01c0f0a5c39062.yaml releasenotes/notes/security_groups-b57a5d6c30c2fae4.yaml releasenotes/notes/send-sensor-data-for-all-nodes-a732d9df43e74318.yaml releasenotes/notes/server_profile_template_uri-c79e4f15cc20a1cf.yaml releasenotes/notes/set-boot-mode-4c42b3fd0b5f5b37.yaml 
releasenotes/notes/setting_provisioning_cleaning_network-fb60caa1cf59cdcf.yaml releasenotes/notes/shellinabox-locking-fix-2fae2a451a8a489a.yaml releasenotes/notes/shred-final-overwrite-with-zeros-50b5ba5b19c0da27.yaml releasenotes/notes/sighup-service-reloads-configs-0e2462e3f064a2ff.yaml releasenotes/notes/smartnic-logic-has-merged-in-neutron-79078280d40f042c.yaml releasenotes/notes/snmp-driver-udp-transport-settings-67419be988fcff40.yaml releasenotes/notes/snmp-hardware-type-ee3d471cf5c596f4.yaml releasenotes/notes/snmp-noop-mgmt-53e93ac3b6dd8517.yaml releasenotes/notes/snmp-outlet-validate-ffbe8e6687172efc.yaml releasenotes/notes/snmp-reboot-delay-d18ee3f6c6fc0998.yaml releasenotes/notes/socat-address-conf-5cf043fabb10bd76.yaml releasenotes/notes/socat-respawn-de9e8805c820a7ac.yaml releasenotes/notes/soft-power-operations-oneview-e7ac054668235998.yaml releasenotes/notes/soft-reboot-poweroff-9fdb0a4306dd668d.yaml releasenotes/notes/software-raid-4a88e6c5af9ea742.yaml releasenotes/notes/software-raid-with-uefi-5b88e6c5af9ea743.yaml releasenotes/notes/sofware_raid_use_rootfs_uuid-f61eb671d696d251.yaml releasenotes/notes/sort_key_allowed_field-091f8eeedd0a2ace.yaml releasenotes/notes/ssh-console-58721af6830f8892.yaml releasenotes/notes/stop-console-during-unprovision-a29d8facb3f03be5.yaml releasenotes/notes/story-2002600-return-503-if-no-conductors-online-ead1512628182ec4.yaml releasenotes/notes/story-2002637-4825d60b096e475b.yaml releasenotes/notes/story-2004266-4725d327900850bf.yaml releasenotes/notes/story-2004444-f540d9bbc3532ad0.yaml releasenotes/notes/story-2006217-redfish-bios-cleaning-fails-fee32f04dd97cbd2.yaml releasenotes/notes/story-2006218-uefi-iso-creation-fails-ba0180991fdd0783.yaml releasenotes/notes/story-2006223-ilo-hpsum-firmware-update-fails-622883e4785313c1.yaml releasenotes/notes/story-2006288-ilo-power-on-fails-with-no-boot-device-b698fef59b04e515.yaml releasenotes/notes/story-2006316-raid-create-fails-c3661e185fb11c9f.yaml 
releasenotes/notes/story-2006321-ilo5-raid-create-fails-1bb1e648da0db0f1.yaml releasenotes/notes/streaming-partition-images-d58fe619658b066e.yaml releasenotes/notes/sum-based-update-firmware-manual-clean-step-e69ade488060cf27.yaml releasenotes/notes/support-root-device-hints-with-operators-96cf34fa37b5b2e8.yaml releasenotes/notes/support_to_hash_rescue_password-0915927e41e6d845.yaml releasenotes/notes/tempest_plugin_removal-009f9ce8456b16fe.yaml releasenotes/notes/train-release-59ff1643ec92c10a.yaml releasenotes/notes/transmit-all-ports-b570009d1a008067.yaml releasenotes/notes/type-error-str-6826c53d7e5e1243.yaml releasenotes/notes/uefi-first-prepare-e7fa1e2a78b4af99.yaml releasenotes/notes/uefi-grub2-by-default-6b797a9e690d2dd5.yaml releasenotes/notes/undeprecate-xclarity-4f4752017e8310e7.yaml releasenotes/notes/update-boot_mode-for-cleaning-scenario-for-ilo-hardware-type-ebca86da8fc271f6.yaml releasenotes/notes/update-irmc-set-boot-device-fd50d9dce42aaa89.yaml releasenotes/notes/update-live-port-ee3fa9b77f5d0cf7.yaml releasenotes/notes/update-port-pxe-enabled-f954f934209cbf5b.yaml releasenotes/notes/update-proliantutils-version-20ebcc22dc2df527.yaml releasenotes/notes/update-proliantutils-version-54c0cd5c5d3c01dc.yaml releasenotes/notes/update-python-scciclient-required-version-71398d5d5e1c0bf8.yaml releasenotes/notes/upgrade-delete_configuration-0f0bb43c57278734.yaml releasenotes/notes/use-current-node-driver_internal_info-5c11de8f2c2b2e87.yaml releasenotes/notes/use-dhcp-option-numbers-8b0b0efae912ff5f.yaml releasenotes/notes/use-ironic-lib-exception-4bff237c9667bf46.yaml releasenotes/notes/use_secrets_to_generate_token-55af0f43e5a80b9e.yaml releasenotes/notes/v1-discovery-4311398040581fe8.yaml releasenotes/notes/validate-ilo-certificates-3ab98bb8cfad7d60.yaml releasenotes/notes/validate-image-url-wnen-deploying-8820f4398ea9de9f.yaml releasenotes/notes/validate-instance-traits-525dd3150aa6afa2.yaml 
releasenotes/notes/validate-node-properties-73509ee40f409ca2.yaml releasenotes/notes/validate-port-info-before-using-it-e26135982d37c698.yaml releasenotes/notes/vendor-passthru-shared-lock-6a9e32952ee6c2fe.yaml releasenotes/notes/vif-detach-locking-fix-7be66f8150e19819.yaml releasenotes/notes/vif-detach-locking-fix-revert-3961d47fe419460a.yaml releasenotes/notes/volume-connector-and-target-api-dd172f121ab3af8e.yaml releasenotes/notes/whole-disk-root-gb-9132e5a354e6cb9d.yaml releasenotes/notes/whole-disk-scsi-install-bootloader-f7e791d82da476ca.yaml releasenotes/notes/wipe-disk-before-deployment-0a8b9cede4a659e9.yaml releasenotes/notes/wsgi-applications-5d36cf2a8885a56d.yaml releasenotes/notes/wwn-extension-root-device-hints-de40ca1444ba4888.yaml releasenotes/notes/xclarity-driver-622800d17459e3f9.yaml releasenotes/notes/xclarity-mask-password-9fe7605ece7689c3.yaml releasenotes/notes/xenserver-ssh-driver-398084fe91ac56f1.yaml releasenotes/notes/zero-temp-url-c21e208f8933c6f6.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po releasenotes/source/locale/ja/LC_MESSAGES/releasenotes.po tools/__init__.py tools/bandit.yml tools/check-releasenotes.py tools/flake8wrap.sh tools/link_aggregation_on_windows.ps1 tools/run_bashate.sh tools/states_to_dot.py tools/test-setup.sh tools/with_venv.sh tools/config/ironic-config-generator.conf tools/policy/ironic-policy-generator.conf zuul.d/ironic-jobs.yaml zuul.d/legacy-ironic-jobs.yaml 
zuul.d/project.yamlironic-15.0.0/ironic.egg-info/not-zip-safe0000664000175000017500000000000113652514442020313 0ustar zuulzuul00000000000000 ironic-15.0.0/ironic.egg-info/dependency_links.txt0000664000175000017500000000000113652514442022133 0ustar zuulzuul00000000000000 ironic-15.0.0/ironic.egg-info/requires.txt0000664000175000017500000000266013652514442020471 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 alembic>=0.9.6 automaton>=1.9.0 eventlet!=0.18.3,!=0.20.1,>=0.18.2 WebOb>=1.7.1 python-cinderclient!=4.0.0,>=3.3.0 python-neutronclient>=6.7.0 python-glanceclient>=2.8.0 keystoneauth1>=3.18.0 ironic-lib>=2.17.1 python-swiftclient>=3.2.0 pytz>=2013.6 stevedore>=1.20.0 oslo.concurrency>=3.26.0 oslo.config>=5.2.0 oslo.context>=2.19.2 oslo.db>=4.40.0 oslo.rootwrap>=5.8.0 oslo.log>=3.36.0 oslo.middleware>=3.31.0 oslo.policy>=1.30.0 oslo.serialization!=2.19.1,>=2.18.0 oslo.service!=1.28.1,>=1.24.0 oslo.upgradecheck>=0.1.0 oslo.utils>=3.38.0 osprofiler>=1.5.0 os-traits>=0.4.0 pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 requests>=2.14.2 rfc3986>=0.3.1 jsonpatch!=1.20,>=1.16 WSME>=0.9.3 Jinja2>=2.10 keystonemiddleware>=4.17.0 oslo.messaging>=5.29.0 retrying!=1.3.0,>=1.2.3 oslo.versionedobjects>=1.31.2 jsonschema>=2.6.0 psutil>=3.2.2 futurist>=1.2.0 tooz>=1.58.0 openstacksdk>=0.37.0 [:(sys_platform!='win32')] pysendfile>=2.0.0 [guru_meditation_reports] oslo.reports>=1.18.0 [i18n] oslo.i18n>=3.15.3 [test] hacking<3.1.0,>=3.0.0 coverage!=4.4,>=4.0 ddt>=1.0.1 doc8>=0.6.0 fixtures>=3.0.0 mock>=3.0.0 Babel!=2.4.0,>=2.3.4 PyMySQL>=0.7.6 iso8601>=0.1.11 oslo.reports>=1.18.0 oslotest>=3.2.0 stestr>=1.0.0 psycopg2>=2.7.3 testtools>=2.2.0 testresources>=2.0.0 testscenarios>=0.4 WebTest>=2.0.27 bashate>=0.5.1 flake8-import-order>=0.17.1 Pygments>=2.2.0 bandit!=1.6.0,<2.0.0,>=1.1.0 ironic-15.0.0/babel.cfg0000664000175000017500000000002113652514273014631 0ustar zuulzuul00000000000000[python: **.py] 
ironic-15.0.0/PKG-INFO0000664000175000017500000000517113652514443014212 0ustar zuulzuul00000000000000Metadata-Version: 2.1 Name: ironic Version: 15.0.0 Summary: OpenStack Bare Metal Provisioning Home-page: https://docs.openstack.org/ironic/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ====== Ironic ====== Team and repository tags ------------------------ .. image:: https://governance.openstack.org/tc/badges/ironic.svg :target: https://governance.openstack.org/tc/reference/tags/index.html Overview -------- Ironic consists of an API and plug-ins for managing and provisioning physical machines in a security-aware and fault-tolerant manner. It can be used with nova as a hypervisor driver, or standalone service using bifrost. By default, it will use PXE and IPMI to interact with bare metal machines. Ironic also supports vendor-specific plug-ins which may implement additional functionality. Ironic is distributed under the terms of the Apache License, Version 2.0. The full terms and conditions of this license are detailed in the LICENSE file. 
Project resources ~~~~~~~~~~~~~~~~~ * Documentation: https://docs.openstack.org/ironic/latest * Source: https://opendev.org/openstack/ironic * Bugs: https://storyboard.openstack.org/#!/project/943 * Wiki: https://wiki.openstack.org/wiki/Ironic * APIs: https://docs.openstack.org/api-ref/baremetal/index.html * Release Notes: https://docs.openstack.org/releasenotes/ironic/ * Design Specifications: https://specs.openstack.org/openstack/ironic-specs/ Project status, bugs, and requests for feature enhancements (RFEs) are tracked in StoryBoard: https://storyboard.openstack.org/#!/project/943 For information on how to contribute to ironic, see https://docs.openstack.org/ironic/latest/contributor Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Requires-Python: >=3.6 Provides-Extra: guru_meditation_reports Provides-Extra: i18n Provides-Extra: test ironic-15.0.0/test-requirements.txt0000664000175000017500000000145613652514273017361 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
hacking>=3.0.0,<3.1.0 # Apache-2.0 coverage!=4.4,>=4.0 # Apache-2.0 ddt>=1.0.1 # MIT doc8>=0.6.0 # Apache-2.0 fixtures>=3.0.0 # Apache-2.0/BSD mock>=3.0.0 # BSD Babel!=2.4.0,>=2.3.4 # BSD PyMySQL>=0.7.6 # MIT License iso8601>=0.1.11 # MIT oslo.reports>=1.18.0 # Apache-2.0 oslotest>=3.2.0 # Apache-2.0 stestr>=1.0.0 # Apache-2.0 psycopg2>=2.7.3 # LGPL/ZPL testtools>=2.2.0 # MIT testresources>=2.0.0 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD WebTest>=2.0.27 # MIT bashate>=0.5.1 # Apache-2.0 flake8-import-order>=0.17.1 # LGPLv3 Pygments>=2.2.0 # BSD bandit!=1.6.0,>=1.1.0,<2.0.0 # Apache-2.0 ironic-15.0.0/playbooks/0000775000175000017500000000000013652514443015114 5ustar zuulzuul00000000000000ironic-15.0.0/playbooks/legacy/0000775000175000017500000000000013652514443016360 5ustar zuulzuul00000000000000ironic-15.0.0/playbooks/legacy/ironic-dsvm-base-multinode/0000775000175000017500000000000013652514443023520 5ustar zuulzuul00000000000000ironic-15.0.0/playbooks/legacy/ironic-dsvm-base-multinode/pre.yaml0000664000175000017500000000126613652514273025200 0ustar zuulzuul00000000000000- hosts: primary name: Clone devstack-gate to /opt/git tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' ironic-15.0.0/playbooks/legacy/ironic-dsvm-base-multinode/post.yaml0000664000175000017500000000063313652514273025374 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: 
- --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs ironic-15.0.0/playbooks/legacy/ironic-dsvm-base/0000775000175000017500000000000013652514443021522 5ustar zuulzuul00000000000000ironic-15.0.0/playbooks/legacy/ironic-dsvm-base/pre.yaml0000664000175000017500000000126613652514273023202 0ustar zuulzuul00000000000000- hosts: primary name: Clone devstack-gate to /opt/git tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' ironic-15.0.0/playbooks/legacy/ironic-dsvm-base/post.yaml0000664000175000017500000000063313652514273023376 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs ironic-15.0.0/playbooks/legacy/grenade-dsvm-ironic/0000775000175000017500000000000013652514443022215 5ustar zuulzuul00000000000000ironic-15.0.0/playbooks/legacy/grenade-dsvm-ironic/run.yaml0000664000175000017500000001205313652514273023707 0ustar zuulzuul00000000000000# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # NOTE(sambetts) DO NOT UPDATE this job when you update the other jobs with # changes related to the current branch. The devstack local config defined in # this job is run against the last (old) version of the devstack plugin in the # grenade steps. # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
- hosts: all name: Autoconverted job legacy-grenade-dsvm-ironic from old job gate-grenade-dsvm-ironic-ubuntu-xenial-nv tasks: - name: Show the environment shell: cmd: | env executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | cat << 'EOF' >> ironic-vars-early # Set this early so that we do not have to be as careful with builder ordering in jobs. export GRENADE_PLUGINRC="enable_grenade_plugin ironic https://opendev.org/openstack/ironic" EOF chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | cat << 'EOF' >> ironic-extra-vars export PROJECTS="openstack/grenade $PROJECTS" export DEVSTACK_GATE_GRENADE=pullup export DEVSTACK_GATE_OS_TEST_TIMEOUT=2600 export DEVSTACK_GATE_TEMPEST_BAREMETAL_BUILD_TIMEOUT=1200 export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_BUILD_DEPLOY_RAMDISK=False" export DEVSTACK_GATE_TLSPROXY=0 export DEVSTACK_GATE_USE_PYTHON3=True export BUILD_TIMEOUT # Standardize VM size for each supported ramdisk export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_VM_SPECS_RAM=384" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_RAMDISK_TYPE=tinyipa" EOF chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | cat << 'EOF' >> ironic-vars-early # use tempest plugin export DEVSTACK_LOCAL_CONFIG+=$'\n'"TEMPEST_PLUGINS+=' /opt/stack/new/ironic-tempest-plugin'" export TEMPEST_CONCURRENCY=1 EOF chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PROJECTS="openstack/ironic $PROJECTS" export PROJECTS="openstack/ironic-lib $PROJECTS" export PROJECTS="openstack/ironic-python-agent $PROJECTS" export PROJECTS="openstack/ironic-python-agent-builder $PROJECTS" export PROJECTS="openstack/ironic-tempest-plugin $PROJECTS" export PROJECTS="openstack/python-ironicclient $PROJECTS" export PROJECTS="openstack/virtualbmc $PROJECTS" export PYTHONUNBUFFERED=true export 
DEVSTACK_GATE_TEMPEST=1 export DEVSTACK_GATE_IRONIC=1 export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_VIRT_DRIVER=ironic export DEVSTACK_GATE_CONFIGDRIVE=1 export DEVSTACK_GATE_IRONIC_DRIVER=ipmi export BRANCH_OVERRIDE="{{ zuul.override_checkout | default('default') }}" if [ "$BRANCH_OVERRIDE" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi if [[ "$ZUUL_BRANCH" != "stable/ocata" && "$BRANCH_OVERRIDE" != "stable/ocata" ]]; then export DEVSTACK_GATE_TLSPROXY=1 fi export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_TEMPEST_WHOLE_DISK_IMAGE=False" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_VM_EPHEMERAL_DISK=1" export DEVSTACK_GATE_IRONIC_BUILD_RAMDISK=0 export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_INSPECTOR_BUILD_RAMDISK=False" # NOTE(TheJulia): Keep the runtime down by disabling cleaning # the nodes and focus on the server related tests as opposed # to network scenario testing export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_AUTOMATED_CLEAN_ENABLED=False" export DEVSTACK_GATE_TEMPEST_REGEX=test_server_basic_ops export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_VM_COUNT=7" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_REQUIRE_AGENT_TOKEN=False" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_DEFAULT_BOOT_OPTION=netboot" # Ensure the ironic-vars-EARLY file exists touch ironic-vars-early # Pull in the EARLY variables injected by the optional builders source ironic-vars-early export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin ironic https://opendev.org/openstack/ironic" # Ensure the ironic-EXTRA-vars file exists touch ironic-extra-vars # Pull in the EXTRA variables injected by the optional builders source ironic-extra-vars cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' ironic-15.0.0/playbooks/legacy/grenade-dsvm-ironic-multinode-multitenant/0000775000175000017500000000000013652514443026555 5ustar 
zuulzuul00000000000000ironic-15.0.0/playbooks/legacy/grenade-dsvm-ironic-multinode-multitenant/run.yaml0000664000175000017500000002105213652514273030246 0ustar zuulzuul00000000000000# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # NOTE(sambetts) DO NOT UPDATE this job when you update the other jobs with # changes related to the current branch. The devstack local config defined in # this job is run against the last (old) version of the devstack plugin in the # grenade steps. # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! - hosts: primary name: Autoconverted job legacy-grenade-dsvm-ironic-multinode-multitenant from old job gate-grenade-dsvm-ironic-multinode-multitenant-ubuntu-xenial tasks: # NOTE(TheJulia): Python supports recompiling bytecode if a precompiled # (.pyc) file is written to disk. Python will automatically recompile # should that file disappear and attempt to load and use that bytecode. # This can lead to unexpected and undesirable behavior such as python # crashing. # # As this job scenario upgrades across possible structural changes to # python modules, and operates in a mixed environment between releases # it is a good idea to prevent scenarios where newer modules installed # by packages are leveraged by un-upgraded services because their # underlying python packages have been updated during runtime. # # This is unique to Ironic's rolling upgrade grenade job, as Nova is # excluded from being upgraded in the stack, and Ironic is left in # a half-upgraded situation. The net result of which is we have an # unstable Nova installation. # https://bugs.launchpad.net/ironic/+bug/1744139 # # TODO(TheJulia): We either need to find a better way to test rolling # upgrades. Something which supports virtualenvs would be ideal, as # well as something that allows us greater upgrade order control as # the Ironic upgrade sequence is problematic and breaks towards the end # of every cycle. 
- shell: cmd: | echo 'DefaultEnvironment="PYTHONDONTWRITEBYTECODE=1"' >>/etc/systemd/system.conf systemctl daemon-reexec become: yes - shell: cmd: | cat << 'EOF' >> ironic-vars-early # Set this early so that we do not have to be as careful with builder ordering in jobs. export GRENADE_PLUGINRC="enable_grenade_plugin ironic https://opendev.org/openstack/ironic" EOF chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | # Precreate brbm so that it is created before neutron services are started, as they fail if it # is not present DEBIAN_FRONTEND=noninteractive sudo -E apt-get --option Dpkg::Options::=--force-confold --assume-yes install openvswitch-switch sudo systemctl restart openvswitch-switch sudo ovs-vsctl -- --may-exist add-br brbm cat << 'EOF' >> ironic-extra-vars export PROJECTS="openstack/grenade $PROJECTS" export DEVSTACK_GATE_GRENADE=pullup export DEVSTACK_GATE_OS_TEST_TIMEOUT=2600 export DEVSTACK_GATE_TEMPEST_BAREMETAL_BUILD_TIMEOUT=1200 export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_BUILD_DEPLOY_RAMDISK=False" export DEVSTACK_GATE_TLSPROXY=0 export DEVSTACK_GATE_USE_PYTHON3=True export BUILD_TIMEOUT export GRENADE_PLUGINRC+=$'\n'"enable_grenade_plugin networking-generic-switch https://opendev.org/openstack/networking-generic-switch" export DEVSTACK_GATE_TOPOLOGY="multinode" # networking-generic-switch requires sudo to execute ovs-vsctl commands export DEVSTACK_GATE_REMOVE_STACK_SUDO=0 export PROJECTS="openstack/networking-generic-switch $PROJECTS" export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin networking-generic-switch https://opendev.org/openstack/networking-generic-switch" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_USE_LINK_LOCAL=True" export DEVSTACK_LOCAL_CONFIG+=$'\n'"OVS_BRIDGE_MAPPINGS=mynetwork:brbm,public:br_ironic_vxlan" export DEVSTACK_LOCAL_CONFIG+=$'\n'"OVS_PHYSICAL_BRIDGE=brbm" export DEVSTACK_LOCAL_CONFIG+=$'\n'"PHYSICAL_NETWORK=mynetwork" export 
DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_PROVISION_NETWORK_NAME=ironic-provision" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_PROVISION_SUBNET_GATEWAY=10.0.5.1" export DEVSTACK_LOCAL_CONFIG+=$'\n'"Q_PLUGIN=ml2" export DEVSTACK_LOCAL_CONFIG+=$'\n'"PUBLIC_BRIDGE=br_ironic_vxlan" export DEVSTACK_LOCAL_CONFIG+=$'\n'"ENABLE_TENANT_VLANS=True" export DEVSTACK_LOCAL_CONFIG+=$'\n'"Q_ML2_TENANT_NETWORK_TYPE=vlan" export DEVSTACK_LOCAL_CONFIG+=$'\n'"TENANT_VLAN_RANGE=100:150" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_ENABLED_NETWORK_INTERFACES=flat,neutron" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_NETWORK_INTERFACE=neutron" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_DEFAULT_BOOT_OPTION=local" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_AUTOMATED_CLEAN_ENABLED=False" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_VM_SPECS_RAM=384" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_RAMDISK_TYPE=tinyipa" export DEVSTACK_LOCAL_CONFIG+=$'\n'"LIBVIRT_STORAGE_POOL_PATH=/opt/libvirt/images" export DEVSTACK_GATE_TEMPEST_REGEX=test_server_basic_ops EOF chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | cat << 'EOF' >> ironic-vars-early # use tempest plugin export DEVSTACK_LOCAL_CONFIG+=$'\n'"TEMPEST_PLUGINS+=' /opt/stack/new/ironic-tempest-plugin'" export TEMPEST_CONCURRENCY=4 EOF chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PROJECTS="openstack/ironic $PROJECTS" export PROJECTS="openstack/ironic-lib $PROJECTS" export PROJECTS="openstack/ironic-python-agent $PROJECTS" export PROJECTS="openstack/ironic-python-agent-builder $PROJECTS" export PROJECTS="openstack/ironic-tempest-plugin $PROJECTS" export PROJECTS="openstack/python-ironicclient $PROJECTS" export PROJECTS="openstack/virtualbmc $PROJECTS" export PYTHONUNBUFFERED=true export DEVSTACK_GATE_TEMPEST=1 export DEVSTACK_GATE_IRONIC=1 export 
DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_VIRT_DRIVER=ironic export DEVSTACK_GATE_CONFIGDRIVE=1 export DEVSTACK_GATE_IRONIC_DRIVER=ipmi export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_DEFAULT_DEPLOY_INTERFACE=direct" export BRANCH_OVERRIDE="{{ zuul.override_checkout | default('default') }}" if [ "$BRANCH_OVERRIDE" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi if [[ "$ZUUL_BRANCH" != "stable/ocata" && "$BRANCH_OVERRIDE" != "stable/ocata" ]]; then export DEVSTACK_GATE_TLSPROXY=1 fi # the direct deploy interface requires Swift temporary URLs export DEVSTACK_LOCAL_CONFIG+=$'\n'"SWIFT_ENABLE_TEMPURLS=True" export DEVSTACK_LOCAL_CONFIG+=$'\n'"SWIFT_TEMPURL_KEY=secretkey" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_TEMPEST_WHOLE_DISK_IMAGE=True" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_VM_EPHEMERAL_DISK=0" export DEVSTACK_GATE_IRONIC_BUILD_RAMDISK=0 export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_INSPECTOR_BUILD_RAMDISK=False" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_VM_COUNT=7" export DEVSTACK_LOCAL_CONFIG+=$'\n'"IRONIC_REQUIRE_AGENT_TOKEN=False" # Ensure the ironic-vars-EARLY file exists touch ironic-vars-early # Pull in the EARLY variables injected by the optional builders source ironic-vars-early export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin ironic https://opendev.org/openstack/ironic" # Ensure the ironic-EXTRA-vars file exists touch ironic-extra-vars # Pull in the EXTRA variables injected by the optional builders source ironic-extra-vars cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' ironic-15.0.0/playbooks/ci-workarounds/0000775000175000017500000000000013652514443020063 5ustar zuulzuul00000000000000ironic-15.0.0/playbooks/ci-workarounds/pre.yaml0000664000175000017500000000054013652514273021535 0ustar zuulzuul00000000000000- hosts: all name: Pre-setup tasks tasks: - shell: cmd: 
| set -e set -x sudo mkdir -p ~stack/.ssh sudo cp ~root/.ssh/id_rsa.pub ~root/.ssh/id_rsa ~stack/.ssh sudo chmod 700 ~stack/.ssh sudo chown -R stack ~stack executable: /bin/bash roles: - multi-node-bridge ironic-15.0.0/playbooks/ci-workarounds/etc-neutron.yaml0000664000175000017500000000031713652514273023214 0ustar zuulzuul00000000000000- hosts: all name: Create /etc/neutron for the devstack base job tasks: - name: Creates directory file: path: /etc/neutron state: directory mode: 0777 become: yes ironic-15.0.0/requirements.txt0000664000175000017500000000326013652514273016377 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. pbr!=2.1.0,>=2.0.0 # Apache-2.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT alembic>=0.9.6 # MIT automaton>=1.9.0 # Apache-2.0 eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT WebOb>=1.7.1 # MIT python-cinderclient!=4.0.0,>=3.3.0 # Apache-2.0 python-neutronclient>=6.7.0 # Apache-2.0 python-glanceclient>=2.8.0 # Apache-2.0 keystoneauth1>=3.18.0 # Apache-2.0 ironic-lib>=2.17.1 # Apache-2.0 python-swiftclient>=3.2.0 # Apache-2.0 pytz>=2013.6 # MIT stevedore>=1.20.0 # Apache-2.0 pysendfile>=2.0.0;sys_platform!='win32' # MIT oslo.concurrency>=3.26.0 # Apache-2.0 oslo.config>=5.2.0 # Apache-2.0 oslo.context>=2.19.2 # Apache-2.0 oslo.db>=4.40.0 # Apache-2.0 oslo.rootwrap>=5.8.0 # Apache-2.0 oslo.log>=3.36.0 # Apache-2.0 oslo.middleware>=3.31.0 # Apache-2.0 oslo.policy>=1.30.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.service!=1.28.1,>=1.24.0 # Apache-2.0 oslo.upgradecheck>=0.1.0 # Apache-2.0 oslo.utils>=3.38.0 # Apache-2.0 osprofiler>=1.5.0 # Apache-2.0 os-traits>=0.4.0 # Apache-2.0 pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD requests>=2.14.2 # Apache-2.0 rfc3986>=0.3.1 # Apache-2.0 jsonpatch!=1.20,>=1.16 # BSD WSME>=0.9.3 # MIT 
Jinja2>=2.10 # BSD License (3 clause) keystonemiddleware>=4.17.0 # Apache-2.0 oslo.messaging>=5.29.0 # Apache-2.0 retrying!=1.3.0,>=1.2.3 # Apache-2.0 oslo.versionedobjects>=1.31.2 # Apache-2.0 jsonschema>=2.6.0 # MIT psutil>=3.2.2 # BSD futurist>=1.2.0 # Apache-2.0 tooz>=1.58.0 # Apache-2.0 openstacksdk>=0.37.0 # Apache-2.0 ironic-15.0.0/README.rst0000664000175000017500000000270413652514273014604 0ustar zuulzuul00000000000000====== Ironic ====== Team and repository tags ------------------------ .. image:: https://governance.openstack.org/tc/badges/ironic.svg :target: https://governance.openstack.org/tc/reference/tags/index.html Overview -------- Ironic consists of an API and plug-ins for managing and provisioning physical machines in a security-aware and fault-tolerant manner. It can be used with nova as a hypervisor driver, or as a standalone service using bifrost. By default, it will use PXE and IPMI to interact with bare metal machines. Ironic also supports vendor-specific plug-ins which may implement additional functionality. Ironic is distributed under the terms of the Apache License, Version 2.0. The full terms and conditions of this license are detailed in the LICENSE file.
Project resources ~~~~~~~~~~~~~~~~~ * Documentation: https://docs.openstack.org/ironic/latest * Source: https://opendev.org/openstack/ironic * Bugs: https://storyboard.openstack.org/#!/project/943 * Wiki: https://wiki.openstack.org/wiki/Ironic * APIs: https://docs.openstack.org/api-ref/baremetal/index.html * Release Notes: https://docs.openstack.org/releasenotes/ironic/ * Design Specifications: https://specs.openstack.org/openstack/ironic-specs/ Project status, bugs, and requests for feature enhancements (RFEs) are tracked in StoryBoard: https://storyboard.openstack.org/#!/project/943 For information on how to contribute to ironic, see https://docs.openstack.org/ironic/latest/contributor ironic-15.0.0/.stestr.conf0000664000175000017500000000010213652514273015354 0ustar zuulzuul00000000000000[DEFAULT] test_path=${TESTS_DIR:-./ironic/tests/unit/} top_dir=./ ironic-15.0.0/etc/0000775000175000017500000000000013652514443013664 5ustar zuulzuul00000000000000ironic-15.0.0/etc/apache2/0000775000175000017500000000000013652514443015167 5ustar zuulzuul00000000000000ironic-15.0.0/etc/apache2/ironic0000664000175000017500000000257113652514273016403 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is an example Apache2 configuration file for using the # Ironic API through mod_wsgi. This version assumes you are # running devstack to configure the software, and PBR has generated # and installed the ironic-api-wsgi script while installing ironic. 
Listen 6385 <VirtualHost *:6385> WSGIDaemonProcess ironic user=stack group=stack threads=10 display-name=%{GROUP} WSGIScriptAlias / /usr/local/bin/ironic-api-wsgi SetEnv APACHE_RUN_USER stack SetEnv APACHE_RUN_GROUP stack WSGIProcessGroup ironic ErrorLog /var/log/apache2/ironic_error.log LogLevel info CustomLog /var/log/apache2/ironic_access.log combined <Directory /usr/local/bin> WSGIProcessGroup ironic WSGIApplicationGroup %{GLOBAL} AllowOverride All Require all granted </Directory> </VirtualHost> ironic-15.0.0/etc/ironic/0000775000175000017500000000000013652514443015147 5ustar zuulzuul00000000000000ironic-15.0.0/etc/ironic/rootwrap.conf0000664000175000017500000000165013652514273017676 0ustar zuulzuul00000000000000# Configuration for ironic-rootwrap # This file should be owned by (and only writable by) the root user [DEFAULT] # List of directories to load filter definitions from (separated by ','). # These directories MUST all be only writable by root ! filters_path=/etc/ironic/rootwrap.d,/usr/share/ironic/rootwrap # List of directories to search executables in, in case filters do not # explicitly specify a full path (separated by ',') # If not specified, defaults to system PATH environment variable. # These directories MUST all be only writable by root ! exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin # Enable logging to syslog # Default value is False use_syslog=False # Which syslog facility to use. # Valid values include auth, authpriv, syslog, user0, user1... # Default value is 'syslog' syslog_log_facility=syslog # Which messages to log.
# INFO means log all usage # ERROR means only log unsuccessful attempts syslog_log_level=ERROR ironic-15.0.0/etc/ironic/README-policy.yaml.txt0000664000175000017500000000040413652514273021102 0ustar zuulzuul00000000000000To generate the sample policy.yaml file, run the following command from the top level of the repo: tox -egenpolicy For a pre-generated example of the latest policy.yaml, see: https://docs.openstack.org/ironic/latest/configuration/sample-policy.html ironic-15.0.0/etc/ironic/README-ironic.conf.txt0000664000175000017500000000040413652514273021051 0ustar zuulzuul00000000000000To generate the sample ironic.conf file, run the following command from the top level of the repo: tox -egenconfig For a pre-generated example of the latest ironic.conf, see: https://docs.openstack.org/ironic/latest/configuration/sample-config.html ironic-15.0.0/etc/ironic/rootwrap.d/0000775000175000017500000000000013652514443017246 5ustar zuulzuul00000000000000ironic-15.0.0/etc/ironic/rootwrap.d/ironic-images.filters0000664000175000017500000000032413652514273023366 0ustar zuulzuul00000000000000# ironic-rootwrap command filters to manipulate images # This file should be owned by (and only-writable by) the root user [Filters] # ironic/common/images.py: 'qemu-img' qemu-img: CommandFilter, qemu-img, root ironic-15.0.0/etc/ironic/rootwrap.d/ironic-utils.filters0000664000175000017500000000047013652514273023263 0ustar zuulzuul00000000000000# ironic-rootwrap command filters for disk manipulation # This file should be owned by (and only-writable by) the root user [Filters] # ironic/drivers/modules/deploy_utils.py iscsiadm: CommandFilter, iscsiadm, root # ironic/common/utils.py mount: CommandFilter, mount, root umount: CommandFilter, umount, root ironic-15.0.0/etc/ironic/api_audit_map.conf.sample0000664000175000017500000000134313652514273022074 0ustar zuulzuul00000000000000[DEFAULT] # default target endpoint type # should match the endpoint type defined in service catalog 
target_endpoint_type = None # possible end path of API requests # path of api requests for CADF target typeURI # Just need to include top resource path to identify class # of resources. Ex: Log audit event for API requests # path containing "nodes" keyword and node uuid. [path_keywords] nodes = node drivers = driver chassis = chassis ports = port states = state power = None provision = None maintenance = None validate = None boot_device = None supported = None console = None vendor_passthru = vendor_passthru # map endpoint type defined in service catalog to CADF typeURI [service_endpoints] baremetal = service/compute/baremetal ironic-15.0.0/devstack/0000775000175000017500000000000013652514443014715 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/upgrade/0000775000175000017500000000000013652514443016344 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/upgrade/from-queens/0000775000175000017500000000000013652514443020605 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/upgrade/from-queens/upgrade-ironic0000664000175000017500000000035113652514273023440 0ustar zuulzuul00000000000000function configure_ironic_upgrade { # Remove the classic drivers from the configuration (forced by devstack-gate) # TODO(dtantsur): remove when classic drivers are removed sed -i '/^enabled_drivers/d' $IRONIC_CONF_FILE } ironic-15.0.0/devstack/upgrade/upgrade.sh0000775000175000017500000001137513652514273020342 0ustar zuulzuul00000000000000#!/usr/bin/env bash # ``upgrade-ironic`` echo "*********************************************************************" echo "Begin $0" echo "*********************************************************************" # Clean up any resources that may be in use cleanup() { set +o errexit echo "*********************************************************************" echo "ERROR: Abort $0" echo "*********************************************************************" # Kill ourselves to signal any calling process trap 2; kill -2 $$ } trap cleanup SIGHUP SIGINT 
SIGTERM # Keep track of the grenade directory RUN_DIR=$(cd $(dirname "$0") && pwd) # Source params source $GRENADE_DIR/grenaderc # Import common functions source $GRENADE_DIR/functions # This script exits on an error so that errors don't compound and you see # only the first error that occurred. set -o errexit # Upgrade Ironic # ============ # Duplicate some setup bits from target DevStack source $TARGET_DEVSTACK_DIR/stackrc source $TARGET_DEVSTACK_DIR/lib/tls source $TARGET_DEVSTACK_DIR/lib/nova source $TARGET_DEVSTACK_DIR/lib/neutron-legacy source $TARGET_DEVSTACK_DIR/lib/apache source $TARGET_DEVSTACK_DIR/lib/keystone source $TOP_DIR/openrc admin admin # Keep track of the DevStack directory IRONIC_DEVSTACK_DIR=$(dirname "$0")/.. source $IRONIC_DEVSTACK_DIR/lib/ironic # Print the commands being run so that we can see the command that triggers # an error. It is also useful for following along as the install occurs. set -o xtrace function wait_for_keystone { if ! wait_for_service $SERVICE_TIMEOUT ${KEYSTONE_AUTH_URI}/v$IDENTITY_API_VERSION/; then die $LINENO "keystone did not start" fi } # Save current config files for posterity if [[ -d $IRONIC_CONF_DIR ]] && [[ ! -d $SAVE_DIR/etc.ironic ]] ; then cp -pr $IRONIC_CONF_DIR $SAVE_DIR/etc.ironic fi stack_install_service ironic # calls upgrade-ironic for specific release upgrade_project ironic $RUN_DIR $BASE_DEVSTACK_BRANCH $TARGET_DEVSTACK_BRANCH # NOTE(rloo): make sure it is OK to do an upgrade. Except that we aren't # parsing/checking the output of this command because the output could change # based on the checks it makes. $IRONIC_BIN_DIR/ironic-status upgrade check $IRONIC_BIN_DIR/ironic-dbsync --config-file=$IRONIC_CONF_FILE # NOTE(vsaienko) pin_release only on multinode job, for cold upgrade (single node) # run online data migration instead.
if [[ "${HOST_TOPOLOGY}" == "multinode" ]]; then iniset $IRONIC_CONF_FILE DEFAULT pin_release_version ${BASE_DEVSTACK_BRANCH#*/} else ironic-dbsync online_data_migrations fi ensure_started='ironic-conductor nova-compute ' ensure_stopped='' # Multinode grenade is designed to upgrade services only on the primary node, and there is no way to manipulate # the subnode during grenade phases. As a result, after the upgrade we can have upgraded (new) services on the primary # node and non-upgraded (old) services on the subnode. # According to the Ironic upgrade procedure, we shouldn't have an upgraded (new) ironic-api with a non-upgraded (old) # ironic-conductor. Redirecting API requests from the primary node to the subnode during the upgrade # allows us to satisfy the ironic upgrade requirements. if [[ "$HOST_TOPOLOGY_ROLE" == "primary" ]]; then disable_service ir-api ensure_stopped+='ironic-api' ironic_wsgi_conf=$(apache_site_config_for ironic-api-wsgi) sudo cp $IRONIC_DEVSTACK_FILES_DIR/apache-ironic-api-redirect.template $ironic_wsgi_conf sudo sed -e " s|%IRONIC_SERVICE_PROTOCOL%|$IRONIC_SERVICE_PROTOCOL|g; s|%IRONIC_SERVICE_HOST%|$IRONIC_PROVISION_SUBNET_SUBNODE_IP|g; " -i $ironic_wsgi_conf enable_apache_site ipxe-ironic else ensure_started+='ironic-api ' fi start_ironic # NOTE(vsaienko) do not restart n-cpu on multinode as we didn't upgrade nova. if [[ "${HOST_TOPOLOGY}" != "multinode" ]]; then # NOTE(vsaienko) installing the ironic service triggers an apache restart, which # may cause a nova-compute failure due to LP1537076 stop_nova_compute || true wait_for_keystone start_nova_compute fi if [[ -n "$ensure_stopped" ]]; then ensure_services_stopped $ensure_stopped fi ensure_services_started $ensure_started # We need these steps only in case of flat-network # NOTE(vsaienko) starting from Ocata, when Neutron is restarted there is no guarantee that # the internal tag that was assigned to the network will be the same. As a result, we need to update # the tag on the link between br-int and brbm to the new value after the restart.
if [[ -z "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then net_id=$(openstack network show ironic_grenade -f value -c id) create_ovs_taps $net_id fi set +o xtrace echo "*********************************************************************" echo "SUCCESS: End $0" echo "*********************************************************************" ironic-15.0.0/devstack/upgrade/resources.sh0000775000175000017500000001421013652514273020714 0ustar zuulzuul00000000000000#!/bin/bash # # Copyright 2015 Hewlett-Packard Development Company, L.P. # Copyright 2016 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions source $TOP_DIR/openrc admin admin IRONIC_DEVSTACK_DIR=$(cd $(dirname "$0")/.. && pwd) source $IRONIC_DEVSTACK_DIR/lib/ironic RESOURCES_NETWORK_GATEWAY=${RESOURCES_NETWORK_GATEWAY:-10.2.0.1} RESOURCES_FIXED_RANGE=${RESOURCES_FIXED_RANGE:-10.2.0.0/20} NEUTRON_NET=ironic_grenade set -o xtrace # TODO(dtantsur): remove in Rocky, needed for parsing Placement API responses install_package jq function wait_for_ironic_resources { local i local nodes_count nodes_count=$(openstack baremetal node list -f value -c "Provisioning State" | wc -l) echo_summary "Waiting 5 minutes for Ironic resources to become available again" for i in $(seq 1 30); do if openstack baremetal node list -f value -c "Provisioning State" | grep -qi failed; then die $LINENO "One of the nodes is in a failed state."
fi if [[ $(openstack baremetal node list -f value -c "Provisioning State" | grep -ci available) == $nodes_count ]]; then return 0 fi sleep 10 done openstack baremetal node list die $LINENO "Timed out waiting for Ironic nodes to become available again." } total_nodes=$IRONIC_VM_COUNT if [[ "${HOST_TOPOLOGY}" == "multinode" ]]; then total_nodes=$(( 2 * $total_nodes )) fi function early_create { # We need these steps only in case of flat-network if [[ -n "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then return fi # Ironic needs to have network access to the instance during deployment # from the control plane (ironic-conductor). This 'early_create' function # creates a new network with a unique CIDR, adds a route to this network # from ironic-conductor and creates taps between br-int and brbm. # ironic-conductor will be able to access the ironic nodes via this new # network. # TODO(vsaienko) use OSC when Neutron commands are supported in the stable # release. local net_id net_id=$(openstack network create --share $NEUTRON_NET -f value -c id) resource_save network net_id $net_id local subnet_params="" subnet_params+="--ip_version 4 " subnet_params+="--gateway $RESOURCES_NETWORK_GATEWAY " subnet_params+="--name $NEUTRON_NET " subnet_params+="$net_id $RESOURCES_FIXED_RANGE" local subnet_id subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2) resource_save network subnet_id $subnet_id local router_id router_id=$(openstack router create $NEUTRON_NET -f value -c id) resource_save network router_id $router_id neutron router-interface-add $NEUTRON_NET $subnet_id neutron router-gateway-set $NEUTRON_NET public # Add a route to the baremetal network via the Neutron public router. # ironic-conductor will be able to access the ironic nodes via this new # route. local r_net_gateway # Determine the IP address of the interface (ip -4 route get 8.8.8.8) that # will be used to access a public IP on the router we created ($router_id).
# In this case we use the Google DNS server at 8.8.8.8 as the public IP # address. This does not actually attempt to contact 8.8.8.8, it just # determines the IP address of the interface that traffic to 8.8.8.8 would # use. We use the IP address of this interface to setup the route. test_with_retry "sudo ip netns exec qrouter-$router_id ip -4 route get 8.8.8.8 " "Route did not start" 60 r_net_gateway=$(sudo ip netns exec qrouter-$router_id ip -4 route get 8.8.8.8 |grep dev | awk '{print $7}') sudo ip route replace $RESOURCES_FIXED_RANGE via $r_net_gateway # NOTE(vsaienko) remove connection between br-int and brbm from old setup sudo ovs-vsctl -- --if-exists del-port ovs-1-tap1 sudo ovs-vsctl -- --if-exists del-port brbm-1-tap1 create_ovs_taps $net_id } function create { : } function verify { : } function verify_noapi { : } function destroy { # We need these steps only in case of flat-network if [[ -n "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then return fi # NOTE(vsaienko) move ironic VMs back to private network. local net_id net_id=$(openstack network show private -f value -c id) create_ovs_taps $net_id # NOTE(vsaienko) during early_create phase we update grenade resources neutron/subnet_id, # neutron/router_id, neutron/net_id. It was needed to instruct nova to boot instances # in ironic_grenade network instead of neutron_grenade during resources phase. As result # during neutron/resources.sh destroy phase ironic_grenade router|subnet|network were deleted. # Make sure that we removed neutron resources here. 
neutron router-gateway-clear neutron_grenade || /bin/true neutron router-interface-delete neutron_grenade neutron_grenade || /bin/true neutron router-delete neutron_grenade || /bin/true neutron net-delete neutron_grenade || /bin/true } # Dispatcher case $1 in "early_create") wait_for_ironic_resources wait_for_nova_resources $total_nodes early_create ;; "create") create ;; "verify_noapi") # NOTE(vdrok): our implementation of verify_noapi is a noop, but # grenade always passes the upgrade side (pre-upgrade or post-upgrade) # as an argument to it. Pass all the arguments grenade passes further. verify_noapi "${@:2}" ;; "verify") # NOTE(vdrok): pass all the arguments grenade passes further. verify "${@:2}" ;; "destroy") destroy ;; esac ironic-15.0.0/devstack/upgrade/shutdown.sh0000775000175000017500000000070613652514273020562 0ustar zuulzuul00000000000000#!/bin/bash # # set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions # We need base DevStack functions for this source $BASE_DEVSTACK_DIR/functions source $BASE_DEVSTACK_DIR/stackrc # needed for status directory source $BASE_DEVSTACK_DIR/lib/tls source $BASE_DEVSTACK_DIR/lib/apache # Keep track of the DevStack directory IRONIC_DEVSTACK_DIR=$(dirname "$0")/.. source $IRONIC_DEVSTACK_DIR/lib/ironic set -o xtrace stop_ironic ironic-15.0.0/devstack/upgrade/settings0000664000175000017500000000311013652514273020123 0ustar zuulzuul00000000000000# Grenade needs to know that Ironic has a Grenade plugin. This is done in the # gate by setting GRENADE_PLUGINRC when using openstack-infra/devstack-gate. 
# That means that in the project openstack-infra/project-config we will need to # update the Ironic grenade job(s) in jenkins/jobs/devstack-gate.yaml with # this: # export GRENADE_PLUGINRC="enable_grenade_plugin ironic https://opendev.org/openstack/ironic" # If openstack-infra/project-config is not updated then the Grenade tests will # never get run for Ironic register_project_for_upgrade ironic register_db_to_save ironic # Duplicate some settings from devstack. Use old devstack as we install base # environment from it. In common_settings we also source the old localrc # variables, so we need to do this before checking the HOST_TOPOLOGY value IRONIC_BASE_DEVSTACK_DIR=$TOP_DIR/../../old/ironic/devstack source $IRONIC_BASE_DEVSTACK_DIR/common_settings if [[ "${HOST_TOPOLOGY}" != "multinode" ]]; then # Disable automated cleaning on single node grenade to save time and resources. export IRONIC_AUTOMATED_CLEAN_ENABLED=False fi # NOTE(jlvillal): For multi-node grenade jobs we do not want to upgrade Nova if [[ "${HOST_TOPOLOGY}" == "multinode" ]]; then # Remove 'nova' from the list of projects to upgrade UPGRADE_PROJECTS=$(echo $UPGRADE_PROJECTS | sed -e 's/\s*nova//g' ) fi # NOTE(vdrok): Do not set up multicell during upgrade export CELLSV2_SETUP="singleconductor" # https://storyboard.openstack.org/#!/story/2003808 # pxe booting with virtio broken in xenial-updates/queens/main export LIBVIRT_NIC_DRIVER=e1000 ironic-15.0.0/devstack/lib/0000775000175000017500000000000013652514443015463 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/lib/ironic0000664000175000017500000035012113652514273016674 0ustar zuulzuul00000000000000#!/bin/bash # # lib/ironic # Functions to control the configuration and operation of the **Ironic** service # Dependencies: # # - ``functions`` file # - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined # - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined # - ``SERVICE_HOST`` # - ``KEYSTONE_TOKEN_FORMAT`` must be defined # ``stack.sh`` calls the
entry points in this order: # # - install_ironic # - install_ironicclient # - init_ironic # - start_ironic # - stop_ironic # - cleanup_ironic # ensure we don't re-source this in the same environment [[ -z "$_IRONIC_DEVSTACK_LIB" ]] || return 0 declare -r -g _IRONIC_DEVSTACK_LIB=1 # Save xtrace and pipefail settings _XTRACE_IRONIC=$(set +o | grep xtrace) _PIPEFAIL_IRONIC=$(set +o | grep pipefail) set -o xtrace set +o pipefail # Defaults # -------- # Set up default directories GITDIR["python-ironicclient"]=$DEST/python-ironicclient GITDIR["ironic-lib"]=$DEST/ironic-lib GITREPO["pyghmi"]=${PYGHMI_REPO:-${GIT_BASE}/x/pyghmi} GITBRANCH["pyghmi"]=${PYGHMI_BRANCH:-master} GITDIR["pyghmi"]=$DEST/pyghmi GITREPO["virtualbmc"]=${VIRTUALBMC_REPO:-${GIT_BASE}/openstack/virtualbmc.git} GITBRANCH["virtualbmc"]=${VIRTUALBMC_BRANCH:-master} GITDIR["virtualbmc"]=$DEST/virtualbmc GITREPO["virtualpdu"]=${VIRTUALPDU_REPO:-${GIT_BASE}/openstack/virtualpdu.git} GITBRANCH["virtualpdu"]=${VIRTUALPDU_BRANCH:-master} GITDIR["virtualpdu"]=$DEST/virtualpdu GITREPO["sushy"]=${SUSHY_REPO:-${GIT_BASE}/openstack/sushy.git} GITBRANCH["sushy"]=${SUSHY_BRANCH:-master} GITDIR["sushy"]=$DEST/sushy GITREPO["sushy-tools"]=${SUSHY_TOOLS_REPO:-${GIT_BASE}/openstack/sushy-tools.git} GITBRANCH["sushy-tools"]=${SUSHY_TOOLS_BRANCH:-master} GITDIR["sushy-tools"]=$DEST/sushy-tools IRONIC_DIR=$DEST/ironic IRONIC_DEVSTACK_DIR=$IRONIC_DIR/devstack IRONIC_DEVSTACK_FILES_DIR=$IRONIC_DEVSTACK_DIR/files # TODO(dtantsur): delete these three when we migrate image building to # ironic-python-agent-builder completely IRONIC_PYTHON_AGENT_REPO=${IRONIC_PYTHON_AGENT_REPO:-${GIT_BASE}/openstack/ironic-python-agent.git} IRONIC_PYTHON_AGENT_BRANCH=${IRONIC_PYTHON_AGENT_BRANCH:-$TARGET_BRANCH} IRONIC_PYTHON_AGENT_DIR=$DEST/ironic-python-agent IRONIC_PYTHON_AGENT_BUILDER_REPO=${IRONIC_PYTHON_AGENT_BUILDER_REPO:-${GIT_BASE}/openstack/ironic-python-agent-builder.git} 
IRONIC_PYTHON_AGENT_BUILDER_BRANCH=${IRONIC_PYTHON_AGENT_BUILDER_BRANCH:-$TARGET_BRANCH} IRONIC_PYTHON_AGENT_BUILDER_DIR=$DEST/ironic-python-agent-builder IRONIC_DIB_BINDEP_FILE=https://opendev.org/openstack/diskimage-builder/raw/branch/master/bindep.txt IRONIC_DATA_DIR=$DATA_DIR/ironic IRONIC_STATE_PATH=/var/lib/ironic IRONIC_AUTH_CACHE_DIR=${IRONIC_AUTH_CACHE_DIR:-/var/cache/ironic} IRONIC_CONF_DIR=${IRONIC_CONF_DIR:-/etc/ironic} IRONIC_CONF_FILE=$IRONIC_CONF_DIR/ironic.conf IRONIC_ROOTWRAP_CONF=$IRONIC_CONF_DIR/rootwrap.conf # Deploy Ironic API under uwsgi (NOT mod_wsgi) server. # Devstack aims to remove mod_wsgi support, so ironic shouldn't use it too. # If set to False that will fall back to use the eventlet server that # can happen on grenade runs. # The (confusing) name IRONIC_USE_MOD_WSGI is left for backward compatibility, # for example during grenade runs # TODO(pas-ha) remove IRONIC_USE_MOD_WSGI var after oldest supported # stable branch is stable/rocky IRONIC_USE_MOD_WSGI=$(trueorfalse $ENABLE_HTTPD_MOD_WSGI_SERVICES IRONIC_USE_MOD_WSGI) # If True, will deploy Ironic API under WSGI server, currently supported one # is uwsgi. # Defaults to the (now confusingly named) IRONIC_USE_MOD_WSGI for backward compat IRONIC_USE_WSGI=$(trueorfalse $IRONIC_USE_MOD_WSGI IRONIC_USE_WSGI) # Whether DevStack will be setup for bare metal or VMs IRONIC_IS_HARDWARE=$(trueorfalse False IRONIC_IS_HARDWARE) # Deploy callback timeout can be changed from its default (1800), if required. IRONIC_CALLBACK_TIMEOUT=${IRONIC_CALLBACK_TIMEOUT:-} # Timeout before retrying PXE boot. Set low to help the CI. 
if [[ "$IRONIC_IS_HARDWARE" == False ]]; then IRONIC_PXE_BOOT_RETRY_TIMEOUT=${IRONIC_PXE_BOOT_RETRY_TIMEOUT:-600} else IRONIC_PXE_BOOT_RETRY_TIMEOUT=${IRONIC_PXE_BOOT_RETRY_TIMEOUT:-} fi # Ping timeout after the node becomes active IRONIC_PING_TIMEOUT=${IRONIC_PING_TIMEOUT:-} # Deploy to hardware platform IRONIC_HW_NODE_CPU=${IRONIC_HW_NODE_CPU:-1} IRONIC_HW_NODE_RAM=${IRONIC_HW_NODE_RAM:-512} IRONIC_HW_NODE_DISK=${IRONIC_HW_NODE_DISK:-10} IRONIC_HW_EPHEMERAL_DISK=${IRONIC_HW_EPHEMERAL_DISK:-0} IRONIC_HW_ARCH=${IRONIC_HW_ARCH:-x86_64} # The file is composed of multiple lines, each line includes fields # separated by white space, in the format: # # [] # # For example: # # 192.168.110.107 00:1e:67:57:50:4c root otc123 # # Supported IRONIC_DEPLOY_DRIVERs: # ipmi: # # # idrac: # # # irmc: # # IRONIC_HWINFO_FILE=${IRONIC_HWINFO_FILE:-$IRONIC_DATA_DIR/hardware_info} # Set up defaults for functional / integration testing IRONIC_NODE_UUID=${IRONIC_NODE_UUID:-`uuidgen`} IRONIC_SCRIPTS_DIR=${IRONIC_SCRIPTS_DIR:-$IRONIC_DEVSTACK_DIR/tools/ironic/scripts} IRONIC_TEMPLATES_DIR=${IRONIC_TEMPLATES_DIR:-$IRONIC_DEVSTACK_DIR/tools/ironic/templates} IRONIC_BAREMETAL_BASIC_OPS=$(trueorfalse False IRONIC_BAREMETAL_BASIC_OPS) IRONIC_TFTPBOOT_DIR=${IRONIC_TFTPBOOT_DIR:-$IRONIC_DATA_DIR/tftpboot} IRONIC_TFTPSERVER_IP=${IRONIC_TFTPSERVER_IP:-$HOST_IP} IRONIC_TFTP_BLOCKSIZE=${IRONIC_TFTP_BLOCKSIZE:-$((PUBLIC_BRIDGE_MTU-50))} IRONIC_VM_COUNT=${IRONIC_VM_COUNT:-1} IRONIC_VM_SPECS_CPU=${IRONIC_VM_SPECS_CPU:-1} IRONIC_VM_SPECS_RAM=${IRONIC_VM_SPECS_RAM:-1280} IRONIC_VM_SPECS_CPU_ARCH=${IRONIC_VM_SPECS_CPU_ARCH:-'x86_64'} IRONIC_VM_SPECS_DISK=${IRONIC_VM_SPECS_DISK:-10} IRONIC_VM_SPECS_DISK_FORMAT=${IRONIC_VM_SPECS_DISK_FORMAT:-qcow2} IRONIC_VM_EPHEMERAL_DISK=${IRONIC_VM_EPHEMERAL_DISK:-0} IRONIC_VM_EMULATOR=${IRONIC_VM_EMULATOR:-'/usr/bin/qemu-system-x86_64'} IRONIC_VM_ENGINE=${IRONIC_VM_ENGINE:-qemu} IRONIC_VM_NETWORK_BRIDGE=${IRONIC_VM_NETWORK_BRIDGE:-brbm} 
IRONIC_VM_INTERFACE_COUNT=${IRONIC_VM_INTERFACE_COUNT:-2} IRONIC_VM_VOLUME_COUNT=${IRONIC_VM_VOLUME_COUNT:-1} IRONIC_VM_MACS_CSV_FILE=${IRONIC_VM_MACS_CSV_FILE:-$IRONIC_DATA_DIR/ironic_macs.csv} IRONIC_CLEAN_NET_NAME=${IRONIC_CLEAN_NET_NAME:-${IRONIC_PROVISION_NETWORK_NAME:-${PRIVATE_NETWORK_NAME}}} IRONIC_RESCUE_NET_NAME=${IRONIC_RESCUE_NET_NAME:-${IRONIC_CLEAN_NET_NAME}} IRONIC_EXTRA_PXE_PARAMS=${IRONIC_EXTRA_PXE_PARAMS:-} IRONIC_TTY_DEV=${IRONIC_TTY_DEV:-ttyS0,115200} IRONIC_TEMPEST_BUILD_TIMEOUT=${IRONIC_TEMPEST_BUILD_TIMEOUT:-${BUILD_TIMEOUT:-}} if [[ -n "$BUILD_TIMEOUT" ]]; then echo "WARNING: BUILD_TIMEOUT variable is renamed to IRONIC_TEMPEST_BUILD_TIMEOUT and will be deprecated in Pike." fi IRONIC_DEFAULT_API_VERSION=${IRONIC_DEFAULT_API_VERSION:-} IRONIC_CMD="openstack baremetal" if [[ -n "$IRONIC_DEFAULT_API_VERSION" ]]; then IRONIC_CMD="$IRONIC_CMD --os-baremetal-api-version $IRONIC_DEFAULT_API_VERSION" fi IRONIC_ENABLED_HARDWARE_TYPES=${IRONIC_ENABLED_HARDWARE_TYPES:-"ipmi,fake-hardware"} # list of all available driver interfaces types IRONIC_DRIVER_INTERFACE_TYPES="bios boot power management deploy console inspect raid rescue storage network vendor" IRONIC_ENABLED_BIOS_INTERFACES=${IRONIC_ENABLED_BIOS_INTERFACES:-"fake,no-bios"} IRONIC_ENABLED_BOOT_INTERFACES=${IRONIC_ENABLED_BOOT_INTERFACES:-"fake,ipxe"} IRONIC_ENABLED_CONSOLE_INTERFACES=${IRONIC_ENABLED_CONSOLE_INTERFACES:-"fake,no-console"} IRONIC_ENABLED_DEPLOY_INTERFACES=${IRONIC_ENABLED_DEPLOY_INTERFACES:-"fake,iscsi,direct"} IRONIC_ENABLED_INSPECT_INTERFACES=${IRONIC_ENABLED_INSPECT_INTERFACES:-"fake,no-inspect"} IRONIC_ENABLED_MANAGEMENT_INTERFACES=${IRONIC_ENABLED_MANAGEMENT_INTERFACES:-"fake,ipmitool,noop"} IRONIC_ENABLED_NETWORK_INTERFACES=${IRONIC_ENABLED_NETWORK_INTERFACES:-"flat,noop"} IRONIC_ENABLED_POWER_INTERFACES=${IRONIC_ENABLED_POWER_INTERFACES:-"fake,ipmitool"} IRONIC_ENABLED_RAID_INTERFACES=${IRONIC_ENABLED_RAID_INTERFACES:-"fake,agent,no-raid"} 
IRONIC_ENABLED_RESCUE_INTERFACES=${IRONIC_ENABLED_RESCUE_INTERFACES:-"fake,no-rescue"}
IRONIC_ENABLED_STORAGE_INTERFACES=${IRONIC_ENABLED_STORAGE_INTERFACES:-"fake,cinder,noop"}
IRONIC_ENABLED_VENDOR_INTERFACES=${IRONIC_ENABLED_VENDOR_INTERFACES:-"fake,ipmitool,no-vendor"}

# For usage with hardware types
IRONIC_DEFAULT_BIOS_INTERFACE=${IRONIC_DEFAULT_BIOS_INTERFACE:-}
IRONIC_DEFAULT_BOOT_INTERFACE=${IRONIC_DEFAULT_BOOT_INTERFACE:-}
IRONIC_DEFAULT_CONSOLE_INTERFACE=${IRONIC_DEFAULT_CONSOLE_INTERFACE:-}
IRONIC_DEFAULT_DEPLOY_INTERFACE=${IRONIC_DEFAULT_DEPLOY_INTERFACE:-}
IRONIC_DEFAULT_INSPECT_INTERFACE=${IRONIC_DEFAULT_INSPECT_INTERFACE:-}
IRONIC_DEFAULT_MANAGEMENT_INTERFACE=${IRONIC_DEFAULT_MANAGEMENT_INTERFACE:-}
IRONIC_DEFAULT_NETWORK_INTERFACE=${IRONIC_DEFAULT_NETWORK_INTERFACE:-}
IRONIC_DEFAULT_POWER_INTERFACE=${IRONIC_DEFAULT_POWER_INTERFACE:-}
IRONIC_DEFAULT_RAID_INTERFACE=${IRONIC_DEFAULT_RAID_INTERFACE:-}
IRONIC_DEFAULT_RESCUE_INTERFACE=${IRONIC_DEFAULT_RESCUE_INTERFACE:-}
IRONIC_DEFAULT_STORAGE_INTERFACE=${IRONIC_DEFAULT_STORAGE_INTERFACE:-}
IRONIC_DEFAULT_VENDOR_INTERFACE=${IRONIC_DEFAULT_VENDOR_INTERFACE:-}

# If IRONIC_VM_ENGINE is explicitly set to "auto" or "kvm",
# devstack will attempt to use hardware virtualization
# (aka nested kvm). We do not enable it in the infra gates
# because it is not consistently supported/working across
# all gate infrastructure providers.
if [[ "$IRONIC_VM_ENGINE" == "auto" ]]; then
    sudo modprobe kvm || true
    if [ ! -e /dev/kvm ]; then
        echo "WARNING: Switching to QEMU"
        IRONIC_VM_ENGINE=qemu
        if [[ -z "$IRONIC_VM_EMULATOR" ]]; then
            IRONIC_VM_EMULATOR='/usr/bin/qemu-system-x86_64'
        fi
    else
        IRONIC_VM_ENGINE=kvm
    fi
fi

if [[ "$IRONIC_VM_ENGINE" == "kvm" ]]; then
    # Set this to empty, so configure-vm.py can autodetect location
    # of KVM binary
    IRONIC_VM_EMULATOR=""
fi

# By default, baremetal VMs will console output to file.
IRONIC_VM_LOG_CONSOLE=$(trueorfalse True IRONIC_VM_LOG_CONSOLE)
IRONIC_VM_LOG_DIR=${IRONIC_VM_LOG_DIR:-$IRONIC_DATA_DIR/logs/}
IRONIC_VM_LOG_ROTATE=$(trueorfalse True IRONIC_VM_LOG_ROTATE)

# Set resource_classes for nodes to use Nova's placement engine
IRONIC_DEFAULT_RESOURCE_CLASS=${IRONIC_DEFAULT_RESOURCE_CLASS:-baremetal}

# Set traits for nodes. Traits should be separated by whitespace.
IRONIC_DEFAULT_TRAITS=${IRONIC_DEFAULT_TRAITS-CUSTOM_GOLD}

# Whether to build the ramdisk or download a prebuilt one.
IRONIC_BUILD_DEPLOY_RAMDISK=$(trueorfalse True IRONIC_BUILD_DEPLOY_RAMDISK)

# Ironic IPA ramdisk type; supported types are 'tinyipa' and 'dib'.
IRONIC_SUPPORTED_RAMDISK_TYPES_RE="^(tinyipa|dib)$"
IRONIC_RAMDISK_TYPE=${IRONIC_RAMDISK_TYPE:-dib}

# Confirm we have a supported ramdisk type or fail early.
if [[ ! "$IRONIC_RAMDISK_TYPE" =~ $IRONIC_SUPPORTED_RAMDISK_TYPES_RE ]]; then
    die $LINENO "Unrecognized IRONIC_RAMDISK_TYPE: $IRONIC_RAMDISK_TYPE. Expected 'tinyipa' or 'dib'"
fi

# If present, these files are used as deploy ramdisk/kernel.
# (The value must be an absolute path)
IRONIC_DEPLOY_RAMDISK=${IRONIC_DEPLOY_RAMDISK:-$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.initramfs}
IRONIC_DEPLOY_KERNEL=${IRONIC_DEPLOY_KERNEL:-$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.kernel}
IRONIC_DEPLOY_ISO=${IRONIC_DEPLOY_ISO:-$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.iso}

# If present, this file is used to deploy/boot nodes over virtual media
# (The value must be an absolute path)
IRONIC_EFIBOOT=${IRONIC_EFIBOOT:-$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.efiboot}

# NOTE(jroll) this needs to be updated when stable branches are cut
IPA_DOWNLOAD_BRANCH=${IPA_DOWNLOAD_BRANCH:-master}
IPA_DOWNLOAD_BRANCH=$(echo $IPA_DOWNLOAD_BRANCH | tr / -)

# OS to use for DIB-based images
IRONIC_DIB_RAMDISK_OS=${IRONIC_DIB_RAMDISK_OS:-centos8}
IRONIC_DIB_RAMDISK_RELEASE=${IRONIC_DIB_RAMDISK_RELEASE:-}

# Configure URLs required to download ramdisk if we're not building it, and
# IRONIC_DEPLOY_RAMDISK/KERNEL or the RAMDISK/KERNEL_URLs have not been
# preconfigured.
if [[ "$IRONIC_BUILD_DEPLOY_RAMDISK" == "False" && \
    ! (-e "$IRONIC_DEPLOY_RAMDISK" && -e "$IRONIC_DEPLOY_KERNEL") && \
    (-z "$IRONIC_AGENT_KERNEL_URL" || -z "$IRONIC_AGENT_RAMDISK_URL") ]]; then
    case $IRONIC_RAMDISK_TYPE in
        tinyipa)
            IRONIC_AGENT_KERNEL_FILE=tinyipa-${IPA_DOWNLOAD_BRANCH}.vmlinuz
            IRONIC_AGENT_RAMDISK_FILE=tinyipa-${IPA_DOWNLOAD_BRANCH}.gz
            ;;
        dib)
            IRONIC_AGENT_KERNEL_FILE=ipa-${IRONIC_DIB_RAMDISK_OS}-${IPA_DOWNLOAD_BRANCH}.kernel
            IRONIC_AGENT_RAMDISK_FILE=ipa-${IRONIC_DIB_RAMDISK_OS}-${IPA_DOWNLOAD_BRANCH}.initramfs
            ;;
    esac
    IRONIC_AGENT_KERNEL_URL=https://tarballs.openstack.org/ironic-python-agent/${IRONIC_RAMDISK_TYPE}/files/${IRONIC_AGENT_KERNEL_FILE}
    IRONIC_AGENT_RAMDISK_URL=https://tarballs.openstack.org/ironic-python-agent/${IRONIC_RAMDISK_TYPE}/files/${IRONIC_AGENT_RAMDISK_FILE}
fi

# This refers to the options for disk-image-create and the platform on which
# to build the dib based ironic-python-agent ramdisk.
IRONIC_DIB_RAMDISK_OPTIONS=${IRONIC_DIB_RAMDISK_OPTIONS:-}
if [[ -z "$IRONIC_DIB_RAMDISK_OPTIONS" ]]; then
    if [[ "$IRONIC_DIB_RAMDISK_OS" == "centos8" ]]; then
        # Adapt for DIB naming change
        IRONIC_DIB_RAMDISK_OS=centos-minimal
        IRONIC_DIB_RAMDISK_RELEASE=8
    fi
    IRONIC_DIB_RAMDISK_OPTIONS="$IRONIC_DIB_RAMDISK_OS"
fi

# DHCP timeout for the dhcp-all-interfaces element.
IRONIC_DIB_DHCP_TIMEOUT=${IRONIC_DIB_DHCP_TIMEOUT:-60}

# Some drivers in Ironic require the deploy ramdisk in bootable ISO format.
# Set this variable to "true" to build an ISO for the deploy ramdisk and
# upload it to Glance.
IRONIC_DEPLOY_ISO_REQUIRED=$(trueorfalse False IRONIC_DEPLOY_ISO_REQUIRED)
if [[ "$IRONIC_DEPLOY_ISO_REQUIRED" = "True" \
    && "$IRONIC_BUILD_DEPLOY_RAMDISK" = "False" \
    && ! -e "$IRONIC_DEPLOY_ISO" ]]; then
    die "Prebuilt ISOs are not available, provide an ISO via IRONIC_DEPLOY_ISO \
or set IRONIC_BUILD_DEPLOY_RAMDISK=True to use ISOs"
fi

# Which deploy driver to use - valid choices right now
# are ``ipmi``, ``snmp`` and ``redfish``.
#
# Additional valid choices if IRONIC_IS_HARDWARE == true are:
# ``idrac`` and ``irmc``.
IRONIC_DEPLOY_DRIVER=${IRONIC_DEPLOY_DRIVER:-ipmi}

# Fail early if the requested deploy driver is not among the enabled hardware types.
if [[ -z "$(echo ${IRONIC_ENABLED_HARDWARE_TYPES} | grep -w ${IRONIC_DEPLOY_DRIVER})" ]]; then
    die "The deploy driver $IRONIC_DEPLOY_DRIVER is not in the list of enabled \
hardware types $IRONIC_ENABLED_HARDWARE_TYPES"
fi

# Support entry points installation of console scripts
IRONIC_BIN_DIR=$(get_python_exec_prefix)
IRONIC_UWSGI_CONF=$IRONIC_CONF_DIR/ironic-uwsgi.ini
IRONIC_UWSGI=$IRONIC_BIN_DIR/ironic-api-wsgi

# Ironic connection info. Note the port must be specified.
if is_service_enabled tls-proxy; then
    IRONIC_SERVICE_PROTOCOL=https
fi
IRONIC_SERVICE_PROTOCOL=${IRONIC_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
IRONIC_SERVICE_PORT=${IRONIC_SERVICE_PORT:-6385}
IRONIC_SERVICE_PORT_INT=${IRONIC_SERVICE_PORT_INT:-16385}

# If the ironic api is running under Apache or uWSGI, we use the path rather than the port
if [[ "$IRONIC_USE_WSGI" == "True" ]]; then
    IRONIC_HOSTPORT=${IRONIC_HOSTPORT:-$SERVICE_HOST/baremetal}
else
    IRONIC_HOSTPORT=${IRONIC_HOSTPORT:-$SERVICE_HOST:$IRONIC_SERVICE_PORT}
fi

# Enable iPXE
IRONIC_IPXE_ENABLED=$(trueorfalse True IRONIC_IPXE_ENABLED)

# Options below are only applied when IRONIC_IPXE_ENABLED is True
IRONIC_IPXE_USE_SWIFT=$(trueorfalse False IRONIC_IPXE_USE_SWIFT)
IRONIC_HTTP_DIR=${IRONIC_HTTP_DIR:-$IRONIC_DATA_DIR/httpboot}
IRONIC_HTTP_PORT=${IRONIC_HTTP_PORT:-3928}

# Allow using JSON RPC instead of oslo.messaging
IRONIC_RPC_TRANSPORT=${IRONIC_RPC_TRANSPORT:-oslo}
IRONIC_JSON_RPC_PORT=${IRONIC_JSON_RPC_PORT:-8089}

# The first port in the range to bind the Virtual BMCs. The number of
# ports that will be used depends on the $IRONIC_VM_COUNT variable, e.g. if
# $IRONIC_VM_COUNT=3 the ports 6230, 6231 and 6232 will be used for the
# Virtual BMCs, one for each VM.
IRONIC_VBMC_PORT_RANGE_START=${IRONIC_VBMC_PORT_RANGE_START:-6230}
IRONIC_VBMC_CONFIG_FILE=${IRONIC_VBMC_CONFIG_FILE:-$IRONIC_CONF_DIR/virtualbmc/virtualbmc.conf}
IRONIC_VBMC_LOGFILE=${IRONIC_VBMC_LOGFILE:-$IRONIC_VM_LOG_DIR/virtualbmc.log}
IRONIC_VBMC_SYSTEMD_SERVICE=devstack@virtualbmc.service

# Virtual PDU configs
IRONIC_VPDU_CONFIG_FILE=${IRONIC_VPDU_CONFIG_FILE:-$IRONIC_CONF_DIR/virtualpdu/virtualpdu.conf}
IRONIC_VPDU_PORT_RANGE_START=${IRONIC_VPDU_PORT_RANGE_START:-1}
IRONIC_VPDU_LISTEN_PORT=${IRONIC_VPDU_LISTEN_PORT:-1161}
IRONIC_VPDU_COMMUNITY=${IRONIC_VPDU_COMMUNITY:-private}
IRONIC_VPDU_SNMPDRIVER=${IRONIC_VPDU_SNMPDRIVER:-apc_rackpdu}
IRONIC_VPDU_SYSTEMD_SERVICE=devstack@virtualpdu.service

# Redfish configs
IRONIC_REDFISH_EMULATOR_PORT=${IRONIC_REDFISH_EMULATOR_PORT:-9132}
IRONIC_REDFISH_EMULATOR_SYSTEMD_SERVICE="devstack@redfish-emulator.service"
IRONIC_REDFISH_EMULATOR_CONFIG=${IRONIC_REDFISH_EMULATOR_CONFIG:-$IRONIC_CONF_DIR/redfish/emulator.conf}

# To explicitly enable configuration of Glance with Swift
# (which is required by some vendor drivers), set this
# variable to true.
IRONIC_CONFIGURE_GLANCE_WITH_SWIFT=$(trueorfalse False IRONIC_CONFIGURE_GLANCE_WITH_SWIFT)

# The path to the libvirt hooks directory, used if IRONIC_VM_LOG_ROTATE is True
IRONIC_LIBVIRT_HOOKS_PATH=${IRONIC_LIBVIRT_HOOKS_PATH:-/etc/libvirt/hooks/}

LIBVIRT_STORAGE_POOL=${LIBVIRT_STORAGE_POOL:-"default"}
LIBVIRT_STORAGE_POOL_PATH=${LIBVIRT_STORAGE_POOL_PATH:-/var/lib/libvirt/images}

# The authentication strategy used by ironic-api. Valid values are:
# keystone and noauth.
IRONIC_AUTH_STRATEGY=${IRONIC_AUTH_STRATEGY:-keystone}

# By default, the terminal SSL certificate is disabled.
IRONIC_TERMINAL_SSL=$(trueorfalse False IRONIC_TERMINAL_SSL)
IRONIC_TERMINAL_CERT_DIR=${IRONIC_TERMINAL_CERT_DIR:-$IRONIC_DATA_DIR/terminal_cert/}

# This flag is used to allow adding Link-Local-Connection info
# to the ironic port-create command.
# LLC info is obtained from
# IRONIC_{VM,HW}_NODES_FILE
IRONIC_USE_LINK_LOCAL=$(trueorfalse False IRONIC_USE_LINK_LOCAL)

# Allow selecting the dhcp provider
IRONIC_DHCP_PROVIDER=${IRONIC_DHCP_PROVIDER:-neutron}

# This is the network interface to use for a node
IRONIC_NETWORK_INTERFACE=${IRONIC_NETWORK_INTERFACE:-}

# Ironic provision network name. If this value is set it means we are using
# multi-tenant networking. If not set, then we are not using multi-tenant
# networking and are therefore using a 'flat' network.
IRONIC_PROVISION_NETWORK_NAME=${IRONIC_PROVISION_NETWORK_NAME:-}

# Provision network provider type. Can be flat or vlan.
# This is only used if IRONIC_PROVISION_NETWORK_NAME has been set.
IRONIC_PROVISION_PROVIDER_NETWORK_TYPE=${IRONIC_PROVISION_PROVIDER_NETWORK_TYPE:-'vlan'}

# If IRONIC_PROVISION_PROVIDER_NETWORK_TYPE is vlan, a VLAN ID may be
# specified here. If it is not set, a vlan will be allocated dynamically.
# This is only used if IRONIC_PROVISION_NETWORK_NAME has been set.
IRONIC_PROVISION_SEGMENTATION_ID=${IRONIC_PROVISION_SEGMENTATION_ID:-}

# Allocation network pool for the provision network
# Example: IRONIC_PROVISION_ALLOCATION_POOL=start=10.0.5.10,end=10.0.5.100
# This is only used if IRONIC_PROVISION_NETWORK_NAME has been set.
IRONIC_PROVISION_ALLOCATION_POOL=${IRONIC_PROVISION_ALLOCATION_POOL:-'start=10.0.5.10,end=10.0.5.100'}

# Ironic provision subnet name.
# This is only used if IRONIC_PROVISION_NETWORK_NAME has been set.
IRONIC_PROVISION_PROVIDER_SUBNET_NAME=${IRONIC_PROVISION_PROVIDER_SUBNET_NAME:-${IRONIC_PROVISION_NETWORK_NAME}-subnet}

# When enabled this will set the physical_network attribute for ironic ports,
# and a subnet-to-segment association on the provisioning network will be configured.
# NOTE: The neutron segments service_plugin must be loaded for this.
IRONIC_USE_NEUTRON_SEGMENTS=$(trueorfalse False IRONIC_USE_NEUTRON_SEGMENTS)

# This is the storage interface to use for a node.
# Only 'cinder' can be set for testing boot from volume.
IRONIC_STORAGE_INTERFACE=${IRONIC_STORAGE_INTERFACE:-}

# In the multinode case, all ironic-conductors should have an IP from the
# provisioning network. IRONIC_PROVISION_SUBNET_GATEWAY is configured on the
# primary node.
# Ironic provision subnet gateway.
IRONIC_PROVISION_SUBNET_GATEWAY=${IRONIC_PROVISION_SUBNET_GATEWAY:-'10.0.5.1'}
IRONIC_PROVISION_SUBNET_SUBNODE_IP=${IRONIC_PROVISION_SUBNET_SUBNODE_IP:-'10.0.5.2'}

# Ironic provision subnet prefix
# Example: IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24
IRONIC_PROVISION_SUBNET_PREFIX=${IRONIC_PROVISION_SUBNET_PREFIX:-'10.0.5.0/24'}

if [[ "$HOST_TOPOLOGY_ROLE" == "primary" ]]; then
    IRONIC_TFTPSERVER_IP=$IRONIC_PROVISION_SUBNET_GATEWAY
    IRONIC_HTTP_SERVER=$IRONIC_PROVISION_SUBNET_GATEWAY
fi
if [[ "$HOST_TOPOLOGY_ROLE" == "subnode" ]]; then
    IRONIC_TFTPSERVER_IP=$IRONIC_PROVISION_SUBNET_SUBNODE_IP
    IRONIC_HTTP_SERVER=$IRONIC_PROVISION_SUBNET_SUBNODE_IP
fi
IRONIC_HTTP_SERVER=${IRONIC_HTTP_SERVER:-$IRONIC_TFTPSERVER_IP}

# Port that must be permitted for iSCSI connections to be
# established from the tenant network.
ISCSI_SERVICE_PORT=${ISCSI_SERVICE_PORT:-3260}

# Retrieving logs from the deploy ramdisk
#
# IRONIC_DEPLOY_LOGS_COLLECT possible values are:
# * always: Collect the ramdisk logs from the deployment on success or
#           failure (Default in DevStack for debugging purposes).
# * on_failure: Collect the ramdisk logs upon a deployment failure
#               (Default in Ironic).
# * never: Never collect the ramdisk logs.
IRONIC_DEPLOY_LOGS_COLLECT=${IRONIC_DEPLOY_LOGS_COLLECT:-always}

# IRONIC_DEPLOY_LOGS_STORAGE_BACKEND possible values are:
# * local: To store the logs in the local filesystem (Default in Ironic and DevStack).
# * swift: To store the logs in Swift.
IRONIC_DEPLOY_LOGS_STORAGE_BACKEND=${IRONIC_DEPLOY_LOGS_STORAGE_BACKEND:-local}

# The path to the directory where Ironic should put the logs when
# IRONIC_DEPLOY_LOGS_STORAGE_BACKEND is set to "local"
IRONIC_DEPLOY_LOGS_LOCAL_PATH=${IRONIC_DEPLOY_LOGS_LOCAL_PATH:-$IRONIC_VM_LOG_DIR/deploy_logs}

# Fast track option
IRONIC_DEPLOY_FAST_TRACK=${IRONIC_DEPLOY_FAST_TRACK:-False}

# Agent Token requirement
IRONIC_REQUIRE_AGENT_TOKEN=${IRONIC_REQUIRE_AGENT_TOKEN:-True}

# Define the baremetal min_microversion in the tempest config. If no value is
# set here, the default is picked from tempest.
TEMPEST_BAREMETAL_MIN_MICROVERSION=${TEMPEST_BAREMETAL_MIN_MICROVERSION:-}

# Define the baremetal max_microversion in the tempest config. If no value is
# set here, the default is picked from tempest.
TEMPEST_BAREMETAL_MAX_MICROVERSION=${TEMPEST_BAREMETAL_MAX_MICROVERSION:-}

# get_pxe_boot_file() - Get the PXE/iPXE boot file path
function get_pxe_boot_file {
    local pxe_boot_file
    if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
        if is_ubuntu; then
            pxe_boot_file=/usr/lib/ipxe/undionly.kpxe
        elif is_fedora || is_suse; then
            pxe_boot_file=/usr/share/ipxe/undionly.kpxe
        fi
    else
        # Standard PXE
        if is_ubuntu; then
            # Ubuntu Xenial (16.04) places the file under /usr/lib/PXELINUX
            pxe_paths="/usr/lib/syslinux/pxelinux.0 /usr/lib/PXELINUX/pxelinux.0"
            for p in $pxe_paths; do
                if [[ -f $p ]]; then
                    pxe_boot_file=$p
                fi
            done
        elif is_fedora || is_suse; then
            pxe_boot_file=/usr/share/syslinux/pxelinux.0
        fi
    fi
    echo $pxe_boot_file
}

# PXE boot image
IRONIC_PXE_BOOT_IMAGE=${IRONIC_PXE_BOOT_IMAGE:-$(get_pxe_boot_file)}
IRONIC_AUTOMATED_CLEAN_ENABLED=$(trueorfalse True IRONIC_AUTOMATED_CLEAN_ENABLED)

IRONIC_SECURE_BOOT=${IRONIC_SECURE_BOOT:-False}
IRONIC_UEFI_BOOT_LOADER=${IRONIC_UEFI_BOOT_LOADER:-grub2}
IRONIC_GRUB2_SHIM_FILE=${IRONIC_GRUB2_SHIM_FILE:-}
IRONIC_GRUB2_FILE=${IRONIC_GRUB2_FILE:-}
IRONIC_UEFI_FILES_DIR=${IRONIC_UEFI_FILES_DIR:-/var/lib/libvirt/images}
UEFI_LOADER_PATH=$IRONIC_UEFI_FILES_DIR/OVMF_CODE.fd
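The pxelinux.0 lookup in get_pxe_boot_file above walks a whitespace-separated list of candidate paths and keeps the last one that exists. The same loop in isolation (the paths are stand-ins; /bin/sh merely plays the role of an existing file):

```shell
# Keep the last existing path from a candidate list, mirroring the
# pxelinux.0 lookup in get_pxe_boot_file.
pxe_paths="/nonexistent/pxelinux.0 /bin/sh"
pxe_boot_file=""
for p in $pxe_paths; do
    if [ -f "$p" ]; then
        pxe_boot_file=$p
    fi
done
echo "$pxe_boot_file"    # prints /bin/sh
```

Because the loop keeps overwriting, a path later in the list wins when several candidates exist, which is why the Xenial location is listed last.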
UEFI_NVRAM_PATH=$IRONIC_UEFI_FILES_DIR/OVMF_VARS.fd

# Handle architecture specific package installs
if [[ $IRONIC_HW_ARCH == "x86_64" ]]; then
    install_package shim
    if is_ubuntu; then
        install_package grub-efi-amd64-signed
    elif is_fedora; then
        install_package grub2-efi
    fi
fi

# Sanity checks
if [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then
    if [[ "$IRONIC_IPXE_ENABLED" == "False" ]] && [[ "$IRONIC_UEFI_BOOT_LOADER" != "grub2" ]]; then
        die $LINENO "Boot mode UEFI is only supported with iPXE and grub2 bootloaders."
    fi
    if ! is_fedora && ! is_ubuntu; then
        die $LINENO "Boot mode UEFI only works in Ubuntu or Fedora for now."
    fi
    if is_arch "x86_64"; then
        if is_ubuntu; then
            install_package grub-efi
        elif is_fedora; then
            install_package grub2 grub2-efi
        fi
    fi
    if is_ubuntu && [[ -z $IRONIC_GRUB2_FILE ]]; then
        IRONIC_GRUB2_SHIM_FILE=/usr/lib/shim/shimx64.efi
        IRONIC_GRUB2_FILE=/usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed
    fi
    if [[ "$IRONIC_IPXE_ENABLED" == "False" ]]; then
        # NOTE(TheJulia): While we no longer directly copy the
        # IRONIC_GRUB2_FILE, we still check the existence as
        # without the bootloader package we would be unable to build
        # the netboot core image.
        if [[ -z $IRONIC_GRUB2_SHIM_FILE ]] || [[ -z $IRONIC_GRUB2_FILE ]] || [[ ! -f $IRONIC_GRUB2_SHIM_FILE ]] || [[ ! -f $IRONIC_GRUB2_FILE ]]; then
            die $LINENO "Grub2 Bootloader and Shim file missing."
        fi
    fi
fi

# TODO(dtantsur): change this when we change the default value.
IRONIC_DEFAULT_BOOT_OPTION=${IRONIC_DEFAULT_BOOT_OPTION:-local}
if [ $IRONIC_DEFAULT_BOOT_OPTION != "netboot" ] && [ $IRONIC_DEFAULT_BOOT_OPTION != "local" ]; then
    die $LINENO "Supported values for IRONIC_DEFAULT_BOOT_OPTION are 'netboot' and 'local' only."
fi

# TODO(pas-ha) find a way to (cross-)sign the custom CA bundle used by tls-proxy
# with the default iPXE cert - for reference see http://ipxe.org/crypto
if is_service_enabled tls-proxy && [[ "$IRONIC_IPXE_USE_SWIFT" == "True" ]]; then
    die $LINENO "Ironic in DevStack does not yet support booting iPXE from HTTPS URLs"
fi

# Timeout for the "manage" action. 2 minutes is more than enough.
IRONIC_MANAGE_TIMEOUT=${IRONIC_MANAGE_TIMEOUT:-120}

# Timeout for the "provide" action. This involves cleaning. Generally, 15 minutes
# should be enough, but real hardware may need more.
IRONIC_CLEANING_TIMEOUT=${IRONIC_CLEANING_TIMEOUT:-1200}
IRONIC_CLEANING_DELAY=10
IRONIC_CLEANING_ATTEMPTS=$(( $IRONIC_CLEANING_TIMEOUT / $IRONIC_CLEANING_DELAY ))

# Timeout for ironic-neutron-agent to report state before providing nodes.
# The agent reports every 60 seconds, 2 minutes should do.
IRONIC_NEUTRON_AGENT_REPORT_STATE_DELAY=10
IRONIC_NEUTRON_AGENT_REPORT_STATE_TIMEOUT=${IRONIC_NEUTRON_AGENT_REPORT_STATE_TIMEOUT:-120}
IRONIC_NEUTRON_AGENT_REPORT_STATE_ATTEMPTS=$(( $IRONIC_NEUTRON_AGENT_REPORT_STATE_TIMEOUT / IRONIC_NEUTRON_AGENT_REPORT_STATE_DELAY ))

# Username used by Ansible to access the ramdisk,
# to be set as the '[ansible]/default_username' option.
# If not set here (default), it will be set to 'tc' for the TinyIPA ramdisk;
# for other ramdisks it must be either provided here,
# or set manually per-node via the ironic API.
IRONIC_ANSIBLE_SSH_USER=${IRONIC_ANSIBLE_SSH_USER:-}

# Path to the private SSH key used by the ansible deploy interface
# that will be set as the '[ansible]/default_key_file' option in the config.
# The public key path is assumed to be ${IRONIC_ANSIBLE_SSH_KEY}.pub
# and will be used when rebuilding the image to include this public key
# in ~/.ssh/authorized_keys of the $IRONIC_ANSIBLE_SSH_USER in the ramdisk.
# Only the TinyIPA ramdisks are currently supported for such a rebuild.
# For TinyIPA ramdisks, if the specified file doesn't exist, it will
# be created and will contain a new RSA passwordless key.
# We assume that the directories in the path to this file exist and are
# writable.
# For other ramdisk types, make sure the corresponding public key is baked into
# the ramdisk to be used by DevStack and provide the path to the private key here,
# or set it manually per node via the ironic API.
# FIXME(pas-ha) auto-generated keys currently won't work for multi-node
# DevStack deployment, as we do not distribute this generated key to subnodes yet.
IRONIC_ANSIBLE_SSH_KEY=${IRONIC_ANSIBLE_SSH_KEY:-$IRONIC_DATA_DIR/ansible_ssh_key}

IRONIC_AGENT_IMAGE_DOWNLOAD_SOURCE=${IRONIC_AGENT_IMAGE_DOWNLOAD_SOURCE:-swift}

# Functions
# ---------

# UEFI related functions
function get_uefi_ipxe_boot_file {
    if is_ubuntu; then
        echo /usr/lib/ipxe/ipxe.efi
    elif is_fedora; then
        echo /usr/share/ipxe/ipxe-x86_64.efi
    fi
}

function get_uefi_loader {
    if is_ubuntu; then
        echo /usr/share/OVMF/OVMF_CODE.fd
    elif is_fedora; then
        echo /usr/share/edk2/ovmf/OVMF_CODE.fd
    fi
}

function get_uefi_nvram {
    if is_ubuntu; then
        echo /usr/share/OVMF/OVMF_VARS.fd
    elif is_fedora; then
        echo /usr/share/edk2/ovmf/OVMF_VARS.fd
    fi
}

# Misc
function restart_libvirt {
    local libvirt_service_name="libvirtd"
    if is_ubuntu && ! \
        type libvirtd; then
        libvirt_service_name="libvirt-bin"
    fi
    restart_service $libvirt_service_name
}

# Test if any Ironic services are enabled
# is_ironic_enabled
function is_ironic_enabled {
    [[ ,${ENABLED_SERVICES} =~ ,"ir-" ]] && return 0
    return 1
}

function is_deployed_by_agent {
    [[ -z "${IRONIC_DEPLOY_DRIVER%%agent*}" || "$IRONIC_DEFAULT_DEPLOY_INTERFACE" == "direct" ]] && return 0
    return 1
}

function is_deployed_by_ipmi {
    [[ "$IRONIC_DEPLOY_DRIVER" == ipmi ]] && return 0
    return 1
}

function is_deployed_by_ilo {
    [[ "${IRONIC_DEPLOY_DRIVER}" == ilo ]] && return 0
    return 1
}

function is_deployed_by_drac {
    [[ "${IRONIC_DEPLOY_DRIVER}" == idrac ]] && return 0
    return 1
}

function is_deployed_by_snmp {
    [[ "${IRONIC_DEPLOY_DRIVER}" == snmp ]] && return 0
    return 1
}

function is_deployed_by_redfish {
    [[ "$IRONIC_DEPLOY_DRIVER" == redfish ]] && return 0
    return 1
}

function is_deployed_by_irmc {
    [[ "$IRONIC_DEPLOY_DRIVER" == irmc ]] && return 0
    return 1
}

function is_deployed_by_xclarity {
    [[ "$IRONIC_DEPLOY_DRIVER" == xclarity ]] && return 0
    return 1
}

function is_drac_enabled {
    [[ -z "${IRONIC_ENABLED_HARDWARE_TYPES%%*idrac*}" ]] && return 0
    return 1
}

function is_ansible_deploy_enabled {
    [[ -z "${IRONIC_ENABLED_DEPLOY_INTERFACES%%*ansible*}" ]] && return 0
    return 1
}

function is_redfish_enabled {
    [[ -z "${IRONIC_ENABLED_HARDWARE_TYPES%%*redfish*}" ]] && return 0
    return 1
}

function is_ansible_with_tinyipa {
    # NOTE(pas-ha) we support rebuilding the ramdisk to include (generated) SSH keys
    # as needed for the ansible deploy interface only for TinyIPA ramdisks for now
    is_ansible_deploy_enabled && [[ "$IRONIC_RAMDISK_TYPE" == "tinyipa" ]] && return 0
    return 1
}

function is_glance_configuration_required {
    is_deployed_by_agent || is_ansible_deploy_enabled || [[ "$IRONIC_CONFIGURE_GLANCE_WITH_SWIFT" == "True" ]] && return 0
    return 1
}

function is_deploy_iso_required {
    [[ "$IRONIC_IS_HARDWARE" == "True" && "$IRONIC_DEPLOY_ISO_REQUIRED" == "True" ]] && return 0
    return 1
}

# Assert that
# the redfish hardware type is enabled in case we are using
# the redfish driver
if is_deployed_by_redfish && [[ "$IRONIC_ENABLED_HARDWARE_TYPES" != *"redfish"* ]]; then
    die $LINENO "Please make sure that the redfish hardware" \
        "type is enabled. Take a look at the " \
        "IRONIC_ENABLED_HARDWARE_TYPES configuration option" \
        "for DevStack"
fi

# Assert that for non-TinyIPA ramdisks and Ansible, the private SSH key file to use exists.
if is_ansible_deploy_enabled && [[ "$IRONIC_RAMDISK_TYPE" != "tinyipa" ]]; then
    if [[ ! -f $IRONIC_ANSIBLE_SSH_KEY ]]; then
        die $LINENO "Using non-TinyIPA ramdisks with the ansible deploy interface" \
            "requires setting IRONIC_ANSIBLE_SSH_KEY to an existing" \
            "private SSH key file to be used by Ansible."
    fi
fi

# Syslinux >= 5.00 pxelinux.0 binary is not "stand-alone" anymore,
# it depends on some c32 modules to work correctly.
# More info: http://www.syslinux.org/wiki/index.php/Library_modules
function setup_syslinux_modules {
    # Ignore it for iPXE, it doesn't depend on syslinux modules
    [[ "$IRONIC_IPXE_ENABLED" == "True" ]] && return 0

    # Ubuntu Xenial doesn't ship pxelinux.0 as part of syslinux anymore
    if is_ubuntu && [[ -d /usr/lib/PXELINUX/ ]]; then
        # TODO(lucasagomes): Figure out whether it's UEFI or BIOS once
        # we have UEFI support in DevStack
        cp -aR /usr/lib/syslinux/modules/bios/*.c32 $IRONIC_TFTPBOOT_DIR
    else
        cp -aR $(dirname $IRONIC_PXE_BOOT_IMAGE)/*.c32 $IRONIC_TFTPBOOT_DIR
    fi
}

function start_virtualbmc {
    start_service $IRONIC_VBMC_SYSTEMD_SERVICE
}

function stop_virtualbmc {
    stop_service $IRONIC_VBMC_SYSTEMD_SERVICE
}

function cleanup_virtualbmc {
    stop_virtualbmc
    disable_service $IRONIC_VBMC_SYSTEMD_SERVICE
    local unitfile="$SYSTEMD_DIR/$IRONIC_VBMC_SYSTEMD_SERVICE"
    sudo rm -f $unitfile
    $SYSTEMCTL daemon-reload
}

function install_virtualbmc {
    # Install pyghmi from source, if requested, otherwise it will be
    # downloaded as part of the virtualbmc installation
    if use_library_from_git "pyghmi"; then
        git_clone_by_name "pyghmi"
        setup_dev_lib "pyghmi"
    fi
    if use_library_from_git "virtualbmc"; then
        git_clone_by_name "virtualbmc"
        setup_dev_lib "virtualbmc"
    else
        pip_install_gr "virtualbmc"
    fi
    local cmd
    cmd=$(which vbmcd)
    cmd+=" --foreground"
    write_user_unit_file $IRONIC_VBMC_SYSTEMD_SERVICE "$cmd" "" "$STACK_USER"
    local unitfile="$SYSTEMD_DIR/$IRONIC_VBMC_SYSTEMD_SERVICE"
    iniset -sudo $unitfile "Service" "Environment" "VIRTUALBMC_CONFIG=$IRONIC_VBMC_CONFIG_FILE"
    enable_service $IRONIC_VBMC_SYSTEMD_SERVICE
}

function configure_virtualbmc {
    if [[ ! -d $(dirname $IRONIC_VBMC_CONFIG_FILE) ]]; then
        mkdir -p $(dirname $IRONIC_VBMC_CONFIG_FILE)
    fi
    iniset -sudo $IRONIC_VBMC_CONFIG_FILE log debug True
}

function start_virtualpdu {
    start_service $IRONIC_VPDU_SYSTEMD_SERVICE
}

function stop_virtualpdu {
    stop_service $IRONIC_VPDU_SYSTEMD_SERVICE
}

function cleanup_virtualpdu {
    stop_virtualpdu
    disable_service $IRONIC_VPDU_SYSTEMD_SERVICE
    local unitfile="$SYSTEMD_DIR/$IRONIC_VPDU_SYSTEMD_SERVICE"
    sudo rm -f $unitfile
    $SYSTEMCTL daemon-reload
}

function install_virtualpdu {
    if use_library_from_git "virtualpdu"; then
        git_clone_by_name "virtualpdu"
        setup_dev_lib "virtualpdu"
    else
        pip_install "virtualpdu"
    fi
    local cmd
    cmd=$(which virtualpdu)
    cmd+=" $IRONIC_VPDU_CONFIG_FILE"
    write_user_unit_file $IRONIC_VPDU_SYSTEMD_SERVICE "$cmd" "" "$STACK_USER"
    enable_service $IRONIC_VPDU_SYSTEMD_SERVICE
}

function configure_virtualpdu {
    mkdir -p $(dirname $IRONIC_VPDU_CONFIG_FILE)
    iniset -sudo $IRONIC_VPDU_CONFIG_FILE global debug True
    iniset -sudo $IRONIC_VPDU_CONFIG_FILE global libvirt_uri "qemu:///system"
    iniset -sudo $IRONIC_VPDU_CONFIG_FILE PDU listen_address ${HOST_IP}
    iniset -sudo $IRONIC_VPDU_CONFIG_FILE PDU listen_port ${IRONIC_VPDU_LISTEN_PORT}
    iniset -sudo $IRONIC_VPDU_CONFIG_FILE PDU community ${IRONIC_VPDU_COMMUNITY}
    iniset -sudo $IRONIC_VPDU_CONFIG_FILE PDU ports $(_generate_pdu_ports)
    iniset -sudo $IRONIC_VPDU_CONFIG_FILE PDU outlet_default_state "OFF"
}

# _generate_pdu_ports() - Generates a comma-separated list of
# port:node_name entries.
function _generate_pdu_ports {
    pdu_port_number=${IRONIC_VPDU_PORT_RANGE_START}
    port_config=()
    for vm_name in $(_ironic_bm_vm_names); do
        port_config+=("${pdu_port_number}:${vm_name}")
        pdu_port_number=$(( pdu_port_number + 1 ))
    done
    echo ${port_config[*]} | tr ' ' ','
}

function start_redfish {
    start_service $IRONIC_REDFISH_EMULATOR_SYSTEMD_SERVICE
}

function stop_redfish {
    stop_service $IRONIC_REDFISH_EMULATOR_SYSTEMD_SERVICE
}

function cleanup_redfish {
    stop_redfish
    rm -f $IRONIC_REDFISH_EMULATOR_CONFIG
    disable_service $IRONIC_REDFISH_EMULATOR_SYSTEMD_SERVICE
    local unitfile="$SYSTEMD_DIR/$IRONIC_REDFISH_EMULATOR_SYSTEMD_SERVICE"
    sudo rm -f $unitfile
    $SYSTEMCTL daemon-reload
}

function install_redfish {
    # TODO(lucasagomes): Use Apache WSGI instead of gunicorn
    gunicorn=gunicorn
    if is_ubuntu; then
        if python3_enabled; then
            gunicorn=${gunicorn}3
        fi
        install_package $gunicorn
    else
        pip_install_gr "gunicorn"
    fi
    if use_library_from_git "sushy-tools"; then
        git_clone_by_name "sushy-tools"
        setup_dev_lib "sushy-tools"
    else
        pip_install "sushy-tools"
    fi
    local cmd
    cmd=$(which $gunicorn)
    cmd+=" sushy_tools.emulator.main:app"
    cmd+=" --bind ${HOST_IP}:${IRONIC_REDFISH_EMULATOR_PORT}"
    cmd+=" --env FLASK_DEBUG=1"
    cmd+=" --env SUSHY_EMULATOR_CONFIG=${IRONIC_REDFISH_EMULATOR_CONFIG}"
    write_user_unit_file $IRONIC_REDFISH_EMULATOR_SYSTEMD_SERVICE "$cmd" "" "$STACK_USER"
    enable_service $IRONIC_REDFISH_EMULATOR_SYSTEMD_SERVICE
}

function configure_redfish {
    if [[ ! -d $(dirname $IRONIC_REDFISH_EMULATOR_CONFIG) ]]; then
        mkdir -p $(dirname $IRONIC_REDFISH_EMULATOR_CONFIG)
    fi
    cat - <<EOF > $IRONIC_REDFISH_EMULATOR_CONFIG
SUSHY_EMULATOR_BOOT_LOADER_MAP = {
    'UEFI': {
        'x86_64': '$UEFI_LOADER_PATH'
    },
    'Legacy': {
        'x86_64': None
    }
}
EOF
}

function setup_sushy {
    if use_library_from_git "sushy"; then
        git_clone_by_name "sushy"
        setup_dev_lib "sushy"
    else
        pip_install_gr "sushy"
    fi
}

# install_ironic() - Install the things!
function install_ironic {
    # NOTE(vsaienko) do not check required_services on subnode
    if [[ "$HOST_TOPOLOGY_ROLE" != "subnode" ]]; then
        # make sure all needed services are enabled
        local req_services="key"
        if is_service_enabled nova && [[ "$VIRT_DRIVER" == "ironic" ]]; then
            req_services+=" nova glance neutron"
        fi
        for srv in $req_services; do
            if ! is_service_enabled "$srv"; then
                die $LINENO "$srv should be enabled for Ironic."
            fi
        done
    fi

    if use_library_from_git "ironic-lib"; then
        git_clone_by_name "ironic-lib"
        setup_dev_lib "ironic-lib"
    fi

    setup_develop $IRONIC_DIR

    if [[ "$IRONIC_USE_WSGI" == "True" || "$IRONIC_IPXE_ENABLED" == "True" ]]; then
        install_apache_wsgi
    fi

    if [[ "$IRONIC_BOOT_MODE" == "uefi" && "$IRONIC_IS_HARDWARE" == "False" ]]; then
        # Append the nvram configuration to libvirt if it's not present already
        if ! sudo grep -q "^nvram" /etc/libvirt/qemu.conf; then
            echo "nvram=[\"$UEFI_LOADER_PATH:$UEFI_NVRAM_PATH\"]" | sudo tee -a /etc/libvirt/qemu.conf
        fi

        # Replace the default virtio PXE ROM in QEMU with an EFI capable
        # one. The EFI ROM should work with both boot modes, Legacy
        # BIOS and UEFI.
        if is_ubuntu; then
            # (rpittau) in bionic the UEFI in the ovmf 0~20180205.c0d9813c-2
            # package is broken: EFI v2.70 by EDK II
            # As a workaround, here we download and install the old working
            # version from the multiverse repository: EFI v2.60 by EDK II
            # Bug reference:
            # https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1821729
            local temp_deb
            temp_deb="$(mktemp)"
            wget http://archive.ubuntu.com/ubuntu/pool/multiverse/e/edk2/ovmf_0~20160408.ffea0a2c-2_all.deb -O "$temp_deb"
            sudo dpkg -i "$temp_deb"
            rm -f "$temp_deb"
            # NOTE(TheJulia): This no longer seems required as the ovmf images
            # DO correctly network boot. The effect of this is making the
            # default boot loader iPXE, which is not always desired nor
            # realistic for hardware in the field.
            # If it is after Train, we should likely just delete the lines
            # below and consider the same for Fedora.
            # sudo rm /usr/share/qemu/pxe-virtio.rom
            # sudo ln -s /usr/lib/ipxe/qemu/efi-virtio.rom /usr/share/qemu/pxe-virtio.rom
        elif is_fedora; then
            sudo rm /usr/share/qemu/pxe-virtio.rom
            sudo ln -s /usr/share/ipxe.efi/1af41000.rom /usr/share/qemu/pxe-virtio.rom
        fi

        # Restart libvirt for the changes to take effect
        restart_libvirt
    fi

    if is_redfish_enabled || is_deployed_by_redfish; then
        setup_sushy
    fi

    if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then
        if is_deployed_by_ipmi; then
            install_virtualbmc
        fi
        if is_deployed_by_snmp; then
            install_virtualpdu
        fi
        if is_deployed_by_redfish; then
            install_redfish
        fi
    fi

    if is_drac_enabled; then
        pip_install python-dracclient
    fi

    if is_ansible_deploy_enabled; then
        pip_install "$(grep '^ansible' $IRONIC_DIR/driver-requirements.txt | awk '{print $1}')"
    fi
}

# install_ironicclient() - Collect sources and prepare
function install_ironicclient {
    if use_library_from_git "python-ironicclient"; then
        git_clone_by_name "python-ironicclient"
        setup_dev_lib "python-ironicclient"
    else
        # nothing actually "requires" ironicclient, so force install from pypi
        pip_install_gr python-ironicclient
    fi
}

# _cleanup_ironic_apache_additions() - Remove uwsgi files, disable and remove the apache vhost file
function _cleanup_ironic_apache_additions {
    if [[ "$IRONIC_IPXE_ENABLED" == "True" ]]; then
        sudo rm -rf $IRONIC_HTTP_DIR
        disable_apache_site ipxe-ironic
        sudo rm -f $(apache_site_config_for ipxe-ironic)
    fi
    if [[ "$IRONIC_USE_WSGI" == "True" ]]; then
        remove_uwsgi_config "$IRONIC_UWSGI_CONF" "$IRONIC_UWSGI"
    fi
    restart_apache_server
}

# _config_ironic_apache_ipxe() - Configure ironic IPXE site
function _config_ironic_apache_ipxe {
    local ipxe_apache_conf
    ipxe_apache_conf=$(apache_site_config_for ipxe-ironic)
    sudo cp $IRONIC_DEVSTACK_FILES_DIR/apache-ipxe-ironic.template $ipxe_apache_conf
    sudo sed -e "
        s|%PUBLICPORT%|$IRONIC_HTTP_PORT|g;
        s|%HTTPROOT%|$IRONIC_HTTP_DIR|g;
        s|%APACHELOGDIR%|$APACHE_LOG_DIR|g;
    " -i $ipxe_apache_conf
    enable_apache_site ipxe-ironic
}

#
cleanup_ironic_config_files() - Remove residual cache/config/log files, # left over from previous runs that would need to clean up. function cleanup_ironic_config_files { sudo rm -rf $IRONIC_AUTH_CACHE_DIR $IRONIC_CONF_DIR sudo rm -rf $IRONIC_VM_LOG_DIR/* } # cleanup_ironic() - Clean everything left from Ironic function cleanup_ironic { cleanup_ironic_config_files # Cleanup additions made to Apache if [[ "$IRONIC_USE_WSGI" == "True" || "$IRONIC_IPXE_ENABLED" == "True" ]]; then _cleanup_ironic_apache_additions fi cleanup_virtualbmc cleanup_virtualpdu cleanup_redfish # Remove the hook to disable log rotate sudo rm -rf $IRONIC_LIBVIRT_HOOKS_PATH/qemu } # configure_ironic_dirs() - Create all directories required by Ironic and # associated services. function configure_ironic_dirs { sudo install -d -o $STACK_USER $IRONIC_CONF_DIR $STACK_USER $IRONIC_DATA_DIR \ $IRONIC_STATE_PATH $IRONIC_TFTPBOOT_DIR $IRONIC_TFTPBOOT_DIR/pxelinux.cfg sudo chown -R $STACK_USER:$STACK_USER $IRONIC_TFTPBOOT_DIR if [[ "$IRONIC_IPXE_ENABLED" == "True" ]]; then sudo install -d -o $STACK_USER -g $STACK_USER $IRONIC_HTTP_DIR fi if [ ! -f "$IRONIC_PXE_BOOT_IMAGE" ]; then die $LINENO "PXE boot file $IRONIC_PXE_BOOT_IMAGE not found." fi # Copy PXE binary # NOTE(mjturek): The PXE binary is x86_64 specific. So it should only be copied when # deploying to an x86_64 node. if [[ $IRONIC_HW_ARCH == "x86_64" ]]; then cp $IRONIC_PXE_BOOT_IMAGE $IRONIC_TFTPBOOT_DIR setup_syslinux_modules fi if [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then local uefi_boot_file uefi_boot_file=$(get_uefi_ipxe_boot_file) if [ ! -f $uefi_boot_file ]; then die $LINENO "UEFI boot file $uefi_boot_file not found." 
        fi
        cp $uefi_boot_file $IRONIC_TFTPBOOT_DIR

        if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then
            local uefi_loader
            local uefi_nvram
            # Copy the OVMF images to libvirt's path
            uefi_loader=$(get_uefi_loader)
            uefi_nvram=$(get_uefi_nvram)
            sudo cp $uefi_loader $UEFI_LOADER_PATH
            sudo cp $uefi_nvram $UEFI_NVRAM_PATH
        fi
    fi

    # Create the logs directory when saving the deploy logs to the filesystem
    if [[ "$IRONIC_DEPLOY_LOGS_STORAGE_BACKEND" == "local" && "$IRONIC_DEPLOY_LOGS_COLLECT" != "never" ]]; then
        install -d -o $STACK_USER $IRONIC_DEPLOY_LOGS_LOCAL_PATH
    fi
}

function configure_ironic_networks {
    if [[ -n "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then
        echo_summary "Configuring Ironic provisioning network"
        configure_ironic_provision_network
    fi
    echo_summary "Configuring Ironic cleaning network"
    configure_ironic_cleaning_network
    echo_summary "Configuring Ironic rescue network"
    configure_ironic_rescue_network
}

function configure_ironic_cleaning_network {
    iniset $IRONIC_CONF_FILE neutron cleaning_network $IRONIC_CLEAN_NET_NAME
}

function configure_ironic_rescue_network {
    iniset $IRONIC_CONF_FILE neutron rescuing_network $IRONIC_RESCUE_NET_NAME
}

function configure_ironic_provision_network {
    # This is only called if IRONIC_PROVISION_NETWORK_NAME has been set and
    # means we are using multi-tenant networking.
    local net_id
    local ironic_provision_network_ip
    # NOTE(vsaienko) For the multinode case there is no need to create a new
    # provisioning network on the subnode, as it was created on the primary
    # node. Just get the existing network UUID.
    if [[ "$HOST_TOPOLOGY_ROLE" != "subnode" ]]; then
        die_if_not_set $LINENO IRONIC_PROVISION_SUBNET_PREFIX "You must specify the IRONIC_PROVISION_SUBNET_PREFIX"
        die_if_not_set $LINENO PHYSICAL_NETWORK "You must specify the PHYSICAL_NETWORK"
        die_if_not_set $LINENO IRONIC_PROVISION_SUBNET_GATEWAY "You must specify the IRONIC_PROVISION_SUBNET_GATEWAY"
        net_id=$(openstack network create --provider-network-type $IRONIC_PROVISION_PROVIDER_NETWORK_TYPE \
            --provider-physical-network "$PHYSICAL_NETWORK" \
            ${IRONIC_PROVISION_SEGMENTATION_ID:+--provider-segment $IRONIC_PROVISION_SEGMENTATION_ID} \
            ${IRONIC_PROVISION_NETWORK_NAME} -f value -c id)
        die_if_not_set $LINENO net_id "Failure creating net_id for $IRONIC_PROVISION_NETWORK_NAME"

        if [[ "${IRONIC_USE_NEUTRON_SEGMENTS}" == "True" ]]; then
            local net_segment_id
            net_segment_id=$(openstack network segment list --network $net_id -f value -c ID)
            die_if_not_set $LINENO net_segment_id "Failure getting net_segment_id for $IRONIC_PROVISION_NETWORK_NAME"
        fi

        local subnet_id
        subnet_id="$(openstack subnet create --ip-version 4 \
            ${IRONIC_PROVISION_ALLOCATION_POOL:+--allocation-pool $IRONIC_PROVISION_ALLOCATION_POOL} \
            ${net_segment_id:+--network-segment $net_segment_id} \
            $IRONIC_PROVISION_PROVIDER_SUBNET_NAME \
            --gateway $IRONIC_PROVISION_SUBNET_GATEWAY --network $net_id \
            --subnet-range $IRONIC_PROVISION_SUBNET_PREFIX -f value -c id)"
        die_if_not_set $LINENO subnet_id "Failure creating SUBNET_ID for $IRONIC_PROVISION_NETWORK_NAME"

        ironic_provision_network_ip=$IRONIC_PROVISION_SUBNET_GATEWAY
    else
        net_id=$(openstack network show $IRONIC_PROVISION_NETWORK_NAME -f value -c id)
        ironic_provision_network_ip=$IRONIC_PROVISION_SUBNET_SUBNODE_IP
    fi

    IRONIC_PROVISION_SEGMENTATION_ID=${IRONIC_PROVISION_SEGMENTATION_ID:-$(openstack network show ${net_id} -f value -c provider:segmentation_id)}

    provision_net_prefix=${IRONIC_PROVISION_SUBNET_PREFIX##*/}

    # Set the provision network GW on the physical interface.
    # Add a VLAN on the bridge interface in case of
    # IRONIC_PROVISION_PROVIDER_NETWORK_TYPE == vlan;
    # otherwise assign the IP to the bridge interface directly.
    if [[ "$IRONIC_PROVISION_PROVIDER_NETWORK_TYPE" == "vlan" ]]; then
        sudo ip link add link $OVS_PHYSICAL_BRIDGE name $OVS_PHYSICAL_BRIDGE.$IRONIC_PROVISION_SEGMENTATION_ID type vlan id $IRONIC_PROVISION_SEGMENTATION_ID
        sudo ip link set dev $OVS_PHYSICAL_BRIDGE up
        sudo ip link set dev $OVS_PHYSICAL_BRIDGE.$IRONIC_PROVISION_SEGMENTATION_ID up
        sudo ip addr add dev $OVS_PHYSICAL_BRIDGE.$IRONIC_PROVISION_SEGMENTATION_ID $ironic_provision_network_ip/$provision_net_prefix
    else
        sudo ip link set dev $OVS_PHYSICAL_BRIDGE up
        sudo ip addr add dev $OVS_PHYSICAL_BRIDGE $ironic_provision_network_ip/$provision_net_prefix
    fi

    iniset $IRONIC_CONF_FILE neutron provisioning_network $IRONIC_PROVISION_NETWORK_NAME
}

function cleanup_ironic_provision_network {
    # Cleanup OVS_PHYSICAL_BRIDGE subinterfaces
    local bridge_subint
    bridge_subint=$(sed -n "s/^\(${OVS_PHYSICAL_BRIDGE}\.[0-9]*\).*/\1/p" /proc/net/dev)
    for sub_int in $bridge_subint; do
        sudo ip link set dev $sub_int down
        sudo ip link del dev $sub_int
    done
}

# configure_ironic() - Set config files, create data dirs, etc
function configure_ironic {
    configure_ironic_dirs

    # (re)create ironic configuration file and configure common parameters.
    rm -f $IRONIC_CONF_FILE
    iniset $IRONIC_CONF_FILE DEFAULT debug True
    inicomment $IRONIC_CONF_FILE DEFAULT log_file
    iniset $IRONIC_CONF_FILE database connection $(database_connection_url ironic)
    iniset $IRONIC_CONF_FILE DEFAULT state_path $IRONIC_STATE_PATH
    iniset $IRONIC_CONF_FILE DEFAULT use_syslog $SYSLOG
    # NOTE(vsaienko) with multinode each conductor should have its own host.
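The `provision_net_prefix` used above is extracted from the subnet CIDR with bash suffix trimming. A standalone sketch with an illustrative subnet value (not one of the variables this file actually sets):

```shell
# ${var##*/} removes everything up to the last '/', leaving the prefix
# length; ${var%/*} removes from the last '/' on, leaving the network part.
subnet_prefix="10.0.5.0/24"

prefix_len=${subnet_prefix##*/}
network_part=${subnet_prefix%/*}
```

The same pair of expansions works for any `network/prefix` string, which is why the code can pass `$ironic_provision_network_ip/$provision_net_prefix` straight to `ip addr add`.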
iniset $IRONIC_CONF_FILE DEFAULT host $LOCAL_HOSTNAME # Retrieve deployment logs iniset $IRONIC_CONF_FILE agent deploy_logs_collect $IRONIC_DEPLOY_LOGS_COLLECT iniset $IRONIC_CONF_FILE agent deploy_logs_storage_backend $IRONIC_DEPLOY_LOGS_STORAGE_BACKEND iniset $IRONIC_CONF_FILE agent deploy_logs_local_path $IRONIC_DEPLOY_LOGS_LOCAL_PATH # Set image_download_source for direct interface iniset $IRONIC_CONF_FILE agent image_download_source $IRONIC_AGENT_IMAGE_DOWNLOAD_SOURCE # Configure JSON RPC backend iniset $IRONIC_CONF_FILE DEFAULT rpc_transport $IRONIC_RPC_TRANSPORT iniset $IRONIC_CONF_FILE json_rpc port $IRONIC_JSON_RPC_PORT # Set fast track options iniset $IRONIC_CONF_FILE deploy fast_track $IRONIC_DEPLOY_FAST_TRACK # Set requirement for agent tokens iniset $IRONIC_CONF_FILE DEFAULT require_agent_token $IRONIC_REQUIRE_AGENT_TOKEN # No need to check if RabbitMQ is enabled, this call does it in a smart way if [[ "$IRONIC_RPC_TRANSPORT" == "oslo" ]]; then iniset_rpc_backend ironic $IRONIC_CONF_FILE fi # Configure Ironic conductor, if it was enabled. if is_service_enabled ir-cond; then configure_ironic_conductor fi # Configure Ironic API, if it was enabled. if is_service_enabled ir-api; then configure_ironic_api fi # Format logging setup_logging $IRONIC_CONF_FILE # Adds ironic site for IPXE if [[ "$IRONIC_IPXE_ENABLED" == "True" ]]; then _config_ironic_apache_ipxe fi # Adds uWSGI for Ironic API if [[ "$IRONIC_USE_WSGI" == "True" ]]; then write_uwsgi_config "$IRONIC_UWSGI_CONF" "$IRONIC_UWSGI" "/baremetal" fi if [[ "$os_VENDOR" =~ (Debian|Ubuntu) ]]; then # The groups change with newer libvirt. Older Ubuntu used # 'libvirtd', but now uses libvirt like Debian. Do a quick check # to see if libvirtd group already exists to handle grenade's case. LIBVIRT_GROUP=$(cut -d ':' -f 1 /etc/group | grep 'libvirtd$' || true) LIBVIRT_GROUP=${LIBVIRT_GROUP:-libvirt} else LIBVIRT_GROUP=libvirtd fi if ! 
getent group $LIBVIRT_GROUP >/dev/null; then sudo groupadd $LIBVIRT_GROUP fi # NOTE(vsaienko) Add stack to libvirt group when installing without nova. if ! is_service_enabled nova; then # Disable power state change callbacks to nova. iniset $IRONIC_CONF_FILE nova send_power_notifications false add_user_to_group $STACK_USER $LIBVIRT_GROUP # This is the basic set of devices allowed / required by all virtual machines. # Add /dev/net/tun to cgroup_device_acl, needed for type=ethernet interfaces if ! sudo grep -q '^cgroup_device_acl' /etc/libvirt/qemu.conf; then cat <${IRONIC_VM_LOG_DIR}/README << EOF This directory contains the serial console log files from the virtual Ironic bare-metal nodes. The *_console_* log files are the original log files and include ANSI control codes which can make the output difficult to read. The *_no_ansi_* log files have had ANSI control codes removed from the file and are easier to read. On some occasions there won't be a corresponding *_no_ansi_* log file, for example if the job failed due to a time-out. You may see a log file without a date/time in the file name. In that case you can display the logfile in your console by doing: $ curl URL_TO_LOGFILE This will have your terminal process the ANSI escape codes. Another option, if you have the 'pv' executable installed, is to simulate a low-speed connection. In this example simulate a 300 Bytes/second connection. $ curl URL_TO_LOGFILE | pv -q -L 300 This can allow you to see some of the content before the screen is cleared by an ANSI escape sequence. EOF } function initialize_libvirt_storage_pool { [ -d $LIBVIRT_STORAGE_POOL_PATH ] || sudo mkdir -p $LIBVIRT_STORAGE_POOL_PATH if ! 
sudo virsh pool-list --all | grep -q $LIBVIRT_STORAGE_POOL; then
        sudo virsh pool-define-as --name $LIBVIRT_STORAGE_POOL dir \
            --target $LIBVIRT_STORAGE_POOL_PATH >&2
        sudo virsh pool-autostart $LIBVIRT_STORAGE_POOL >&2
        sudo virsh pool-start $LIBVIRT_STORAGE_POOL >&2
    fi
    pool_state=$(sudo virsh pool-info $LIBVIRT_STORAGE_POOL | grep State | awk '{ print $2 }')
    if [ "$pool_state" != "running" ] ; then
        sudo virsh pool-start $LIBVIRT_STORAGE_POOL >&2
    fi
}

function create_bridge_and_vms {
    # Call libvirt setup scripts in a new shell to ensure any new group membership
    sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/setup-network.sh $IRONIC_VM_NETWORK_BRIDGE $PUBLIC_BRIDGE_MTU"
    if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] ; then
        local log_arg="-l $IRONIC_VM_LOG_DIR"
        if [[ "$IRONIC_VM_LOG_ROTATE" == "True" ]] ; then
            setup_qemu_log_hook
        fi
    else
        local log_arg=""
    fi
    local vbmc_port=$IRONIC_VBMC_PORT_RANGE_START
    local pdu_outlet=$IRONIC_VPDU_PORT_RANGE_START
    local vm_name
    local vm_opts=""
    if [[ -n "$IRONIC_VM_EMULATOR" ]]; then
        vm_opts+=" -e $IRONIC_VM_EMULATOR"
    fi
    vm_opts+=" -E $IRONIC_VM_ENGINE"
    if [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then
        vm_opts+=" -L $UEFI_LOADER_PATH -N $UEFI_NVRAM_PATH"
    fi
    if [[ -n "$LIBVIRT_NIC_DRIVER" ]]; then
        vm_opts+=" -D $LIBVIRT_NIC_DRIVER"
    elif [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then
        # Note(derekh) UEFI for the moment doesn't work with the e1000 net driver
        vm_opts+=" -D virtio"
    fi

    initialize_libvirt_storage_pool

    local bridge_mac
    bridge_mac=$(ip link show dev $IRONIC_VM_NETWORK_BRIDGE | grep -Eo "ether [A-Za-z0-9:]+" | sed "s/ether\ //")
    for vm_name in $(_ironic_bm_vm_names); do
        # pick up the $LIBVIRT_GROUP we have possibly joined
        newgrp $LIBVIRT_GROUP <> $IRONIC_VM_MACS_CSV_FILE SUBSHELL
        if is_deployed_by_ipmi; then
            vbmc --no-daemon add $vm_name --port $vbmc_port
            vbmc --no-daemon start $vm_name
        fi
        echo " ${bridge_mac} $IRONIC_VM_NETWORK_BRIDGE" >> $IRONIC_VM_MACS_CSV_FILE
        vbmc_port=$((vbmc_port+1))
        pdu_outlet=$((pdu_outlet+1))
        # It is sometimes useful to dump out the VM configuration to validate it.
        sudo virsh dumpxml $vm_name
    done

    if [[ -z "${IRONIC_PROVISION_NETWORK_NAME}" ]]; then
        local ironic_net_id
        ironic_net_id=$(openstack network show "$PRIVATE_NETWORK_NAME" -c id -f value)
        create_ovs_taps $ironic_net_id

        # NOTE(vsaienko) Neutron no longer sets up routing to the private network.
        # https://github.com/openstack-dev/devstack/commit/1493bdeba24674f6634160d51b8081c571df4017
        # Add a route here to have connectivity to the VMs during provisioning.
        local pub_router_id
        local r_net_gateway
        pub_router_id=$(openstack router show $Q_ROUTER_NAME -f value -c id)
        r_net_gateway=$(sudo ip netns exec qrouter-$pub_router_id ip -4 route get 8.8.8.8 | grep dev | awk '{print $7}')
        local replace_range=${SUBNETPOOL_PREFIX_V4}
        if [[ -z "${SUBNETPOOL_V4_ID}" ]]; then
            replace_range=${FIXED_RANGE}
        fi
        sudo ip route replace $replace_range via $r_net_gateway
    fi

    # Here is a good place to restart tcpdump to begin capturing packets.
    # See: https://docs.openstack.org/devstack/latest/debugging.html
    # stop_tcpdump
    # start_tcpdump
}

function wait_for_nova_resources {
    # After nodes have been enrolled, we need to wait for both ironic and
    # nova's periodic tasks to populate the resource tracker with available
    # nodes and resources. Wait up to 3 minutes for a given resource before
    # timing out.
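The `r_net_gateway` extraction above relies on the positional layout of `ip route get` output: field 7 is the `src` address when the reply has the `via dev src` shape. A standalone sketch against a canned route line (addresses and device name are made up):

```shell
# Sample output of 'ip -4 route get 8.8.8.8' from inside a router namespace.
# Fields: dest(1) via(2) gw(3) dev(4) iface(5) src(6) addr(7) ...
route_line="8.8.8.8 via 172.24.5.1 dev qg-1234 src 172.24.5.9 uid 1000"

# Same grep/awk pipeline as used above: keep lines mentioning 'dev',
# then take the 7th whitespace-separated field (the 'src' address).
r_net_gateway=$(echo "$route_line" | grep dev | awk '{print $7}')
```

This only works for that exact output shape; a route without a `via` hop shifts the fields, which is worth keeping in mind when adapting the snippet.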
    local expected_count=$1
    local resource_class=${IRONIC_DEFAULT_RESOURCE_CLASS^^}
    # TODO(dtantsur): switch to Placement OSC plugin, once it exists
    local token
    token=$(openstack token issue -f value -c id)
    local endpoint
    endpoint=$(openstack endpoint list --service placement --interface public -f value -c URL)
    die_if_not_set $LINENO endpoint "Cannot find Placement API endpoint"

    local i
    local count
    echo_summary "Waiting up to 3 minutes for placement to pick up $expected_count nodes"
    for i in $(seq 1 12); do
        # Fetch provider UUIDs from Placement
        local providers
        providers=$(curl -sH "X-Auth-Token: $token" $endpoint/resource_providers \
            | jq -r '.resource_providers[].uuid')
        local p
        # Total count of the resource class; it has to equal the node count
        count=0
        for p in $providers; do
            local amount
            # A resource class inventory record looks something like
            # {"max_unit": 1, "min_unit": 1, "step_size": 1, "reserved": 0, "total": 1, "allocation_ratio": 1}
            # Subtract reserved from total (defaulting both to 0)
            amount=$(curl -sH "X-Auth-Token: $token" $endpoint/resource_providers/$p/inventories \
                | jq ".inventories.CUSTOM_$resource_class as \$cls | (\$cls.total // 0) - (\$cls.reserved // 0)")
            # Check whether the resource provider has all expected traits
            # registered against it.
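The inventory arithmetic above (total minus reserved, with `// 0` supplying defaults) can be exercised offline against a record shaped like the one in the comment. The resource class names below are illustrative; `jq` must be installed:

```shell
# Sample Placement inventory payload, as sketched in the comment above.
inventories='{"inventories": {"CUSTOM_BAREMETAL": {"max_unit": 1, "min_unit": 1, "step_size": 1, "reserved": 0, "total": 1, "allocation_ratio": 1}}}'

# total minus reserved for a class that exists
amount=$(echo "$inventories" | jq '.inventories.CUSTOM_BAREMETAL as $cls | ($cls.total // 0) - ($cls.reserved // 0)')

# The '// 0' alternative operator makes a missing class evaluate to 0
# instead of erroring out, which is what keeps the polling loop robust.
missing=$(echo "$inventories" | jq '.inventories.CUSTOM_MISSING as $cls | ($cls.total // 0) - ($cls.reserved // 0)')
```

This is why the loop can treat "provider not reporting the class yet" and "class fully reserved" identically: both yield an `amount` of 0.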
rp_traits=$(curl -sH "X-Auth-Token: $token" \ -H "OpenStack-API-Version: placement 1.6" \ $endpoint/resource_providers/$p/traits) for trait in $IRONIC_DEFAULT_TRAITS; do if [[ $(echo "$rp_traits" | jq ".traits | contains([\"$trait\"])") == false ]]; then amount=0 fi done if [ $amount -gt 0 ]; then count=$(( count + $amount )) fi done if [ $count -ge $expected_count ]; then return 0 fi if is_service_enabled n-api; then $TOP_DIR/tools/discover_hosts.sh fi sleep 15 done die $LINENO "Timed out waiting for Nova to track $expected_count nodes" } function _clean_ncpu_failure { SCREEN_NAME=${SCREEN_NAME:-stack} SERVICE_DIR=${SERVICE_DIR:-${DEST}/status} n_cpu_failure="$SERVICE_DIR/$SCREEN_NAME/n-cpu.failure" if [ -f ${n_cpu_failure} ]; then mv ${n_cpu_failure} "${n_cpu_failure}.before-restart-by-ironic" fi } function provide_nodes { local nodes=$@ for node_id in $nodes; do $IRONIC_CMD node provide $node_id done local attempt for attempt in $(seq 1 $IRONIC_CLEANING_ATTEMPTS); do local available available=$(openstack baremetal node list --provision-state available -f value -c UUID) local nodes_not_finished= for node_id in $nodes; do if ! 
echo $available | grep -q $node_id; then nodes_not_finished+=" $node_id" fi done nodes=$nodes_not_finished if [[ "$nodes" == "" ]]; then break fi echo "Waiting for nodes to become available: $nodes" echo "Currently available: $available" sleep $IRONIC_CLEANING_DELAY done if [[ "$nodes" != "" ]]; then die $LINENO "Some nodes did not finish cleaning: $nodes" fi } function wait_for_ironic_neutron_agent_report_state_for_all_nodes { local nodes=$@ echo "Waiting for ironic-neutron-agent to report state for nodes: $nodes" local attempt for attempt in $(seq 1 $IRONIC_NEUTRON_AGENT_REPORT_STATE_ATTEMPTS); do local reported reported=$(openstack network agent list -f value -c Host -c Binary | grep ironic-neutron-agent | cut -d ' ' -f 1 | paste -s -d ' ') echo "Currently reported nodes: $reported" local can_break for node_id in $nodes; do if echo $reported | grep -q $node_id; then can_break="True" else can_break="False" break fi done if [[ $can_break == "True" ]]; then break fi sleep $IRONIC_NEUTRON_AGENT_REPORT_STATE_DELAY done if [[ "$can_break" == "False" ]]; then die $LINENO "ironic-neutron-agent did not report some nodes." 
fi } function enroll_nodes { local chassis_id chassis_id=$($IRONIC_CMD chassis create --description "ironic test chassis" -f value -c uuid) die_if_not_set $LINENO chassis_id "Failed to create chassis" local node_prefix node_prefix=$(get_ironic_node_prefix) local interface_info if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then local ironic_node_cpu=$IRONIC_VM_SPECS_CPU local ironic_node_ram=$IRONIC_VM_SPECS_RAM local ironic_node_disk=$IRONIC_VM_SPECS_DISK local ironic_ephemeral_disk=$IRONIC_VM_EPHEMERAL_DISK local ironic_node_arch=x86_64 local ironic_hwinfo_file=$IRONIC_VM_MACS_CSV_FILE if is_deployed_by_ipmi; then local node_options="\ --driver-info ipmi_address=${HOST_IP} \ --driver-info ipmi_username=admin \ --driver-info ipmi_password=password" elif is_deployed_by_snmp; then local node_options="\ --driver-info snmp_driver=${IRONIC_VPDU_SNMPDRIVER} \ --driver-info snmp_address=${HOST_IP} \ --driver-info snmp_port=${IRONIC_VPDU_LISTEN_PORT} \ --driver-info snmp_protocol=2c \ --driver-info snmp_community=${IRONIC_VPDU_COMMUNITY}" elif is_deployed_by_redfish; then local node_options="\ --driver-info redfish_address=http://${HOST_IP}:${IRONIC_REDFISH_EMULATOR_PORT} \ --driver-info redfish_username=admin \ --driver-info redfish_password=password" fi else local ironic_node_cpu=$IRONIC_HW_NODE_CPU local ironic_node_ram=$IRONIC_HW_NODE_RAM local ironic_node_disk=$IRONIC_HW_NODE_DISK local ironic_ephemeral_disk=$IRONIC_HW_EPHEMERAL_DISK local ironic_node_arch=$IRONIC_HW_ARCH local ironic_hwinfo_file=$IRONIC_HWINFO_FILE fi local total_nodes=0 local total_cpus=0 local node_uuids= local node_id while read hardware_info; do local node_name node_name=$node_prefix-$total_nodes local node_capabilities="" if [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then node_capabilities+=" --property capabilities=boot_mode:uefi" fi if [[ "$IRONIC_SECURE_BOOT" == "True" ]]; then if [[ -n "$node_capabilities" ]]; then node_capabilities+=",secure_boot:true" else node_capabilities+=" --property 
capabilities=secure_boot:true"
            fi
        fi
        if [[ "$IRONIC_IS_HARDWARE" == "False" ]]; then
            interface_info=$(echo $hardware_info | awk '{print $1}')
            if is_deployed_by_ipmi; then
                local vbmc_port
                vbmc_port=$(echo $hardware_info | awk '{print $2}')
                node_options+=" --driver-info ipmi_port=$vbmc_port"
            elif is_deployed_by_snmp; then
                local pdu_outlet
                pdu_outlet=$(echo $hardware_info | awk '{print $3}')
                node_options+=" --driver-info snmp_outlet=$pdu_outlet"
            elif is_deployed_by_redfish; then
                node_options+=" --driver-info redfish_system_id=/redfish/v1/Systems/$node_name"
            fi
            # Local-link-connection options
            local llc_opts=""
            if [[ "${IRONIC_USE_LINK_LOCAL}" == "True" ]]; then
                local switch_info
                local switch_id
                switch_id=$(echo $hardware_info | awk '{print $4}')
                switch_info=$(echo $hardware_info | awk '{print $5}')
                # NOTE(vsaienko) we will add port_id later in the code.
                llc_opts="--local-link-connection switch_id=${switch_id} \
                    --local-link-connection switch_info=${switch_info} "
            fi
            if [[ "${IRONIC_STORAGE_INTERFACE}" == "cinder" ]]; then
                local connector_iqn="iqn.2017-05.org.openstack.$node_prefix-$total_nodes"
                if [[ -n "$node_capabilities" ]]; then
                    node_capabilities+=",iscsi_boot:True"
                else
                    node_capabilities+=" --property capabilities=iscsi_boot:True"
                fi
            fi
        else
            # Currently we require that all hardware platforms have the same
            # CPU/RAM/disk info. In the future this can be enhanced to support
            # different types, creating the bare metal flavor with the
            # minimum values.
            local bmc_address
            bmc_address=$(echo $hardware_info | awk '{print $1}')
            local mac_address
            mac_address=$(echo $hardware_info | awk '{print $2}')
            local bmc_username
            bmc_username=$(echo $hardware_info | awk '{print $3}')
            local bmc_passwd
            bmc_passwd=$(echo $hardware_info | awk '{print $4}')
            local node_options=""
            if is_deployed_by_ipmi; then
                node_options+=" --driver-info ipmi_address=$bmc_address \
                    --driver-info ipmi_password=$bmc_passwd \
                    --driver-info ipmi_username=$bmc_username"
            elif is_deployed_by_ilo; then
                node_options+=" --driver-info
ilo_address=$bmc_address \ --driver-info ilo_password=$bmc_passwd \ --driver-info ilo_username=$bmc_username" if [[ $IRONIC_ENABLED_BOOT_INTERFACES == *"ilo-virtual-media"* ]]; then node_options+=" --driver-info ilo_deploy_iso=$IRONIC_DEPLOY_ISO_ID" fi elif is_deployed_by_drac; then node_options+=" --driver-info drac_address=$bmc_address \ --driver-info drac_password=$bmc_passwd \ --driver-info drac_username=$bmc_username" elif is_deployed_by_redfish; then local bmc_redfish_system_id bmc_redfish_system_id=$(echo $hardware_info |awk '{print $5}') node_options+=" --driver-info redfish_address=https://$bmc_address \ --driver-info redfish_system_id=$bmc_redfish_system_id \ --driver-info redfish_password=$bmc_passwd \ --driver-info redfish_username=$bmc_username \ --driver-info redfish_verify_ca=False" elif is_deployed_by_irmc; then node_options+=" --driver-info irmc_address=$bmc_address \ --driver-info irmc_password=$bmc_passwd \ --driver-info irmc_username=$bmc_username" if [[ -n "$IRONIC_DEPLOY_ISO_ID" ]]; then node_options+=" --driver-info irmc_deploy_iso=$IRONIC_DEPLOY_ISO_ID" fi elif is_deployed_by_xclarity; then local xclarity_hardware_id xclarity_hardware_id=$(echo $hardware_info |awk '{print $5}') node_options+=" --driver-info xclarity_manager_ip=$bmc_address \ --driver-info xclarity_password=$bmc_passwd \ --driver-info xclarity_username=$bmc_username \ --driver-info xclarity_hardware_id=$xclarity_hardware_id" fi interface_info="${mac_address}" fi # First node created will be used for testing in ironic w/o glance # scenario, so we need to know its UUID. local standalone_node_uuid="" if [ $total_nodes -eq 0 ]; then standalone_node_uuid="--uuid $IRONIC_NODE_UUID" fi # TODO(dtantsur): it would be cool to test with different resource # classes, but for now just use the same. 
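The hardware info file parsed in this loop stores one node per line, with whitespace-separated positional fields pulled out via `awk '{print $N}'`. A standalone sketch against a made-up line in the BMC layout used above (all values illustrative):

```shell
# hardware_info layout for real-hardware BMC drivers, as parsed above:
# "<bmc address> <MAC address> <username> <password>"
hardware_info="203.0.113.10 52:54:00:aa:bb:cc admin secret"

# awk splits on any run of whitespace, so extra spacing in the file
# does not change which field each value lands in.
bmc_address=$(echo $hardware_info | awk '{print $1}')
mac_address=$(echo $hardware_info | awk '{print $2}')
bmc_username=$(echo $hardware_info | awk '{print $3}')
bmc_passwd=$(echo $hardware_info | awk '{print $4}')
```

Driver-specific branches read further fields the same way (e.g. field 5 for the Redfish system ID or the XClarity hardware ID).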
        node_id=$($IRONIC_CMD node create $standalone_node_uuid \
            --chassis $chassis_id \
            --driver $IRONIC_DEPLOY_DRIVER \
            --name $node_name \
            --resource-class $IRONIC_DEFAULT_RESOURCE_CLASS \
            --property cpu_arch=$ironic_node_arch \
            $node_capabilities \
            $node_options \
            -f value -c uuid)
        die_if_not_set $LINENO node_id "Failed to create node"
        node_uuids+=" $node_id"

        if [[ -n $IRONIC_DEFAULT_TRAITS ]]; then
            $IRONIC_CMD node add trait $node_id $IRONIC_DEFAULT_TRAITS
        fi

        $IRONIC_CMD node manage $node_id --wait $IRONIC_MANAGE_TIMEOUT || \
            die $LINENO "Node did not reach manageable state in $IRONIC_MANAGE_TIMEOUT seconds"

        # NOTE(vsaienko) IPA does not automatically recognize root devices
        # smaller than 4 GiB. Setting a root device hint allows the OS to be
        # installed on such devices. 0x1af4 is the VirtIO vendor device ID.
        if [[ "$ironic_node_disk" -lt "4" ]] && is_deployed_by_agent; then
            $IRONIC_CMD node set $node_id --property \
                root_device='{"vendor": "0x1af4"}'
        fi

        # In case we are using portgroups, we need an API version that
        # supports them. Otherwise the API will return a 406 error.
        # NOTE(vsaienko) interface_info is in the following format here:
        # mac1,tap-node0i1;mac2,tap-node0i2;...;macN,tap-node0iN
        for info in ${interface_info//;/ }; do
            local mac_address=""
            local port_id=""
            local llc_port_opt=""
            local physical_network=""
            mac_address=$(echo $info | awk -F ',' '{print $1}')
            port_id=$(echo $info | awk -F ',' '{print $2}')
            if [[ "${IRONIC_USE_LINK_LOCAL}" == "True" ]]; then
                llc_port_opt+=" --local-link-connection port_id=${port_id} "
            fi
            if [[ "${IRONIC_USE_NEUTRON_SEGMENTS}" == "True" ]]; then
                physical_network=" --physical-network ${PHYSICAL_NETWORK} "
            fi
            $IRONIC_CMD port create --node $node_id $llc_opts $llc_port_opt $mac_address $physical_network
        done

        # NOTE(vsaienko) use node-update instead of specifying network_interface
        # during node creation. If the node is added with the latest API version
        # it will NOT go to the available state automatically.
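The two-level split used for `interface_info` above (`;` between interfaces via bash pattern substitution, `,` within an interface via `awk -F ','`) can be exercised standalone. The MAC and tap names below are illustrative:

```shell
# interface_info in the documented format: mac1,port1;mac2,port2
interface_info="52:54:00:01:02:03,tap-node0i1;52:54:00:04:05:06,tap-node0i2"

macs=""
ports=""
# ${var//;/ } replaces every ';' with a space, turning the string into
# word-splittable items; each item is then split on ',' with awk.
for info in ${interface_info//;/ }; do
    macs="${macs}$(echo $info | awk -F ',' '{print $1}') "
    ports="${ports}$(echo $info | awk -F ',' '{print $2}') "
done
```

Relying on word splitting like this is only safe because MACs and tap names never contain spaces; for arbitrary data an `IFS=';' read -ra` loop would be the more defensive choice.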
if [[ -n "${IRONIC_NETWORK_INTERFACE}" ]]; then $IRONIC_CMD node set $node_id --network-interface $IRONIC_NETWORK_INTERFACE || \ die $LINENO "Failed to update network interface for node" fi if [[ -n "${IRONIC_STORAGE_INTERFACE}" ]]; then $IRONIC_CMD node set $node_id --storage-interface $IRONIC_STORAGE_INTERFACE || \ die $LINENO "Failed to update storage interface for node $node_id" if [[ -n "${connector_iqn}" ]]; then $IRONIC_CMD volume connector create --node $node_id --type iqn \ --connector-id $connector_iqn || \ die $LINENO "Failed to create volume connector for node $node_id" fi fi total_nodes=$((total_nodes+1)) done < $ironic_hwinfo_file # NOTE(hjensas): ensure ironic-neutron-agent has done report_state for all # nodes we attempt cleaning. if [[ "${IRONIC_USE_NEUTRON_SEGMENTS}" == "True" ]]; then wait_for_ironic_neutron_agent_report_state_for_all_nodes $node_uuids fi # NOTE(dtantsur): doing it outside of the loop, because of cleaning provide_nodes $node_uuids if is_service_enabled nova && [[ "$VIRT_DRIVER" == "ironic" ]]; then if [[ "$HOST_TOPOLOGY_ROLE" != "subnode" ]]; then local adjusted_disk adjusted_disk=$(($ironic_node_disk - $ironic_ephemeral_disk)) openstack flavor create --ephemeral $ironic_ephemeral_disk --ram $ironic_node_ram --disk $adjusted_disk --vcpus $ironic_node_cpu baremetal local resource_class=${IRONIC_DEFAULT_RESOURCE_CLASS^^} openstack flavor set baremetal --property "resources:CUSTOM_$resource_class"="1" openstack flavor set baremetal --property "resources:DISK_GB"="0" openstack flavor set baremetal --property "resources:MEMORY_MB"="0" openstack flavor set baremetal --property "resources:VCPU"="0" openstack flavor set baremetal --property "cpu_arch"="$ironic_node_arch" if [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then openstack flavor set baremetal --property "capabilities:boot_mode"="uefi" fi for trait in $IRONIC_DEFAULT_TRAITS; do openstack flavor set baremetal --property "trait:$trait"="required" done if [[ "$IRONIC_SECURE_BOOT" == 
"True" ]]; then
                openstack flavor set baremetal --property "capabilities:secure_boot"="true"
            fi

            # NOTE(dtantsur): sometimes nova compute fails to start with ironic due
            # to keystone restarting and not being able to authenticate us.
            # Restart it just to be sure (and avoid gate problems like bug 1537076)
            stop_nova_compute || /bin/true
            # NOTE(pas-ha) if nova compute failed before restart, the .failure file
            # that was created will fail the service_check at the end of the deployment
            _clean_ncpu_failure
            start_nova_compute
        else
            # NOTE(vsaienko) we enroll IRONIC_VM_COUNT nodes on each node, so on
            # the subnode we expect to have 2 x total_nodes in total.
            total_nodes=$(( total_nodes * 2 ))
        fi
        wait_for_nova_resources $total_nodes
    fi
}

function die_if_module_not_loaded {
    if ! grep -q $1 /proc/modules; then
        die $LINENO "$1 kernel module is not loaded"
    fi
}

function configure_iptables {
    # enable tftp natting for allowing connections to HOST_IP's tftp server
    if ! running_in_container; then
        sudo modprobe nf_conntrack_tftp
        sudo modprobe nf_nat_tftp
    else
        die_if_module_not_loaded nf_conntrack_tftp
        die_if_module_not_loaded nf_nat_tftp
    fi
    # explicitly allow DHCP - packets are occasionally being dropped here
    sudo iptables -I INPUT -p udp --dport 67:68 --sport 67:68 -j ACCEPT || true
    # nodes boot from TFTP and call back to the API server listening on $HOST_IP
    sudo iptables -I INPUT -d $IRONIC_TFTPSERVER_IP -p udp --dport 69 -j ACCEPT || true
    # To use the named /baremetal endpoint we need to open the default apache port
    if [[ "$IRONIC_USE_WSGI" == "False" ]]; then
        sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
        # open ironic API on baremetal network
        sudo iptables -I INPUT -d $IRONIC_HTTP_SERVER -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
        # allow IPA to connect to ironic API on subnode
        sudo iptables -I FORWARD -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
    else
        sudo iptables -I INPUT -d $HOST_IP -p tcp --dport 80 -j ACCEPT || true
        sudo iptables -I INPUT -d $HOST_IP
-p tcp --dport 443 -j ACCEPT || true # open ironic API on baremetal network sudo iptables -I INPUT -d $IRONIC_HTTP_SERVER -p tcp --dport 80 -j ACCEPT || true sudo iptables -I INPUT -d $IRONIC_HTTP_SERVER -p tcp --dport 443 -j ACCEPT || true fi if is_deployed_by_agent; then # agent ramdisk gets instance image from swift sudo iptables -I INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $GLANCE_SERVICE_PORT -j ACCEPT || true fi if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then sudo iptables -I INPUT -d $IRONIC_HTTP_SERVER -p tcp --dport $IRONIC_HTTP_PORT -j ACCEPT || true fi if [[ "${IRONIC_STORAGE_INTERFACE}" == "cinder" ]]; then sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $ISCSI_SERVICE_PORT -s $FLOATING_RANGE -j ACCEPT || true fi # (rpittau) workaround to allow TFTP traffic on ubuntu bionic with conntrack helper disabled local qrouter qrouter=$(sudo ip netns list | grep qrouter | awk '{print $1;}') if [[ ! 
-z "$qrouter" ]]; then sudo ip netns exec $qrouter /sbin/iptables -A PREROUTING -t raw -p udp --dport 69 -j CT --helper tftp fi } function configure_tftpd { # stop tftpd and setup serving via xinetd stop_service tftpd-hpa || true [ -f /etc/init/tftpd-hpa.conf ] && echo "manual" | sudo tee /etc/init/tftpd-hpa.override sudo cp $IRONIC_TEMPLATES_DIR/tftpd-xinetd.template /etc/xinetd.d/tftp sudo sed -e "s|%TFTPBOOT_DIR%|$IRONIC_TFTPBOOT_DIR|g" -i /etc/xinetd.d/tftp sudo sed -e "s|%MAX_BLOCKSIZE%|$IRONIC_TFTP_BLOCKSIZE|g" -i /etc/xinetd.d/tftp # setup tftp file mapping to satisfy requests at the root (booting) and # /tftpboot/ sub-dir (as per deploy-ironic elements) # this section is only for ubuntu and fedora if [[ "$IRONIC_IPXE_ENABLED" == "False" && \ ( "$IRONIC_BOOT_MODE" == "uefi" || "$IRONIC_SECURE_BOOT" == "True" ) && \ "$IRONIC_UEFI_BOOT_LOADER" == "grub2" ]]; then local grub_dir echo "re ^($IRONIC_TFTPBOOT_DIR/) $IRONIC_TFTPBOOT_DIR/\2" >$IRONIC_TFTPBOOT_DIR/map-file echo "re ^$IRONIC_TFTPBOOT_DIR/ $IRONIC_TFTPBOOT_DIR/" >>$IRONIC_TFTPBOOT_DIR/map-file echo "re ^(^/) $IRONIC_TFTPBOOT_DIR/\1" >>$IRONIC_TFTPBOOT_DIR/map-file echo "re ^([^/]) $IRONIC_TFTPBOOT_DIR/\1" >>$IRONIC_TFTPBOOT_DIR/map-file sudo cp $IRONIC_GRUB2_SHIM_FILE $IRONIC_TFTPBOOT_DIR/bootx64.efi if is_fedora; then grub_subdir="EFI/fedora" elif is_ubuntu; then grub_subdir="boot/grub" fi grub_dir=$IRONIC_TFTPBOOT_DIR/$grub_subdir mkdir -p $grub_dir # Grub looks for numerous files when the grubnetx.efi binary is used :\ # specifically .lst files which define module lists which we can't seem # to find on disk. That being said, the grub-mknetdir utility generates # these files for us. 
grub-mknetdir --net-directory="$IRONIC_TFTPBOOT_DIR" --subdir="$grub_subdir" sudo cp $grub_dir/x86_64-efi/core.efi $IRONIC_TFTPBOOT_DIR/grubx64.efi cat << EOF > $grub_dir/grub.cfg set default=master set timeout=1 set hidden_timeout_quiet=false menuentry "master" { configfile $IRONIC_TFTPBOOT_DIR/\$net_default_mac.conf } EOF chmod 644 $grub_dir/grub.cfg iniset $IRONIC_CONF_FILE pxe uefi_pxe_config_template '$pybasedir/drivers/modules/pxe_grub_config.template' iniset $IRONIC_CONF_FILE pxe uefi_pxe_bootfile_name "bootx64.efi" else echo "r ^([^/]) $IRONIC_TFTPBOOT_DIR/\1" >$IRONIC_TFTPBOOT_DIR/map-file echo "r ^(/tftpboot/) $IRONIC_TFTPBOOT_DIR/\2" >>$IRONIC_TFTPBOOT_DIR/map-file fi sudo chmod -R 0755 $IRONIC_TFTPBOOT_DIR restart_service xinetd } function build_ipa_ramdisk { local kernel_path=$1 local ramdisk_path=$2 local iso_path=$3 case $IRONIC_RAMDISK_TYPE in 'tinyipa') build_tinyipa_ramdisk $kernel_path $ramdisk_path $iso_path ;; 'dib') build_ipa_dib_ramdisk $kernel_path $ramdisk_path $iso_path ;; *) die $LINENO "Unrecognised IRONIC_RAMDISK_TYPE: $IRONIC_RAMDISK_TYPE. Expected either of 'dib' or 'tinyipa'." 
;; esac } function setup_ipa_builder { git_clone $IRONIC_PYTHON_AGENT_BUILDER_REPO $IRONIC_PYTHON_AGENT_BUILDER_DIR $IRONIC_PYTHON_AGENT_BUILDER_BRANCH } function build_tinyipa_ramdisk { echo "Building ironic-python-agent deploy ramdisk" local kernel_path=$1 local ramdisk_path=$2 local iso_path=$3 cd $IRONIC_PYTHON_AGENT_BUILDER_DIR/tinyipa export BUILD_AND_INSTALL_TINYIPA=true if is_ansible_deploy_enabled; then export AUTHORIZE_SSH=true export SSH_PUBLIC_KEY=$IRONIC_ANSIBLE_SSH_KEY.pub fi make cp tinyipa.gz $ramdisk_path cp tinyipa.vmlinuz $kernel_path if is_deploy_iso_required; then make iso cp tinyipa.iso $iso_path fi make clean cd - } function rebuild_tinyipa_for_ansible { local ansible_tinyipa_ramdisk_name pushd $IRONIC_PYTHON_AGENT_BUILDER_DIR/tinyipa export TINYIPA_RAMDISK_FILE=$IRONIC_DEPLOY_RAMDISK export SSH_PUBLIC_KEY=$IRONIC_ANSIBLE_SSH_KEY.pub make addssh ansible_tinyipa_ramdisk_name="ansible-$(basename $IRONIC_DEPLOY_RAMDISK)" mv $ansible_tinyipa_ramdisk_name $TOP_DIR/files make clean popd IRONIC_DEPLOY_RAMDISK=$TOP_DIR/files/$ansible_tinyipa_ramdisk_name } # install_diskimage_builder() - Collect source and prepare or install from pip function install_diskimage_builder { if use_library_from_git "diskimage-builder"; then git_clone_by_name "diskimage-builder" setup_dev_lib -bindep "diskimage-builder" else local bindep_file bindep_file=$(mktemp) curl -o "$bindep_file" "$IRONIC_DIB_BINDEP_FILE" install_bindep "$bindep_file" pip_install_gr "diskimage-builder" fi } function build_ipa_dib_ramdisk { local kernel_path=$1 local ramdisk_path=$2 local iso_path=$3 local tempdir tempdir=$(mktemp -d --tmpdir=${DEST}) # install diskimage-builder if not present if ! 
$(type -P disk-image-create > /dev/null); then install_diskimage_builder fi echo "Building IPA ramdisk with DIB options: $IRONIC_DIB_RAMDISK_OPTIONS" if is_deploy_iso_required; then IRONIC_DIB_RAMDISK_OPTIONS+=" iso" fi git_clone $IRONIC_PYTHON_AGENT_BUILDER_REPO $IRONIC_PYTHON_AGENT_BUILDER_DIR $IRONIC_PYTHON_AGENT_BUILDER_BRANCH ELEMENTS_PATH="$IRONIC_PYTHON_AGENT_BUILDER_DIR/dib" \ DIB_DHCP_TIMEOUT=$IRONIC_DIB_DHCP_TIMEOUT \ DIB_RELEASE=$IRONIC_DIB_RAMDISK_RELEASE \ DIB_REPOLOCATION_ironic_python_agent="$IRONIC_PYTHON_AGENT_DIR" \ DIB_REPOLOCATION_requirements="$DEST/requirements" \ disk-image-create "$IRONIC_DIB_RAMDISK_OPTIONS" \ -x -o "$tempdir/ironic-agent" \ ironic-python-agent-ramdisk chmod -R +r $tempdir mv "$tempdir/ironic-agent.kernel" "$kernel_path" mv "$tempdir/ironic-agent.initramfs" "$ramdisk_path" if is_deploy_iso_required; then mv "$tempdir/ironic-agent.iso" "$iso_path" fi rm -rf $tempdir } # download EFI boot loader image and upload it to glance # this function sets ``IRONIC_EFIBOOT_ID`` function upload_baremetal_ironic_efiboot { declare -g IRONIC_EFIBOOT_ID local efiboot_name efiboot_name=$(basename $IRONIC_EFIBOOT) echo_summary "Building and uploading EFI boot image for ironic" if [ ! 
-e "$IRONIC_EFIBOOT" ]; then local efiboot_path efiboot_path=$(mktemp -d --tmpdir=${DEST})/$efiboot_name local efiboot_mount efiboot_mount=$(mktemp -d --tmpdir=${DEST}) dd if=/dev/zero \ of=$efiboot_path \ bs=4096 count=1024 mkfs.fat -s 4 -r 512 -S 4096 $efiboot_path sudo mount $efiboot_path $efiboot_mount sudo mkdir -p $efiboot_mount/efi/boot sudo grub-mkimage \ -C xz \ -O x86_64-efi \ -p /boot/grub \ -o $efiboot_mount/efi/boot/bootx64.efi \ boot linux linuxefi search normal configfile \ part_gpt btrfs ext2 fat iso9660 loopback \ test keystatus gfxmenu regexp probe \ efi_gop efi_uga all_video gfxterm font \ echo read ls cat png jpeg halt reboot sudo umount $efiboot_mount mv $efiboot_path $IRONIC_EFIBOOT fi # load efiboot into glance IRONIC_EFIBOOT_ID=$(openstack \ image create \ $efiboot_name \ --public --disk-format=raw \ --container-format=bare \ -f value -c id \ < $IRONIC_EFIBOOT) die_if_not_set $LINENO IRONIC_EFIBOOT_ID "Failed to load EFI bootloader image into glance" iniset $IRONIC_CONF_FILE conductor bootloader $IRONIC_EFIBOOT_ID } # build deploy kernel+ramdisk, then upload them to glance # this function sets ``IRONIC_DEPLOY_KERNEL_ID``, ``IRONIC_DEPLOY_RAMDISK_ID`` function upload_baremetal_ironic_deploy { declare -g IRONIC_DEPLOY_KERNEL_ID IRONIC_DEPLOY_RAMDISK_ID local ironic_deploy_kernel_name local ironic_deploy_ramdisk_name ironic_deploy_kernel_name=$(basename $IRONIC_DEPLOY_KERNEL) ironic_deploy_ramdisk_name=$(basename $IRONIC_DEPLOY_RAMDISK) if [[ "$HOST_TOPOLOGY_ROLE" != "subnode" ]]; then echo_summary "Creating and uploading baremetal images for ironic" if [ ! -e "$IRONIC_DEPLOY_RAMDISK" ] || \ [ ! -e "$IRONIC_DEPLOY_KERNEL" ] || \ ( is_deploy_iso_required && [ ! 
-e "$IRONIC_DEPLOY_ISO" ] ); then # setup IRONIC_PYTHON_AGENT_BUILDER_DIR setup_ipa_builder # files don't exist, need to build them if [ "$IRONIC_BUILD_DEPLOY_RAMDISK" = "True" ]; then # we can build them only if we're not offline if [ "$OFFLINE" != "True" ]; then build_ipa_ramdisk $IRONIC_DEPLOY_KERNEL $IRONIC_DEPLOY_RAMDISK $IRONIC_DEPLOY_ISO else die $LINENO "Deploy kernel+ramdisk or iso files don't exist and cannot be built in OFFLINE mode" fi else # Grab the agent image tarball, either from a local file or remote URL if [[ "$IRONIC_AGENT_KERNEL_URL" =~ "file://" ]]; then cp ${IRONIC_AGENT_KERNEL_URL:7} $IRONIC_DEPLOY_KERNEL else wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_DEPLOY_KERNEL fi if [[ "$IRONIC_AGENT_RAMDISK_URL" =~ "file://" ]]; then cp ${IRONIC_AGENT_RAMDISK_URL:7} $IRONIC_DEPLOY_RAMDISK else wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_DEPLOY_RAMDISK fi if is_ansible_with_tinyipa; then # NOTE(pas-ha) if using ansible-deploy and tinyipa, # this will rebuild ramdisk and override $IRONIC_DEPLOY_RAMDISK rebuild_tinyipa_for_ansible fi fi fi # load them into glance if ! 
is_deploy_iso_required; then IRONIC_DEPLOY_KERNEL_ID=$(openstack \ image create \ $ironic_deploy_kernel_name \ --public --disk-format=aki \ --container-format=aki \ < $IRONIC_DEPLOY_KERNEL | grep ' id ' | get_field 2) die_if_not_set $LINENO IRONIC_DEPLOY_KERNEL_ID "Failed to load kernel image into glance" IRONIC_DEPLOY_RAMDISK_ID=$(openstack \ image create \ $ironic_deploy_ramdisk_name \ --public --disk-format=ari \ --container-format=ari \ < $IRONIC_DEPLOY_RAMDISK | grep ' id ' | get_field 2) die_if_not_set $LINENO IRONIC_DEPLOY_RAMDISK_ID "Failed to load ramdisk image into glance" else IRONIC_DEPLOY_ISO_ID=$(openstack \ image create \ $(basename $IRONIC_DEPLOY_ISO) \ --public --disk-format=iso \ --container-format=bare \ < $IRONIC_DEPLOY_ISO -f value -c id) die_if_not_set $LINENO IRONIC_DEPLOY_ISO_ID "Failed to load deploy iso into glance" fi else if is_ansible_with_tinyipa; then ironic_deploy_ramdisk_name="ansible-$ironic_deploy_ramdisk_name" fi IRONIC_DEPLOY_KERNEL_ID=$(openstack image show $ironic_deploy_kernel_name -f value -c id) IRONIC_DEPLOY_RAMDISK_ID=$(openstack image show $ironic_deploy_ramdisk_name -f value -c id) fi iniset $IRONIC_CONF_FILE conductor deploy_kernel $IRONIC_DEPLOY_KERNEL_ID iniset $IRONIC_CONF_FILE conductor deploy_ramdisk $IRONIC_DEPLOY_RAMDISK_ID iniset $IRONIC_CONF_FILE conductor rescue_kernel $IRONIC_DEPLOY_KERNEL_ID iniset $IRONIC_CONF_FILE conductor rescue_ramdisk $IRONIC_DEPLOY_RAMDISK_ID } function prepare_baremetal_basic_ops { if [[ "$IRONIC_BAREMETAL_BASIC_OPS" != "True" ]]; then return 0 fi if ! 
is_service_enabled nova && [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then local image_file_path if [[ ${IRONIC_WHOLEDISK_IMAGE_NAME} =~ \.img$ ]]; then image_file_path=$FILES/${IRONIC_WHOLEDISK_IMAGE_NAME} else image_file_path=$FILES/${IRONIC_WHOLEDISK_IMAGE_NAME}.img fi sudo install -g $LIBVIRT_GROUP -o $STACK_USER -m 644 $image_file_path $IRONIC_HTTP_DIR fi upload_baremetal_ironic_deploy if [[ "$IRONIC_BOOT_MODE" == "uefi" ]] && is_deployed_by_redfish; then upload_baremetal_ironic_efiboot fi configure_tftpd configure_iptables } function cleanup_baremetal_basic_ops { if [[ "$IRONIC_BAREMETAL_BASIC_OPS" != "True" ]]; then return 0 fi rm -f $IRONIC_VM_MACS_CSV_FILE sudo rm -rf $IRONIC_DATA_DIR $IRONIC_STATE_PATH local vm_name for vm_name in $(_ironic_bm_vm_names); do # Delete the Virtual BMCs if is_deployed_by_ipmi; then vbmc --no-daemon list | grep -a $vm_name && vbmc --no-daemon delete $vm_name || /bin/true fi # pick up the $LIBVIRT_GROUP we have possibly joined newgrp $LIBVIRT_GROUP < DocumentRoot "%HTTPROOT%" Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all Require all granted ErrorLog %APACHELOGDIR%/ipxe_error.log ErrorLogFormat "%{cu}t [%-m:%l] [pid %P:tid %T] %7F: %E: [client\ %a] [frontend\ %A] %M% ,\ referer\ %{Referer}i" LogLevel info CustomLog %APACHELOGDIR%/ipxe_access.log "%{%Y-%m-%d}t %{%T}t.%{msec_frac}t [%l] %a \"%r\" %>s %b" ironic-15.0.0/devstack/files/apache-ironic-api-redirect.template0000664000175000017500000000144113652514273024625 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. # This is an example Apache2 configuration file for using the # Ironic API through mod_wsgi. This version assumes you are # running devstack to configure the software. Redirect 307 /baremetal %IRONIC_SERVICE_PROTOCOL%://%IRONIC_SERVICE_HOST%/baremetal ironic-15.0.0/devstack/common_settings0000664000175000017500000000543013652514273020053 0ustar zuulzuul00000000000000#!/bin/bash if [[ -f $TOP_DIR/../../old/devstack/.localrc.auto ]]; then source <(cat $TOP_DIR/../../old/devstack/.localrc.auto | grep -v 'enable_plugin') fi CIRROS_VERSION=0.4.0 # Whether configure the nodes to boot in Legacy BIOS or UEFI mode. Accepted # values are: "bios" or "uefi", defaults to "bios". # # WARNING: UEFI is EXPERIMENTAL. The CirrOS images uploaded by DevStack by # default WILL NOT WORK with UEFI. IRONIC_BOOT_MODE=${IRONIC_BOOT_MODE:-bios} IRONIC_DEFAULT_IMAGE_NAME=cirros-${CIRROS_VERSION}-x86_64-uec if [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then IRONIC_DEFAULT_IMAGE_NAME=cirros-d160722-x86_64-uec fi IRONIC_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-$IRONIC_DEFAULT_IMAGE_NAME} # Add link to download queue, ignore if already exist. # TODO(vsaienko) Move to devstack https://review.opendev.org/420656 function add_image_link { local i_link="$1" if ! [[ "$IMAGE_URLS" =~ "$i_link" ]]; then if [[ -z "$IMAGE_URLS" || "${IMAGE_URLS: -1}" == "," ]]; then IMAGE_URLS+="$i_link" else IMAGE_URLS+=",$i_link" fi fi } if [[ "$IRONIC_BOOT_MODE" == "uefi" ]]; then add_image_link http://download.cirros-cloud.net/daily/20160722/cirros-d160722-x86_64-uec.tar.gz add_image_link http://download.cirros-cloud.net/daily/20160722/cirros-d160722-x86_64-disk.img else # NOTE (vsaienko) We are going to test mixed drivers/partitions in single setup. # Do not restrict downloading image only for specific case. Download both disk and uec images. 
# NOTE (vdrok): Here the images are actually pre-cached by devstack, in # the files folder, so they won't be downloaded again. add_image_link http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz add_image_link http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-disk.img fi export IRONIC_WHOLEDISK_IMAGE_NAME=${IRONIC_WHOLEDISK_IMAGE_NAME:-${IRONIC_IMAGE_NAME/-uec/-disk}} export IRONIC_PARTITIONED_IMAGE_NAME=${IRONIC_PARTITIONED_IMAGE_NAME:-${IRONIC_IMAGE_NAME/-disk/-uec}} # These parameters describe which image will be used to provision a node in # tempest tests if [[ -z "$IRONIC_TEMPEST_WHOLE_DISK_IMAGE" && "$IRONIC_VM_EPHEMERAL_DISK" == 0 ]]; then IRONIC_TEMPEST_WHOLE_DISK_IMAGE=True fi IRONIC_TEMPEST_WHOLE_DISK_IMAGE=$(trueorfalse False IRONIC_TEMPEST_WHOLE_DISK_IMAGE) if [[ "$IRONIC_TEMPEST_WHOLE_DISK_IMAGE" == "True" ]]; then export IRONIC_IMAGE_NAME=$IRONIC_WHOLEDISK_IMAGE_NAME else export IRONIC_IMAGE_NAME=$IRONIC_PARTITIONED_IMAGE_NAME fi # NOTE(vsaienko) set DEFAULT_IMAGE_NAME here, as it is still used by grenade # https://github.com/openstack-dev/grenade/blob/90c4ead2f2a7ed48c873c51cef415b83d655752e/projects/60_nova/resources.sh#L31 export DEFAULT_IMAGE_NAME=$IRONIC_IMAGE_NAME ironic-15.0.0/devstack/tools/0000775000175000017500000000000013652514443016055 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/tools/ironic/0000775000175000017500000000000013652514443017340 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/tools/ironic/templates/0000775000175000017500000000000013652514443021336 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/tools/ironic/templates/vm.xml0000664000175000017500000000462613652514273022513 0ustar zuulzuul00000000000000 {{ name }} {{ memory }} {{ cpus }} hvm {% if bootdev == 'network' and not uefi_loader %} {% endif %} {% if uefi_loader %} {{ uefi_loader }} {% if uefi_nvram %} {{ uefi_nvram }}-{{ name }} {% endif %} {% endif %} {% if engine == 'kvm' %} {% 
endif %} destroy destroy restart {{ emulator }} {% for (imagefile, letter) in images %} {% if uefi_loader %}
{% else %}
{% endif %} {% endfor %}
{% for n in range(1, interface_count+1) %} {% if n == 1 and mac %} {% endif %}
{% if uefi_loader and bootdev == 'network' %} {% endif %} {% endfor %} {{ console }}
ironic-15.0.0/devstack/tools/ironic/templates/tftpd-xinetd.template0000664000175000017500000000070413652514273025507 0ustar zuulzuul00000000000000service tftp { protocol = udp port = 69 socket_type = dgram wait = yes user = root server = /usr/sbin/in.tftpd server_args = -v -v -v -v -v --blocksize %MAX_BLOCKSIZE% --map-file %TFTPBOOT_DIR%/map-file %TFTPBOOT_DIR% disable = no # This is a workaround for Fedora, where TFTP will listen only on # IPv6 endpoint, if IPv4 flag is not used. flags = IPv4 } ironic-15.0.0/devstack/tools/ironic/templates/brbm.xml0000664000175000017500000000020013652514273022773 0ustar zuulzuul00000000000000 brbm ironic-15.0.0/devstack/tools/ironic/scripts/0000775000175000017500000000000013652514443021027 5ustar zuulzuul00000000000000ironic-15.0.0/devstack/tools/ironic/scripts/create-node.sh0000775000175000017500000001025213652514273023555 0ustar zuulzuul00000000000000#!/usr/bin/env bash # **create-nodes** # Creates baremetal poseur nodes for ironic testing purposes set -ex # Make tracing more educational export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}: ' # Keep track of the DevStack directory TOP_DIR=$(cd $(dirname "$0")/.. && pwd) while getopts "n:c:i:m:M:d:a:b:e:E:p:o:f:l:L:N:A:D:v:P:" arg; do case $arg in n) NAME=$OPTARG;; c) CPU=$OPTARG;; i) INTERFACE_COUNT=$OPTARG;; M) INTERFACE_MTU=$OPTARG;; m) MEM=$(( 1024 * OPTARG ));; # Extra G to allow fuzz for partition table : flavor size and registered # size need to be different to actual size. d) DISK=$(( OPTARG + 1 ));; a) ARCH=$OPTARG;; b) BRIDGE=$OPTARG;; e) EMULATOR=$OPTARG;; E) ENGINE=$OPTARG;; p) VBMC_PORT=$OPTARG;; o) PDU_OUTLET=$OPTARG;; f) DISK_FORMAT=$OPTARG;; l) LOGDIR=$OPTARG;; L) UEFI_LOADER=$OPTARG;; N) UEFI_NVRAM=$OPTARG;; A) MAC_ADDRESS=$OPTARG;; D) NIC_DRIVER=$OPTARG;; v) VOLUME_COUNT=$OPTARG;; P) STORAGE_POOL=$OPTARG;; esac done shift $(( $OPTIND - 1 )) if [ -z "$UEFI_LOADER" ] && [ ! 
-z "$UEFI_NVRAM" ]; then echo "Parameter -N (UEFI NVRAM) cannot be used without -L (UEFI Loader)" exit 1 fi LIBVIRT_NIC_DRIVER=${NIC_DRIVER:-"e1000"} LIBVIRT_STORAGE_POOL=${STORAGE_POOL:-"default"} LIBVIRT_CONNECT_URI=${LIBVIRT_CONNECT_URI:-"qemu:///system"} export VIRSH_DEFAULT_CONNECT_URI=$LIBVIRT_CONNECT_URI if [ -n "$LOGDIR" ] ; then mkdir -p "$LOGDIR" fi PREALLOC= if [ -f /etc/debian_version -a "$DISK_FORMAT" == "qcow2" ]; then PREALLOC="--prealloc-metadata" fi if [ -n "$LOGDIR" ] ; then VM_LOGGING="--console-log $LOGDIR/${NAME}_console.log" else VM_LOGGING="" fi UEFI_OPTS="" if [ ! -z "$UEFI_LOADER" ]; then UEFI_OPTS="--uefi-loader $UEFI_LOADER" if [ ! -z "$UEFI_NVRAM" ]; then UEFI_OPTS+=" --uefi-nvram $UEFI_NVRAM" fi fi # Create bridge and add VM interface to it. # Additional interface will be added to this bridge and # it will be plugged to OVS. # This is needed in order to have interface in OVS even # when VM is in shutdown state INTERFACE_COUNT=${INTERFACE_COUNT:-1} for int in $(seq 1 $INTERFACE_COUNT); do tapif=tap-${NAME}i${int} ovsif=ovs-${NAME}i${int} # NOTE(vsaienko) use veth pair here to ensure that interface # exists in OVS even when VM is powered off. sudo ip link add dev $tapif type veth peer name $ovsif for l in $tapif $ovsif; do sudo ip link set dev $l up sudo ip link set $l mtu $INTERFACE_MTU done sudo ovs-vsctl add-port $BRIDGE $ovsif done if [ -n "$MAC_ADDRESS" ] ; then MAC_ADDRESS="--mac $MAC_ADDRESS" fi VOLUME_COUNT=${VOLUME_COUNT:-1} if ! 
virsh list --all | grep -q $NAME; then vm_opts="" for int in $(seq 1 $VOLUME_COUNT); do if [[ "$int" == "1" ]]; then # Compatibility with old naming vol_name="$NAME.$DISK_FORMAT" else vol_name="$NAME-$int.$DISK_FORMAT" fi virsh vol-list --pool $LIBVIRT_STORAGE_POOL | grep -q $vol_name && virsh vol-delete $vol_name --pool $LIBVIRT_STORAGE_POOL >&2 virsh vol-create-as $LIBVIRT_STORAGE_POOL ${vol_name} ${DISK}G --format $DISK_FORMAT $PREALLOC >&2 volume_path=$(virsh vol-path --pool $LIBVIRT_STORAGE_POOL $vol_name) # Pre-touch the VM to set +C, as it can only be set on empty files. sudo touch "$volume_path" sudo chattr +C "$volume_path" || true vm_opts+="--image $volume_path " done if [[ -n "$EMULATOR" ]]; then vm_opts+="--emulator $EMULATOR " fi $PYTHON $TOP_DIR/scripts/configure-vm.py \ --bootdev network --name $NAME \ --arch $ARCH --cpus $CPU --memory $MEM --libvirt-nic-driver $LIBVIRT_NIC_DRIVER \ --disk-format $DISK_FORMAT $VM_LOGGING --engine $ENGINE $UEFI_OPTS $vm_opts \ --interface-count $INTERFACE_COUNT $MAC_ADDRESS >&2 fi # echo mac in format mac1,ovs-node-0i1;mac2,ovs-node-0i2;...;macN,ovs-node0iN VM_MAC=$(echo -n $(virsh domiflist $NAME |awk '/tap-/{print $5","$3}')|tr ' ' ';' |sed s/tap-/ovs-/g) echo -n "$VM_MAC $VBMC_PORT $PDU_OUTLET" ironic-15.0.0/devstack/tools/ironic/scripts/setup-network.sh0000775000175000017500000000210213652514273024211 0ustar zuulzuul00000000000000#!/usr/bin/env bash # **setup-network** # Setups openvswitch libvirt network suitable for # running baremetal poseur nodes for ironic testing purposes set -exu # Make tracing more educational export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}: ' LIBVIRT_CONNECT_URI=${LIBVIRT_CONNECT_URI:-"qemu:///system"} # Keep track of the DevStack directory TOP_DIR=$(cd $(dirname "$0")/.. && pwd) BRIDGE_NAME=${1:-brbm} PUBLIC_BRIDGE_MTU=${2:-1500} export VIRSH_DEFAULT_CONNECT_URI="$LIBVIRT_CONNECT_URI" # Only add bridge if missing. Bring it UP. 
(sudo ovs-vsctl list-br | grep ${BRIDGE_NAME}) || sudo ovs-vsctl add-br ${BRIDGE_NAME} sudo ip link set dev ${BRIDGE_NAME} up # Remove bridge before replacing it. (virsh net-list | grep "${BRIDGE_NAME} ") && virsh net-destroy ${BRIDGE_NAME} (virsh net-list --inactive | grep "${BRIDGE_NAME} ") && virsh net-undefine ${BRIDGE_NAME} virsh net-define <(sed s/brbm/$BRIDGE_NAME/ $TOP_DIR/templates/brbm.xml) virsh net-autostart ${BRIDGE_NAME} virsh net-start ${BRIDGE_NAME} sudo ip link set dev ${BRIDGE_NAME} mtu $PUBLIC_BRIDGE_MTU ironic-15.0.0/devstack/tools/ironic/scripts/cleanup-node.sh0000775000175000017500000000165013652514273023743 0ustar zuulzuul00000000000000#!/usr/bin/env bash # **cleanup-nodes** # Cleans up baremetal poseur nodes and volumes created during ironic setup # Assumes calling user has proper libvirt group membership and access. set -exu # Make tracing more educational export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}: ' LIBVIRT_STORAGE_POOL=${LIBVIRT_STORAGE_POOL:-"default"} LIBVIRT_CONNECT_URI=${LIBVIRT_CONNECT_URI:-"qemu:///system"} NAME=$1 export VIRSH_DEFAULT_CONNECT_URI=$LIBVIRT_CONNECT_URI VOL_NAME="$NAME.qcow2" virsh list | grep -q $NAME && virsh destroy $NAME virsh list --inactive | grep -q $NAME && virsh undefine $NAME --nvram if virsh pool-list | grep -q $LIBVIRT_STORAGE_POOL ; then virsh vol-list $LIBVIRT_STORAGE_POOL | grep -q $VOL_NAME && virsh vol-delete $VOL_NAME --pool $LIBVIRT_STORAGE_POOL fi sudo brctl delif br-$NAME ovs-$NAME || true sudo ip link set dev br-$NAME down || true sudo brctl delbr br-$NAME || true ironic-15.0.0/devstack/tools/ironic/scripts/configure-vm.py0000775000175000017500000001214513652514273024011 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import argparse import os.path import string import sys import jinja2 import libvirt templatedir = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'templates') CONSOLE_LOG = """ """ CONSOLE_PTY = """ """ def main(): parser = argparse.ArgumentParser( description="Configure a kvm virtual machine for the seed image.") parser.add_argument('--name', default='seed', help='the name to give the machine in libvirt.') parser.add_argument('--image', action='append', default=[], help='Use a custom image file (must be qcow2).') parser.add_argument('--engine', default='qemu', help='The virtualization engine to use') parser.add_argument('--arch', default='i686', help='The architecture to use') parser.add_argument('--memory', default='2097152', help="Maximum memory for the VM in KB.") parser.add_argument('--cpus', default='1', help="CPU count for the VM.") parser.add_argument('--bootdev', default='hd', help="What boot device to use (hd/network).") parser.add_argument('--libvirt-nic-driver', default='virtio', help='The libvirt network driver to use') parser.add_argument('--interface-count', default=1, type=int, help='The number of interfaces to add to VM.'), parser.add_argument('--mac', default=None, help='The mac for the first interface on the vm') parser.add_argument('--console-log', help='File to log console') parser.add_argument('--emulator', default=None, help='Path to emulator bin for vm template') parser.add_argument('--disk-format', default='qcow2', help='Disk format to use.') parser.add_argument('--uefi-loader', default='', help='The absolute path of the UEFI 
firmware blob.') parser.add_argument('--uefi-nvram', default='', help=('The absolute path of the non-volatile memory ' 'to store the UEFI variables. Should be used ' 'only when --uefi-loader is also specified.')) args = parser.parse_args() env = jinja2.Environment(loader=jinja2.FileSystemLoader(templatedir)) template = env.get_template('vm.xml') images = list(zip(args.image, string.ascii_lowercase)) if not images or len(images) > 6: # 6 is an artificial limitation because of the way we generate PCI IDs sys.exit("Up to 6 images are required") params = { 'name': args.name, 'images': images, 'engine': args.engine, 'arch': args.arch, 'memory': args.memory, 'cpus': args.cpus, 'bootdev': args.bootdev, 'interface_count': args.interface_count, 'mac': args.mac, 'nicdriver': args.libvirt_nic_driver, 'emulator': args.emulator, 'disk_format': args.disk_format, 'uefi_loader': args.uefi_loader, 'uefi_nvram': args.uefi_nvram, } if args.emulator: params['emulator'] = args.emulator else: qemu_kvm_locations = ['/usr/bin/kvm', '/usr/bin/qemu-kvm', '/usr/libexec/qemu-kvm'] for location in qemu_kvm_locations: if os.path.exists(location): params['emulator'] = location break else: raise RuntimeError("Unable to find location of kvm executable") if args.console_log: params['console'] = CONSOLE_LOG % {'console_log': args.console_log} else: params['console'] = CONSOLE_PTY libvirt_template = template.render(**params) conn = libvirt.open("qemu:///system") a = conn.defineXML(libvirt_template) print("Created machine %s with UUID %s" % (args.name, a.UUIDString())) if __name__ == '__main__': main() ironic-15.0.0/devstack/settings0000664000175000017500000000136413652514273016505 0ustar zuulzuul00000000000000enable_service ironic ir-api ir-cond source $DEST/ironic/devstack/common_settings # NOTE(vsaienko) mtu calculation has been changed recently to 1450 # https://github.com/openstack/neutron/commit/51a697 # and caused https://bugs.launchpad.net/ironic/+bug/1631875 # Get the smallest local MTU 
local_mtu=$(ip link show | sed -ne 's/.*mtu \([0-9]\+\).*/\1/p' | sort -n | head -1) # 50 bytes is overhead for vxlan (which is greater than GRE # allowing us to use either overlay option with this MTU. # However, if traffic is flowing over IPv6 tunnels, then # The overhead is essentially another 100 bytes. In order to # handle both cases, lets go ahead and drop the maximum by # 100 bytes. PUBLIC_BRIDGE_MTU=${OVERRIDE_PUBLIC_BRIDGE_MTU:-$((local_mtu - 100))} ironic-15.0.0/doc/0000775000175000017500000000000013652514443013656 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/0000775000175000017500000000000013652514443015156 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/_exts/0000775000175000017500000000000013652514443016300 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/_exts/automated_steps.py0000664000175000017500000001427213652514273022062 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from collections import defaultdict import inspect import itertools import operator import os.path from docutils import nodes from docutils.parsers import rst from docutils.parsers.rst import directives from docutils.statemachine import ViewList from sphinx.util import logging from sphinx.util.nodes import nested_parse_with_titles import stevedore from ironic.common import driver_factory LOG = logging.getLogger(__name__) def _list_table(add, headers, data, title='', columns=None): """Build a list-table directive. 
:param add: Function to add one row to output. :param headers: List of header values. :param data: Iterable of row data, yielding lists or tuples with rows. """ add('.. list-table:: %s' % title) add(' :header-rows: 1') if columns: add(' :widths: %s' % (','.join(str(c) for c in columns))) add('') add(' - * %s' % headers[0]) for h in headers[1:]: add(' * %s' % h) for row in data: add(' - * %s' % row[0]) for r in row[1:]: lines = str(r).splitlines() if not lines: # empty string add(' * ') else: # potentially multi-line string add(' * %s' % lines[0]) for l in lines[1:]: add(' %s' % l) add('') def _format_doc(doc): "Format one method docstring to be shown in the step table." paras = doc.split('\n\n') if paras[-1].startswith(':'): # Remove the field table that commonly appears at the end of a # docstring. paras = paras[:-1] return '\n\n'.join(paras) _clean_steps = {} def _init_steps_by_driver(): "Load step information from drivers." # NOTE(dhellmann): This reproduces some of the logic of # ironic.drivers.base.BaseInterface.__new__ and # ironic.common.driver_factory but does so without # instantiating the interface classes, which means that if # some of the preconditions aren't met we can still inspect # the methods of the class. 
for interface_name in sorted(driver_factory.driver_base.ALL_INTERFACES): LOG.info('[{}] probing available plugins for interface {}'.format( __name__, interface_name)) loader = stevedore.ExtensionManager( 'ironic.hardware.interfaces.{}'.format(interface_name), invoke_on_load=False, ) for plugin in loader: if plugin.name == 'fake': continue steps = [] for method_name, method in inspect.getmembers(plugin.plugin): if not getattr(method, '_is_clean_step', False): continue step = { 'step': method.__name__, 'priority': method._clean_step_priority, 'abortable': method._clean_step_abortable, 'argsinfo': method._clean_step_argsinfo, 'interface': interface_name, 'doc': _format_doc(inspect.getdoc(method)), } LOG.info('[{}] interface {!r} driver {!r} STEP {}'.format( __name__, interface_name, plugin.name, step)) steps.append(step) if steps: if interface_name not in _clean_steps: _clean_steps[interface_name] = {} _clean_steps[interface_name][plugin.name] = steps def _format_args(argsinfo): argsinfo = argsinfo or {} return '\n\n'.join( '``{}``{}{} {}'.format( argname, ' (*required*)' if argdetail.get('required') else '', ' --' if argdetail.get('description') else '', argdetail.get('description', ''), ) for argname, argdetail in sorted(argsinfo.items()) ) class AutomatedStepsDirective(rst.Directive): option_spec = { 'phase': directives.unchanged, } def run(self): series = self.options.get('series', 'cleaning') if series != 'cleaning': raise NotImplementedError('Showing deploy steps not implemented') source_name = '<{}>'.format(__name__) result = ViewList() for interface_name in ['power', 'management', 'deploy', 'bios', 'raid']: interface_info = _clean_steps.get(interface_name, {}) if not interface_info: continue title = '{} Interface'.format(interface_name.capitalize()) result.append(title, source_name) result.append('~' * len(title), source_name) for driver_name, steps in sorted(interface_info.items()): _list_table( title='{} cleaning steps'.format(driver_name), add=lambda x: 
result.append(x, source_name), headers=['Name', 'Details', 'Priority', 'Stoppable', 'Arguments'], columns=[20, 30, 10, 10, 30], data=( ('``{}``'.format(s['step']), s['doc'], s['priority'], 'yes' if s['abortable'] else 'no', _format_args(s['argsinfo']), ) for s in steps ), ) # NOTE(dhellmann): Useful for debugging. # print('\n'.join(result)) node = nodes.section() node.document = self.state.document nested_parse_with_titles(self.state, result, node) return node.children def setup(app): app.add_directive('show-steps', AutomatedStepsDirective) _init_steps_by_driver() ironic-15.0.0/doc/source/configuration/0000775000175000017500000000000013652514443020025 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/configuration/policy.rst0000664000175000017500000000035413652514273022061 0ustar zuulzuul00000000000000======== Policies ======== The following is an overview of all available policies in Ironic. For a sample configuration file, refer to :doc:`sample-policy`. .. show-policy:: :config-file: tools/policy/ironic-policy-generator.conf ironic-15.0.0/doc/source/configuration/config.rst0000664000175000017500000000044013652514273022023 0ustar zuulzuul00000000000000===================== Configuration Options ===================== The following is an overview of all available configuration options in Ironic. For a sample configuration file, refer to :doc:`sample-config`. .. show-options:: :config-file: tools/config/ironic-config-generator.conf ironic-15.0.0/doc/source/configuration/sample-policy.rst0000664000175000017500000000062613652514273023342 0ustar zuulzuul00000000000000============= Ironic Policy ============= The following is a sample Ironic policy file, autogenerated from Ironic when this documentation is built. To prevent conflicts, ensure your version of Ironic aligns with the version of this documentation. The sample policy can also be downloaded as a :download:`file `. .. 
literalinclude:: /_static/ironic.policy.yaml.sample ironic-15.0.0/doc/source/configuration/sample-config.rst0000664000175000017500000000111613652514273023303 0ustar zuulzuul00000000000000========================= Sample Configuration File ========================= The following is a sample Ironic configuration for adaptation and use. For a detailed overview of all available configuration options, refer to :doc:`config`. The sample configuration can also be viewed in :download:`file form </_static/ironic.conf.sample>`. .. important:: The sample configuration file is auto-generated from Ironic when this documentation is built. You must ensure your version of Ironic matches the version of this documentation. .. literalinclude:: /_static/ironic.conf.sample ironic-15.0.0/doc/source/configuration/index.rst0000664000175000017500000000102613652514273021666 0ustar zuulzuul00000000000000======================= Configuration Reference ======================= Many aspects of the Bare Metal service are specific to the environment it is deployed in. The following pages describe configuration options that can be used to adjust the service to your particular situation. .. toctree:: :maxdepth: 1 Configuration Options <config> Policies <policy> .. only:: html Sample files ------------ .. toctree:: :maxdepth: 1 Sample Config File <sample-config> Sample Policy File <sample-policy> ironic-15.0.0/doc/source/contributor/0000775000175000017500000000000013652514443017530 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/contributor/drivers.rst0000664000175000017500000002016213652514273021742 0ustar zuulzuul00000000000000.. _pluggable_drivers: ================= Pluggable Drivers ================= Ironic supports a pluggable driver model. This allows contributors to easily add new drivers, and operators to use third-party drivers or write their own. A driver is built at runtime from a *hardware type* and *hardware interfaces*. See :doc:`/install/enabling-drivers` for a detailed explanation of these concepts.
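Enabling a hardware type and its interfaces happens in the Ironic configuration file; the linked installation guide has the details. As a quick, hedged sketch (the type and interface names below are common examples, not a recommendation), an ``ironic.conf`` fragment might look like:

.. code-block:: ini

    [DEFAULT]
    # Hardware types the conductor should load (example names).
    enabled_hardware_types = ipmi,redfish
    # One "enabled_..._interfaces" option exists per interface type.
    enabled_boot_interfaces = pxe
    enabled_deploy_interfaces = direct
    enabled_power_interfaces = ipmitool,redfish
    enabled_management_interfaces = ipmitool,redfish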
Hardware types and interfaces are loaded by the ``ironic-conductor`` service
during initialization from the setuptools entrypoints
``ironic.hardware.types`` and ``ironic.hardware.interfaces.<TYPE>``, where
``<TYPE>`` is an interface type (for example, ``deploy``). Only hardware
types listed in the configuration option ``enabled_hardware_types`` and
interfaces listed in the configuration options
``enabled_<TYPE>_interfaces`` are loaded. A complete list of hardware types
available on the system may be found by enumerating this entrypoint by
running the following python script::

   #!/usr/bin/env python
   import pkg_resources as pkg
   print([p.name for p in pkg.iter_entry_points("ironic.hardware.types")
          if not p.name.startswith("fake")])

A list of drivers enabled in a running Ironic service may be found by
issuing the following command against that API endpoint::

   openstack baremetal driver list

Writing a hardware type
-----------------------

A hardware type is a Python class, inheriting
:py:class:`ironic.drivers.hardware_type.AbstractHardwareType` and listed in
the setuptools entry point ``ironic.hardware.types``. Most of the real-world
hardware types inherit :py:class:`ironic.drivers.generic.GenericHardware`
instead. This helper class provides useful implementations for interfaces
that are usually the same for all hardware types, such as ``deploy``.

The minimum required interfaces are:

* :doc:`boot ` that specifies how to boot ramdisks and instances on the
  hardware. A generic ``pxe`` implementation is provided by the
  ``GenericHardware`` base class.

* :doc:`deploy ` that orchestrates the deployment. A few common
  implementations are provided by the ``GenericHardware`` base class.

  As of the Rocky release, a deploy interface should decorate its deploy
  method to indicate that it is a deploy step. Conventionally, the deploy
  method uses a priority of 100.

  .. code-block:: python

     @ironic.drivers.base.deploy_step(priority=100)
     def deploy(self, task):

  ..
.. note::
   Most of the hardware types should not override this interface.

* ``power`` implements power actions for the hardware. These common
  implementations may be used, if supported by the hardware:

  * :py:class:`ironic.drivers.modules.ipmitool.IPMIPower`
  * :py:class:`ironic.drivers.modules.redfish.power.RedfishPower`

  Otherwise, you need to write your own implementation by subclassing
  :py:class:`ironic.drivers.base.PowerInterface` and providing missing
  methods.

  .. note::
     Power actions in Ironic are blocking - methods of a power interface
     should not return until the power action is finished or errors out.

* ``management`` implements additional out-of-band management actions, such
  as setting a boot device. A few common implementations exist and may be
  used, if supported by the hardware:

  * :py:class:`ironic.drivers.modules.ipmitool.IPMIManagement`
  * :py:class:`ironic.drivers.modules.redfish.management.RedfishManagement`

  Some hardware types, such as ``snmp``, do not support out-of-band
  management. They use the fake implementation in
  :py:class:`ironic.drivers.modules.fake.FakeManagement` instead.

  Otherwise, you need to write your own implementation by subclassing
  :py:class:`ironic.drivers.base.ManagementInterface` and providing missing
  methods.

Combine the interfaces in a hardware type by populating the lists of
supported interfaces. These lists are prioritized, with the most preferred
implementation first. For example:

.. code-block:: python

   class MyHardware(generic.GenericHardware):

       @property
       def supported_management_interfaces(self):
           """List of supported management interfaces."""
           return [MyManagement, ipmitool.IPMIManagement]

       @property
       def supported_power_interfaces(self):
           """List of supported power interfaces."""
           return [MyPower, ipmitool.IPMIPower]

.. note::
   In this example, all interfaces except for ``management`` and ``power``
   are taken from the ``GenericHardware`` base class.
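A custom power interface like the ``MyPower`` class referenced above might look roughly like the following. This is a hedged, self-contained sketch: the ``PowerInterface`` class below is a local stand-in, not the real ``ironic.drivers.base.PowerInterface`` (which also requires ``get_properties()`` and ``validate()`` and has richer method signatures), and the in-memory "BMC" dictionary replaces real hardware calls.

```python
# Sketch of a custom power interface. The base class here is a local
# stand-in for ironic.drivers.base.PowerInterface so that the example is
# self-contained; a real driver would subclass the ironic class and talk
# to an actual BMC instead of this in-memory dict.
import abc


class PowerInterface(abc.ABC):
    """Stand-in for ironic.drivers.base.PowerInterface (illustration only)."""

    @abc.abstractmethod
    def get_power_state(self, task): ...

    @abc.abstractmethod
    def set_power_state(self, task, power_state): ...

    @abc.abstractmethod
    def reboot(self, task): ...


class MyPower(PowerInterface):
    """Toy power interface backed by a pretend, in-memory BMC."""

    def __init__(self):
        self._bmc = {'state': 'power off'}   # pretend BMC state

    def get_power_state(self, task):
        return self._bmc['state']

    def set_power_state(self, task, power_state):
        # Power actions in ironic are blocking: do not return until the
        # action has completed (or raise on error).
        self._bmc['state'] = power_state

    def reboot(self, task):
        self.set_power_state(task, 'power off')
        self.set_power_state(task, 'power on')


power = MyPower()
power.set_power_state(task=None, power_state='power on')
print(power.get_power_state(task=None))   # power on
```

In a real driver, ``task`` would be the ``TaskManager`` context that ironic passes to every interface method, and the state strings would come from ``ironic.common.states``.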
Finally, give the new hardware type and new interfaces human-friendly names
and create entry points for them in the ``setup.cfg`` file::

   ironic.hardware.types =
       my-hardware = ironic.drivers.my_hardware:MyHardware
   ironic.hardware.interfaces.power =
       my-power = ironic.drivers.modules.my_hardware:MyPower
   ironic.hardware.interfaces.management =
       my-management = ironic.drivers.modules.my_hardware:MyManagement

Supported Drivers
-----------------

For a list of supported drivers (those that are continuously tested on every
upstream commit) please consult the :doc:`drivers page `.

Node Vendor Passthru
--------------------

Drivers may implement a passthrough API, which is accessible via the
``/v1/nodes/<node>/vendor_passthru?method={METHOD}`` endpoint. Beyond basic
checking, Ironic does not introspect the message body and simply "passes it
through" to the relevant driver.

A method:

* can support one or more HTTP methods (for example, GET, POST)

* is asynchronous or synchronous

  + For asynchronous methods, a 202 (Accepted) HTTP status code is returned
    to indicate that the request was received, accepted and is being acted
    upon. No body is returned in the response.

  + For synchronous methods, a 200 (OK) HTTP status code is returned to
    indicate that the request was fulfilled. The response may include a
    body.

* can require an exclusive lock on the node. This only occurs if the method
  doesn't specify ``require_exclusive_lock=False`` in the decorator. If an
  exclusive lock is held on the node, other requests for the node will be
  delayed and may fail with an HTTP 409 (Conflict) error code.

This endpoint exposes a node's driver directly, and as such, it is expressly
not part of Ironic's standard REST API. There is only a single HTTP endpoint
exposed, and the semantics of the message body are determined solely by the
driver. Ironic makes no guarantees about backwards compatibility; this is
solely up to the discretion of each driver's author.
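The endpoint shape and status-code semantics above can be sketched from the client side. This is an illustration only: the base URL, node UUID, method name (``bmc_reset``), and request body are all invented, and the request is merely built, not sent.

```python
# Hedged sketch: building a vendor passthru request for a node, using only
# the endpoint shape described in the text. The host, node UUID and method
# name below are hypothetical examples.
import json
import urllib.request


def passthru_request(base_url, node, method, http_method='POST', body=None):
    """Build (but do not send) a vendor passthru request for a node."""
    url = '{}/v1/nodes/{}/vendor_passthru?method={}'.format(
        base_url, node, method)
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        url, data=data, method=http_method,
        headers={'Content-Type': 'application/json'})


req = passthru_request('http://ironic.example.com:6385',
                       '1be26c0b-03f2-4d2e-ae87-c02d7f33c123',
                       'bmc_reset', body={'warm': True})
print(req.full_url)
# Interpreting the response (per the text): a 202 means the method is
# asynchronous and still being acted upon; a 200 means it completed
# synchronously and may carry a body.
```

Sending the request with ``urllib.request.urlopen(req)`` (plus the usual authentication headers) would then yield one of the status codes discussed above.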
To get information about all the methods available via the vendor_passthru
endpoint for a particular node, you can issue an HTTP GET request::

   GET /v1/nodes/<node>/vendor_passthru/methods

The response's JSON body will contain information for each method, such as
the method's name, a description, the HTTP methods supported, and whether
it's asynchronous or synchronous.

Driver Vendor Passthru
----------------------

Drivers may implement an API for requests not related to any node, at
``/v1/drivers/<driver>/vendor_passthru?method={METHOD}``.

A method:

* can support one or more HTTP methods (for example, GET, POST)

* is asynchronous or synchronous

  + For asynchronous methods, a 202 (Accepted) HTTP status code is returned
    to indicate that the request was received, accepted and is being acted
    upon. No body is returned in the response.

  + For synchronous methods, a 200 (OK) HTTP status code is returned to
    indicate that the request was fulfilled. The response may include a
    body.

.. note::
   Unlike methods in `Node Vendor Passthru`_, a request does not lock any
   resource, so it will not delay other requests and will not fail with an
   HTTP 409 (Conflict) error code.

Ironic makes no guarantees about the semantics of the message BODY sent to
this endpoint. That is left up to each driver's author.

To get information about all the methods available via the driver
vendor_passthru endpoint, you can issue an HTTP GET request::

   GET /v1/drivers/<driver>/vendor_passthru/methods

The response's JSON body will contain information for each method, such as
the method's name, a description, the HTTP methods supported, and whether
it's asynchronous or synchronous.

ironic-15.0.0/doc/source/contributor/notifications.rst

.. _develop-notifications:

============================
Developing New Notifications
============================

Ironic notifications are events intended for consumption by external
services.
Notifications are sent to these services over a message bus by
:oslo.messaging-doc:`oslo.messaging's Notifier class `. For more information
about configuring notifications and available notifications, see
:ref:`deploy-notifications`.

Ironic also has a set of base classes that assist in clearly defining the
notification itself, the payload, and the other fields not auto-generated by
oslo (level, event_type and publisher_id). Below describes how to use these
base classes to add a new notification to ironic.

Adding a new notification to ironic
===================================

To add a new notification to ironic, a new versioned notification class
should be created by subclassing the NotificationBase class to define the
notification itself and the NotificationPayloadBase class to define which
fields the new notification will contain inside its payload. You may also
define a schema to allow the payload to be automatically populated by the
fields of an ironic object. Here's an example::

    # The ironic object whose fields you want to use in your schema
    @base.IronicObjectRegistry.register
    class ExampleObject(base.IronicObject):
        # Version 1.0: Initial version
        VERSION = '1.0'
        fields = {
            'id': fields.IntegerField(),
            'uuid': fields.UUIDField(),
            'a_useful_field': fields.StringField(),
            'not_useful_field': fields.StringField()
        }

    # A class for your new notification
    @base.IronicObjectRegistry.register
    class ExampleNotification(notification.NotificationBase):
        # Version 1.0: Initial version
        VERSION = '1.0'
        fields = {
            'payload': fields.ObjectField('ExampleNotifPayload')
        }

    # A class for your notification's payload
    @base.IronicObjectRegistry.register
    class ExampleNotifPayload(notification.NotificationPayloadBase):
        # Schemas are optional. They just allow you to reuse other objects'
        # fields by passing in that object and calling populate_schema with
        # a kwarg set to the other object.
        SCHEMA = {
            'a_useful_field': ('example_obj', 'a_useful_field')
        }

        # Version 1.0: Initial version
        VERSION = '1.0'
        fields = {
            'a_useful_field': fields.StringField(),
            'an_extra_field': fields.StringField(nullable=True)
        }

Note that both the payload and notification classes are
:oslo.versionedobjects-doc:`oslo versioned objects <>`. Modifications to
these require a version bump so that consumers of notifications know when
the notifications have changed.

SCHEMA defines how to populate the payload fields. It's an optional
attribute that subclasses may use to easily populate notifications with data
from other objects.

It is a dictionary where every key value pair has the following format::

    <payload_field_name>: (<data_source_name>, <data_source_field_name>)

The ``<payload_field_name>`` is the name where the data will be stored in
the payload object; this field has to be defined as a field of the payload.
The ``<data_source_name>`` shall refer to the name of the parameter passed
as a kwarg to the payload's ``populate_schema()`` call, and this object will
be used as the source of the data. The ``<data_source_field_name>`` shall be
a valid field of the passed argument.

The SCHEMA needs to be applied with the ``populate_schema()`` call before
the notification can be emitted. The value of the
``payload.<payload_field_name>`` field will be set by the
``<data_source_name>.<data_source_field_name>`` field. The
``<data_source_name>`` will not be part of the payload object internal or
external representation.

Payload fields that are not set by the SCHEMA can be filled in the same way
as in any versioned object.

Then, to create a payload, you would do something like the following.
Note that if you choose to define a schema in the SCHEMA class variable, you
must populate the schema by calling
``populate_schema(example_obj=my_example_obj)`` before the notification can
be emitted::

    my_example_obj = ExampleObject(id=1,
                                   a_useful_field='important',
                                   not_useful_field='blah')

    # an_extra_field is optional since it's not a part of the SCHEMA and is
    # a nullable field in the class fields
    my_notify_payload = ExampleNotifPayload(an_extra_field='hello')

    # populate the schema with the ExampleObject fields
    my_notify_payload.populate_schema(example_obj=my_example_obj)

You then create the notification with the oslo required fields (event_type,
publisher_id, and level, all sender fields needed by oslo that are defined
in the ironic notification base classes) and emit it::

    notify = ExampleNotification(
        event_type=notification.EventType(
            object='example_obj',
            action='do_something',
            status=fields.NotificationStatus.START),
        publisher=notification.NotificationPublisher(
            service='ironic-conductor',
            host='hostname01'),
        level=fields.NotificationLevel.DEBUG,
        payload=my_notify_payload)
    notify.emit(context)

When specifying the event_type, ``object`` will specify the object being
acted on, ``action`` will be a string describing what action is being
performed on that object, and ``status`` will be one of "start", "end",
"error", or "success". "start" and "end" are used to indicate when actions
that are not immediate begin and succeed. "success" is used to indicate when
actions that are immediate succeed. "error" is used to indicate when any
type of action fails, regardless of whether it's immediate or not. As a
result of specifying these parameters, event_type will be formatted as
``baremetal.<object>.<action>.<status>`` on the message bus.
This example will send the following notification over the message bus::

    {
        "priority": "debug",
        "payload": {
            "ironic_object.namespace": "ironic",
            "ironic_object.name": "ExampleNotifPayload",
            "ironic_object.version": "1.0",
            "ironic_object.data": {
                "a_useful_field": "important",
                "an_extra_field": "hello"
            }
        },
        "event_type": "baremetal.example_obj.do_something.start",
        "publisher_id": "ironic-conductor.hostname01"
    }

ironic-15.0.0/doc/source/contributor/osprofiler-support.rst

.. _OSProfiler-support:

================
About OSProfiler
================

OSProfiler is an OpenStack cross-project profiling library. Its API provides
different ways to add a new trace point. Trace points contain two messages
(start and stop). Messages like below are sent to a collector::

    {
        "name": <point_name>-(start|stop),
        "base_id": <uuid>,
        "parent_id": <uuid>,
        "trace_id": <uuid>,
        "info": <dict>
    }

The fields are defined as follows:

``base_id`` - ``<uuid>`` that is the same for all trace points that belong
to one trace. This is used to simplify the process of retrieving all trace
points (related to one trace) from the collector.

``parent_id`` - ``<uuid>`` of the parent trace point.

``trace_id`` - ``<uuid>`` of the current trace point.

``info`` - the dictionary that contains user information passed when calling
profiler start() & stop() methods.

The profiler uses ceilometer as a centralized collector. Two other
alternatives to ceilometer are a pure MongoDB driver and Elasticsearch. A
notifier is set up to send notifications to ceilometer using oslo.messaging,
and the ceilometer API is used to retrieve all messages related to one
trace.

OSProfiler has an entry point that allows the user to retrieve information
about traces and present it in HTML/JSON using the CLI. For more details see
:osprofiler-doc:`OSProfiler – Cross-project profiling library `.
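The relationships between the message fields described above can be illustrated with a short, self-contained sketch. This is not OSProfiler's actual implementation; it only shows how every point in a trace shares a ``base_id`` while nesting is expressed through ``parent_id``.

```python
# Illustrative sketch (not OSProfiler's real code) of the start/stop message
# pairs described above: one base_id per trace, parent_id links nested
# trace points together.
import uuid


def make_point(base_id, parent_id, name, info=None):
    """Build the start/stop message pair for one trace point."""
    trace_id = str(uuid.uuid4())
    common = {"base_id": base_id, "parent_id": parent_id,
              "trace_id": trace_id, "info": info or {}}
    start = dict(common, name="%s-start" % name)
    stop = dict(common, name="%s-stop" % name)
    return start, stop


base_id = str(uuid.uuid4())                 # shared by the whole trace
# The top-level point of the trace; here its parent is the trace itself.
api_start, api_stop = make_point(base_id, base_id, "ironic-api")
# A nested point (e.g. a db call) uses the enclosing point as its parent.
db_start, db_stop = make_point(base_id, api_start["trace_id"], "db")

print(db_start["name"])   # db-start
```

Retrieving every message with the same ``base_id`` from the collector, then re-linking them via ``parent_id``, is what lets the ``osprofiler`` CLI reconstruct the full call tree for a trace.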
How to Use OSProfiler with Ironic in Devstack
=============================================

To use or test OSProfiler in ironic, the user needs to set up Devstack with
OSProfiler and ceilometer. In addition to the setup described at
:ref:`deploy_devstack`, the user needs to do the following:

Add the following to ``localrc`` to enable OSProfiler and ceilometer::

    enable_plugin panko https://opendev.org/openstack/panko
    enable_plugin ceilometer https://opendev.org/openstack/ceilometer
    enable_plugin osprofiler https://opendev.org/openstack/osprofiler

    # Enable the following services
    CEILOMETER_NOTIFICATION_TOPICS=notifications,profiler
    ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral
    ENABLED_SERVICES+=,ceilometer-anotification,ceilometer-collector
    ENABLED_SERVICES+=,ceilometer-alarm-evaluator,ceilometer-alarm-notifier
    ENABLED_SERVICES+=,ceilometer-api

Run stack.sh.

Once the Devstack environment is set up, edit ``ironic.conf`` to set the
following profiler options and restart the ironic services::

    [profiler]
    enabled = True
    hmac_keys = SECRET_KEY # default value used across several OpenStack projects
    trace_sqlalchemy = True

In order to trace ironic using OSProfiler, use openstackclient to run
baremetal commands with ``--os-profile SECRET_KEY``. For example, the
following will cause a ``<trace-id>`` to be printed after node list::

    $ openstack --os-profile SECRET_KEY baremetal node list

Output of the above command will include the following::

    Trace ID: <trace-id>
    Display trace with command:
    osprofiler trace show --html <trace-id>

The trace results can be seen using this command::

    $ osprofiler trace show --html <trace-id>

The trace results can be saved in a file with the ``--out file-name``
option::

    $ osprofiler trace show --html <trace-id> --out trace.html

The trace results show the time spent in ironic-api, ironic-conductor, and
db calls. More detailed db tracing is enabled if ``trace_sqlalchemy`` is set
to true.

Sample Trace:

..
.. figure:: ../images/sample_trace.svg
   :width: 660px
   :align: left
   :alt: Sample Trace

Each trace has embedded trace point details as shown below:

.. figure:: ../images/sample_trace_details.svg
   :width: 660px
   :align: left
   :alt: Sample Trace Details

References
==========

- :osprofiler-doc:`OSProfiler – Cross-project profiling library `
- :ref:`deploy_devstack`

ironic-15.0.0/doc/source/contributor/adding-new-job.rst

.. _adding-new-job:

================
Adding a new Job
================

Are you familiar with Zuul?
===========================

Before you start trying to figure out how Zuul works, take some time and
read about `Zuul Config
<https://zuul-ci.org/docs/zuul/user/config.html>`_ and the `Zuul Best
Practices
<https://docs.openstack.org/infra/manual/creators.html#zuul-best-practices>`_.

Where can I find the existing jobs?
===================================

The jobs for the Ironic project are defined under the zuul.d_ folder in the
root directory, which contains three files, whose function is described
below.

* ironic-jobs.yaml_: Contains the configuration of each Ironic Job converted
  to Zuul v3.

* legacy-ironic-jobs.yaml_: Contains the configuration of each Ironic Job
  that hasn't been converted to Zuul v3 yet.

* project.yaml_: Contains the jobs that will run during the check and gate
  phases.

.. _zuul.d: https://opendev.org/openstack/ironic/src/branch/master/zuul.d
.. _ironic-jobs.yaml: https://opendev.org/openstack/ironic/src/branch/master/zuul.d/ironic-jobs.yaml
.. _legacy-ironic-jobs.yaml: https://opendev.org/openstack/ironic/src/branch/master/zuul.d/legacy-ironic-jobs.yaml
..
.. _project.yaml: https://opendev.org/openstack/ironic/src/branch/master/zuul.d/project.yaml

Create a new Job
================

Identify among the existing jobs the one that most closely resembles the
scenario you want to test; the existing job will be used as `parent` in your
job definition. Now you will only need to either overwrite or add variables
to your job definition under the `vars` section to represent the desired
scenario.

The code block below shows the minimal structure of a new job definition
that you need to add to ironic-jobs.yaml_.

.. code-block:: yaml

    - job:
        name: <name of the new job>
        description: <what the new job does>
        parent: <name of the parent job>
        vars:
          <variable>: <new value>

After having the definition of your new job you just need to add the job
name to the project.yaml_ under `check` and `gate`. Only jobs that are
voting should be in the `gate` section.

.. code-block:: yaml

    - project:
        check:
          jobs:
            - <name of the new job>
        gate:
          queue: ironic
          jobs:
            - <name of the new job>

ironic-15.0.0/doc/source/contributor/states.rst

.. _states:

======================
Ironic's State Machine
======================

State Machine Diagram
=====================

The diagram below shows the provisioning states that an Ironic node goes
through during the lifetime of a node. The diagram also depicts the events
that transition the node to different states.

Stable states are highlighted with a thicker border. All transitions from
stable states are initiated by API requests. There are a few other
API-initiated transitions that are possible from non-stable states. The
events for these API-initiated transitions are indicated with '(via API)'.
Internally, the conductor initiates the other transitions (depicted in
gray).

.. figure:: ../images/states.svg
   :width: 660px
   :align: left
   :alt: Ironic state transitions

State Descriptions
==================

enroll (stable state)
  This is the state that all nodes start off in when created using API
  version 1.11 or newer.
  When a node is in the ``enroll`` state, the only thing ironic knows about
  it is that it exists, and ironic cannot take any further action by itself.
  Once a node has its driver/interfaces and their required information set
  in ``node.driver_info``, the node can be transitioned to the ``verifying``
  state by setting the node's provision state using the ``manage`` verb.

verifying
  ironic will validate that it can manage the node using the information
  given in ``node.driver_info`` and with the driver/hardware type and
  interfaces it has been assigned. This involves going out and confirming
  that the credentials work to access whatever node control mechanism they
  talk to.

manageable (stable state)
  Once ironic has verified that it can manage the node using the
  driver/interfaces and credentials passed in at node create time, the node
  will be transitioned to the ``manageable`` state.

  From ``manageable``, nodes can transition to:

  * ``manageable`` (through ``cleaning``) by setting the node's provision
    state using the ``clean`` verb.
  * ``manageable`` (through ``inspecting``) by setting the node's provision
    state using the ``inspect`` verb.
  * ``available`` (through ``cleaning`` if automatic cleaning is enabled) by
    setting the node's provision state using the ``provide`` verb.
  * ``active`` (through ``adopting``) by setting the node's provision state
    using the ``adopt`` verb.

  ``manageable`` is the state that a node should be moved into when any
  updates need to be made to it, such as changes to fields in driver_info
  and updates to networking information on ironic ports assigned to the
  node.

  ``manageable`` is also the only stable state that can be transitioned to
  from these failure states:

  * ``adopt failed``
  * ``clean failed``
  * ``inspect failed``

inspecting
  ``inspecting`` will utilize node introspection to update hardware-derived
  node properties to reflect the current state of the hardware. Typically,
  the node will transition to ``manageable`` if inspection is synchronous,
  or ``inspect wait`` if asynchronous. The node will transition to
  ``inspect failed`` if an error occurs.

inspect wait
  This is the provision state used when an asynchronous inspection is in
  progress. A successfully inspected node shall transition to the
  ``manageable`` state.

inspect failed
  This is the state a node will move into when inspection of the node
  fails. From here the node can be transitioned to:

  * ``inspecting`` by setting the node's provision state using the
    ``inspect`` verb.
  * ``manageable`` by setting the node's provision state using the
    ``manage`` verb.

cleaning
  Nodes in the ``cleaning`` state are being scrubbed and reprogrammed into a
  known configuration. When a node is in the ``cleaning`` state, it means
  that the conductor is executing the clean step (for out-of-band clean
  steps) or preparing the environment (building PXE configuration files,
  configuring the DHCP, etc.) to boot the ramdisk for running in-band clean
  steps.

clean wait
  Just like the ``cleaning`` state, the nodes in the ``clean wait`` state
  are being scrubbed and reprogrammed. The difference is that in the
  ``clean wait`` state the conductor is waiting for the ramdisk to boot or
  the clean step which is running in-band to finish.

  The cleaning process of a node in the ``clean wait`` state can be
  interrupted by setting the node's provision state using the ``abort``
  verb, if the task that is running allows it.

available (stable state)
  After nodes have been successfully preconfigured and cleaned, they are
  moved into the ``available`` state and are ready to be provisioned.

  From ``available``, nodes can transition to:

  * ``active`` (through ``deploying``) by setting the node's provision state
    using the ``active`` verb.
  * ``manageable`` by setting the node's provision state using the
    ``manage`` verb.

deploying
  Nodes in ``deploying`` are being prepared to run a workload on them. This
  consists of running a series of tasks, such as:

  * Setting appropriate BIOS configurations
  * Partitioning drives and laying down file systems.
  * Creating any additional resources (node-specific network config, a
    config drive partition, etc.) that may be required by additional
    subsystems.

wait call-back
  Just like the ``deploying`` state, the nodes in ``wait call-back`` are
  being deployed. The difference is that in ``wait call-back`` the conductor
  is waiting for the ramdisk to boot or execute parts of the deployment
  which need to run in-band on the node (for example, installing the
  bootloader, or writing the image to the disk).

  The deployment of a node in ``wait call-back`` can be interrupted by
  setting the node's provision state using the ``deleted`` verb.

deploy failed
  This is the state a node will move into when a deployment fails, for
  example a timeout waiting for the ramdisk to PXE boot. From here the node
  can be transitioned to:

  * ``active`` (through ``deploying``) by setting the node's provision state
    using either the ``active`` or ``rebuild`` verbs.
  * ``available`` (through ``deleting`` and ``cleaning``) by setting the
    node's provision state using the ``deleted`` verb.

active (stable state)
  Nodes in ``active`` have a workload running on them. ironic may collect
  out-of-band sensor information (including power state) on a regular basis.

  Nodes in ``active`` can transition to:

  * ``available`` (through ``deleting`` and ``cleaning``) by setting the
    node's provision state using the ``deleted`` verb.
  * ``active`` (through ``deploying``) by setting the node's provision state
    using the ``rebuild`` verb.
  * ``rescue`` (through ``rescuing``) by setting the node's provision state
    using the ``rescue`` verb.

deleting
  Nodes in the ``deleting`` state are being torn down from running an active
  workload. In ``deleting``, ironic tears down and removes any configuration
  and resources it added in ``deploying`` or ``rescuing``.

error (stable state)
  This is the state a node will move into when deleting an active deployment
  fails. From ``error``, nodes can transition to:

  * ``available`` (through ``deleting`` and ``cleaning``) by setting the
    node's provision state using the ``deleted`` verb.

adopting
  This state allows ironic to take over management of a baremetal node with
  an existing workload on it. Ordinarily, when a baremetal node is enrolled
  and managed by ironic, it must transition through ``cleaning`` and
  ``deploying`` to reach ``active`` state. However, baremetal nodes that
  have an existing workload on them do not need to be deployed or cleaned
  again, so this transition allows these nodes to move directly from
  ``manageable`` to ``active``.

rescuing
  Nodes in ``rescuing`` are being prepared to perform rescue operations.
  This consists of running a series of tasks, such as:

  * Setting appropriate BIOS configurations.
  * Creating any additional resources (node-specific network config, etc.)
    that may be required by additional subsystems.

rescue wait
  Just like the ``rescuing`` state, the nodes in ``rescue wait`` are being
  rescued. The difference is that in ``rescue wait`` the conductor is
  waiting for the ramdisk to boot or execute parts of the rescue which need
  to run in-band on the node (for example, setting the password for the user
  named ``rescue``).

  The rescue operation of a node in ``rescue wait`` can be aborted by
  setting the node's provision state using the ``abort`` verb.

rescue failed
  This is the state a node will move into when a rescue operation fails, for
  example a timeout waiting for the ramdisk to PXE boot. From here the node
  can be transitioned to:

  * ``rescue`` (through ``rescuing``) by setting the node's provision state
    using the ``rescue`` verb.
  * ``active`` (through ``unrescuing``) by setting the node's provision
    state using the ``unrescue`` verb.
  * ``available`` (through ``deleting``) by setting the node's provision
    state using the ``deleted`` verb.

rescue (stable state)
  Nodes in ``rescue`` have a rescue ramdisk running on them. Ironic may
  collect out-of-band sensor information (including power state) on a
  regular basis.

  Nodes in ``rescue`` can transition to:

  * ``active`` (through ``unrescuing``) by setting the node's provision
    state using the ``unrescue`` verb.
  * ``available`` (through ``deleting``) by setting the node's provision
    state using the ``deleted`` verb.

unrescuing
  Nodes in ``unrescuing`` are being prepared to transition to the ``active``
  state from the ``rescue`` state. This consists of running a series of
  tasks, such as setting appropriate BIOS configurations, for example
  changing the boot device.

unrescue failed
  This is the state a node will move into when an unrescue operation fails.
  From here the node can be transitioned to:

  * ``rescue`` (through ``rescuing``) by setting the node's provision state
    using the ``rescue`` verb.
  * ``active`` (through ``unrescuing``) by setting the node's provision
    state using the ``unrescue`` verb.
  * ``available`` (through ``deleting``) by setting the node's provision
    state using the ``deleted`` verb.

ironic-15.0.0/doc/source/contributor/rolling-upgrades.rst

.. _rolling-upgrades-dev:

================
Rolling Upgrades
================

The ironic (ironic-api and ironic-conductor) services support rolling
upgrades, starting with a rolling upgrade from the Ocata to the Pike
release. This document describes the design of rolling upgrades, followed by
notes for developing new features or modifying an IronicObject.

Design
======

Rolling upgrades between releases
---------------------------------

Ironic follows the release-cycle-with-intermediary release model. The
releases are semantic-versioned, in the form ``<major>.<minor>.<patch>``. We
refer to a ``named release`` of ironic as the release associated with a
development cycle like Pike.
In addition, ironic follows the `standard deprecation policy `_, which says that the deprecation period must be at least three months and a cycle boundary. This means that there will never be anything that is both deprecated *and* removed between two named releases. Rolling upgrades will be supported between: * named release N to N+1 (starting with N == Ocata) * any named release to its latest revision, containing backported bug fixes. Because those bug fixes can contain improvements to the upgrade process, the operator should patch the system before upgrading between named releases. * most recent named release N (and semver releases newer than N) to master. As with the above bullet point, there may be a bug or a feature introduced on a master branch, that we want to remove before publishing a named release. Deprecation policy allows to do this in a 3 month time frame. If the feature was included and removed in intermediate releases, there should be a release note added, with instructions on how to do a rolling upgrade to master from an affected release or release span. This would typically instruct the operator to upgrade to a particular intermediate release, before upgrading to master. Rolling upgrade process ----------------------- Ironic supports rolling upgrades as described in the :doc:`upgrade guide <../admin/upgrade-guide>`. The upgrade process will cause the ironic services to be running the ``FromVer`` and ``ToVer`` releases in this order: 0. Upgrade ironic code and run database schema migrations via the ``ironic-dbsync upgrade`` command. 1. Upgrade code and restart ironic-conductor services, one at a time. 2. Upgrade code and restart ironic-api services, one at a time. 3. Unpin API, RPC and object versions so that the services can now use the latest versions in ``ToVer``. This is done via updating the configuration option described below in `API, RPC and object version pinning`_ and then restarting the services. 
ironic-conductor services should be restarted first, followed by the ironic-api services. This is to ensure that when new functionality is exposed on the unpinned API service (via API micro version), it is available on the backend. +------+---------------------------------+---------------------------------+ | step | ironic-api | ironic-conductor | +======+=================================+=================================+ | 0 | all FromVer | all FromVer | +------+---------------------------------+---------------------------------+ | 1.1 | all FromVer | some FromVer, some ToVer-pinned | +------+---------------------------------+---------------------------------+ | 1.2 | all FromVer | all ToVer-pinned | +------+---------------------------------+---------------------------------+ | 2.1 | some FromVer, some ToVer-pinned | all ToVer-pinned | +------+---------------------------------+---------------------------------+ | 2.2 | all ToVer-pinned | all ToVer-pinned | +------+---------------------------------+---------------------------------+ | 3.1 | all ToVer-pinned | some ToVer-pinned, some ToVer | +------+---------------------------------+---------------------------------+ | 3.2 | all ToVer-pinned | all ToVer | +------+---------------------------------+---------------------------------+ | 3.3 | some ToVer-pinned, some ToVer | all ToVer | +------+---------------------------------+---------------------------------+ | 3.4 | all ToVer | all ToVer | +------+---------------------------------+---------------------------------+ Policy for changes to the DB model ---------------------------------- The policy for changes to the DB model is as follows: * Adding new items to the DB model is supported. * The dropping of columns or tables and corresponding objects' fields is subject to ironic's `deprecation policy `_. But its alembic script has to wait one more deprecation period, otherwise an ``unknown column`` exception will be thrown when ``FromVer`` services access the DB. 
This is because :command:`ironic-dbsync upgrade` upgrades the DB schema but ``FromVer`` services still contain the dropped field in their SQLAlchemy DB model. * An ``alembic.op.alter_column()`` to rename or resize a column is not allowed. Instead, split it into multiple operations, with one operation per release cycle (to maintain compatibility with an old SQLAlchemy model). For example, to rename a column, add the new column in release N, then remove the old column in release N+1. * Some implementations of SQL's ``ALTER TABLE``, such as adding foreign keys in PostgreSQL, may impose table locks and cause downtime. If the change cannot be avoided and the impact is significant (e.g. the table can be frequently accessed and/or store a large dataset), these cases must be mentioned in the release notes. API, RPC and object version pinning ----------------------------------- For the ironic services to be running old and new releases at the same time during a rolling upgrade, the services need to be able to handle different API, RPC and object versions. This versioning is handled via the configuration option: ``[DEFAULT]/pin_release_version``. It is used to pin the API, RPC and IronicObject (e.g., Node, Conductor, Chassis, Port, and Portgroup) versions for all the ironic services. The default value of empty indicates that ironic-api and ironic-conductor will use the latest versions of API, RPC and IronicObjects. Its possible values are releases, named (e.g. ``ocata``) or sem-versioned (e.g. ``7.0``). Internally, in `common/release_mappings.py `_, ironic maintains a mapping that indicates the API, RPC and IronicObject versions associated with each release. This mapping is maintained manually. During a rolling upgrade, the services using the new release will set the configuration option value to be the name (or version) of the old release. 
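During the rolling upgrade, setting the pin in ``ironic.conf`` looks roughly like this (the release name shown is illustrative; use the name or semver of the old release actually being upgraded from):

```ini
[DEFAULT]
# Pin API, RPC and IronicObject versions to the old release while old and
# new services coexist. Clear this value (the default is empty) and restart
# the services once everything runs the new release.
pin_release_version = ocata
```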
This indicates to the services running the new release which API, RPC and object versions they should be compatible with, in order to communicate with the services using the old release.

Handling API versions
---------------------

When the (newer) service is pinned, the maximum API version it supports will
be the pinned version -- which the older service supports (as described above
at `API, RPC and object version pinning`_). The ironic-api service returns
HTTP status code 406 for any request with an API version higher than this
maximum version.

Handling RPC versions
---------------------

`ConductorAPI.__init__() `_ sets the ``version_cap`` variable to the desired
(latest or pinned) RPC API version and passes it to the ``RPCClient`` as an
initialization parameter. This variable is then used to determine the maximum
requested message version that the ``RPCClient`` can send. Each RPC call can
customize the request according to this ``version_cap``. The `Ironic RPC
versions`_ section below has more details about this.

Handling IronicObject versions
------------------------------

Internally, ironic services deal with IronicObjects in their latest versions.
Only at these boundaries, when the IronicObject enters or leaves the service,
do we deal with object versioning:

* getting objects from the database: convert to the latest version
* saving objects to the database: if pinned, save in the pinned version;
  else save in the latest version
* serializing objects (to send over RPC): if pinned, send the pinned
  version; else send the latest version
* deserializing objects (receiving objects from RPC): convert to the latest
  version

The ironic-api service also has to handle API requests/responses based on
whether or how a feature is supported by the API version and object versions.
For example, when the ironic-api service is pinned, it can only allow actions that are available to the object's pinned version, and cannot allow actions that are only available for the latest version of that object. To support this: * All the database tables (SQLAlchemy models) of the IronicObjects have a column named ``version``. The value is the version of the object that is saved in the database. * The method ``IronicObject.get_target_version()`` returns the target version. If pinned, the pinned version is returned. Otherwise, the latest version is returned. * The method ``IronicObject.convert_to_version()`` converts the object into the target version. The target version may be a newer or older version than the existing version of the object. The bulk of the work is done in the helper method ``IronicObject._convert_to_version()``. Subclasses that have new versions redefine this to perform the actual conversions. In the following, * The old release is ``FromVer``; it uses version 1.14 of a Node object. * The new release is ``ToVer``. It uses version 1.15 of a Node object -- this has a deprecated ``extra`` field and a new ``meta`` field that replaces ``extra``. * db_obj['meta'] and db_obj['extra'] are the database representations of those node fields. Getting objects from the database (API/conductor <-- DB) :::::::::::::::::::::::::::::::::::::::::::::::::::::::: Both ironic-api and ironic-conductor services read values from the database. These values are converted to IronicObjects via the method ``IronicObject._from_db_object()``. This method always returns the IronicObject in its latest version, even if it was in an older version in the database. This is done regardless of the service being pinned or not. Note that if an object is converted to a later version, that IronicObject will retain any changes (in its ``_changed_fields`` field) resulting from that conversion. This is needed in case the object gets saved later, in the latest version. 
For example, if the node in the database is in version 1.14 and has db_obj['extra'] set:

* a ``FromVer`` service will get a Node with node.extra = db_obj['extra']
  (and no knowledge of node.meta, since it doesn't exist)

* a ``ToVer`` service (pinned or unpinned) will get a Node with:

  * node.meta = db_obj['extra']
  * node.extra = None
  * node._changed_fields = ['meta', 'extra']

Saving objects to the database (API/conductor --> DB)
:::::::::::::::::::::::::::::::::::::::::::::::::::::

The version used for saving IronicObjects to the database is determined as
follows:

* For an unpinned service, the object is saved in its latest version. Since
  objects are always in their latest version, no conversions are needed.

* For a pinned service, the object is saved in its pinned version. Since
  objects are always in their latest version, the object needs to be
  converted to the pinned version before being saved.

The method ``IronicObject.do_version_changes_for_db()`` handles this logic,
returning a dictionary of changed fields and their new values (similar to the
existing ``oslo.versionedobjects.VersionedObject.obj_get_changes()``). Since
we do not internally keep track of the database version of an object, the
object's ``version`` field will always be part of these changes.

The `Rolling upgrade process`_ (at step 3.1) ensures that by the time an
object can be saved in its latest version, all services are running the newer
release (although some may still be pinned) and can handle the latest object
versions.

An interesting situation can occur when the services are as described in step
3.1. It is possible for an IronicObject to be saved in a newer version and
subsequently get saved in an older version. For example, a ``ToVer`` unpinned
conductor might save a node in version 1.15. A subsequent request may cause a
``ToVer`` pinned conductor to replace and save the same node in version 1.14!
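The ``extra``/``meta`` conversion used in this running example can be sketched in isolation. This is a stand-in, not the real ``IronicObject`` machinery; the ``FakeNode`` class, its fields and the 1.14/1.15 version numbers all come from the hypothetical example in this section:

```python
# Illustrative stand-in for the hypothetical Node 1.14 <-> 1.15 conversion
# described in this section. In ironic itself this logic lives in a
# subclass override of IronicObject._convert_to_version().

class FakeNode:
    VERSION = '1.15'  # latest version known to this (ToVer) service

    def __init__(self, extra=None, meta=None, version='1.15'):
        self.extra = extra
        self.meta = meta
        self.version = version
        self._changed_fields = set()

    def _convert_to_version(self, target_version,
                            remove_unavailable_fields=True):
        """Convert between 1.14 (has 'extra') and 1.15 (has 'meta')."""
        if target_version == '1.15' and self.meta is None and self.extra:
            # Upgrading: move the value, and record both fields as changed
            # so that a later save in the latest version persists them.
            self.meta = self.extra
            self.extra = None
            self._changed_fields |= {'meta', 'extra'}
        elif target_version == '1.14' and self.meta is not None:
            # Downgrading for a pinned service: move the value back.
            self.extra = self.meta
            self.meta = None
            self._changed_fields |= {'meta', 'extra'}
        self.version = target_version


# A node read from the database in the old version, converted to latest:
node = FakeNode(extra={'foo': 'bar'}, version='1.14')
node._convert_to_version('1.15')
print(node.meta, node.extra, sorted(node._changed_fields))
```

Note how the conversion leaves its footprint in ``_changed_fields``, matching the behaviour of ``_from_db_object()`` described above.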
Sending objects via RPC (API/conductor -> RPC)
::::::::::::::::::::::::::::::::::::::::::::::

When a service makes an RPC request, any IronicObjects that are sent as part
of that request are serialized into entities or primitives via
``IronicObjectSerializer.serialize_entity()``. The version used for objects
being serialized is as follows:

* For an unpinned service, the object is serialized to its latest version.
  Since objects are always in their latest version, no conversions are
  needed.

* For a pinned service, the object is serialized to its pinned version.
  Since objects are always in their latest version, the object is converted
  to the pinned version before being serialized. The converted object
  includes changes that resulted from the conversion; this is needed so that
  the service at the other end of the RPC request has the necessary
  information if that object will be saved to the database.

Receiving objects via RPC (API/conductor <- RPC)
::::::::::::::::::::::::::::::::::::::::::::::::

When a service receives an RPC request, any entities that are part of the
request need to be deserialized (via
``oslo.versionedobjects.VersionedObjectSerializer.deserialize_entity()``).
For entities that represent IronicObjects, we want the deserialization
process (via ``IronicObjectSerializer._process_object()``) to result in
IronicObjects that are in their latest version, regardless of the version
they were sent in and regardless of whether the receiving service is pinned
or not. Again, any objects that are converted will retain the changes that
resulted from the conversion, which is useful if that object is later saved
to the database.

For example, a ``FromVer`` ironic-api could issue an ``update_node()`` RPC
request with a node in version 1.14, where node.extra was changed (so
node._changed_fields = ['extra']). This node will be serialized in version
1.14. The receiving ``ToVer`` pinned ironic-conductor deserializes it and
converts it to version 1.15.
The resulting node will have node.meta set (to the changed value from node.extra in v1.14), node.extra = None, and node._changed_fields = ['meta', 'extra'].

When developing a new feature or modifying an IronicObject
==========================================================

When adding a new feature or changing an IronicObject, the code needs to be
written so that things work during a rolling upgrade. The following sections
describe areas where the code may need to be changed, as well as some points
to keep in mind when developing code.

ironic-api
----------

During a rolling upgrade, the new, pinned ironic-api is talking to a new
conductor that might also be pinned. There may also be old ironic-api
services. So the new, pinned ironic-api service needs to act as if it were
the older service:

* New features should not be made available, unless they are somehow fully
  supported in both the old and new releases. Pinning the API version is in
  place to handle this.

* If, for whatever reason, the API version pinning doesn't prevent a request
  from being handled that cannot or should not be handled, it should be coded
  so that the response has HTTP status code 406 (Not Acceptable). This is the
  same response sent for requests that have an incorrect (old) version
  specified.

Ironic RPC versions
-------------------

When the signature (arguments) of an RPC method is changed or new methods are
added, the following needs to be considered:

- The RPC version must be incremented and be the same value for both the
  client (``ironic/conductor/rpcapi.py``, used by ironic-api) and the server
  (``ironic/conductor/manager.py``, used by ironic-conductor). It should also
  be updated in ``ironic/common/release_mappings.py``.

- Until there is a major version bump, new arguments of an RPC method can
  only be added as optional. Existing arguments cannot be removed or changed
  in incompatible ways with the method in older RPC versions.
- ironic-api (client-side) sets a version cap (by passing the version cap to the constructor of oslo_messaging.RPCClient). This "pinning" is in place during a rolling upgrade when the ``[DEFAULT]/pin_release_version`` configuration option is set. - New RPC methods are not available when the service is pinned to the older release version. In this case, the corresponding REST API function should return a server error or implement alternative behaviours. - Methods which change arguments should run ``client.can_send_version()`` to see if the version of the request is compatible with the version cap of the RPC Client. Otherwise the request needs to be created to work with a previous version that is supported. - ironic-conductor (server-side) should tolerate older versions of requests in order to keep working during the rolling upgrade process. The behaviour of ironic-conductor will depend on the input parameters passed from the client-side. - Old methods can be removed only after they are no longer used by a previous named release. Object versions --------------- When subclasses of ``ironic.objects.base.IronicObject`` are modified, the following needs to be considered: - Any change of fields or change in signature of remotable methods needs a bump of the object version. The object versions are also maintained in ``ironic/common/release_mappings.py``. - New objects must be added to ``ironic/common/release_mappings.py``. Also for the first releases they should be excluded from the version check by adding their class names to the ``NEW_MODELS`` list in ``ironic/cmd/dbsync.py``. - The arguments of remotable methods (methods which are remoted to the conductor via RPC) can only be added as optional. They cannot be removed or changed in an incompatible way (to the previous release). - Field types cannot be changed. Instead, create a new field and deprecate the old one. 
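The ``can_send_version()`` fallback pattern above can be sketched as follows. ``StubRPCClient`` is a minimal stand-in for ``oslo_messaging.RPCClient`` (whose real ``can_send_version()`` compares against the cap set at construction); the ``do_thing`` method name, its ``new_arg`` parameter and the version numbers are made up for illustration:

```python
# Sketch of the client-side pattern: try the newest method signature,
# fall back to the older one when the version cap (pin) forbids it.

class StubRPCClient:
    """Stand-in for oslo_messaging.RPCClient, version-cap check only."""

    def __init__(self, version_cap):
        self.version_cap = version_cap

    def can_send_version(self, version):
        def vtuple(v):
            return tuple(int(p) for p in v.split('.'))
        return vtuple(version) <= vtuple(self.version_cap)


def call_do_thing(client, node_id, new_arg=None):
    # Returns (method, kwargs) instead of making a real RPC call.
    if client.can_send_version('1.49'):
        # New signature, available when unpinned (or pinned >= 1.49).
        return ('do_thing', {'node_id': node_id, 'new_arg': new_arg})
    # Pinned to an older release: build the pre-1.49 request instead.
    return ('do_thing', {'node_id': node_id})


pinned = StubRPCClient(version_cap='1.48')    # e.g. pinned to FromVer
unpinned = StubRPCClient(version_cap='1.49')  # latest RPC version
print(call_do_thing(pinned, 'node-1', new_arg=42))
print(call_do_thing(unpinned, 'node-1', new_arg=42))
```

The server side then tolerates both request shapes, as the last bullet points describe.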
- There is a `unit test `_ that generates the hash of an object using its fields and the signatures of its remotable methods. Objects that have a version bump need to be updated in the `expected_object_fingerprints `_ dictionary; otherwise this test will fail. A failed test can also indicate to the developer that their change(s) to an object require a version bump. - When new version objects communicate with old version objects and when reading or writing to the database, ``ironic.objects.base.IronicObject._convert_to_version()`` will be called to convert objects to the target version. Objects should implement their own ._convert_to_version() to remove or alter fields which were added or changed after the target version:: def _convert_to_version(self, target_version, remove_unavailable_fields=True): """Convert to the target version. Subclasses should redefine this method, to do the conversion of the object to the target version. Convert the object to the target version. The target version may be the same, older, or newer than the version of the object. This is used for DB interactions as well as for serialization/deserialization. The remove_unavailable_fields flag is used to distinguish these two cases: 1) For serialization/deserialization, we need to remove the unavailable fields, because the service receiving the object may not know about these fields. remove_unavailable_fields is set to True in this case. 2) For DB interactions, we need to set the unavailable fields to their appropriate values so that these fields are saved in the DB. (If they are not set, the VersionedObject magic will not know to save/update them to the DB.) remove_unavailable_fields is set to False in this case. :param target_version: the desired version of the object :param remove_unavailable_fields: True to remove fields that are unavailable in the target version; set this to True when (de)serializing. 
False to set the unavailable fields to appropriate values; set this to False for DB interactions. This method must handle: * converting from an older version to a newer version * converting from a newer version to an older version * making sure, when converting, that you take into consideration other object fields that may have been affected by a field (value) only available in a newer version. For example, if field 'new' is only available in Node version 1.5 and Node.affected = Node.new+3, when converting to 1.4 (an older version), you may need to change the value of Node.affected too. Online data migrations ---------------------- The ``ironic-dbsync online_data_migrations`` command will perform online data migrations. Keep in mind the `Policy for changes to the DB model`_. Future incompatible changes in SQLAlchemy models, like removing or renaming columns and tables can break rolling upgrades (when ironic services are run with different release versions simultaneously). It is forbidden to remove these database resources when they may still be used by the previous named release. When `creating new Alembic migrations `_ which modify existing models, make sure that any new columns default to NULL. Test the migration out on a non-empty database to make sure that any new constraints don't cause the database to be locked out for normal operations. You can find an overview on what DDL operations may cause downtime in https://dev.mysql.com/doc/refman/5.7/en/innodb-create-index-overview.html. (You should also check older, widely deployed InnoDB versions for issues.) In the case of PostgreSQL, adding a foreign key may lock a whole table for writes. Make sure to add a release note if there are any downtime-related concerns. Backfilling default values, and migrating data between columns or between tables must be implemented inside an online migration script. 
A script is a database API method (added to ``ironic/db/api.py`` and ``ironic/db/sqlalchemy/api.py``) which takes two arguments: - context: an admin context - max_count: this is used to limit the query. It is the maximum number of objects to migrate; >= 0. If zero, all the objects will be migrated. It returns a two-tuple: - the total number of objects that need to be migrated, at the start of the method, and - the number of migrated objects. In this method, the version column can be used to select and update old objects. The method name should be added to the list of ``ONLINE_MIGRATIONS`` in ``ironic/cmd/dbsync.py``. The method should be removed in the next named release after this one. After online data migrations are completed and the SQLAlchemy models no longer contain old fields, old columns can be removed from the database. This takes at least 3 releases, since we have to wait until the previous named release no longer contains references to the old schema. Before removing any resources from the database by modifying the schema, make sure that your implementation checks that all objects in the affected tables have been migrated. This check can be implemented using the version column. "ironic-dbsync upgrade" command ------------------------------- The ``ironic-dbsync upgrade`` command first checks that the versions of the objects are compatible with the (new) release of ironic, before it will make any DB schema changes. If one or more objects are not compatible, the upgrade will not be performed. This check is done by comparing the objects' ``version`` field in the database with the expected (or supported) versions of these objects. The supported versions are the versions specified in ``ironic.common.release_mappings.RELEASE_MAPPING``. The newly created tables cannot pass this check and thus have to be excluded by adding their object class names (e.g. ``Node``) to ``ironic.cmd.dbsync.NEW_MODELS``. 
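The online data migration contract described above (a database API method taking a context and ``max_count``, returning a two-tuple, and using the ``version`` column to select old objects) can be sketched like this. The ``ROWS`` list stands in for a real table, and the field names reuse the hypothetical ``extra``/``meta`` example from earlier; nothing here is the real ironic DB API:

```python
# Sketch of an online data migration method: (context, max_count) in,
# (total needing migration, number migrated) out. ROWS stands in for a
# database table; a real implementation would query via SQLAlchemy.

ROWS = [
    {'uuid': 'a', 'version': '1.14', 'extra': {'x': 1}, 'meta': None},
    {'uuid': 'b', 'version': '1.15', 'extra': None, 'meta': {'y': 2}},
    {'uuid': 'c', 'version': '1.14', 'extra': {'z': 3}, 'meta': None},
]


def migrate_extra_to_meta(context, max_count):
    """Backfill 'meta' from 'extra' for rows still in the old version.

    :param context: an admin context (unused in this sketch)
    :param max_count: maximum number of objects to migrate; >= 0.
        If zero, all the objects are migrated.
    :returns: (total objects needing migration at the start, number migrated)
    """
    # Use the version column to select old objects.
    old = [r for r in ROWS if r['version'] == '1.14']
    total = len(old)
    limit = total if max_count == 0 else min(max_count, total)
    for row in old[:limit]:
        row['meta'] = row['extra']
        row['extra'] = None
        row['version'] = '1.15'
    return total, limit


print(migrate_extra_to_meta(None, max_count=1))  # migrates one row
print(migrate_extra_to_meta(None, max_count=0))  # migrates the rest
```

Running the method repeatedly until it reports zero remaining mirrors how ``ironic-dbsync online_data_migrations`` drives the entries in ``ONLINE_MIGRATIONS``.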
ironic-15.0.0/doc/source/contributor/architecture.rst

.. _architecture:

===================
System Architecture
===================

High Level description
======================

An Ironic deployment will be composed of the following components:

- An admin-only RESTful `API service`_, by which privileged users, such as
  cloud operators and other services within the cloud control plane, may
  interact with the managed bare metal servers.

- A `Conductor service`_, which does the bulk of the work. Functionality is
  exposed via the `API service`_. The Conductor and API services communicate
  via RPC.

- A Database and `DB API`_ for storing the state of the Conductor and
  Drivers.

- A Deployment Ramdisk or Deployment Agent, which provides control over the
  hardware that is not available remotely to the Conductor. A ramdisk should
  be built which contains one of these agents, e.g. with
  `diskimage-builder`_. This ramdisk can be booted on-demand.

  .. note:: The agent is never run inside a tenant instance.

.. _`architecture_drivers`:

Drivers
=======

The internal driver API provides a consistent interface between the Conductor
service and the driver implementations. A driver is defined by a *hardware
type* deriving from the AbstractHardwareType_ class, defining supported
*hardware interfaces*. See :doc:`/install/enabling-drivers` for a more
detailed explanation. See :doc:`drivers` for an explanation on how to write
new hardware types and interfaces.

Driver-Specific Periodic Tasks
------------------------------

Drivers may run their own periodic tasks, i.e. actions run repeatedly after a
certain amount of time. Such a task is created by using the periodic_
decorator on an interface method.
For example :: from futurist import periodics class FakePower(base.PowerInterface): @periodics.periodic(spacing=42) def task(self, manager, context): pass # do something Here the ``spacing`` argument is a period in seconds for a given periodic task. For example 'spacing=5' means every 5 seconds. Driver-Specific Steps --------------------- Drivers may have specific steps that may need to be executed or offered to a user to execute in order to perform specific configuration tasks. These steps should ideally be located on the management interface to enable consistent user experience of the hardware type. What should be avoided is duplication of existing interfaces such as the deploy interface to enable vendor specific cleaning or deployment steps. Message Routing =============== Each Conductor registers itself in the database upon start-up, and periodically updates the timestamp of its record. Contained within this registration is a list of the drivers which this Conductor instance supports. This allows all services to maintain a consistent view of which Conductors and which drivers are available at all times. Based on their respective driver, all nodes are mapped across the set of available Conductors using a `consistent hashing algorithm`_. Node-specific tasks are dispatched from the API tier to the appropriate conductor using conductor-specific RPC channels. As Conductor instances join or leave the cluster, nodes may be remapped to different Conductors, thus triggering various driver actions such as take-over or clean-up. .. _API service: webapi.html .. _AbstractHardwareType: api/ironic.drivers.hardware_type.html#ironic.drivers.hardware_type.AbstractHardwareType .. _Conductor service: api/ironic.conductor.manager.html .. _DB API: api/ironic.db.api.html .. _diskimage-builder: https://docs.openstack.org/diskimage-builder/latest/ .. _consistent hashing algorithm: https://docs.openstack.org/tooz/latest/user/tutorial/hashring.html .. 
_periodic: https://docs.openstack.org/futurist/latest/reference/index.html#futurist.periodics.periodic

ironic-15.0.0/doc/source/contributor/third-party-ci.rst

.. _third-party-ci:

==================================
Third Party Continuous Integration
==================================

.. NOTE::
   This document is a work-in-progress. Unfilled sections will be worked in
   follow-up patchsets. This version is to get a basic outline and index done
   so that we can then build on it. (krtaylor)

This document provides tips and guidelines for third-party driver developers
setting up their continuous integration test systems.

CI Architecture Overview
========================

Requirements Cookbook
=====================

Sizing
------

Infrastructure
--------------

This section describes what changes you'll need to make to your CI system to
add an ironic job.

jenkins changes
###############

nodepool changes
################

neutron changes
###############

pre-test hook
#############

cleanup hook
############

Ironic
------

Hardware Pool Management
========================

Problem
-------

If you are using actual hardware as target machines for your CI testing, the
problem arises of two jobs trying to use the same target. If you have one
target machine and a maximum of one job running on your ironic pipeline at a
time, then you won't run into this problem. However, one target may not
handle the load of ironic's daily patch submissions.

Solutions
---------

Zuul v3
#######

Molten Iron
###########

`molteniron `_ is a tool that allows you to reserve hardware from a pool at
the last minute to use in your job. Once finished testing, you can unreserve
the hardware, making it available for the next test job.
Tips and Tricks =============== Optimize Run Time ----------------- Image Server ############ Other References ---------------- ironic-15.0.0/doc/source/contributor/releasing.rst0000664000175000017500000001776313652514273022252 0ustar zuulzuul00000000000000========================= Releasing Ironic Projects ========================= Since the responsibility for releases will move between people, we document that process here. A full list of projects that ironic manages is available in the `governance site`_. .. _`governance site`: https://governance.openstack.org/reference/projects/ironic.html Who is responsible for releases? ================================ The current PTL is ultimately responsible for making sure code gets released. They may choose to delegate this responsibility to a liaison, which is documented in the `cross-project liaison wiki`_. Anyone may submit a release request per the process below, but the PTL or liaison must +1 the request for it to be processed. .. _`cross-project liaison wiki`: https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management Release process =============== Releases are managed by the OpenStack release team. The release process is documented in the `Project Team Guide`_. .. _`Project Team Guide`: https://docs.openstack.org/project-team-guide/release-management.html#how-to-release What do we have to release? =========================== The ironic project has a number of deliverables under its governance. The ultimate source of truth for this is `projects.yaml `__ in the governance repository. These deliverables have varying release models, and these are defined in the `deliverables YAML files `__ in the releases repository. In general, ironic deliverables follow the `cycle-with-intermediary `__ release model. 
Non-client libraries -------------------- The following deliverables are non-client libraries: * ironic-lib * metalsmith * sushy Client libraries ---------------- The following deliverables are client libraries: * python-ironicclient * python-ironic-inspector-client * sushy-cli Normal release -------------- The following deliverables are Neutron plugins: * networking-baremetal * networking-generic-switch The following deliverables are Horizon plugins: * ironic-ui The following deliverables are Tempest plugins: * ironic-tempest-plugin The following deliverables are services, or treated as such: * bifrost * ironic * ironic-inspector * ironic-prometheus-exporter * ironic-python-agent Independent ----------- The following deliverables are released `independently `__: * ironic-python-agent-builder * molteniron * sushy-tools * tenks * virtualbmc Not released ------------ The following deliverables do not need to be released: * ironic-inspector-specs * ironic-specs Things to do before releasing ============================= * Review the unreleased release notes, if the project uses them. Make sure they follow our :ref:`standards `, are coherent, and have proper grammar. Combine release notes if necessary (for example, a release note for a feature and another release note to add to that feature may be combined). * For ironic releases only, not ironic-inspector releases: if any new API microversions have been added since the last release, update the REST API version history (``doc/source/contributor/webapi-version-history.rst``) to indicate that they were part of the new release. * To support rolling upgrades, add this new release version (and release name if it is a named release) into ``ironic/common/release_mappings.py``: * in ``RELEASE_MAPPING`` make a copy of the ``master`` entry, and rename the first ``master`` entry to the new semver release version. * If this is a named release, add a ``RELEASE_MAPPING`` entry for the named release. 
Its value should be the same as that of the latest semver one (that you just added above). It is important to do this before a stable/ branch is made (or if `the grenade switch is made `_ to use the latest release from stable as the 'old' release). Otherwise, once it is made, CI (the grenade job that tests new-release -> master) will fail. Things to do after releasing ============================ When a release is done that results in a stable branch ------------------------------------------------------ When a release is done that results in a stable branch for the project, several changes need to be made. The release automation will push a number of changes that need to be approved. This includes: * In the new stable branch: * a change to point ``.gitreview`` at the branch * a change to update the upper constraints file used by ``tox`` * In the master branch: * updating the release notes RST to include the new branch. The generated RST does not include the version range in the title, so we typically submit a follow-up patch to do that. An example of this patch is `here `__. * update the `templates` in `.zuul.yaml` or `zuul.d/project.yaml`. The update is necessary to use the job for the next release `openstack-python3--jobs`. An example of this patch is `here `__. We need to submit patches for changes in the stable branch to: * update the ironic devstack plugin to point at the branched tarball for IPA. An example of this patch is `here `_. * update links in the documentation (``ironic/doc/source/``) to point to the branched versions of any openstack projects' (that branch) documents. As of Pike release, the only outlier is `diskimage-builder `_. * set appropriate defaults for ``TEMPEST_BAREMETAL_MIN_MICROVERSION`` and ``TEMPEST_BAREMETAL_MAX_MICROVERSION`` in ``devstack/lib/ironic`` to make sure that unsupported API tempest tests are skipped on stable branches. E.g. `patch 495319 `_. 
We need to submit patches for changes on master to: * create an empty commit with a ``Sem-Ver`` tag to bump the generated minor version. See `example `_ and `pbr documentation `_ for details. * to support rolling upgrades, since the release was a named release, we need to make these changes. Note that we need to wait until *after* the switch in grenade is made to test the latest release (N) with master (e.g. `for stable/queens `_). Doing these changes sooner -- after the ironic release and before the switch when grenade is testing the prior release (N-1) with master, will cause the tests to fail. (You may want to ask/remind infra/qa team, as to when they will do this switch.) * In ``ironic/common/release_mappings.py``, delete any entries from ``RELEASE_MAPPING`` associated with the oldest named release. Since we support upgrades between adjacent named releases, the master branch will only support upgrades from the most recent named release to master. * remove any DB migration scripts from ``ironic.cmd.dbsync.ONLINE_MIGRATIONS`` and remove the corresponding code from ironic. (These migration scripts are used to migrate from an old release to this latest release; they shouldn't be needed after that.) * remove any model class names from ``ironic.cmd.dbsync.NEW_MODELS``. As **ironic-tempest-plugin** is branchless, we need to submit a patch adding stable jobs to its master branch. `Example for Queens `_. For all releases ---------------- For all releases, whether or not it results in a stable branch: * update the specs repo to mark any specs completed in the release as implemented. * remove any -2s on patches that were blocked until after the release. ironic-15.0.0/doc/source/contributor/jobs-description.rst0000664000175000017500000001271513652514273023547 0ustar zuulzuul00000000000000.. 
.. _jobs-description:

================
Jobs description
================

The description of each job that runs in the CI when you submit a patch for
`openstack/ironic` is visible in :ref:`table_jobs_description`.

.. _table_jobs_description:

.. list-table:: Table. OpenStack Ironic CI jobs description
   :widths: 53 47
   :header-rows: 1

   * - Job name
     - Description
   * - ironic-tox-unit-with-driver-libs-python3
     - Runs Ironic unit tests with the driver dependencies installed under
       Python3
   * - ironic-standalone
     - Deploys Ironic in standalone mode and runs tempest tests that match
       the regex `ironic_standalone`.
   * - ironic-tempest-functional-python3
     - Deploys Ironic in standalone mode and runs tempest functional tests
       that match the regex `ironic_tempest_plugin.tests.api` under Python3.
   * - ironic-grenade-dsvm
     - Deploys Ironic in a DevStack and runs upgrade for all enabled
       services.
   * - ironic-grenade-dsvm-multinode-multitenant
     - Deploys Ironic in a multinode DevStack and runs upgrade for all
       enabled services.
   * - ironic-tempest-ipa-partition-pxe_ipmitool
     - Deploys Ironic in DevStack under Python3, configured to use dib
       ramdisk partition image with `pxe` boot and `ipmi` driver. Runs
       tempest tests that match the regex
       `ironic_tempest_plugin.tests.scenario` and deploys 1 virtual
       baremetal.
   * - ironic-tempest-partition-bios-redfish-pxe
     - Deploys Ironic in DevStack, configured to use dib ramdisk partition
       image with `pxe` boot and `redfish` driver. Runs tempest tests that
       match the regex `ironic_tempest_plugin.tests.scenario`, also deploys
       1 virtual baremetal.
   * - ironic-tempest-ipa-partition-uefi-pxe_ipmitool
     - Deploys Ironic in DevStack, configured to use dib ramdisk partition
       image with `uefi` boot and `ipmi` driver. Runs tempest tests that
       match the regex `ironic_tempest_plugin.tests.scenario`, also deploys
       1 virtual baremetal.
   * - ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode
     - Deploys Ironic in a multinode DevStack, configured to use a pre-built
       tinyipa ramdisk wholedisk image that is downloaded from a Swift
       temporary url, `pxe` boot and `ipmi` driver. Runs tempest tests that
       match the regex
       `(ironic_tempest_plugin.tests.scenario|test_schedule_to_all_nodes)`
       and deploys 7 virtual baremetal.
   * - ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa
     - Deploys Ironic in DevStack, configured to use a pre-built tinyipa
       ramdisk wholedisk image that is downloaded from a Swift temporary
       url, `pxe` boot and `ipmi` driver. Runs tempest tests that match the
       regex `ironic_tempest_plugin.tests.scenario` and deploys 1 virtual
       baremetal.
   * - ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-indirect
     - Deploys Ironic in DevStack, configured to use a pre-built dib ramdisk
       wholedisk image that is downloaded from an http url, `pxe` boot and
       `ipmi` driver. Runs tempest tests that match the regex
       `ironic_tempest_plugin.tests.scenario` and deploys 1 virtual
       baremetal.
   * - ironic-tempest-ipa-partition-bios-agent_ipmitool-indirect
     - Deploys Ironic in DevStack, configured to use a pre-built dib ramdisk
       partition image that is downloaded from an http url, `pxe` boot and
       `ipmi` driver. Runs tempest tests that match the regex
       `ironic_tempest_plugin.tests.scenario` and deploys 1 virtual
       baremetal.
   * - ironic-tempest-bfv
     - Deploys Ironic in DevStack with cinder enabled, so it can deploy
       baremetal using boot from volume. Runs tempest tests that match the
       regex `baremetal_boot_from_volume` and deploys 3 virtual baremetal
       nodes using boot from volume.
   * - ironic-tempest-ipa-partition-uefi-pxe-grub2
     - Deploys Ironic in DevStack, configured to use pxe with uefi and grub2
       and `ipmi` driver. Runs tempest tests that match the regex
       `ironic_tempest_plugin.tests.scenario` and deploys 1 virtual
       baremetal.
   * - ironic-tox-bandit
     - Runs bandit security tests in a tox environment to find known issues
       in the Ironic code.
   * - ironic-tempest-ipa-wholedisk-bios-pxe_snmp
     - Deploys Ironic in DevStack, configured to use a pre-built dib ramdisk
       wholedisk image that is downloaded from a Swift temporary url, `pxe`
       boot and `snmp` driver. Runs tempest tests that match the regex
       `ironic_tempest_plugin.tests.scenario` and deploys 1 virtual
       baremetal.
   * - ironic-inspector-tempest
     - Deploys Ironic and Ironic Inspector in DevStack, configured to use a
       pre-built tinyipa ramdisk wholedisk image that is downloaded from a
       Swift temporary url, `pxe` boot and `ipmi` driver. Runs tempest tests
       that match the regex `InspectorBasicTest` and deploys 1 virtual
       baremetal.
   * - bifrost-integration-tinyipa-ubuntu-bionic
     - Tests the integration between Ironic and Bifrost.
   * - metalsmith-integration-glance-localboot-centos7
     - Tests the integration between Ironic and Metalsmith using Glance as
       image source and CentOS7 with local boot.
   * - ironic-tempest-pxe_ipmitool-postgres
     - Deploys Ironic in DevStack, configured to use tinyipa ramdisk
       partition image with `pxe` boot and `ipmi` driver and postgres
       instead of mysql. Runs tempest tests that match the regex
       `ironic_tempest_plugin.tests.scenario`, also deploys 1 virtual
       baremetal.

===========================
Ironic Governance Structure
===========================

The ironic project manages a number of repositories that contribute to our
mission. The full list of repositories that ironic manages is available in
the `governance site`_.

.. _`governance site`: https://governance.openstack.org/reference/projects/ironic.html

What belongs in ironic governance?
==================================

For a repository to be part of the Ironic project:

* It must comply with the TC's `rules for a new project `_.
* It must not be intended for use with only a single vendor's hardware.
  A library that implements a standard to manage hardware from multiple
  vendors (such as IPMI or redfish) is okay.

* It must align with Ironic's `mission statement `_.

Lack of contributor diversity is a chicken-egg problem, and as such a
repository where only a single company is contributing is okay, with the
hope that other companies will contribute after joining the ironic project.

Repositories that are no longer maintained should be pruned from governance
regularly.

Proposing a new project to ironic governance
============================================

Bring the proposal to the ironic `weekly meeting `_ to discuss with the team.

.. _vision:

==================
Contributor Vision
==================

Background
==========

During the Rocky Project Teams Gathering (February/March 2018), the
contributors in the room at that time took a few minutes to write out each
contributor's vision of where they see ironic in five years' time. After
everyone had a chance to spend a few minutes writing, we went around the
room and gave every contributor the chance to read their vision and allow
other contributors to ask questions to better understand what each
individual contributor wrote. While we were doing that, we also took time to
capture the common themes. This entire exercise did result in some laughs
and a common set of words, and truly helped to ensure that the entire team
proceeded to use the same "words" to describe various aspects as the
sessions progressed during the week.

We also agreed that we should write a shared vision, to have something to
reference and remind us of where we want to go as a community.

Rocky Vision: For 2022-2023
===========================

Common Themes
-------------

Below is an entirely unscientific summary of common themes that arose during
the discussion among fourteen contributors.
* Contributors picked a time between 2020 and 2023.
* 4 contributors foresee ironic being the leading open source baremetal
  deployment technology.
* 2 contributors foresee ironic reaching feature parity with Nova.
* 2 contributors foresee users moving all workloads "to the cloud".
* 1 contributor foresees Kubernetes and container integration being the
  major focus of Bare Metal as a Service further down the road.
* 2 contributors foresee composable hardware being more common.
* 1 contributor foresees ironic growing into or supporting CMDBs.
* 2 contributors foresee that features are more micro-service oriented.
* 2 contributors foresee that ironic will support all possible baremetal
  management needs.
* 1 contributor foresees standalone use being more common.
* 2 contributors foresee the ironic developer community growing.
* 2 contributors foresee that auto-discovery will be more common.
* 2 contributors foresee ironic being used for devices beyond servers, such
  as lightbulbs, IoT, etc.

Vision Statement
----------------

The year is 2022. We're meeting to plan the Z release of Ironic. We stopped
to reflect upon the last few years of Ironic's growth, how we had come such
a long way to become the de facto open source baremetal deployment
technology. How we had grown our use cases and support for consumers such as
containers, and users who wished to manage specialized fleets of composed
machines. New contributors and their different use cases have brought us
closer to parity with virtual machines. Every day we're gaining word of more
operators adopting the ironic community's CMDB integration to leverage
hardware discovery. We've heard of operators deploying racks upon racks of
new hardware by just connecting the power and network cables, and from there
the operators have discovered time to write the world's greatest operator
novel with the time saved in commissioning new racks of hardware.
Time has brought us closer and taught us to be more collaborative across the
community, and we look forward to our next release together.

=====================================
Ironic Boot-from-Volume with DevStack
=====================================

This guide shows how to set up DevStack to enable the boot-from-volume
feature, which has been supported since the Pike release.

This scenario shows how to set up DevStack to enable nodes to boot from
volumes managed by cinder, with VMs as baremetal servers.

DevStack Configuration
======================

The following is a ``local.conf`` that will set up DevStack with 3 VMs that
are registered in ironic. A volume connector with an IQN is created for each
node. These connectors can be used to connect volumes created by cinder. The
detailed description for DevStack is at :ref:`deploy_devstack`.

::

    [[local|localrc]]
    enable_plugin ironic https://opendev.org/openstack/ironic
    IRONIC_STORAGE_INTERFACE=cinder

    # Credentials
    ADMIN_PASSWORD=password
    DATABASE_PASSWORD=password
    RABBIT_PASSWORD=password
    SERVICE_PASSWORD=password
    SERVICE_TOKEN=password
    SWIFT_HASH=password
    SWIFT_TEMPURL_KEY=password

    # Enable Neutron which is required by Ironic and disable nova-network.
    disable_service n-net
    disable_service n-novnc
    enable_service q-svc
    enable_service q-agt
    enable_service q-dhcp
    enable_service q-l3
    enable_service q-meta
    enable_service neutron

    # Enable Swift for the direct deploy interface.
    enable_service s-proxy
    enable_service s-object
    enable_service s-container
    enable_service s-account

    # Disable Horizon
    disable_service horizon

    # Disable Heat
    disable_service heat h-api h-api-cfn h-api-cw h-eng

    # Swift temp URL's are required for the direct deploy interface.
    SWIFT_ENABLE_TEMPURLS=True

    # Create 3 virtual machines to pose as Ironic's baremetal nodes.
    IRONIC_VM_COUNT=3
    IRONIC_BAREMETAL_BASIC_OPS=True
    DEFAULT_INSTANCE_TYPE=baremetal

    # Enable additional hardware types, if needed.
    #IRONIC_ENABLED_HARDWARE_TYPES=ipmi,fake-hardware
    # Don't forget that many hardware types require enabling of additional
    # interfaces, most often power and management:
    #IRONIC_ENABLED_MANAGEMENT_INTERFACES=ipmitool,fake
    #IRONIC_ENABLED_POWER_INTERFACES=ipmitool,fake
    # The default deploy interface is 'iscsi', you can use 'direct' with
    #IRONIC_DEFAULT_DEPLOY_INTERFACE=direct

    # Change this to alter the default driver for nodes created by devstack.
    # This driver should be in the enabled list above.
    IRONIC_DEPLOY_DRIVER=ipmi

    # The parameters below represent the minimum possible values to create
    # functional nodes.
    IRONIC_VM_SPECS_RAM=1280
    IRONIC_VM_SPECS_DISK=10

    # Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
    IRONIC_VM_EPHEMERAL_DISK=0

    # To build your own IPA ramdisk from source, set this to True
    IRONIC_BUILD_DEPLOY_RAMDISK=False
    VIRT_DRIVER=ironic

    # By default, DevStack creates a 10.0.0.0/24 network for instances.
    # If this overlaps with the hosts network, you may adjust with the
    # following.
    NETWORK_GATEWAY=10.1.0.1
    FIXED_RANGE=10.1.0.0/24
    FIXED_NETWORK_SIZE=256

    # Log all output to files
    LOGFILE=$HOME/devstack.log
    LOGDIR=$HOME/logs
    IRONIC_VM_LOG_DIR=$HOME/ironic-bm-logs

After the environment is built, you can create a volume with cinder and
request an instance with the volume to nova::

    . ~/devstack/openrc
    # query the image id of the default cirros image
    image=$(openstack image show $DEFAULT_IMAGE_NAME -f value -c id)
    # create keypair
    ssh-keygen
    openstack keypair create --public-key ~/.ssh/id_rsa.pub default
    # create volume
    volume=$(openstack volume create --image $image --size 1 my-volume -f value -c id)
    # spawn instance
    openstack server create --flavor baremetal --volume $volume --key-name default testing

You can also run an integration test that an instance is booted from a
remote volume with tempest in the environment::

    cd /opt/stack/tempest
    tox -e all-plugin -- ironic_tempest_plugin.tests.scenario.test_baremetal_boot_from_volume

Please note that the storage interface will only indicate errors based upon
the state of the node and the configuration present. As such, a node does
not have to boot exclusively from a remote volume, so `validate` actions
upon nodes may be slightly misleading. If an appropriate `volume target` is
defined, no error should be returned for the boot interface.

========================
REST API Version History
========================

1.65 (Ussuri, 15.0)
-------------------

Added ``lessee`` field to the node object. The field should match the
``project_id`` of the intended lessee. If an allocation has an owner, then
the allocation process will only match the allocation with a node that has
the same ``owner`` or ``lessee``.

1.64 (Ussuri, 15.0)
-------------------

Added the ``network_type`` to the port object's ``local_link_connection``
field. The ``network_type`` can be set to either ``managed`` or
``unmanaged``. When the type is ``unmanaged``, other fields are not
required. Use ``unmanaged`` when the neutron ``network_interface`` is
required, but the network is in fact a flat network where no actual switch
management is done.
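As a sketch of the ``local_link_connection`` payload described in 1.64
above, a port for an unmanaged flat network could look like the following;
the node UUID and MAC address are hypothetical placeholders:

```python
import json

# Illustrative port payload: with network_type 'unmanaged', no switch
# management fields are required in local_link_connection.
port = {
    "node_uuid": "11111111-2222-3333-4444-555555555555",  # hypothetical
    "address": "52:54:00:12:34:56",                       # hypothetical
    "local_link_connection": {"network_type": "unmanaged"},
}
print(json.dumps(port, indent=2))
```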
1.63 (Ussuri, 15.0)
-------------------

Added the following new endpoints for indicator management:

* ``GET /v1/nodes//management/indicators`` to list all available indicator
  names for each of the hardware components. Currently known components are:
  ``chassis``, ``system``, ``disk``, ``power`` and ``nic``.
* ``GET /v1/nodes//management/indicators//`` to retrieve all indicators and
  their states for the hardware component.
* ``PUT /v1/nodes//management/indicators//`` to change the state of the
  desired indicators of the component.

1.62 (Ussuri, 15.0)
-------------------

This version of the API signifies the capability of an ironic deployment to
support the ``agent token`` functionality with the ``ironic-python-agent``.

1.61 (Ussuri, 14.0)
-------------------

Added ``retired`` field to the node object to mark nodes for retirement. If
set, this flag will move nodes to ``manageable`` upon automatic cleaning.
``manageable`` nodes which have this flag set cannot be moved to available.
Also added ``retired_reason`` to specify the retirement reason.

1.60 (Ussuri, 14.0)
-------------------

Added ``owner`` field to the allocation object. The field should match the
``project_id`` of the intended owner. If the ``owner`` field is set, the
allocation process will only match the allocation with a node that has the
same ``owner`` field set.

1.59 (Ussuri, 14.0)
-------------------

Added the ability to specify a ``vendor_data`` dictionary field in the
``configdrive`` parameter submitted with the deployment of a node. The value
is a dictionary which is served as ``vendor_data2.json`` in the config
drive.

1.58 (Train, 12.2.0)
--------------------

Added the ability to backfill allocations for already deployed nodes by
creating an allocation with ``node`` set.

1.57 (Train, 12.2.0)
--------------------

Added the following new endpoint for allocation:

* ``PATCH /v1/allocations/`` that allows updating ``name`` and ``extra``
  fields for an existing allocation.
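As a sketch of how a client would call the indicator endpoints from 1.63
above: the requests below are only constructed, not sent; the API endpoint,
node ident and indicator name (``led``) are hypothetical placeholders, and
the request body shape is an assumption. The ``X-OpenStack-Ironic-API-Version``
header negotiates the required microversion:

```python
import json
import urllib.request

IRONIC = "http://127.0.0.1:6385"  # assumed API endpoint
NODE = "example-node"             # hypothetical node ident

# Indicator endpoints require negotiating at least API version 1.63.
headers = {"X-OpenStack-Ironic-API-Version": "1.63"}

# List available indicators for the node (GET).
list_req = urllib.request.Request(
    f"{IRONIC}/v1/nodes/{NODE}/management/indicators", headers=headers)

# Change the state of one indicator of the 'system' component (PUT);
# the indicator name 'led' and the body are illustrative assumptions.
set_req = urllib.request.Request(
    f"{IRONIC}/v1/nodes/{NODE}/management/indicators/system/led",
    data=json.dumps({"state": "BLINKING"}).encode(),
    headers={**headers, "Content-Type": "application/json"},
    method="PUT")

print(list_req.get_method(), list_req.full_url)
print(set_req.get_method(), set_req.full_url)
```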
1.56 (Stein, 12.1.0)
--------------------

Added the ability for the ``configdrive`` parameter submitted with the
deployment of a node to include ``meta_data``, ``network_data`` and
``user_data`` dictionary fields. Ironic will now use the supplied data to
create a configuration drive for the user. Prior uses of the ``configdrive``
field are unaffected.

1.55 (Stein, 12.1.0)
--------------------

Added the following new endpoints for deploy templates:

* ``GET /v1/deploy_templates`` to list all deploy templates.
* ``GET /v1/deploy_templates/`` to retrieve details of a deploy template.
* ``POST /v1/deploy_templates`` to create a deploy template.
* ``PATCH /v1/deploy_templates/`` to update a deploy template.
* ``DELETE /v1/deploy_templates/`` to delete a deploy template.

1.54 (Stein, 12.1.0)
--------------------

Added new endpoints for external ``events``:

* POST /v1/events for creating events. (This endpoint is only intended for
  internal consumption.)

1.53 (Stein, 12.1.0)
--------------------

Added ``is_smartnic`` field to the port object to enable Smart NIC port
creation in addition to local link connection attributes ``port_id`` and
``hostname``.

1.52 (Stein, 12.1.0)
--------------------

Added allocation API, allowing reserving a node for deployment based on
resource class and traits. The new endpoints are:

* ``POST /v1/allocations`` to request an allocation.
* ``GET /v1/allocations`` to list all allocations.
* ``GET /v1/allocations/`` to retrieve the allocation details.
* ``GET /v1/nodes//allocation`` to retrieve an allocation associated with
  the node.
* ``DELETE /v1/allocations/`` to remove the allocation.
* ``DELETE /v1/nodes//allocation`` to remove an allocation associated with
  the node.

Also added a new field ``allocation_uuid`` to the node resource.

1.51 (Stein, 12.1.0)
--------------------

Added ``description`` field to the node object to enable operators to store
any information relating to the node. The field is limited to 4096
characters.
1.50 (Stein, 12.1.0)
--------------------

Added ``owner`` field to the node object to enable operators to store
information in relation to the owner of a node. The field is up to 255
characters and MAY be used at a later point in time to allow designation and
delegation of permissions.

1.49 (Stein, 12.0.0)
--------------------

Added new endpoints for retrieving conductors information, and added a
``conductor`` field to the node object.

1.48 (Stein, 12.0.0)
--------------------

Added ``protected`` field to the node object to allow protecting deployed
nodes from undeploying, rebuilding or deletion. Also added
``protected_reason`` to specify the reason for making the node protected.

1.47 (Stein, 12.0.0)
--------------------

Added ``automated_clean`` field to the node object, enabling cleaning per
node.

1.46 (Rocky, 11.1.0)
--------------------

Added ``conductor_group`` field to the node and the node response, as well
as support to the API to return results by matching the parameter.

1.45 (Rocky, 11.1.0)
--------------------

Added ``reset_interfaces`` parameter to the node's PATCH request, to specify
whether to reset hardware interfaces to their defaults on driver's update.

1.44 (Rocky, 11.1.0)
--------------------

Added ``deploy_step`` to the node object, to indicate the current deploy
step (if any) being performed on the node.

1.43 (Rocky, 11.0.0)
--------------------

Added ``?detail=`` boolean query to the API list endpoints to provide a more
RESTful alternative to the existing ``/nodes/detail`` and similar endpoints.

1.42 (Rocky, 11.0.0)
--------------------

Added ``fault`` to the node object, to indicate the currently detected fault
on the node.

1.41 (Rocky, 11.0.0)
--------------------

Added support to abort inspection of a node in the ``inspect wait`` state.
1.40 (Rocky, 11.0.0)
--------------------

Added BIOS properties as sub resources of nodes:

* GET /v1/nodes//bios
* GET /v1/nodes//bios/

Added ``bios_interface`` field to the node object to allow getting and
setting the interface.

1.39 (Rocky, 11.0.0)
--------------------

Added ``inspect wait`` to available provision states. A node is shown as
``inspect wait`` instead of ``inspecting`` during asynchronous inspection.

1.38 (Queens, 10.1.0)
---------------------

Added provision_state verbs ``rescue`` and ``unrescue`` along with the
following states: ``rescue``, ``rescue failed``, ``rescue wait``,
``rescuing``, ``unrescue failed``, and ``unrescuing``. After rescuing a
node, it will be left in the ``rescue`` state running a rescue ramdisk,
configured with the ``rescue_password``, and listening with ssh on the
specified network interfaces. Unrescuing a node will return it to
``active``.

Added ``rescue_interface`` to the node object, to allow setting the rescue
interface for a dynamic driver.

1.37 (Queens, 10.1.0)
---------------------

Adds support for node traits, with the following new endpoints:

* GET /v1/nodes//traits lists the traits for a node.
* PUT /v1/nodes//traits sets all traits for a node.
* PUT /v1/nodes//traits/ adds a trait to a node.
* DELETE /v1/nodes//traits removes all traits from a node.
* DELETE /v1/nodes//traits/ removes a trait from a node.

A node's traits are also included in the following node query and list
responses:

* GET /v1/nodes/
* GET /v1/nodes/detail
* GET /v1/nodes?fields=traits

Traits cannot be specified on node creation, nor can they be updated via a
PATCH request on the node.

1.36 (Queens, 10.0.0)
---------------------

Added ``agent_version`` parameter to the deploy heartbeat request for
version negotiation with Ironic Python Agent features.

1.35 (Queens, 9.2.0)
--------------------

Added ability to provide ``configdrive`` when a node is updated to the
``rebuild`` provision state.
1.34 (Pike, 9.0.0)
------------------

Adds a ``physical_network`` field to the port object. All ports in a
portgroup must have the same value in their ``physical_network`` field.

1.33 (Pike, 9.0.0)
------------------

Added ``storage_interface`` field to the node object to allow getting and
setting the interface. Added ``default_storage_interface`` and
``enabled_storage_interfaces`` fields to the driver object to show the
information.

1.32 (Pike, 9.0.0)
------------------

Added new endpoints for remote volume configuration:

* GET /v1/volume as a root for volume resources
* GET /v1/volume/connectors for listing volume connectors
* POST /v1/volume/connectors for creating a volume connector
* GET /v1/volume/connectors/ for showing a volume connector
* PATCH /v1/volume/connectors/ for updating a volume connector
* DELETE /v1/volume/connectors/ for deleting a volume connector
* GET /v1/volume/targets for listing volume targets
* POST /v1/volume/targets for creating a volume target
* GET /v1/volume/targets/ for showing a volume target
* PATCH /v1/volume/targets/ for updating a volume target
* DELETE /v1/volume/targets/ for deleting a volume target

Volume resources also can be listed as sub resources of nodes:

* GET /v1/nodes//volume
* GET /v1/nodes//volume/connectors
* GET /v1/nodes//volume/targets

1.31 (Ocata, 7.0.0)
-------------------

Added the following fields to the node object, to allow getting and setting
interfaces for a dynamic driver:

* boot_interface
* console_interface
* deploy_interface
* inspect_interface
* management_interface
* power_interface
* raid_interface
* vendor_interface

1.30 (Ocata, 7.0.0)
-------------------

Added dynamic driver APIs:

* GET /v1/drivers now accepts a ``type`` parameter (optional, one of
  ``classic`` or ``dynamic``), to limit the result to only classic drivers
  or dynamic drivers (hardware types). Without this parameter, both classic
  and dynamic drivers are returned.
* GET /v1/drivers now accepts a ``detail`` parameter (optional, one of
  ``True`` or ``False``), to show all fields for a driver. Defaults to
  ``False``.
* GET /v1/drivers now returns an additional ``type`` field to show if the
  driver is classic or dynamic.
* GET /v1/drivers/ now returns an additional ``type`` field to show if the
  driver is classic or dynamic.
* GET /v1/drivers/ now returns additional fields that are null for classic
  drivers, and set as following for dynamic drivers:

  * The value of the default__interface is the entrypoint name of the
    calculated default interface for that type:

    * default_boot_interface
    * default_console_interface
    * default_deploy_interface
    * default_inspect_interface
    * default_management_interface
    * default_network_interface
    * default_power_interface
    * default_raid_interface
    * default_vendor_interface

  * The value of the enabled__interfaces is a list of entrypoint names of
    the enabled interfaces for that type:

    * enabled_boot_interfaces
    * enabled_console_interfaces
    * enabled_deploy_interfaces
    * enabled_inspect_interfaces
    * enabled_management_interfaces
    * enabled_network_interfaces
    * enabled_power_interfaces
    * enabled_raid_interfaces
    * enabled_vendor_interfaces

1.29 (Ocata, 7.0.0)
-------------------

Add a new management API to support injecting an NMI,
'PUT /v1/nodes/(node_ident)/management/inject_nmi'.

1.28 (Ocata, 7.0.0)
-------------------

Add the '/v1/nodes//vifs' endpoint for attaching, detaching and listing
VIFs.

1.27 (Ocata, 7.0.0)
-------------------

Add ``soft rebooting`` and ``soft power off`` as possible values for the
``target`` field of the power state change payload, and also add a
``timeout`` field to it.

1.26 (Ocata, 7.0.0)
-------------------

Add portgroup ``mode`` and ``properties`` fields.

1.25 (Ocata, 7.0.0)
-------------------

Add the possibility to unset chassis_uuid from a node.

1.24 (Ocata, 7.0.0)
-------------------

Added new endpoints '/v1/nodes//portgroups' and '/v1/portgroups//ports'.
Added new field ``port.portgroup_uuid``.
1.23 (Ocata, 7.0.0)
-------------------

Added the '/v1/portgroups/' endpoint.

1.22 (Newton, 6.1.0)
--------------------

Added endpoints for deployment ramdisks.

1.21 (Newton, 6.1.0)
--------------------

Add node ``resource_class`` field.

1.20 (Newton, 6.1.0)
--------------------

Add node ``network_interface`` field.

1.19 (Newton, 6.1.0)
--------------------

Add ``local_link_connection`` and ``pxe_enabled`` fields to the port object.

1.18 (Newton, 6.1.0)
--------------------

Add ``internal_info`` readonly field to the port object, that will be used
by ironic to store internal port-related information.

1.17 (Newton, 6.0.0)
--------------------

Addition of provision_state verb ``adopt``, which allows an operator to move
a node from the ``manageable`` state to the ``active`` state without
performing a deployment operation on the node. This is intended for nodes
that have already been deployed by external means.

1.16 (Mitaka, 5.0.0)
--------------------

Add ability to filter nodes by driver.

1.15 (Mitaka, 5.0.0)
--------------------

Add ability to do manual cleaning when a node is in the manageable provision
state via PUT v1/nodes//states/provision, target:clean, clean_steps:[...].

1.14 (Liberty, 4.2.0)
---------------------

Make the following endpoints discoverable via the Ironic API:

* '/v1/nodes//states'
* '/v1/drivers//properties'

1.13 (Liberty, 4.2.0)
---------------------

Add a new verb ``abort`` to the API, used to abort nodes in the
``CLEANWAIT`` state.

1.12 (Liberty, 4.2.0)
---------------------

This API version adds the following abilities:

* Get/set ``node.target_raid_config`` and get ``node.raid_config``.
* Retrieve the logical disk properties for the driver.

1.11 (Liberty, 4.0.0, breaking change)
--------------------------------------

Newly registered nodes begin in the ``enroll`` provision state by default,
instead of ``available``. To get them to the ``available`` state, the
``manage`` action must first be run to verify basic hardware control.
On success, the node moves to the ``manageable`` provision state. Then the
``provide`` action must be run. Automated cleaning of the node is done and
the node is made ``available``.

1.10 (Liberty, 4.0.0)
---------------------

Logical node names support all RFC 3986 unreserved characters. Previously
only valid fully qualified domain names could be used.

1.9 (Liberty, 4.0.0)
--------------------

Add ability to filter nodes by provision state.

1.8 (Liberty, 4.0.0)
--------------------

Add ability to return a subset of resource fields.

1.7 (Liberty, 4.0.0)
--------------------

Add node ``clean_step`` field.

1.6 (Kilo)
----------

Add :ref:`inspection` process: introduce ``inspecting`` and ``inspectfail``
provision states, and the ``inspect`` action that can be used when a node is
in the ``manageable`` provision state.

1.5 (Kilo)
----------

Add logical node names that can be used to address a node in addition to the
node UUID. A name is expected to be a valid `fully qualified domain name`_
in this version of the API.

1.4 (Kilo)
----------

Add ``manageable`` state and ``manage`` transition, which can be used to
move a node to the ``manageable`` state from ``available``. The node cannot
be deployed in the ``manageable`` state. This change is mostly a preparation
for future inspection work and the introduction of the ``enroll`` provision
state.

1.3 (Kilo)
----------

Add node ``driver_internal_info`` field.

1.2 (Kilo, breaking change)
---------------------------

Renamed NOSTATE (``None`` in Python, ``null`` in JSON) node state to
``available``. This is needed to reduce confusion around the ``None`` state,
especially when future additions to the state machine land.

1.1 (Kilo)
----------

This was the initial version when API versioning was introduced. Includes
the following changes from the Kilo release cycle:

* Add node ``maintenance_reason`` field and an API endpoint to set/unset the
  node maintenance mode.
* Add sync and async support for vendor passthru methods.
* Vendor passthru endpoints support different HTTP methods, not only
  ``POST``.
* Make vendor methods discoverable via the Ironic API.
* Add logic to store the config drive passed by Nova.

This has been the minimum supported version since versioning was introduced.

1.0 (Juno)
----------

This version denotes the Juno API and was never explicitly supported, as API
versioning was not implemented in Juno, and 1.1 became the minimum supported
version in Kilo.

.. _fully qualified domain name: https://en.wikipedia.org/wiki/Fully_qualified_domain_name

.. _dev-quickstart:

=====================
Developer Quick-Start
=====================

This is a quick walkthrough to get you started developing code for Ironic.
This assumes you are already familiar with submitting code reviews to an
OpenStack project.

The gate currently runs the unit tests under Python 3.6 and Python 3.7. It
is strongly encouraged to run the unit tests locally prior to submitting a
patch.

.. note:: Do not run unit tests on the same environment as devstack due to
   conflicting configuration with system dependencies.

.. note:: This document is compatible with Python (3.7), Ubuntu (18.04) and
   Fedora (31). When referring to different versions of Python and OS
   distributions, this is explicitly stated.

.. seealso:: https://docs.openstack.org/infra/manual/developers.html#development-workflow

Prepare Development System
==========================

System Prerequisites
--------------------

The following packages cover the prerequisites for a local development
environment on most current distributions. Instructions for getting set up
with non-default versions of Python and on older distributions are included
below as well.
- Ubuntu/Debian::

    sudo apt-get install build-essential python-dev libssl-dev python-pip libmysqlclient-dev libxml2-dev libxslt-dev libpq-dev git git-review libffi-dev gettext ipmitool psmisc graphviz libjpeg-dev

- RHEL7/CentOS7::

    sudo yum install python-devel openssl-devel python-pip mysql-devel libxml2-devel libxslt-devel postgresql-devel git git-review libffi-devel gettext ipmitool psmisc graphviz gcc libjpeg-turbo-devel

  If using RHEL and yum reports "No package python-pip available" and "No
  package git-review available", use the EPEL software repository.
  Instructions can be found at ``_.

- Fedora::

    sudo dnf install python-devel openssl-devel python-pip mysql-devel libxml2-devel libxslt-devel postgresql-devel git git-review libffi-devel gettext ipmitool psmisc graphviz gcc libjpeg-turbo-devel

  Additionally, if using Fedora 23, the ``redhat-rpm-config`` package
  should be installed so that the development virtualenv can be built
  successfully.

- openSUSE/SLE 12::

    sudo zypper install git git-review libffi-devel libmysqlclient-devel libopenssl-devel libxml2-devel libxslt-devel postgresql-devel python-devel python-nose python-pip gettext-runtime psmisc

  Graphviz is only needed for generating the state machine diagram. To
  install it on openSUSE or SLE 12, see ``_.

To run the tests locally, it is a requirement that your terminal emulator
supports unicode with the ``en_US.UTF8`` locale. If you use locale-gen to
manage your locales, make sure you have enabled ``en_US.UTF8`` in
``/etc/locale.gen`` and rerun ``locale-gen``.

Python Prerequisites
--------------------

If your distro provides a packaged tox of at least version 1.8, you can
install the ``python-tox`` package with a similar command to the ones
above. Otherwise, install tox via pip (this works on all distros)::

    sudo pip install -U tox

You may need to explicitly upgrade virtualenv if you've installed the one
from your OS distribution and it is too old (tox will complain).
You can upgrade it individually, if you need to:: sudo pip install -U virtualenv Running Unit Tests Locally ========================== If you haven't already, Ironic source code should be pulled directly from git:: # from your home or source directory cd ~ git clone https://opendev.org/openstack/ironic cd ironic Running Unit and Style Tests ---------------------------- All unit tests should be run using tox. To run Ironic's entire test suite:: # to run the py3 unit tests, and the style tests tox To run a specific test or tests, use the "-e" option followed by the tox target name. For example:: # run the unit tests under py36 and also run the pep8 tests tox -epy36 -epep8 You may pass options to the test programs using positional arguments. To run a specific unit test, this passes the desired test (regex string) to `stestr `_:: # run a specific test for Python 3.6 tox -epy36 -- test_conductor Debugging unit tests -------------------- In order to break into the debugger from a unit test we need to insert a breaking point to the code: .. code-block:: python import pdb; pdb.set_trace() Then run ``tox`` with the debug environment as one of the following:: tox -e debug tox -e debug test_file_name tox -e debug test_file_name.TestClass tox -e debug test_file_name.TestClass.test_name For more information see the :oslotest-doc:`oslotest documentation `. Database Setup -------------- The unit tests need a local database setup, you can use ``tools/test-setup.sh`` to set up the database the same way as setup in the OpenStack test systems. Additional Tox Targets ---------------------- There are several additional tox targets not included in the default list, such as the target which builds the documentation site. See the ``tox.ini`` file for a complete listing of tox targets. 
These can be run directly by specifying the target name:: # generate the documentation pages locally tox -edocs # generate the sample configuration file tox -egenconfig Exercising the Services Locally =============================== In addition to running automated tests, sometimes it can be helpful to actually run the services locally, without needing a server in a remote datacenter. If you would like to exercise the Ironic services in isolation within your local environment, you can do this without starting any other OpenStack services. For example, this is useful for rapidly prototyping and debugging interactions over the RPC channel, testing database migrations, and so forth. Here we describe two ways to install and configure the dependencies, either run directly on your local machine or encapsulated in a virtual machine or container. Step 1: Create a Python virtualenv ---------------------------------- #. If you haven't already downloaded the source code, do that first:: cd ~ git clone https://opendev.org/openstack/ironic cd ironic #. Create the Python virtualenv:: tox -evenv --notest --develop -r #. Activate the virtual environment:: . .tox/venv/bin/activate #. Install the `openstack` client command utility:: pip install python-openstackclient #. Install the `openstack baremetal` client:: pip install python-ironicclient .. note:: You can install python-ironicclient from source by cloning the git repository and running `pip install .` while in the root of the cloned repository. #. Export some ENV vars so the client will connect to the local services that you'll start in the next section:: export OS_AUTH_TYPE=none export OS_ENDPOINT=http://localhost:6385/ Next, install and configure system dependencies. Step 2: Install System Dependencies Locally -------------------------------------------- This step will install MySQL on your local system. 
This may not be desirable in some situations (eg, you're developing from a laptop and do not want to run a MySQL server on it all the time). If you want to use SQLite, skip it and do not set the ``connection`` option. #. Install mysql-server: Ubuntu/Debian:: sudo apt-get install mysql-server RHEL7/CentOS7:: sudo yum install mariadb mariadb-server sudo systemctl start mariadb.service Fedora:: sudo dnf install mariadb mariadb-server sudo systemctl start mariadb.service openSUSE/SLE 12:: sudo zypper install mariadb sudo systemctl start mysql.service If using MySQL, you need to create the initial database:: mysql -u root -pMYSQL_ROOT_PWD -e "create schema ironic" .. note:: if you choose not to install mysql-server, ironic will default to using a local sqlite database. The database will then be stored in ``ironic/ironic.sqlite``. #. Create a configuration file within the ironic source directory:: # generate a sample config tox -egenconfig # copy sample config and modify it as necessary cp etc/ironic/ironic.conf.sample etc/ironic/ironic.conf.local # disable auth since we are not running keystone here sed -i "s/#auth_strategy = keystone/auth_strategy = noauth/" etc/ironic/ironic.conf.local # use the 'fake-hardware' test hardware type sed -i "s/#enabled_hardware_types = .*/enabled_hardware_types = fake-hardware/" etc/ironic/ironic.conf.local # use the 'fake' deploy and boot interfaces sed -i "s/#enabled_deploy_interfaces = .*/enabled_deploy_interfaces = fake/" etc/ironic/ironic.conf.local sed -i "s/#enabled_boot_interfaces = .*/enabled_boot_interfaces = fake/" etc/ironic/ironic.conf.local # enable both fake and ipmitool management and power interfaces sed -i "s/#enabled_management_interfaces = .*/enabled_management_interfaces = fake,ipmitool/" etc/ironic/ironic.conf.local sed -i "s/#enabled_power_interfaces = .*/enabled_power_interfaces = fake,ipmitool/" etc/ironic/ironic.conf.local # change the periodic sync_power_state_interval to a week, to avoid getting NodeLocked 
exceptions sed -i "s/#sync_power_state_interval = 60/sync_power_state_interval = 604800/" etc/ironic/ironic.conf.local # if you opted to install mysql-server, switch the DB connection from sqlite to mysql sed -i "s/#connection = .*/connection = mysql\+pymysql:\/\/root:MYSQL_ROOT_PWD@localhost\/ironic/" etc/ironic/ironic.conf.local # use JSON RPC to avoid installing rabbitmq locally sed -i "s/#rpc_transport = oslo/rpc_transport = json-rpc/" etc/ironic/ironic.conf.local Step 3: Start the Services -------------------------- From within the python virtualenv, run the following command to prepare the database before you start the ironic services:: # initialize the database for ironic ironic-dbsync --config-file etc/ironic/ironic.conf.local create_schema Next, open two new terminals for this section, and run each of the examples here in a separate terminal. In this way, the services will *not* be run as daemons; you can observe their output and stop them with Ctrl-C at any time. #. Start the API service in debug mode and watch its output:: cd ~/ironic . .tox/venv/bin/activate ironic-api -d --config-file etc/ironic/ironic.conf.local #. Start the Conductor service in debug mode and watch its output:: cd ~/ironic . .tox/venv/bin/activate ironic-conductor -d --config-file etc/ironic/ironic.conf.local Step 4: Interact with the running services ------------------------------------------ You should now be able to interact with ironic via the python client, which is present in the python virtualenv, and observe both services' debug outputs in the other two windows. This is a good way to test new features or play with the functionality without necessarily starting DevStack. 
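After editing ``etc/ironic/ironic.conf.local`` as shown in Step 2, it can be worth sanity-checking the result before starting the services. The snippet below is only an illustrative sketch using Python's ``configparser``; the inline ``sample`` string stands in for your real file (swap in ``conf.read('etc/ironic/ironic.conf.local')`` to check the actual file):

```python
import configparser

# Stand-in for etc/ironic/ironic.conf.local after the sed edits in Step 2.
# For a real check, use: conf.read('etc/ironic/ironic.conf.local')
sample = """
[DEFAULT]
auth_strategy = noauth
enabled_hardware_types = fake-hardware
rpc_transport = json-rpc
"""

conf = configparser.ConfigParser()
conf.read_string(sample)

# Verify the options the walkthrough expects to have been set.
assert conf['DEFAULT']['auth_strategy'] == 'noauth'
assert conf['DEFAULT']['enabled_hardware_types'] == 'fake-hardware'
assert conf['DEFAULT']['rpc_transport'] == 'json-rpc'
print('config looks good')
```

If an assertion fails, re-run the corresponding ``sed`` command from Step 2 and check that the option is not still commented out.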
To get started, export the following variables to point the client at the local instance of ironic and disable the authentication:: export OS_AUTH_TYPE=token_endpoint export OS_TOKEN=fake export OS_ENDPOINT=http://127.0.0.1:6385 Then list the available commands and resources:: # get a list of available commands openstack help baremetal # get the list of drivers currently supported by the available conductor(s) openstack baremetal driver list # get a list of nodes (should be empty at this point) openstack baremetal node list Here is an example walkthrough of creating a node:: MAC="aa:bb:cc:dd:ee:ff" # replace with the MAC of a data port on your node IPMI_ADDR="1.2.3.4" # replace with a real IP of the node BMC IPMI_USER="admin" # replace with the BMC's user name IPMI_PASS="pass" # replace with the BMC's password # enroll the node with the fake hardware type and IPMI-based power and # management interfaces. Note that driver info may be added at node # creation time with "--driver-info" NODE=$(openstack baremetal node create \ --driver fake-hardware \ --management-interface ipmitool \ --power-interface ipmitool \ --driver-info ipmi_address=$IPMI_ADDR \ --driver-info ipmi_username=$IPMI_USER \ -f value -c uuid) # driver info may also be added or updated later on openstack baremetal node set $NODE --driver-info ipmi_password=$IPMI_PASS # add a network port openstack baremetal port create $MAC --node $NODE # view the information for the node openstack baremetal node show $NODE # request that the node's driver validate the supplied information openstack baremetal node validate $NODE # you have now enrolled a node sufficiently to be able to control # its power state from ironic! openstack baremetal node power on $NODE If you make some code changes and want to test their effects, simply stop the services with Ctrl-C and restart them. 
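The enroll-and-power-on walkthrough above can also be scripted. The helper below is a generic polling sketch, not part of any ironic library: ``fetch_state`` is assumed to be any callable returning the node's current state (for example, a thin wrapper around ``openstack baremetal node show -f value -c power_state``):

```python
import time

def wait_for_state(fetch_state, target, timeout=60, interval=1.0):
    """Poll fetch_state() until it returns `target` or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_state()
        if state == target:
            return state
        time.sleep(interval)
    raise TimeoutError('node did not reach %r within %ss' % (target, timeout))

# Example with a fake state sequence standing in for real API calls:
states = iter(['power off', 'power off', 'power on'])
print(wait_for_state(lambda: next(states), 'power on', timeout=5, interval=0))
# prints "power on"
```

The same pattern applies to provision states (e.g. waiting for ``active`` during a deployment), which is how the DevStack examples later in this document behave.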
Step 5: Fixing your test environment
------------------------------------

If you are testing changes that add or remove python entrypoints, are
making significant changes to ironic's python modules, or simply keep the
virtualenv around for a long time, your development environment may reach
an inconsistent state. It may help to delete cached ".pyc" files, update
dependencies, reinstall ironic, or even recreate the virtualenv. The
following commands may help with that, but are not an exhaustive
troubleshooting guide::

    # clear cached pyc files
    cd ~/ironic/ironic
    find ./ -name '*.pyc' | xargs rm

    # reinstall ironic modules
    cd ~/ironic
    . .tox/venv/bin/activate
    pip uninstall ironic
    pip install -e .

    # install and upgrade ironic and all python dependencies
    cd ~/ironic
    . .tox/venv/bin/activate
    pip install -U -e .

.. _`deploy_devstack`:

Deploying Ironic with DevStack
==============================

DevStack may be configured to deploy Ironic, set up Nova to use the Ironic
driver, and provide hardware resources (network, baremetal compute nodes)
using a combination of OpenVSwitch and libvirt. It is highly recommended
to deploy on an expendable virtual machine and not on your personal
workstation.

Deploying Ironic with DevStack requires a machine running Ubuntu 16.04 (or
later) or Fedora 24 (or later). Make sure your machine is fully up to date
and has the latest packages installed before beginning this process.

The ironic-tempest-plugin is necessary if you want to run integration
tests; the section `Ironic with ironic-tempest-plugin`_ describes the
extra steps you need to enable it in DevStack.

.. seealso::

    https://docs.openstack.org/devstack/latest/

.. note:: The devstack "demo" tenant is now granted the
   "baremetal_observer" role and thereby has read-only access to ironic's
   API. This is sufficient for all the examples below. Should you want to
   create or modify bare metal resources directly (i.e.
through ironic rather than through nova) you will need to use the devstack
"admin" tenant.

Devstack will no longer create the user 'stack' with the desired
permissions, but does provide a script to perform the task::

    git clone https://opendev.org/openstack/devstack.git devstack
    sudo ./devstack/tools/create-stack-user.sh

Switch to the stack user and clone DevStack::

    sudo su - stack
    git clone https://opendev.org/openstack/devstack.git devstack

Ironic
------

Create ``devstack/local.conf`` with the minimal settings required to
enable Ironic. An example local.conf enables both the ``direct`` and
``iscsi`` :doc:`deploy interfaces ` and uses the ``ipmi`` hardware type by
default.

Ironic with ironic-tempest-plugin
---------------------------------

To be able to run the integration tests, use an example local.conf that
additionally enables the ironic-tempest-plugin, with the same ``direct``
and ``iscsi`` :doc:`deploy interfaces ` and the ``ipmi`` hardware type by
default.

Ironic's devstack plugin uses `virtualbmc `_ to control the power state of
the virtual baremetal nodes.

.. note:: When running QEMU as non-root user (e.g. ``qemu`` on Fedora or
   ``libvirt-qemu`` on Ubuntu), make sure ``IRONIC_VM_LOG_DIR`` points to
   a directory where QEMU will be able to write. You can verify this with,
   for example::

       # on Fedora
       sudo -u qemu touch $HOME/ironic-bm-logs/test.log
       # on Ubuntu
       sudo -u libvirt-qemu touch $HOME/ironic-bm-logs/test.log

.. note:: To check out an in-progress patch for testing, you can add a Git
   ref to the ``enable_plugin`` line. For instance::

       enable_plugin ironic https://opendev.org/openstack/ironic refs/changes/46/295946/15

   For a patch in review, you can find the ref to use by clicking the
   "Download" button in Gerrit. You can also specify a different git repo,
   or a branch or tag::

       enable_plugin ironic https://github.com/openstack/ironic stable/kilo

   For more details, see the `devstack plugin interface documentation `_.

Run stack.sh::

    ./stack.sh

Source credentials, create a key, and spawn an instance as the ``demo``
user::

    . ~/devstack/openrc

    # query the image id of the default cirros image
    image=$(openstack image show $DEFAULT_IMAGE_NAME -f value -c id)

    # create keypair
    ssh-keygen
    openstack keypair create --public-key ~/.ssh/id_rsa.pub default

    # spawn instance
    openstack server create --flavor baremetal --image $image --key-name default testing

.. note:: Because devstack creates multiple networks, we need to pass an
   additional parameter ``--nic net-id`` to the nova boot command when
   using the admin account, for example::

       net_id=$(openstack network list | egrep "$PRIVATE_NETWORK_NAME"'[^-]' | awk '{ print $2 }')
       openstack server create --flavor baremetal --nic net-id=$net_id --image $image --key-name default testing

You should now see a Nova instance building::

    openstack server list --long
    +--------------------------------------+---------+--------+------------+-------------+----------+--------------------------+--------------------------------------+-------------------+------+------------+
    | ID                                   | Name    | Status | Task State | Power State | Networks | Image Name               | Image ID                             | Availability Zone | Host | Properties |
    +--------------------------------------+---------+--------+------------+-------------+----------+--------------------------+--------------------------------------+-------------------+------+------------+
    | a2c7f812-e386-4a22-b393-fe1802abd56e | testing | BUILD  | spawning   | NOSTATE     |          | cirros-0.3.5-x86_64-disk | 44d4092a-51ac-4751-9c50-fd6e2050faa1 | nova              |      |            |
    +--------------------------------------+---------+--------+------------+-------------+----------+--------------------------+--------------------------------------+-------------------+------+------------+

Nova will be interfacing with Ironic conductor to spawn the node. On the
Ironic side, you should see an Ironic node associated with this Nova
instance.
It should be powered on and in a 'wait call-back' provisioning state::

    openstack baremetal node list
    +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
    | UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
    +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
    | 9e592cbe-e492-4e4f-bf8f-4c9e0ad1868f | node-0 | None                                 | power off   | None               | False       |
    | ec0c6384-cc3a-4edf-b7db-abde1998be96 | node-1 | None                                 | power off   | None               | False       |
    | 4099e31c-576c-48f8-b460-75e1b14e497f | node-2 | a2c7f812-e386-4a22-b393-fe1802abd56e | power on    | wait call-back     | False       |
    +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+

At this point, Ironic conductor has called to libvirt (via virtualbmc) to
power on a virtual machine, which will PXE + TFTP boot from the conductor
node and progress through the Ironic provisioning workflow.
One libvirt domain should be active now::

    sudo virsh list --all
     Id    Name                           State
    ----------------------------------------------------
     2     node-2                         running
     -     node-0                         shut off
     -     node-1                         shut off

This provisioning process may take some time depending on the performance
of the host system, but Ironic should eventually show the node as having
an 'active' provisioning state::

    openstack baremetal node list
    +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
    | UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
    +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
    | 9e592cbe-e492-4e4f-bf8f-4c9e0ad1868f | node-0 | None                                 | power off   | None               | False       |
    | ec0c6384-cc3a-4edf-b7db-abde1998be96 | node-1 | None                                 | power off   | None               | False       |
    | 4099e31c-576c-48f8-b460-75e1b14e497f | node-2 | a2c7f812-e386-4a22-b393-fe1802abd56e | power on    | active             | False       |
    +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+

This should also be reflected in the Nova instance state, which at this
point should be ACTIVE, Running, and have an associated private IP::

    openstack server list --long
    +--------------------------------------+---------+--------+------------+-------------+--------------------------------------------------------+--------------------------+--------------------------------------+-------------------+------+------------+
    | ID                                   | Name    | Status | Task State | Power State | Networks                                               | Image Name               | Image ID                             | Availability Zone | Host | Properties |
    +--------------------------------------+---------+--------+------------+-------------+--------------------------------------------------------+--------------------------+--------------------------------------+-------------------+------+------------+
    | a2c7f812-e386-4a22-b393-fe1802abd56e | testing | ACTIVE | none       | Running     | private=10.1.0.4, fd7d:1f3c:4bf1:0:f816:3eff:f39d:6d94 | cirros-0.3.5-x86_64-disk | 44d4092a-51ac-4751-9c50-fd6e2050faa1 | nova              |      |            |
    +--------------------------------------+---------+--------+------------+-------------+--------------------------------------------------------+--------------------------+--------------------------------------+-------------------+------+------------+

The server should now be accessible via SSH::

    ssh cirros@10.1.0.4
    $

Running Tempest tests
=====================

After :ref:`Deploying Ironic with DevStack ` with the
ironic-tempest-plugin enabled, one might want to run integration tests
against the running cloud. The Tempest project offers an integration test
suite for OpenStack.

First, navigate to the Tempest directory::

    cd /opt/stack/tempest

To run all tests from the `Ironic plugin `_, execute the following
command::

    tox -e all -- ironic

To limit the number of tests that run, you can use a regex. For instance,
to limit the run to a single test file, the following command can be
used::

    tox -e all -- ironic_tempest_plugin.tests.scenario.test_baremetal_basic_ops

Debugging Tempest tests
-----------------------

It is sometimes useful to step through the test code, line by line,
especially when the error output is vague. This can be done by running the
tests in debug mode and using a debugger such as `pdb `_.

For example, after editing the *test_baremetal_basic_ops* file and setting
up the pdb traces you can invoke the ``run_tempest.sh`` script in the
Tempest directory with the following parameters::

    ./run_tempest.sh -N -d ironic_tempest_plugin.tests.scenario.test_baremetal_basic_ops

* The *-N* parameter tells the script to run the tests in the local
  environment (without a virtualenv) so it can find the Ironic tempest
  plugin.
* The *-d* parameter enables the debug mode, allowing it to be used with
  pdb.

For more information about the supported parameters see::

    ./run_tempest.sh --help

.. note:: Always be careful when running debuggers in time sensitive code,
   they may cause timeout errors that weren't there before.

OSProfiler Tracing in Ironic
============================

OSProfiler is an OpenStack cross-project profiling library. It is used
among OpenStack projects to investigate performance issues and detect
bottlenecks. For details on how OSProfiler works and how to use it in
ironic, please refer to the `OSProfiler Support Documentation `_.

Building developer documentation
================================

If you would like to build the documentation locally, e.g. to test your
documentation changes before uploading them for review, run these commands
to build the documentation set:

- On your local machine::

    # activate your development virtualenv
    . .tox/venv/bin/activate

    # build the docs
    tox -edocs

    # Now use your browser to open the top-level index.html located at:
    # ironic/doc/build/html/index.html

- On a remote machine::

    # Go to the directory that contains the docs
    cd ~/ironic/doc/source/

    # Build the docs
    tox -edocs

    # Change directory to the newly built HTML files
    cd ~/ironic/doc/build/html/

    # Create a server using python on port 8000
    python3 -m http.server 8000

    # Now use your browser to open the top-level index.html located at:
    # http://your_ip:8000

ironic-15.0.0/doc/source/contributor/webapi.rst

=========================
REST API Conceptual Guide
=========================

Versioning
==========

The ironic REST API supports two types of versioning:

- "major versions", which have dedicated urls.
- "microversions", which can be requested through the use of the
  ``X-OpenStack-Ironic-API-Version`` header.

There is only one major version supported currently, "v1". As such, most
URLs in this documentation are written with the "/v1/" prefix.

Starting with the Kilo release, ironic supports microversions.
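One practical detail of microversions: the **X.Y** strings must be compared numerically, not lexically; as the version history above shows, ``1.10`` is newer than ``1.9``. The helper below is purely an illustrative sketch of that comparison and of range negotiation against the server's advertised minimum/maximum — it is not part of any ironic client library:

```python
def parse_version(ver):
    """Parse an 'X.Y' microversion string into a comparable (X, Y) tuple."""
    major, minor = ver.split('.')
    return int(major), int(minor)

def negotiate(requested, server_min, server_max):
    """Return the version to use, or None if out of the supported range."""
    if requested == 'latest':
        return server_max
    if parse_version(server_min) <= parse_version(requested) <= parse_version(server_max):
        return requested
    return None

# '1.10' is newer than '1.9' even though it sorts lower as a string
assert parse_version('1.10') > parse_version('1.9')
assert not ('1.10' > '1.9')
print(negotiate('1.10', '1.1', '1.65'))  # prints "1.10"
```

A real client sends the chosen value in the ``X-OpenStack-Ironic-API-Version`` request header and reads the server's range from the ``X-OpenStack-Ironic-API-Minimum-Version`` and ``X-OpenStack-Ironic-API-Maximum-Version`` response headers.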
In this context, a version is defined as a string of 2 integers separated
by a dot: **X.Y**. Here ``X`` is a major version, always equal to ``1``,
and ``Y`` is a minor version. The server minor version is increased every
time the API behavior is changed (note `Exceptions from Versioning`_).

.. note:: :nova-doc:`Nova versioning documentation ` has a nice guide for
   developers on when to bump an API version.

The server indicates its minimum and maximum supported API versions in the
``X-OpenStack-Ironic-API-Minimum-Version`` and
``X-OpenStack-Ironic-API-Maximum-Version`` headers respectively, returned
with every response. A client may request a specific API version by
providing the ``X-OpenStack-Ironic-API-Version`` header with the request.

The requested microversion determines both the allowable requests and the
response format for all requests. A resource may be represented
differently based on the requested microversion.

If no version is requested by the client, the minimum supported version
will be assumed. In this way, a client is only exposed to those API
features that are supported in the requested (explicitly or implicitly)
API version (again note `Exceptions from Versioning`_; they are not
covered by this rule).

We recommend that clients requiring a stable API always request the
specific API version that they have been tested against.

.. note:: A special value ``latest`` can be requested instead of a
   numerical microversion; it always requests the newest API version
   supported by the server.

REST API Versions History
-------------------------

.. toctree::
   :maxdepth: 1

   API Version History

Exceptions from Versioning
--------------------------

The following API-visible things are not covered by the API versioning:

* Current node state is always exposed as it is, even if not supported by
  the requested API version, with the exception of the ``available``
  state, which is returned in version 1.1 as ``None`` (in Python) or
  ``null`` (in JSON).
* Data within free-form JSON attributes: ``properties``, ``driver_info``,
  ``instance_info``, ``driver_internal_info`` fields on a node object;
  ``extra`` fields on all objects.
* Addition of new drivers.
* All vendor passthru methods.

ironic-15.0.0/doc/source/contributor/vision-reflection.rst

.. _vision_reflection:

=================================================
Comparison to the 2018 OpenStack Technical Vision
=================================================

In late 2018, the OpenStack Technical Committee composed a `technical
vision `_ of what OpenStack clouds should look like. While every component
differs, and "cloudy" interactions change dramatically the closer to
physical hardware one gets, there are a few areas where Ironic could use
some improvement. This list largely exists to highlight where help is
wanted. It is also important to note that Ironic as a project has a
`vision document `_ for itself.

The Pillars of Cloud - Self Service
===================================

* Ironic's mechanisms and tooling are low-level infrastructure mechanisms,
  and as such there has never been a huge emphasis on, or need for, making
  Ironic capable of offering direct multi-tenant interaction. Most users
  interact with the bare metal managed by Ironic via Nova, which abstracts
  away many of these issues. Eventually, we should offer direct
  multi-tenancy which is not oriented towards admin-only use.

Design Goals - Built-in Reliability and Durability
==================================================

* Ironic presently considers in-flight operations as failed upon the
  restart of a controller that was previously performing a task, because
  we do not know the current status of the task upon restart. In some
  cases this makes sense, but it potentially requires administrative
  intervention in the worst of cases. In a perfect universe, Ironic
  "conductors" would validate their perception, in case tasks actually
  finished.
Design Goals - Graphical User Interface
=======================================

* While a graphical interface was developed for Horizon in the form of
  `ironic-ui `_, currently ironic-ui receives only minimal housekeeping.
  As Ironic has evolved, ironic-ui is stuck on version `1.34` and knows
  nothing of our evolution since. Ironic ultimately needs a contributor
  with sufficient time to pick up ``ironic-ui`` or to completely replace
  it as a functional and customizable user interface.

ironic-15.0.0/doc/source/contributor/contributing.rst

.. _code-contribution-guide:

============================
So You Want to Contribute...
============================

This document provides some necessary points for developers to consider
when writing and reviewing Ironic code. The checklist will help developers
get things right.

Getting Started
===============

If you're completely new to OpenStack and want to contribute to the ironic
project, please start by familiarizing yourself with the `Infra Team's
Developer Guide `_. This will help you get your accounts set up in
Launchpad and Gerrit, familiarize you with the workflow for the OpenStack
continuous integration and testing systems, and help you with your first
commit.

LaunchPad
---------

Most of the tools used for OpenStack require a launchpad.net ID for
authentication. Ironic previously used Launchpad to track work, but we
have not done so since migrating to Storyboard.

.. seealso::

   * https://launchpad.net

Storyboard
----------

The ironic project moved from Launchpad to `StoryBoard `_ for work and
task tracking. This provides an aggregate view called a "Project Group"
and individual "Projects". A good starting place is the `project group `_
representing the whole of the ironic community, as opposed to the `ironic
project `_ storyboard which represents ironic as a repository.
Internet Relay Chat 'IRC' ------------------------- Daily contributor discussions take place on IRC in the '#openstack-ironic' channel on Freenode IRC. Please feel free to join us at irc://irc.freenode.net and join our channel! Everything Ironic ~~~~~~~~~~~~~~~~~ Ironic is a community of projects centered around the primary project repository 'ironic', which help facilitate the deployment and management of bare metal resources. This means there are a number of different repositories that fall into the responsibility of the project team and the community. Some of the repositories may not seem strictly hardware related, but they may be tools or things to just make an aspect easier. Related Projects ---------------- There are several projects that are tightly integrated with ironic and which are developed by the same community. .. seealso:: * :bifrost-doc:`Bifrost Documentation <>` * :ironic-inspector-doc:`Ironic Inspector Documentation <>` * :ironic-lib-doc:`Ironic Lib Documentation <>` * :ironic-python-agent-doc:`Ironic Python Agent (IPA) Documentation <>` * :python-ironicclient-doc:`Ironic Client Documentation <>` * :python-ironic-inspector-client-doc:`Ironic Inspector Client Documentation <>` Useful Links ------------ Bug/Task tracker https://storyboard.openstack.org/#!/project/943 Mailing list (prefix Subject line with ``[ironic]``) http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss Code Hosting https://opendev.org/openstack/ironic Code Review https://review.opendev.org/#/q/status:open+project:openstack/ironic,n,z Whiteboard https://etherpad.openstack.org/p/IronicWhiteBoard Weekly Meeting Agenda https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting Adding New Features =================== Ironic tracks new features using RFEs (Requests for Feature Enhancements) instead of blueprints. These are stories with 'rfe' tag, and they should be submitted before a spec or code is proposed. 
When a member of the `ironic-core team `_ decides that the proposal is worth implementing, a spec (if needed) and code should be submitted, referencing the RFE task or story ID number. Contributors are welcome to submit a spec and/or code before the RFE is approved, however those patches will not land until the RFE is approved. Feature Submission Process -------------------------- #. Submit a bug report on the `ironic StoryBoard `_. There are two fields that must be filled: 'Title' and 'Description'. 'Tasks' can be added and are associated with a project. If you can't describe it in a sentence or two, it may mean that you are either trying to capture more than one RFE at once, or that you are having a hard time defining what you are trying to solve at all. This may also be a sign that your feature may require a specification document. #. Describe the proposed change in the 'Description' field. The description should provide enough details for a knowledgeable developer to understand what is the existing problem in the current platform that needs to be addressed, or what is the enhancement that would make the platform more capable, both from a functional and a non-functional standpoint. #. Submit the story, add an 'rfe' tag to it and assign yourself or whoever is going to work on this feature. #. As soon as a member of the team acknowledges the story, we will move the story to the 'Review' state. As time goes on, Discussion about the RFE, and whether to approve it will occur. #. Contributors will evaluate the RFE and may advise the submitter to file a spec in the ironic-specs repository to elaborate on the feature request. Typically this is when an RFE requires extra scrutiny, more design discussion, etc. For the spec submission process, please see the `Ironic Specs Process`_. A specific task should be created to track the creation of a specification. #. 
#. If a spec is not required, once the discussion has happened and there is
   positive consensus among the ironic-core team on the RFE, the RFE is
   'approved', and its tag will move from 'rfe' to 'rfe-approved'. This means
   that the feature is approved and the related code may be merged.

#. If a spec is required, the spec must be submitted (with a new task as part
   of the story referenced as 'Task' in the commit message), reviewed, and
   merged before the RFE will be 'approved' (and the tag changed to
   'rfe-approved').

#. The task then goes through the usual process -- first to 'Review' when the
   spec/code is being worked on, then 'Merged' when it is implemented.

#. If the RFE is rejected, the ironic-core team will move the story to
   "Invalid" status.

Change Tracking
---------------

We track our stories and tasks in Storyboard.

https://storyboard.openstack.org/#!/project/ironic

When working on an RFE, please be sure to tag your commits properly:
"Story: #xxxx" or "Task: #xxxx". It is also helpful to set a consistent
review topic, such as "story/xxxx" for all patches related to the RFE.

If the RFE spans across several projects (e.g. ironic and
python-ironicclient), but the main work is going to happen within ironic,
please use the same story for all the code you're submitting; there is no
need to create a separate RFE in every project.

.. note:: **RFEs may only be approved by members of the ironic-core team**.

.. note:: While not strictly required for minor changes and fixes, it is
   highly preferred by the Ironic community that any change which needs to be
   backported have a recorded Story and Task in Storyboard.

Managing Change Sets
--------------------

If you would like some help, or if you (or some members of your team) are
unable to continue working on the feature, updating and maintaining the
changes, please let the rest of the ironic community know.
You could leave a comment in one or more of the changes/patches, bring it up
in IRC, the weekly meeting, or on the OpenStack development email list.
Communicating this will make other contributors aware of the situation and
allow for others to step forward and volunteer to continue with the work. In
the event that a contributor leaves the community, do not expect the
contributor's changes to be continued unless someone volunteers to do so.

Getting Your Patch Merged
-------------------------

Within the Ironic project, we generally require two core reviewers to sign
off (+2) on change sets. We also will generally recognize non-core (+1)
reviewers, and sometimes even reverse our decision to merge code based upon
their reviews.

We recognize that some repositories have less visibility; as such, it is okay
to ask for a review in our IRC channel. Please be prepared to stay in IRC for
a little while in case we have questions.

Sometimes we may also approve patches with a single core reviewer. This is
generally discouraged, but sometimes necessary. When we do so, we try to
explain why we do so. As a patch submitter, it equally helps us to understand
why the change is important. Generally, more detail and context helps us
understand the change faster.

Timeline Expectations
---------------------

As with any large project, it does take time for features and changes to be
merged in any of the project repositories. This is largely due to limited
review bandwidth coupled with varying reviewer priorities and focuses.

When establishing an understanding of complexity, the following things should
be kept in mind.

* Generally, small and minor changes can gain consensus and merge fairly
  quickly. These sorts of changes would be: bug fixes, minor documentation
  updates, follow-up changes.

* Medium changes generally consist of driver feature parity changes, where
  one driver is working to match functionality of another driver.
  * These changes generally only require an RFE for the purposes of tracking
    and correlating the change.
  * Documentation updates are expected to be submitted with or immediately
    following the initial change set.

* Larger or controversial changes generally take much longer to merge. This
  is often due to the necessity of reviewers to gain additional context and
  for change sets to be iterated upon to reach a state where there is
  consensus. These sorts of changes include: database, object, internal
  interface additions, RPC, REST API changes.

  * These changes will very often require specifications to reach consensus,
    unless there are pre-existing patterns or code already present.
  * These changes may require many reviews and iterations, and can also be
    expected to be impacted by merge conflicts as other code or features are
    merged.
  * These changes must typically be split into a series of changes. Reviewers
    typically shy away from larger single change sets due to increased
    difficulty in reviewing.
  * Do not expect any API or user-visible data model changes to merge after
    the API client freeze. Some substrate changes may merge if not user
    visible.

* You should expect complex features, such as cross-project features or
  integration, to take longer than a single development cycle to land.

  * Building consensus is vital.
  * Often these changes are controversial or have multiple considerations
    that need to be worked through in the specification process, which may
    cause the design to change. As such, it may take months to reach
    consensus over design.
  * These features are best broken into larger chunks and tackled in an
    incremental fashion.

Live Upgrade Related Concerns
-----------------------------

See :doc:`/contributor/rolling-upgrades`.

Driver Internal Info
~~~~~~~~~~~~~~~~~~~~

The ``driver_internal_info`` node field was introduced in the Kilo release.
It allows driver developers to store internal information that cannot be
modified by end users.
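Conceptually, ``driver_internal_info`` behaves like a dictionary owned by the
drivers, with driver-specific keys prefixed by the interface name to avoid
conflicts. The following stand-alone sketch illustrates that prefixing
convention with plain Python; the helper function and key names are
hypothetical illustrations, not part of Ironic's API:

```python
# Illustrative only: a plain dict stands in for a node's
# ``driver_internal_info`` field. The helper shows the convention of
# prefixing keys with an interface name (e.g. 'ilo_', 'drac_') so that
# different interfaces never clash on the same key.

def set_internal_info(node_info, interface_prefix, key, value):
    """Store a value under a prefixed key, e.g. 'ilo_bar'."""
    node_info['%s_%s' % (interface_prefix, key)] = value
    return node_info

info = {}
set_internal_info(info, 'ilo', 'bar', 42)
set_internal_info(info, 'drac', 'xyz', 'abc')
print(sorted(info))  # ['drac_xyz', 'ilo_bar']
```

Because each interface writes only under its own prefix, two drivers can
safely share the same node field without coordinating key names.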
Here is the list of existing common and agent driver attributes:

* Common attributes:

  * ``is_whole_disk_image``: A Boolean value to indicate whether the user
    image contains ramdisk/kernel.
  * ``clean_steps``: An ordered list of clean steps that will be performed on
    the node.
  * ``deploy_steps``: An ordered list of deploy steps that will be performed
    on the node. Support for deploy steps was added in the ``11.1.0``
    release.
  * ``instance``: A list of dictionaries containing the disk layout values.
  * ``root_uuid_or_disk_id``: A String value of the bare metal node's root
    partition uuid or disk id.
  * ``persistent_boot_device``: A String value of device from
    ``ironic.common.boot_devices``.
  * ``is_next_boot_persistent``: A Boolean value to indicate whether the next
    boot device is ``persistent_boot_device``.

* Agent driver attributes:

  * ``agent_url``: A String value of the IPA API URL so that Ironic can talk
    to the IPA ramdisk.
  * ``hardware_manager_version``: A String value of the version of the
    hardware manager in the IPA ramdisk.
  * ``target_raid_config``: A Dictionary containing the target RAID
    configuration. This is a copy of the attribute of the same name in the
    Node object, but this one is never actually saved into the DB and is only
    read by the IPA ramdisk.

.. note::
   These are only some fields in use. Other vendor drivers might expose more
   ``driver_internal_info`` properties; please check their development
   documentation and/or module docstring for details. It is important for
   developers to make sure these properties follow the precedent of prefixing
   their variable names with a specific interface name (e.g., ilo_bar,
   drac_xyz), so as to minimize or avoid any conflicts between interfaces.

Ironic Specs Process
--------------------

Specifications must follow the template which can be found at
`specs/template.rst `_, which is quite self-documenting.
Specifications are proposed by adding them to the `specs/approved` directory,
adding a soft link to it from the `specs/not-implemented` directory, and
posting it for review to Gerrit. For more information, please see the
`README `_.

The same `Gerrit process `_ as with source code, using the repository
`ironic-specs `_, is used to add new specifications. All approved
specifications are available at:
https://specs.openstack.org/openstack/ironic-specs.

If a specification has been approved but not completed within one or more
releases since the approval, it may be re-reviewed to make sure it still
makes sense as written.

Ironic specifications are part of the
`RFE (Requests for Feature Enhancements) process <#adding-new-features>`_.
You are welcome to submit patches associated with an RFE, but they will have
a -2 ("do not merge") until the specification has been approved. This is to
ensure that the patches don't get accidentally merged beforehand. You will
still be able to get reviewer feedback and push new patch sets, even with a
-2.

The `list of core reviewers `_ for the specifications is small but mighty.
(This is not necessarily the same list of core reviewers for code patches.)

Changes to existing specs
-------------------------

For approved but not-completed specs:

- cosmetic cleanup, fixing errors, and changing the definition of a feature
  can be done to the spec.

For approved and completed specs:

- changing a previously approved and completed spec should only be done for
  cosmetic cleanup or fixing errors.
- changing the definition of the feature should be done in a new spec.

Please see the `Ironic specs process wiki page `_ for further reference.

Bug Reporting
=============

Bugs can be reported via our Task and Bug tracking tool, Storyboard.

When filing bugs, please include as much detail as possible, and don't be
shy. Essential pieces of information are generally:

* Contents of the 'node' - `openstack baremetal node show <node>`
* Steps to reproduce the issue.
* Exceptions and surrounding lines from the logs.
* Versions of ironic, ironic-python-agent, and any other coupled components.

Please also set your expectations of what *should* be happening. Statements
of user expectations are how we understand what is occurring and how we learn
new use cases!

Project Team Leader Duties
==========================

The ``Project Team Leader`` or ``PTL`` is elected each development cycle by
the contributors to the ironic community. Think of this person as your
primary contact if you need to try and rally the project, or have a major
issue that requires attention. They serve a role that is mainly oriented
towards trying to drive the technical discussion forward and managing the
idiosyncrasies of the project. With this responsibility, they are considered
a "public face" of the project and are generally obliged to try and provide
"project updates" and outreach communication.

All common PTL duties are enumerated here in the `PTL guide `_. Tasks like
release management or preparation for a release are generally delegated
within the team. Even outreach can be delegated, and specifically there is no
rule stating that any member of the community can't propose a release, clean
up release notes or documentation, or even get on the occasional stage.

Developing a new Deploy Step
============================

To support a customized deployment step, implement a new method in an
interface class and use the decorator ``deploy_step`` defined in
``ironic/drivers/base.py``. For example, we will implement a ``do_nothing``
deploy step in the ``AgentDeploy`` class.

.. code-block:: python

    class AgentDeploy(AgentDeployMixin, base.DeployInterface):
        ...

        @base.deploy_step(priority=200, argsinfo={
            'test_arg': {
                'description': (
                    "This is a test argument."
                ),
                'required': True
            }
        })
        def do_nothing(self, task, **kwargs):
            return None

After deployment of the baremetal node, check the updated deploy steps::

    openstack baremetal node show $node_ident -f json -c driver_internal_info

The above command outputs the ``driver_internal_info`` as follows::

    {
      "driver_internal_info": {
        ...
        "deploy_steps": [
          {
            "priority": 200,
            "interface": "deploy",
            "step": "do_nothing",
            "argsinfo": {
              "test_arg": {
                "required": true,
                "description": "This is a test argument."
              }
            }
          },
          {
            "priority": 100,
            "interface": "deploy",
            "step": "deploy",
            "argsinfo": null
          }
        ],
        "deploy_step_index": 1
      }
    }

.. note:: Similarly, clean steps can be implemented using the ``clean_step``
   decorator.

.. _vendor-passthru:

==============
Vendor Methods
==============

This document is a quick tutorial on writing vendor specific methods for a
driver.

The first thing to note is that the Ironic API supports two vendor endpoints:
a driver vendor passthru and a node vendor passthru.

* The ``VendorInterface`` allows hardware types to expose custom top-level
  functionality which is not specific to a Node. For example, let's say the
  driver `ipmi` exposed a method called `authentication_types` that would
  return the supported authentication types. It could be accessed via the
  Ironic API like::

    GET http://<address>
    :<port>/v1/drivers/ipmi/vendor_passthru/authentication_types

  .. warning::
     The Bare Metal API currently only allows using driver passthru for the
     default ``vendor`` interface implementation for a given hardware type.
     This limitation will be lifted in the future.

* The node vendor passthru allows drivers to expose custom functionality on a
  per-node basis. For example, the same driver `ipmi` could expose a method
  called `send_raw` that would send raw bytes to the BMC; the method also
  receives a parameter called `raw_bytes`, whose value would be the bytes to
  be sent. It could be accessed via the Ironic API like::

    POST {'raw_bytes': '0x01 0x02'} http://<address>
    :<port>/v1/nodes/<node UUID>/vendor_passthru/send_raw

Writing Vendor Methods
======================

Writing a custom vendor method in Ironic should be simple. The first thing to
do is write a class inheriting from the `VendorInterface`_ class:

.. code-block:: python

    class ExampleVendor(VendorInterface):

        def get_properties(self):
            return {}

        def validate(self, task, **kwargs):
            pass

The `get_properties` is a method that all driver interfaces have; it should
return a dictionary of <property>:<description> pairs, telling in the
description whether that property is required or optional, so the node can be
manageable by that driver. For example, a required property for an `ipmi`
driver would be `ipmi_address`, which is the IP address or hostname of the
node. We are returning an empty dictionary in our example to make it simpler.

The `validate` method is responsible for validating the parameters passed to
the vendor methods. Ironic will not introspect into what is passed to the
drivers; it's up to the developers writing the vendor method to validate that
data.

Let's extend the `ExampleVendor` class to support two methods: the
`authentication_types` method, which will be exposed on the driver vendor
passthru endpoint; and the `send_raw` method, which will be exposed on the
node vendor passthru endpoint:

.. code-block:: python

    class ExampleVendor(VendorInterface):

        def get_properties(self):
            return {}

        def validate(self, task, method, **kwargs):
            if method == 'send_raw':
                if 'raw_bytes' not in kwargs:
                    raise MissingParameterValue()

        @base.driver_passthru(['GET'], async_call=False)
        def authentication_types(self, context, **kwargs):
            return {"types": ["NONE", "MD5", "MD2"]}

        @base.passthru(['POST'])
        def send_raw(self, task, **kwargs):
            raw_bytes = kwargs.get('raw_bytes')
            ...

That's it!

Writing a node or driver vendor passthru method is pretty much the same; the
only difference is how you decorate the methods and the first parameter of
the method (ignoring self).
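The per-method validation pattern shown above can be exercised outside of
Ironic. The following stand-alone sketch uses a local stand-in exception
class (not Ironic's real ``ironic.common.exception.MissingParameterValue``)
and a hypothetical table of required parameters:

```python
# Stand-alone sketch of per-method parameter validation, mirroring the
# ExampleVendor.validate() pattern above. MissingParameterValue here is a
# hypothetical local stand-in for Ironic's real exception class.

class MissingParameterValue(Exception):
    pass

REQUIRED_PARAMS = {
    'send_raw': ['raw_bytes'],   # node passthru method
    'authentication_types': [],  # driver passthru method, no arguments
}

def validate(method, **kwargs):
    """Raise if any required parameter for ``method`` is missing."""
    for param in REQUIRED_PARAMS.get(method, []):
        if param not in kwargs:
            raise MissingParameterValue(
                'Missing parameter %s for method %s' % (param, method))

validate('send_raw', raw_bytes='0x01 0x02')  # passes silently
try:
    validate('send_raw')  # raises: 'raw_bytes' not supplied
except MissingParameterValue as exc:
    print(exc)
```

Centralizing the required-parameter table keeps ``validate`` declarative, so
adding a new passthru method only requires a new table entry.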
A method decorated with the `@passthru` decorator should expect a Task object
as its first parameter, and a method decorated with the `@driver_passthru`
decorator should expect a Context object as its first parameter.

Both decorators accept these parameters:

* http_methods: A list of the HTTP methods supported by that vendor function.
  To know which HTTP method that function was invoked with, a `http_method`
  parameter will be present in the `kwargs`. Supported HTTP methods are
  *POST*, *PUT*, *GET* and *PATCH*.

* method: By default the method name is the name of the python function; if
  you want to use a different name, this parameter is where this name can be
  set. For example:

  .. code-block:: python

      @passthru(['PUT'], method="alternative_name")
      def name(self, task, **kwargs):
          ...

* description: A string containing a nice description about what that method
  is supposed to do. Defaults to "" (empty string).

* async_call: A boolean value to determine whether this method should run
  asynchronously or synchronously. Defaults to True (asynchronously).

  .. note:: This parameter was previously called "async".

The node vendor passthru decorator (`@passthru`) also accepts the following
parameter:

* require_exclusive_lock: A boolean value determining whether this method
  should require an exclusive lock on a node between validate() and the
  beginning of method execution. For synchronous methods, the lock on the
  node would also be kept for the duration of method execution. Defaults to
  True.

.. WARNING::
   Please avoid having a synchronous method for slow/long-running operations
   **or** if the method does talk to a BMC; BMCs are flaky and very easy to
   break.

.. WARNING::
   Each asynchronous request consumes a worker thread in the
   ``ironic-conductor`` process. This can lead to starvation of the thread
   pool, resulting in a denial of service.

.. _VendorInterface: ../api/ironic.drivers.base.html#ironic.drivers.base.VendorInterface
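To make the decorator parameters above more concrete, here is a simplified
sketch of how a passthru-style decorator can attach routing metadata (HTTP
methods, exposed name, async flag) to a function. This illustrates the
general technique only; it is not Ironic's actual implementation in
``ironic/drivers/base.py``, and the ``_vendor_metadata`` attribute name is
invented for this example:

```python
# Simplified sketch: a decorator factory that records routing metadata on
# the decorated function, so a dispatcher could later look up which HTTP
# methods are allowed and under what name the method is exposed.

def passthru(http_methods, method=None, description='', async_call=True):
    def decorator(func):
        func._vendor_metadata = {
            'http_methods': http_methods,
            'method': method or func.__name__,
            'description': description,
            'async_call': async_call,
        }
        return func
    return decorator

@passthru(['PUT'], method='alternative_name')
def name(task, **kwargs):
    ...

print(name._vendor_metadata['method'])        # alternative_name
print(name._vendor_metadata['http_methods'])  # ['PUT']
```

Because the metadata lives on the function object itself, a vendor interface
can be scanned for decorated methods without any separate registry.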
Give the new vendor interface implementation a human-friendly name and create
an entry point for it in the ``setup.cfg``::

    ironic.hardware.interfaces.vendor =
        example = ironic.drivers.modules.example:ExampleVendor

Finally, add it to the list of supported vendor interfaces for relevant
hardware types, for example:

.. code-block:: python

    class ExampleHardware(generic.GenericHardware):
        ...

        @property
        def supported_vendor_interfaces(self):
            return [example.ExampleVendor]

Backwards Compatibility
=======================

There is no requirement that changes to a vendor method be backwards
compatible. However, for your users' sakes, we highly recommend that you do
so.

If you are changing the exceptions being raised, you might want to ensure
that the same HTTP code is being returned to the user.

For non-backwards compatibility, please make sure you add a release note that
indicates this.

.. _debug-ci-failures:

=====================
Debugging CI failures
=====================

If you see `FAILURE` in one or more jobs for your patch, please don't panic.
This guide may help you to find the initial reason for the failure. When
clicking on the failed job, you will be redirected to the Zuul web page that
contains all the information about the job build.

Zuul Web Page
=============

The page has three tabs: `Summary`, `Logs` and `Console`.

* Summary: Contains overall information about the build of the job; if the
  job build failed, it will contain a general output of the failure.

* Logs: Contains all configurations and log files about all services that
  were used in the job. This will give you an overall idea of the failures,
  and the `job-output` file is a good starting point for identifying the
  services that may be involved.
* Console: Contains all the playbooks that were executed; by clicking on the
  arrow before each playbook name, you can find the roles and commands that
  were executed.

.. _bios_develop:

Developing BIOS Interface
=========================

To support a driver specific BIOS interface, it is necessary to create a
class inheriting from the ``BIOSInterface`` class:

.. code-block:: python

    from ironic.drivers import base

    class ExampleBIOS(base.BIOSInterface):

        def get_properties(self):
            return {}

        def validate(self, task):
            pass

See :doc:`/contributor/drivers` for a detailed explanation of hardware type
and interface.

The ``get_properties`` and ``validate`` are methods that all driver
interfaces have. The hardware interface that supports BIOS settings should
also implement the following three methods:

* Implement a method named ``cache_bios_settings``. This method stores BIOS
  settings in the ``bios_settings`` table during cleaning operations and
  updates the ``bios_settings`` table when ``apply_configuration`` or
  ``factory_reset`` are successfully called.
  .. code-block:: python

      from ironic.drivers import base

      driver_client = importutils.try_import('driver.client')

      class ExampleBIOS(base.BIOSInterface):

          def __init__(self):
              if driver_client is None:
                  raise exception.DriverLoadError(
                      driver=self.__class__.__name__,
                      reason=_("Unable to import driver library"))

          def cache_bios_settings(self, task):
              node_id = task.node.id
              node_info = driver_common.parse_driver_info(task.node)
              settings = driver_client.get_bios_settings(node_info)
              create_list, update_list, delete_list, nochange_list = (
                  objects.BIOSSettingList.sync_node_setting(settings))
              if len(create_list) > 0:
                  objects.BIOSSettingList.create(
                      task.context, node_id, create_list)
              if len(update_list) > 0:
                  objects.BIOSSettingList.save(
                      task.context, node_id, update_list)
              if len(delete_list) > 0:
                  delete_names = []
                  for setting in delete_list:
                      delete_names.append(setting.name)
                  objects.BIOSSettingList.delete(
                      task.context, node_id, delete_names)

  .. note::
     ``driver.client`` is a vendor specific library to control and manage the
     bare metal hardware, for example: python-dracclient, sushy.

* Implement a method named ``factory_reset``. This method needs to use the
  ``clean_step`` decorator. It resets BIOS settings to factory default on the
  given node. It calls ``cache_bios_settings`` automatically to update the
  existing ``bios_settings`` table once successfully executed.

  .. code-block:: python

      class ExampleBIOS(base.BIOSInterface):

          @base.clean_step(priority=0)
          def factory_reset(self, task):
              node_info = driver_common.parse_driver_info(task.node)
              driver_client.reset_bios_settings(node_info)

* Implement a method named ``apply_configuration``. This method needs to use
  the ``clean_step`` decorator. It takes the given BIOS settings and applies
  them on the node. It also calls ``cache_bios_settings`` automatically to
  update the existing ``bios_settings`` table after successfully applying the
  given settings on the node.
  .. code-block:: python

      class ExampleBIOS(base.BIOSInterface):

          @base.clean_step(priority=0, argsinfo={
              'settings': {
                  'description': (
                      'A list of BIOS settings to be applied'
                  ),
                  'required': True
              }
          })
          def apply_configuration(self, task, settings):
              node_info = driver_common.parse_driver_info(task.node)
              driver_client.apply_bios_settings(node_info, settings)

  The ``settings`` parameter is a list of BIOS settings to be configured, for
  example::

      [
        {
          "setting name": {
            "name": "String",
            "value": "String"
          }
        },
        {
          "setting name": {
            "name": "String",
            "value": "String"
          }
        },
        ...
      ]

==========================================
Ironic multitenant networking and DevStack
==========================================

This guide will walk you through using OpenStack Ironic/Neutron with the ML2
``networking-generic-switch`` plugin.

Using VMs as baremetal servers
==============================

This scenario shows how to set up Devstack to use Ironic/Neutron integration
with VMs as baremetal servers and ML2 ``networking-generic-switch`` that
interacts with OVS.

DevStack Configuration
----------------------

The following is a ``local.conf`` that will set up Devstack with 3 VMs that
are registered in ironic. The ``networking-generic-switch`` driver will be
installed and configured in Neutron.

::

    [[local|localrc]]

    # Configure ironic from ironic devstack plugin.
    enable_plugin ironic https://opendev.org/openstack/ironic

    # Install networking-generic-switch Neutron ML2 driver that interacts with OVS
    enable_plugin networking-generic-switch https://opendev.org/openstack/networking-generic-switch

    # Add link local info when registering Ironic node
    IRONIC_USE_LINK_LOCAL=True

    IRONIC_ENABLED_NETWORK_INTERFACES=flat,neutron
    IRONIC_NETWORK_INTERFACE=neutron

    # Networking configuration
    OVS_PHYSICAL_BRIDGE=brbm
    PHYSICAL_NETWORK=mynetwork
    IRONIC_PROVISION_NETWORK_NAME=ironic-provision
    IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24
    IRONIC_PROVISION_SUBNET_GATEWAY=10.0.5.1

    Q_PLUGIN=ml2
    ENABLE_TENANT_VLANS=True
    Q_ML2_TENANT_NETWORK_TYPE=vlan
    TENANT_VLAN_RANGE=100:150

    # Credentials
    ADMIN_PASSWORD=password
    RABBIT_PASSWORD=password
    DATABASE_PASSWORD=password
    SERVICE_PASSWORD=password
    SERVICE_TOKEN=password
    SWIFT_HASH=password
    SWIFT_TEMPURL_KEY=password

    # Enable Ironic API and Ironic Conductor
    enable_service ironic
    enable_service ir-api
    enable_service ir-cond

    # Disable nova novnc service, ironic does not support it anyway.
    disable_service n-novnc

    # Enable Swift for the direct deploy interface.
    enable_service s-proxy
    enable_service s-object
    enable_service s-container
    enable_service s-account

    # Disable Horizon
    disable_service horizon

    # Disable Cinder
    disable_service cinder c-sch c-api c-vol

    # Disable Tempest
    disable_service tempest

    # Swift temp URL's are required for the direct deploy interface.
    SWIFT_ENABLE_TEMPURLS=True

    # Create 3 virtual machines to pose as Ironic's baremetal nodes.
    IRONIC_VM_COUNT=3
    IRONIC_BAREMETAL_BASIC_OPS=True

    # Enable additional hardware types, if needed.
    #IRONIC_ENABLED_HARDWARE_TYPES=ipmi,fake-hardware
    # Don't forget that many hardware types require enabling of additional
    # interfaces, most often power and management:
    #IRONIC_ENABLED_MANAGEMENT_INTERFACES=ipmitool,fake
    #IRONIC_ENABLED_POWER_INTERFACES=ipmitool,fake

    # The default deploy interface is 'iscsi', you can use 'direct' with
    #IRONIC_DEFAULT_DEPLOY_INTERFACE=direct

    # Change this to alter the default driver for nodes created by devstack.
    # This driver should be in the enabled list above.
    IRONIC_DEPLOY_DRIVER=ipmi

    # The parameters below represent the minimum possible values to create
    # functional nodes.
    IRONIC_VM_SPECS_RAM=1024
    IRONIC_VM_SPECS_DISK=10

    # Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
    IRONIC_VM_EPHEMERAL_DISK=0

    # To build your own IPA ramdisk from source, set this to True
    IRONIC_BUILD_DEPLOY_RAMDISK=False

    VIRT_DRIVER=ironic

    # By default, DevStack creates a 10.0.0.0/24 network for instances.
    # If this overlaps with the hosts network, you may adjust with the
    # following.
    NETWORK_GATEWAY=10.1.0.1
    FIXED_RANGE=10.1.0.0/24
    FIXED_NETWORK_SIZE=256

    # Log all output to files
    LOGFILE=$HOME/devstack.log
    LOGDIR=$HOME/logs
    IRONIC_VM_LOG_DIR=$HOME/ironic-bm-logs

Developer's Guide
=================

Getting Started
---------------

If you are new to ironic, this section contains information that should help
you get started as a developer working on the project or contributing to the
project.

.. toctree::
   :maxdepth: 1

   Developer Contribution Guide
   Setting Up Your Development Environment
   Priorities
   Specifications
   Frequently Asked Questions
   Contributor Vision
   OpenStack Vision

The following pages describe the architecture of the Bare Metal service and
may be helpful to anyone working on or with the service, but are written
primarily for developers.
.. toctree::
   :maxdepth: 1

   Ironic System Architecture
   Provisioning State Machine
   Developing New Notifications
   OSProfiler Tracing
   Rolling Upgrades

These pages contain information for PTLs, cross-project liaisons, and core
reviewers.

.. toctree::
   :maxdepth: 1

   Releasing Ironic Projects
   Ironic Governance Structure

Writing Drivers
---------------

Ironic's community includes many hardware vendors who contribute drivers that
enable more advanced functionality when Ironic is used in conjunction with
that hardware. To do this, the Ironic developer community is committed to
standardizing on a `Python Driver API `_ that meets the common needs of all
hardware vendors, and evolving this API without breaking backwards
compatibility.

However, it is sometimes necessary for driver authors to implement
functionality - and expose it through the REST API - that cannot be done
through any existing API.

To facilitate that, we also provide the means for API calls to be "passed
through" ironic and directly to the driver. Some guidelines on how to
implement this are provided below. Driver authors are strongly encouraged to
talk with the developer community about any implementation using this
functionality.

.. toctree::
   :maxdepth: 1

   Driver Overview
   Writing "vendor_passthru" methods
   Creating new BIOS interfaces
   Third party continuous integration testing
   Writing Deploy or Clean Steps

Testing Network Integration
---------------------------

In order to test the integration between the Bare Metal and Networking
services, support has been added to `devstack `_ to mimic an external
physical switch. Here we include a recommended configuration for devstack to
bring up this environment.

.. toctree::
   :maxdepth: 1

   Configuring Devstack for multitenant network testing

Testing Boot-from-Volume
------------------------

Starting with the Pike release, it is also possible to use DevStack for
testing booting from Cinder volumes with VMs.
.. toctree::
   :maxdepth: 1

   Configuring Devstack for boot-from-volume testing

Full Ironic Server Python API Reference
---------------------------------------

.. toctree::
   :maxdepth: 1

   api/modules

Understanding Ironic's CI
-------------------------

It's important to understand the role of each job in the CI, how to add new
jobs and how to debug failures that may arise. To facilitate that, we have
created the documentation below.

.. toctree::
   :maxdepth: 1

   Job roles in the CI
   How to add a new job?
   How to debug failures in CI jobs

Our policy for stable branches
------------------------------

Stable branches that are on `Extended Maintenance`_ and haven't received
backports in a while can be tagged as ``Unmaintained``, after discussions
within the ironic community. If such a decision is taken, an email will be
sent to the OpenStack mailing list.

What does ``Unmaintained`` mean? The branch still exists, but the ironic
upstream community will not actively backport patches from maintained
branches. Fixes can still be merged, though, if pushed into review by
operators or other downstream developers. It also means that branchless
projects (e.g.: ironic-tempest-plugin) may not have configurations that are
compatible with those branches.

As of 09 March 2020, the list of ``Unmaintained`` branches includes:

* Ocata (Last commit - Jun 28, 2019)
* Pike (Last commit - Oct 2, 2019)

.. _Extended Maintenance: https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases

.. _faq:

==========================================
Developer FAQ (frequently asked questions)
==========================================

Here are some answers to frequently-asked questions from IRC and elsewhere.

.. contents::
   :local:
   :depth: 2

How do I...
===========

...create a migration script template?
--------------------------------------

Using the ``ironic-dbsync revision`` command, e.g::

    $ cd ironic
    $ tox -evenv -- ironic-dbsync revision -m \"create foo table\"

It will create an empty alembic migration. For more information see the
`alembic documentation`_.

.. _`alembic documentation`: http://alembic.zzzcomputing.com/en/latest/tutorial.html#create-a-migration-script

.. _faq_release_note:

...know if a release note is needed for my change?
--------------------------------------------------

`Reno documentation`_ contains a description of what can be added to each
section of a release note. If, after reading this, you're still unsure about
whether to add a release note for your change or not, keep in mind that it is
intended to contain information for deployers, so changes to unit tests or
documentation are unlikely to require one.

...create a new release note?
-----------------------------

By running the ``reno`` command via tox, e.g::

    $ tox -e venv -- reno new version-foo
    venv create: /home/foo/ironic/.tox/venv
    venv installdeps: -r/home/foo/ironic/test-requirements.txt
    venv develop-inst: /home/foo/ironic
    venv runtests: PYTHONHASHSEED='0'
    venv runtests: commands[0] | reno new version-foo
    Created new notes file in releasenotes/notes/version-foo-ecb3875dc1cbf6d9.yaml
    venv: commands succeeded
    congratulations :)

    $ git status
    On branch test
    Untracked files:
      (use "git add ..." to include in what will be committed)
        releasenotes/notes/version-foo-ecb3875dc1cbf6d9.yaml

Then edit the result file. Note that:

- we prefer to use present tense in release notes. For example, a release
  note should say "Adds support for feature foo", not "Added support for
  feature foo". (We use 'adds' instead of 'add' because grammatically, it is
  "ironic adds support", not "ironic add support".)

- any variant of English spelling (American, British, Canadian,
  Australian...) is acceptable. The release note itself should be consistent
  and not have different spelling variants of the same word.
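A finished release note file is a small YAML document containing one or more
of reno's section keys. An illustrative (hypothetical) example, using the
``features`` and ``fixes`` sections:

```yaml
---
features:
  - |
    Adds support for the hypothetical ``foo`` deploy option.
fixes:
  - |
    Fixes a hypothetical issue where the ``bar`` setting was ignored
    during cleaning.
```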
For more information see the `reno documentation`_. .. _`reno documentation`: https://docs.openstack.org/reno/latest/user/usage.html ...update a release note? ------------------------- If this is a release note that pertains to something that was fixed on master or an intermediary release (during a development cycle, that hasn't been branched yet), you can go ahead and update it by submitting a patch. If it is the release note of an ironic release that has branched, `it can be updated `_ but we will only allow it in extenuating circumstances. (It can be updated by *only* updating the file in that branch. DO NOT update the file in master and cherry-pick it. If you do, `see how the mess was cleaned up `_.) ...get a decision on something? ------------------------------- You have an issue and would like a decision to be made. First, make sure that the issue hasn't already been addressed, by looking at documentation, stories, specifications, or asking. Information and links can be found on the `Ironic wiki`_ page. There are several ways to solicit comments and opinions: * bringing it up at the `weekly Ironic meeting`_ * bringing it up on IRC_ * bringing it up on the `mailing list`_ (add "[Ironic]" to the Subject of the email) If there are enough core folks at the weekly meeting, after discussing an issue, voting could happen and a decision could be made. The problem with IRC or the weekly meeting is that feedback will only come from the people that are actually present. To inform (and solicit feedback from) more people about an issue, the preferred process is: #. bring it up on the mailing list #. after some period of time has elapsed (and depending on the thread activity), someone should propose a solution via gerrit. (E.g. the person that started the thread if no one else steps up.) The proposal should be made in the git repository that is associated with the issue. (For instance, this decision process was proposed as a documentation patch to the ironic repository.) 
#. In the email thread, don't forget to provide a link to the proposed patch! #. The discussion then moves to the proposed patch. If this is a big decision, we could declare that some percentage of the cores should vote on it before landing it. (This process was suggested in an email thread about `process for making decisions`_.) .. _Ironic wiki: https://wiki.openstack.org/wiki/Ironic .. _weekly Ironic meeting: https://wiki.openstack.org/wiki/Meetings/Ironic .. _IRC: https://wiki.openstack.org/wiki/Ironic#IRC .. _mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss .. _process for making decisions: http://lists.openstack.org/pipermail/openstack-dev/2016-May/095460.html ...add support for GMRs to new executables and extend the GMR? ----------------------------------------------------------------- For more information, see the :oslo.reports-doc:`oslo.reports documentation ` page. ironic-15.0.0/doc/source/cli/0000775000175000017500000000000013652514443015725 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/cli/ironic-status.rst0000664000175000017500000000337013652514273021267 0ustar zuulzuul00000000000000============= ironic-status ============= Synopsis ======== :: ironic-status <category> <command> [<args>] Description =========== :program:`ironic-status` is a tool that provides routines for checking the status of an Ironic deployment. Options ======= The standard pattern for executing a :program:`ironic-status` command is:: ironic-status <category> <command> [<args>] Run without arguments to see a list of available command categories:: ironic-status Categories are: * ``upgrade`` Detailed descriptions are below. You can also run with a category argument such as ``upgrade`` to see a list of all commands in that category:: ironic-status upgrade These sections describe the available categories and arguments for :program:`ironic-status`. Upgrade ~~~~~~~ .. 
_ironic-status-checks: ``ironic-status upgrade check`` Performs a release-specific readiness check before restarting services with new code. This command expects to have complete configuration and access to databases and services. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - All upgrade readiness checks passed successfully and there is nothing to do. * - 1 - At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. * - 2 - There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. * - 255 - An unexpected error occurred. **History of Checks** **12.0.0 (Stein)** * Adds a check for compatibility of the object versions with the release of ironic. ironic-15.0.0/doc/source/cli/ironic-dbsync.rst0000664000175000017500000001430113652514273021222 0ustar zuulzuul00000000000000============= ironic-dbsync ============= The :command:`ironic-dbsync` utility is used to create the database schema tables that the ironic services will use for storage. It can also be used to upgrade existing database tables when migrating between different versions of ironic. The `Alembic library `_ is used to perform the database migrations. Options ======= This is a partial list of the most useful options. To see the full list, run the following:: ironic-dbsync --help .. program:: ironic-dbsync .. option:: -h, --help Show help message and exit. .. option:: --config-dir Path to a config directory with configuration files. .. option:: --config-file Path to a configuration file to use. .. option:: -d, --debug Print debugging output. .. option:: --version Show the program's version number and exit. .. option:: upgrade, stamp, revision, version, create_schema, online_data_migrations The :ref:`command ` to run. 
Usage ===== Options for the various :ref:`commands ` for :command:`ironic-dbsync` are listed when the :option:`-h` or :option:`--help` option is used after the command. For example:: ironic-dbsync create_schema --help Information about the database is read from the ironic configuration file used by the API server and conductor services. This file must be specified with the :option:`--config-file` option:: ironic-dbsync --config-file /path/to/ironic.conf create_schema The configuration file defines the database backend to use with the *connection* database option:: [database] connection=mysql+pymysql://root@localhost/ironic If no configuration file is specified with the :option:`--config-file` option, :command:`ironic-dbsync` assumes an SQLite database. .. _dbsync_cmds: Command Options =============== :command:`ironic-dbsync` is given a command that tells the utility what actions to perform. These commands can take arguments. Several commands are available: .. _create_schema: create_schema ------------- .. program:: create_schema .. option:: -h, --help Show help for create_schema and exit. This command will create database tables based on the most current version. It assumes that there are no existing tables. An example of creating database tables with the most recent version:: ironic-dbsync --config-file=/etc/ironic/ironic.conf create_schema online_data_migrations ---------------------- .. program:: online_data_migrations .. option:: -h, --help Show help for online_data_migrations and exit. .. option:: --max-count The maximum number of objects (a positive value) to migrate. Optional. If not specified, all the objects will be migrated (in batches of 50 to avoid locking the database for long periods of time). .. option:: --option If a migration accepts additional parameters, they can be passed via this argument. It can be specified several times. This command will migrate objects in the database to their most recent versions. 
This command must be successfully run (return code 0) before upgrading to a future release. It returns: * 1 (not completed) if there are still pending objects to be migrated. Before upgrading to a newer release, this command must be run until 0 is returned. * 0 (success) after migrations are finished or there are no data to migrate * 127 (error) if max-count is not a positive value or an option is invalid * 2 (error) if the database is not compatible with this release. This command needs to be run using the previous release of ironic, before upgrading and running it with this release. revision -------- .. program:: revision .. option:: -h, --help Show help for revision and exit. .. option:: -m , --message The message to use with the revision file. .. option:: --autogenerate Compares table metadata in the application with the status of the database and generates migrations based on this comparison. This command will create a new revision file. You can use the :option:`--message` option to comment the revision. This is really only useful for ironic developers making changes that require database changes. This revision file is used during database migration and will specify the changes that need to be made to the database tables. Further discussion is beyond the scope of this document. stamp ----- .. program:: stamp .. option:: -h, --help Show help for stamp and exit. .. option:: --revision The revision number. This command will 'stamp' the revision table with the version specified with the :option:`--revision` option. It will not run any migrations. upgrade ------- .. program:: upgrade .. option:: -h, --help Show help for upgrade and exit. .. option:: --revision The revision number to upgrade to. This command will upgrade existing database tables to the most recent version, or to the version specified with the :option:`--revision` option. 
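The ``online_data_migrations`` return codes documented above lend themselves to automation: an upgrade script can keep invoking the command in batches until it reports completion, and abort on either error code. Below is a minimal sketch of such a loop, not part of ironic itself; the config-file path and batch size are illustrative defaults:

```python
import subprocess

def migrations_done(rc):
    """Interpret an online_data_migrations return code.

    Returns True when all data has been migrated, False when another
    batch is still needed, and raises on the documented error codes.
    """
    if rc == 0:
        return True        # finished; safe to proceed with the upgrade
    if rc == 1:
        return False       # pending objects remain; run the command again
    if rc == 127:
        raise ValueError("invalid --max-count or --option argument")
    if rc == 2:
        raise RuntimeError("database is not compatible with this release")
    raise RuntimeError("unexpected return code: %d" % rc)

def run_until_complete(config_file="/etc/ironic/ironic.conf", batch_size=50):
    """Drive ironic-dbsync online_data_migrations in batches until done."""
    while True:
        rc = subprocess.call([
            "ironic-dbsync", "--config-file", config_file,
            "online_data_migrations", "--max-count", str(batch_size),
        ])
        if migrations_done(rc):
            return
```

Batching with ``--max-count`` keeps each invocation short, so the loop avoids locking the database for long periods while still guaranteeing the command eventually returns 0 before the upgrade proceeds.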
Before this ``upgrade`` is invoked, the command :command:`ironic-dbsync online_data_migrations` must have been successfully run using the previous version of ironic (if you are doing an upgrade as opposed to a new installation of ironic). If it wasn't run, the database will not be compatible with this recent version of ironic, and this command will return 2 (error). If there are no existing tables, then new tables are created, beginning with the oldest known version, and successively upgraded using all of the database migration files, until they are at the specified version. Note that this behavior is different from the :ref:`create_schema` command that creates the tables based on the most recent version. An example of upgrading to the most recent table versions:: ironic-dbsync --config-file=/etc/ironic/ironic.conf upgrade .. note:: This command is the default if no command is given to :command:`ironic-dbsync`. .. warning:: The upgrade command is not compatible with SQLite databases since it uses ALTER TABLE commands to upgrade the database tables. SQLite supports only a limited subset of ALTER TABLE. version ------- .. program:: version .. option:: -h, --help Show help for version and exit. This command will output the current database version. ironic-15.0.0/doc/source/cli/index.rst0000664000175000017500000000024013652514273017563 0ustar zuulzuul00000000000000Command References ================== Here are references for commands not elsewhere documented. .. toctree:: :maxdepth: 1 ironic-dbsync ironic-status ironic-15.0.0/doc/source/user/0000775000175000017500000000000013652514443016134 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/user/index.rst0000664000175000017500000004224413652514273020004 0ustar zuulzuul00000000000000.. _user-guide: ============================= Bare Metal Service User Guide ============================= Ironic is an OpenStack project which provisions bare metal (as opposed to virtual) machines. 
It may be used independently or as part of an OpenStack Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova), Network (neutron), Image (glance) and Object (swift) services. When the Bare Metal service is appropriately configured with the Compute and Network services, it is possible to provision both virtual and physical machines through the Compute service's API. However, the set of instance actions is limited, arising from the different characteristics of physical servers and switch hardware. For example, live migration cannot be performed on a bare metal instance. The community maintains reference drivers that leverage open-source technologies (e.g. PXE and IPMI) to cover a wide range of hardware. Ironic's pluggable driver architecture also allows hardware vendors to write and contribute drivers that may improve performance or add functionality not provided by the community drivers. .. TODO: the remainder of this file needs to be cleaned up still Why Provision Bare Metal ======================== Here are a few use cases for bare metal (physical server) provisioning in a cloud; there are doubtless many more interesting ones: - High-performance computing clusters - Computing tasks that require access to hardware devices which can't be virtualized - Database hosting (some databases run poorly in a hypervisor) - Single tenant, dedicated hardware for performance, security, dependability and other regulatory requirements - Or, rapidly deploying a cloud infrastructure Conceptual Architecture ======================= The following diagram shows the relationships and how all services come into play during the provisioning of a physical server. (Note that Ceilometer and Swift can be used with Ironic, but are missing from this diagram.) .. 
figure:: ../images/conceptual_architecture.png :alt: ConceptualArchitecture Key Technologies for Bare Metal Hosting ======================================= Preboot Execution Environment (PXE) ----------------------------------- PXE is part of the Wired for Management (WfM) specification developed by Intel and Microsoft. PXE enables a system's BIOS and network interface card (NIC) to bootstrap a computer from the network in place of a disk. Bootstrapping is the process by which a system loads the OS into local memory so that it can be executed by the processor. This capability of allowing a system to boot over a network simplifies server deployment and server management for administrators. Dynamic Host Configuration Protocol (DHCP) ------------------------------------------ DHCP is a standardized networking protocol used on Internet Protocol (IP) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. Using PXE, the BIOS uses DHCP to obtain an IP address for the network interface and to locate the server that stores the network bootstrap program (NBP). Network Bootstrap Program (NBP) ------------------------------- NBP is equivalent to GRUB (GRand Unified Bootloader) or LILO (LInux LOader) - loaders which are traditionally used in local booting. Like the boot program in a hard drive environment, the NBP is responsible for loading the OS kernel into memory so that the OS can be bootstrapped over a network. Trivial File Transfer Protocol (TFTP) ------------------------------------- TFTP is a simple file transfer protocol that is generally used for automated transfer of configuration or boot files between machines in a local environment. In a PXE environment, TFTP is used to download the NBP over the network using information from the DHCP server. 
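The boot chain described in the last few sections can be summarized as data. The sketch below is purely illustrative: options 66 and 67 are the standard DHCP options for the TFTP server name and boot file name, while the stage descriptions simply restate the prose above (none of this is an ironic API):

```python
# The PXE boot chain, stage by stage, as described in the sections above.
PXE_BOOT_CHAIN = [
    ("DHCP", "NIC obtains an IP address and learns where the NBP lives"),
    ("TFTP", "NIC downloads the network bootstrap program (NBP)"),
    ("NBP", "NBP loads the OS kernel and ramdisk into memory"),
    ("OS", "the kernel takes over and the system finishes booting"),
]

# DHCP options commonly involved in PXE booting (RFC 2132 numbering).
PXE_DHCP_OPTIONS = {
    66: "tftp-server-name",   # where to fetch the NBP from
    67: "bootfile-name",      # which file to fetch, e.g. pxelinux.0
}

def describe_boot_chain(chain):
    """Render the boot stages as a readable multi-line string."""
    return "\n".join(
        "%d. %s: %s" % (i, stage, what)
        for i, (stage, what) in enumerate(chain, 1)
    )

print(describe_boot_chain(PXE_BOOT_CHAIN))
```

In a real deployment, the exact options handed out (and whether plain PXE or iPXE is chained) depend on how the DHCP server is configured.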
Intelligent Platform Management Interface (IPMI) ------------------------------------------------ IPMI is a standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. It is a method to manage systems that may be unresponsive or powered off by using only a network connection to the hardware rather than to an operating system. .. _understanding-deployment: Understanding Bare Metal Deployment =================================== What happens when a boot instance request comes in? The diagram below walks through the steps involved during the provisioning of a bare metal instance. These prerequisites must be met before the deployment process begins: * Dependent packages (such as tftp-server, ipmi and syslinux) must be configured on the Bare Metal service node(s) where ironic-conductor is running, for bare metal provisioning. * Nova must be configured to make use of the bare metal service endpoint, and the compute driver must be configured to use the ironic driver on the Nova compute node(s). * Flavors must be created for the available hardware. Nova must know the flavor to boot from. * Images must be made available in Glance. Listed below are some image types required for successful bare metal deployment: - bm-deploy-kernel - bm-deploy-ramdisk - user-image - user-image-vmlinuz - user-image-initrd * Hardware must be enrolled via the Ironic RESTful API service. Deploy Process -------------- This describes a typical ironic node deployment using PXE and the Ironic Python Agent (IPA). Depending on the ironic driver interfaces used, some of the steps might be marginally different; however, the majority of them will remain the same. #. A boot instance request comes in via the Nova API, through the message queue to the Nova scheduler. #. The Nova scheduler applies filters and finds the eligible hypervisor. The Nova scheduler also uses the flavor's ``extra_specs``, such as ``cpu_arch``, to match the target physical node. #. 
Nova compute manager claims the resources of the selected hypervisor. #. Nova compute manager creates (unbound) tenant virtual interfaces (VIFs) in the Networking service according to the network interfaces requested in the nova boot request. A caveat here is, the MACs of the ports are going to be randomly generated, and will be updated when the VIF is attached to some node to correspond to the node network interface card's (or bond's) MAC. #. A spawn task is created by the nova compute which contains all the information such as which image to boot from etc. It invokes the ``driver.spawn`` from the virt layer of Nova compute. During the spawn process, the virt driver does the following: #. Updates the target ironic node with the information about deploy image, instance UUID, requested capabilities and various flavor properties. #. Validates node's power and deploy interfaces, by calling the ironic API. #. Attaches the previously created VIFs to the node. Each neutron port can be attached to any ironic port or port group, with port groups having higher priority than ports. On ironic side, this work is done by the network interface. Attachment here means saving the VIF identifier into ironic port or port group and updating VIF MAC to match the port's or port group's MAC, as described in bullet point 4. #. Generates config drive, if requested. #. Nova's ironic virt driver issues a deploy request via the Ironic API to the Ironic conductor servicing the bare metal node. #. Virtual interfaces are plugged in and Neutron API updates DHCP port to set PXE/TFTP options. In case of using ``neutron`` network interface, ironic creates separate provisioning ports in the Networking service, while in case of ``flat`` network interface, the ports created by nova are used both for provisioning and for deployed instance networking. #. The ironic node's boot interface prepares (i)PXE configuration and caches deploy kernel and ramdisk. #. 
The ironic node's management interface issues commands to enable network boot of a node. #. The ironic node's deploy interface caches the instance image (in case of ``iscsi`` deploy interface), and kernel and ramdisk if needed (it is needed in case of netboot for example). #. The ironic node's power interface instructs the node to power on. #. The node boots the deploy ramdisk. #. Depending on the exact driver used, either the conductor copies the image over iSCSI to the physical node (:ref:`iscsi-deploy`) or the deploy ramdisk downloads the image from a temporary URL (:ref:`direct-deploy`). The temporary URL can be generated by Swift API-compatible object stores, for example Swift itself or RadosGW. The image deployment is done. #. The node's boot interface switches pxe config to refer to instance images (or, in case of local boot, sets boot device to disk), and asks the ramdisk agent to soft power off the node. If the soft power off by the ramdisk agent fails, the bare metal node is powered off via IPMI/BMC call. #. The deploy interface triggers the network interface to remove provisioning ports if they were created, and binds the tenant ports to the node if not already bound. Then the node is powered on. .. note:: There are 2 power cycles during bare metal deployment; the first time the node is powered-on when ramdisk is booted, the second time after the image is deployed. #. The bare metal node's provisioning state is updated to ``active``. Below is the diagram that describes the above process. .. 
graphviz:: digraph "Deployment Steps" { node [shape=box, style=rounded, fontsize=10]; edge [fontsize=10]; /* cylinder shape works only in graphviz 2.39+ */ { rank=same; node [shape=cylinder]; "Nova DB"; "Ironic DB"; } { rank=same; "Nova API"; "Ironic API"; } { rank=same; "Nova Message Queue"; "Ironic Message Queue"; } { rank=same; "Ironic Conductor"; "TFTP Server"; } { rank=same; "Deploy Interface"; "Boot Interface"; "Power Interface"; "Management Interface"; } { rank=same; "Glance"; "Neutron"; } "Bare Metal Nodes" [shape=box3d]; "Nova API" -> "Nova Message Queue" [label=" 1"]; "Nova Message Queue" -> "Nova Conductor" [dir=both]; "Nova Message Queue" -> "Nova Scheduler" [label=" 2"]; "Nova Conductor" -> "Nova DB" [dir=both, label=" 3"]; "Nova Message Queue" -> "Nova Compute" [dir=both]; "Nova Compute" -> "Neutron" [label=" 4"]; "Nova Compute" -> "Nova Ironic Virt Driver" [label=5]; "Nova Ironic Virt Driver" -> "Ironic API" [label=6]; "Ironic API" -> "Ironic Message Queue"; "Ironic Message Queue" -> "Ironic Conductor" [dir=both]; "Ironic API" -> "Ironic DB" [dir=both]; "Ironic Conductor" -> "Ironic DB" [dir=both, label=16]; "Ironic Conductor" -> "Boot Interface" [label="8, 14"]; "Ironic Conductor" -> "Management Interface" [label=" 9"]; "Ironic Conductor" -> "Deploy Interface" [label=10]; "Deploy Interface" -> "Network Interface" [label="7, 15"]; "Ironic Conductor" -> "Power Interface" [label=11]; "Ironic Conductor" -> "Glance"; "Network Interface" -> "Neutron"; "Power Interface" -> "Bare Metal Nodes"; "Management Interface" -> "Bare Metal Nodes"; "TFTP Server" -> "Bare Metal Nodes" [label=12]; "Ironic Conductor" -> "Bare Metal Nodes" [style=dotted, label=13]; "Boot Interface" -> "TFTP Server"; } The following two examples describe what ironic is doing in more detail, leaving out the actions performed by nova and some of the more advanced options. .. 
_iscsi-deploy-example: Example 1: PXE Boot and iSCSI Deploy Process -------------------------------------------- This process is how :ref:`iscsi-deploy` works. .. seqdiag:: :scale: 75 diagram { Nova; API; Conductor; Neutron; HTTPStore; "TFTP/HTTPd"; Node; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Nova -> API [label = "Set instance_info\n(image_source,\nroot_gb, etc.)"]; Nova -> API [label = "Validate power and deploy\ninterfaces"]; Nova -> API [label = "Plug VIFs to the node"]; Nova -> API [label = "Set provision_state,\noptionally pass configdrive"]; API -> Conductor [label = "do_node_deploy()"]; Conductor -> Conductor [label = "Validate power and deploy interfaces"]; Conductor -> HTTPStore [label = "Store configdrive if configdrive_use_swift \noption is set"]; Conductor -> Node [label = "POWER OFF"]; Conductor -> Neutron [label = "Attach provisioning network to port(s)"]; Conductor -> Neutron [label = "Update DHCP boot options"]; Conductor -> Conductor [label = "Prepare PXE\nenvironment for\ndeployment"]; Conductor -> Node [label = "Set PXE boot device \nthrough the BMC"]; Conductor -> Conductor [label = "Cache deploy\nkernel, ramdisk,\ninstance images"]; Conductor -> Node [label = "REBOOT"]; Node -> Neutron [label = "DHCP request"]; Neutron -> Node [label = "next-server = Conductor"]; Node -> Node [label = "Runs agent\nramdisk"]; Node -> API [label = "lookup()"]; API -> Node [label = "Pass UUID"]; Node -> API [label = "Heartbeat (UUID)"]; API -> Conductor [label = "Heartbeat"]; Conductor -> Node [label = "Send IPA a command to expose disks via iSCSI"]; Conductor -> Node [label = "iSCSI attach"]; Conductor -> Node [label = "Copies user image and configdrive, if present"]; Conductor -> Node [label = "iSCSI detach"]; Conductor -> Conductor [label = "Delete instance\nimage from cache"]; Conductor -> Node [label = "Install boot loader, if requested"]; Conductor -> Neutron [label = "Update DHCP boot 
options"]; Conductor -> Conductor [label = "Prepare PXE\nenvironment for\ninstance image"]; Conductor -> Node [label = "Set boot device either to PXE or to disk"]; Conductor -> Node [label = "Collect ramdisk logs"]; Conductor -> Node [label = "POWER OFF"]; Conductor -> Neutron [label = "Detach provisioning network\nfrom port(s)"]; Conductor -> Neutron [label = "Bind tenant port"]; Conductor -> Node [label = "POWER ON"]; Conductor -> Conductor [label = "Mark node as\nACTIVE"]; } (From a `talk`_ and `slides`_) .. _direct-deploy-example: Example 2: PXE Boot and Direct Deploy Process --------------------------------------------- This process is how :ref:`direct-deploy` works. .. seqdiag:: :scale: 75 diagram { Nova; API; Conductor; Neutron; HTTPStore; "TFTP/HTTPd"; Node; activation = none; edge_length = 250; span_height = 1; default_note_color = white; default_fontsize = 14; Nova -> API [label = "Set instance_info\n(image_source,\nroot_gb, etc.)"]; Nova -> API [label = "Validate power and deploy\ninterfaces"]; Nova -> API [label = "Plug VIFs to the node"]; Nova -> API [label = "Set provision_state,\noptionally pass configdrive"]; API -> Conductor [label = "do_node_deploy()"]; Conductor -> Conductor [label = "Validate power and deploy interfaces"]; Conductor -> HTTPStore [label = "Store configdrive if configdrive_use_swift \noption is set"]; Conductor -> Node [label = "POWER OFF"]; Conductor -> Neutron [label = "Attach provisioning network to port(s)"]; Conductor -> Neutron [label = "Update DHCP boot options"]; Conductor -> Conductor [label = "Prepare PXE\nenvironment for\ndeployment"]; Conductor -> Node [label = "Set PXE boot device \nthrough the BMC"]; Conductor -> Conductor [label = "Cache deploy\nand instance\nkernel and ramdisk"]; Conductor -> Node [label = "REBOOT"]; Node -> Neutron [label = "DHCP request"]; Neutron -> Node [label = "next-server = Conductor"]; Node -> Node [label = "Runs agent\nramdisk"]; Node -> API [label = "lookup()"]; API -> Node [label = "Pass 
UUID"]; Node -> API [label = "Heartbeat (UUID)"]; API -> Conductor [label = "Heartbeat"]; Conductor -> Node [label = "Continue deploy asynchronously: Pass image, disk info"]; Node -> HTTPStore [label = "Downloads image, writes to disk, \nwrites configdrive if present"]; === Heartbeat periodically === Conductor -> Node [label = "Is deploy done?"]; Node -> Conductor [label = "Still working..."]; === ... === Node -> Conductor [label = "Deploy is done"]; Conductor -> Node [label = "Install boot loader, if requested"]; Conductor -> Neutron [label = "Update DHCP boot options"]; Conductor -> Conductor [label = "Prepare PXE\nenvironment for\ninstance image\nif needed"]; Conductor -> Node [label = "Set boot device either to PXE or to disk"]; Conductor -> Node [label = "Collect ramdisk logs"]; Conductor -> Node [label = "POWER OFF"]; Conductor -> Neutron [label = "Detach provisioning network\nfrom port(s)"]; Conductor -> Neutron [label = "Bind tenant port"]; Conductor -> Node [label = "POWER ON"]; Conductor -> Conductor [label = "Mark node as\nACTIVE"]; } (From a `talk`_ and `slides`_) .. _talk: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/isn-and-039t-it-ironic-the-bare-metal-cloud .. _slides: http://www.slideshare.net/devananda1/isnt-it-ironic-managing-a-bare-metal-cloud-osl-tes-2015 ironic-15.0.0/doc/source/conf.py0000664000175000017500000001236313652514273016463 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import os import sys import eventlet # NOTE(dims): monkey patch subprocess to prevent failures in latest eventlet # See https://github.com/eventlet/eventlet/issues/398 try: eventlet.monkey_patch(subprocess=True) except TypeError: pass # -- General configuration ---------------------------------------------------- # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.join(os.path.abspath('.'), '_exts')) # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.viewcode', 'sphinx.ext.graphviz', 'sphinxcontrib.httpdomain', 'sphinxcontrib.pecanwsme.rest', 'sphinxcontrib.seqdiag', 'sphinxcontrib.apidoc', 'sphinxcontrib.rsvgconverter', 'oslo_config.sphinxext', 'oslo_config.sphinxconfiggen', 'oslo_policy.sphinxext', 'oslo_policy.sphinxpolicygen', 'automated_steps', 'openstackdocstheme' ] # sphinxcontrib.apidoc options apidoc_module_dir = '../../ironic' apidoc_output_dir = 'contributor/api' apidoc_excluded_paths = [ 'db/sqlalchemy/alembic/env', 'db/sqlalchemy/alembic/versions/*', 'drivers/modules/ansible/playbooks*', 'hacking', 'tests', ] apidoc_separate_modules = True repository_name = 'openstack/ironic' use_storyboard = True openstack_projects = [ 'bifrost', 'cinder', 'glance', 'ironic', 'ironic-inspector', 'ironic-lib', 'ironic-neutron-agent', 'ironic-python-agent', 'ironic-ui', 'keystone', 'keystonemiddleware', 'metalsmith', 'networking-baremetal', 'neutron', 'nova', 'oslo.messaging', 'oslo.reports', 'oslo.versionedobjects', 'oslotest', 'osprofiler', 'os-traits', 'python-ironicclient', 'python-ironic-inspector-client', 'python-openstackclient', 
'swift', ] wsme_protocols = ['restjson'] # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. # execute "export SPHINX_DEBUG=1" in your terminal to disable # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'OpenStack Foundation' config_generator_config_file = '../../tools/config/ironic-config-generator.conf' sample_config_basename = '_static/ironic' policy_generator_config_file = '../../tools/policy/ironic-policy-generator.conf' sample_policy_basename = '_static/ironic' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['ironic.'] # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of glob-style patterns that should be excluded when looking for # source files. They are matched against the source file names relative to the # source directory, using slashes as directory separators on all platforms. exclude_patterns = ['api/ironic.drivers.modules.ansible.playbooks.*', 'api/ironic.tests.*'] # Ignore the following warning: WARNING: while setting up extension # wsmeext.sphinxext: directive 'autoattribute' is already registered, # it will be overridden. suppress_warnings = ['app.add_directive'] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'openstackdocs' # Output file base name for HTML help builder. 
htmlhelp_basename = 'Ironicdoc' latex_use_xindy = False # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ( 'index', 'doc-ironic.tex', u'Ironic Documentation', u'OpenStack Foundation', 'manual' ), ] # Allow deeper levels of nesting for \begin...\end stanzas latex_elements = {'maxlistdepth': 10} # -- Options for seqdiag ------------------------------------------------------ seqdiag_html_image_format = "SVG"
.. _install-ubuntu: ================================ Install and configure for Ubuntu ================================ This section describes how to install and configure the Bare Metal service for Ubuntu 14.04 (LTS). .. include:: include/common-prerequisites.inc Install and configure components ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Install from packages (using apt-get) .. code-block:: console # apt-get install ironic-api ironic-conductor python-ironicclient #. Enable services Services are enabled by default on Ubuntu. .. include:: include/common-configure.inc .. include:: include/configure-ironic-api.inc .. include:: include/configure-ironic-api-mod_wsgi.inc .. include:: include/configure-ironic-conductor.inc
Common Considerations ===================== This section covers considerations that are equally important to all described architectures. .. contents:: :local: ..
_refarch-common-components: Components ---------- As explained in :doc:`../get_started`, the Bare Metal service has three components. * The Bare Metal API service (``ironic-api``) should be deployed in a similar way as the control plane API services. The exact location will depend on the architecture used. * The Bare Metal conductor service (``ironic-conductor``) is where most of the provisioning logic lives. The following considerations are the most important when deciding on the way to deploy it: * The conductor manages a certain proportion of nodes, distributed to it via a hash ring. This includes constantly polling these nodes for their current power state and hardware sensor data (if enabled and supported by hardware, see :ref:`ipmi-sensor-data` for an example). * The conductor needs access to the `management controller`_ of each node it manages. * The conductor co-exists with TFTP (for PXE) and/or HTTP (for iPXE) services that provide the kernel and ramdisk to boot the nodes. The conductor manages them by writing files to their root directories. * If serial console is used, the conductor launches console processes locally. If the ``nova-serialproxy`` service (part of the Compute service) is used, it has to be able to reach the conductors. Otherwise, they have to be directly accessible by the users. * There must be mutual connectivity between the conductor and the nodes being deployed or cleaned. See Networking_ for details. * The provisioning ramdisk which runs the ``ironic-python-agent`` service on start up. .. warning:: The ``ironic-python-agent`` service is not intended to be used or executed anywhere other than a provisioning/cleaning/rescue ramdisk. Hardware and drivers -------------------- The Bare Metal service strives to provide the best support possible for a variety of hardware. However, not all hardware is supported equally well. It depends on both the capabilities of hardware itself and the available drivers. 
This section covers various considerations related to the hardware interfaces. See :doc:`/install/enabling-drivers` for a detailed introduction to hardware types and interfaces before proceeding. Power and management interfaces ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The minimum set of capabilities that the hardware has to provide and the driver has to support is as follows: #. getting and setting the power state of the machine #. getting and setting the current boot device #. booting an image provided by the Bare Metal service (in the simplest case, support booting using PXE_ and/or iPXE_) .. note:: Strictly speaking, it is possible to make the Bare Metal service provision nodes without some of these capabilities via some manual steps. It is not the recommended way of deployment, and thus it is not covered in this guide. Once you make sure that the hardware supports these capabilities, you need to find a suitable driver. Most enterprise-grade hardware supports IPMI_ and thus can utilize :doc:`/admin/drivers/ipmitool`. Some newer hardware also supports :doc:`/admin/drivers/redfish`. Several vendors provide more specific drivers that usually offer additional capabilities. Check :doc:`/admin/drivers` to find the most suitable one. .. _refarch-common-boot: Boot interface ~~~~~~~~~~~~~~ The boot interface of a node manages booting of both the deploy ramdisk and the user instances on the bare metal node. The deploy interface orchestrates the deployment and defines how the image gets transferred to the target disk. The main alternatives are to use PXE/iPXE or virtual media - see :doc:`/admin/interfaces/boot` for a detailed explanation. If a virtual media implementation is available for the hardware, it is recommended to use it for better scalability and security. Otherwise, it is recommended to use iPXE when it is supported by the target hardware. Deploy interface ~~~~~~~~~~~~~~~~ There are two deploy interfaces in-tree, ``iscsi`` and ``direct``.
See :doc:`../../admin/interfaces/deploy` for an explanation of the difference. With the ``iscsi`` deploy method, most of the deployment operations happen on the conductor. If the Object Storage service (swift) or RadosGW is present in the environment, it is recommended to use the ``direct`` deploy method for better scalability and reliability. .. TODO(dtantsur): say something about the ansible deploy, when it's in Hardware specifications ~~~~~~~~~~~~~~~~~~~~~~~ The Bare Metal service does not impose too many restrictions on the characteristics of the hardware itself. However, keep in mind that: * By default, the Bare Metal service will pick the smallest hard drive that is larger than 4 GiB for deployment. Another hard drive can be used, but it requires setting :ref:`root device hints `. .. note:: This device does not have to match the boot device set in BIOS (or similar firmware). * The machines should have enough RAM for the deployment/cleaning ramdisk to run. The minimum varies greatly depending on the way the ramdisk was built. For example, *tinyipa*, the TinyCoreLinux-based ramdisk used in the CI, only needs 400 MiB of RAM, while ramdisks built by *diskimage-builder* may require 3 GiB or more. Image types ----------- The Bare Metal service can deploy two types of images: * *Whole-disk* images that contain a complete partitioning table with all necessary partitions and a bootloader. Such images are the most universal, but may be harder to build. * *Partition images* that contain only the root partition. The Bare Metal service will create the necessary partitions and install a boot loader, if needed. .. warning:: Partition images are only supported with GNU/Linux operating systems. .. warning:: If you plan on using local boot, your partition images must contain GRUB2 bootloader tools to enable ironic to set up the bootloader during deploy.
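The default root-device selection rule described under Hardware specifications ("the smallest hard drive that is larger than 4 GiB") can be sketched in a few lines. This is an illustrative model only, not Ironic's actual implementation; the helper name and the example device list are invented here, and real deployments can override the rule with root device hints.

```python
GiB = 1024 ** 3

def pick_default_root_disk(disks, min_size_bytes=4 * GiB):
    """Pick the smallest disk strictly larger than the minimum size.

    ``disks`` is a list of ``(device, size_in_bytes)`` tuples, a
    stand-in for what the ramdisk reports about the node's block
    devices.
    """
    candidates = [d for d in disks if d[1] > min_size_bytes]
    if not candidates:
        raise LookupError("no suitable disk larger than 4 GiB found")
    # The smallest qualifying device wins by default.
    return min(candidates, key=lambda d: d[1])

# A node with a small flash device and two hard drives:
disks = [("/dev/sda", 2 * GiB), ("/dev/sdb", 500 * GiB), ("/dev/sdc", 100 * GiB)]
print(pick_default_root_disk(disks))  # ('/dev/sdc', 107374182400)
```

Root device hints replace this default: setting, for example, a ``size`` or ``wwn`` hint in the node's ``properties/root_device`` makes the agent pick the matching device instead of the smallest one.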
Local vs network boot --------------------- The Bare Metal service supports booting user instances either using a local bootloader or using the driver's boot interface (e.g. via PXE_ or iPXE_ protocol in case of the ``pxe`` interface). Network boot cannot be used with certain architectures (for example, when no tenant networks have access to the control plane). Additional considerations are related to the ``pxe`` boot interface, and other boot interfaces based on it: * Local boot makes node's boot process independent of the Bare Metal conductor managing it. Thus, nodes are able to reboot correctly, even if the Bare Metal TFTP or HTTP service is down. * Network boot (and iPXE) must be used when booting nodes from remote volumes, if the driver does not support attaching volumes out-of-band. The default boot option for the cloud can be changed via the Bare Metal service configuration file, for example: .. code-block:: ini [deploy] default_boot_option = local This default can be overridden by setting the ``boot_option`` capability on a node. See :ref:`local-boot-partition-images` for details. .. note:: Currently, network boot is used by default. However, we plan on changing it in the future, so it's safer to set the ``default_boot_option`` explicitly. .. _refarch-common-networking: Networking ---------- There are several recommended network topologies to be used with the Bare Metal service. They are explained in depth in specific architecture documentation. However, several considerations are common for all of them: * There has to be a *provisioning* network, which is used by nodes during the deployment process. If allowed by the architecture, this network should not be accessible by end users, and should not have access to the internet. * There has to be a *cleaning* network, which is used by nodes during the cleaning process. * There should be a *rescuing* network, which is used by nodes during the rescue process. 
It can be skipped if the rescue process is not supported. .. note:: In the majority of cases, the same network should be used for cleaning, provisioning and rescue for simplicity. Unless noted otherwise, everything in these sections applies to all three networks. * The baremetal nodes must have access to the Bare Metal API while connected to the provisioning/cleaning/rescuing network. .. note:: Only two endpoints need to be exposed there:: GET /v1/lookup POST /v1/heartbeat/[a-z0-9\-]+ You may want to limit access from this network to only these endpoints, and make these endpoints inaccessible from other networks. * If the ``pxe`` boot interface (or any boot interface based on it) is used, then the baremetal nodes should have untagged (access mode) connectivity to the provisioning/cleaning/rescuing networks. This allows PXE firmware, which does not support VLANs, to communicate with the services required for provisioning. .. note:: It depends on the *network interface* whether the Bare Metal service will handle this automatically. Check the networking documentation for the specific architecture. Sometimes it may be necessary to disable the spanning tree protocol delay on the switch - see :ref:`troubleshooting-stp`. * The baremetal nodes need to have access to any services required for provisioning/cleaning/rescue while connected to the provisioning/cleaning/rescuing network. This may include: * a TFTP server for PXE boot and also an HTTP server when iPXE is enabled * either an HTTP server or the Object Storage service in case of the ``direct`` deploy interface and some virtual media boot interfaces * The Bare Metal conductors need to have access to the booted baremetal nodes during provisioning/cleaning/rescue. A conductor communicates with an internal API, provided by **ironic-python-agent**, to conduct actions on nodes. ..
_refarch-common-ha: HA and Scalability ------------------ ironic-api ~~~~~~~~~~ The Bare Metal API service is stateless, and thus can be easily scaled horizontally. It is recommended to deploy it as a WSGI application behind e.g. Apache or another WSGI container. .. note:: This service accesses the ironic database for reading entities (e.g. in response to a ``GET /v1/nodes`` request) and in rare cases for writing. ironic-conductor ~~~~~~~~~~~~~~~~ High availability ^^^^^^^^^^^^^^^^^ The Bare Metal conductor service utilizes the active/active HA model. Every conductor manages a certain subset of nodes. The nodes are organized in a hash ring that tries to keep the load spread more or less uniformly across the conductors. When a conductor is considered offline, its nodes are taken over by other conductors. As a result of this, you need at least 2 conductor hosts for an HA deployment. Performance ^^^^^^^^^^^ Conductors can be resource intensive, so it is recommended (but not required) to keep all conductors separate from other services in the cloud. The minimum required number of conductors in a deployment depends on several factors: * the performance of the hardware where the conductors will be running, * the speed and reliability of the `management controller`_ of the bare metal nodes (for example, handling slower controllers may require fewer nodes per conductor), * the frequency at which the management controllers are polled by the Bare Metal service (see the ``sync_power_state_interval`` option), * the bare metal driver used for nodes (see `Hardware and drivers`_ above), * the network performance, * the maximum number of bare metal nodes that are provisioned simultaneously (see the ``max_concurrent_builds`` option for the Compute service). We recommend a target of **100** bare metal nodes per conductor for maximum reliability and performance. There is some tolerance for a larger number per conductor.
However, it was reported [1]_ [2]_ that reliability degrades when handling approximately 300 bare metal nodes per conductor. Disk space ^^^^^^^^^^ Each conductor needs enough free disk space to cache images it uses. Depending on the combination of the deploy interface and the boot option, the space requirements are different: * The deployment kernel and ramdisk are always cached during the deployment. * The ``iscsi`` deploy method requires caching of the whole instance image locally during the deployment. The image has to be converted to the raw format, which may increase the required amount of disk space, as well as the CPU load. .. note:: This is not a concern for the ``direct`` deploy interface, as in this case the deployment ramdisk downloads the image and either streams it to the disk or caches it in memory. * When network boot is used, the instance image kernel and ramdisk are cached locally while the instance is active. .. note:: All images may be stored for some time after they are no longer needed. This is done to speed up simultaneous deployments of many similar images. The caching can be configured via the ``image_cache_size`` and ``image_cache_ttl`` configuration options in the ``pxe`` group. .. [1] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118033.html .. [2] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118327.html Other services ~~~~~~~~~~~~~~ When integrating with other OpenStack services, more considerations may need to be applied. This is covered in other parts of this guide. .. _PXE: https://en.wikipedia.org/wiki/Preboot_Execution_Environment .. _iPXE: https://en.wikipedia.org/wiki/IPXE .. _IPMI: https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface .. 
_management controller: https://en.wikipedia.org/wiki/Out-of-band_management
Small cloud with trusted tenants ================================ Story ----- As an operator I would like to build a small cloud with both virtual and bare metal instances or add bare metal provisioning to my existing small or medium scale single-site OpenStack cloud. The expected number of bare metal machines is less than 100, and the rate of provisioning and unprovisioning is expected to be low. All users of my cloud are trusted by me to not conduct malicious actions towards each other or the cloud infrastructure itself. As a user I would like to occasionally provision bare metal instances through the Compute API by selecting an appropriate Compute flavor. I would like to be able to boot them from images provided by the Image service or from volumes provided by the Volume service. Components ---------- This architecture assumes `an OpenStack installation`_ with the following components participating in the bare metal provisioning: * The :nova-doc:`Compute service <>` manages bare metal instances. * The :neutron-doc:`Networking service <>` provides DHCP for bare metal instances. * The :glance-doc:`Image service <>` provides images for bare metal instances. The following services can be optionally used by the Bare Metal service: * The :cinder-doc:`Volume service <>` provides volumes to boot bare metal instances from. * The :ironic-inspector-doc:`Bare Metal Introspection service <>` simplifies enrolling new bare metal machines by conducting in-band introspection. Node roles ---------- An OpenStack installation in this guide has at least these three types of nodes: * A *controller* node hosts the control plane services. * A *compute* node runs the virtual machines and hosts a subset of Compute and Networking components.
* A *block storage* node provides persistent storage space for both virtual and bare metal nodes. The *compute* and *block storage* nodes are configured as described in the installation guides of the :nova-doc:`Compute service <>` and the :cinder-doc:`Volume service <>` respectively. The *controller* nodes host the Bare Metal service components. Networking ---------- The networking architecture will highly depend on the exact operating requirements. This guide expects the following existing networks: *control plane*, *storage* and *public*. Additionally, two more networks will be needed specifically for bare metal provisioning: *bare metal* and *management*. .. TODO(dtantsur): describe the storage network? .. TODO(dtantsur): a nice picture to illustrate the layout Control plane network ~~~~~~~~~~~~~~~~~~~~~ The *control plane network* is the network where OpenStack control plane services provide their public API. The Bare Metal API will be served to the operators and to the Compute service through this network. Public network ~~~~~~~~~~~~~~ The *public network* is used in a typical OpenStack deployment to create floating IPs for outside access to instances. Its role is the same for a bare metal deployment. .. note:: Since, as explained below, bare metal nodes will be put on a flat provider network, it is also possible to organize direct access to them, without using floating IPs and bypassing the Networking service completely. Bare metal network ~~~~~~~~~~~~~~~~~~ The *Bare metal network* is a dedicated network for bare metal nodes managed by the Bare Metal service. This architecture uses :ref:`flat bare metal networking `, in which both tenant traffic and technical traffic related to the Bare Metal service operation flow through this one network. Specifically, this network will serve as the *provisioning*, *cleaning* and *rescuing* network. It will also be used for introspection via the Bare Metal Introspection service. 
See :ref:`common networking considerations ` for an in-depth explanation of the networks used by the Bare Metal service. DHCP and boot parameters will be provided on this network by the Networking service's DHCP agents. For booting from volumes this network has to have a route to the *storage network*. Management network ~~~~~~~~~~~~~~~~~~ *Management network* is an independent network on which BMCs of the bare metal nodes are located. The ``ironic-conductor`` process needs access to this network. The tenants of the bare metal nodes must not have access to it. .. note:: The :ref:`direct deploy interface ` and certain :doc:`/admin/drivers` require the *management network* to have access to the Object storage service backend. Controllers ----------- A *controller* hosts the OpenStack control plane services as described in the `control plane design guide`_. While this architecture allows using *controllers* in a non-HA configuration, it is recommended to have at least three of them for HA. See :ref:`refarch-common-ha` for more details. Bare Metal services ~~~~~~~~~~~~~~~~~~~ The following components of the Bare Metal service are installed on a *controller* (see :ref:`components of the Bare Metal service `): * The Bare Metal API service either as a WSGI application or the ``ironic-api`` process. Typically, a load balancer, such as HAProxy, spreads the load between the API instances on the *controllers*. The API has to be served on the *control plane network*. Additionally, it has to be exposed to the *bare metal network* for the ramdisk callback API. * The ``ironic-conductor`` process. These processes work in active/active HA mode as explained in :ref:`refarch-common-ha`, thus they can be installed on all *controllers*. Each will handle a subset of bare metal nodes. 
The ``ironic-conductor`` processes have to have access to the following networks: * *control plane* for interacting with other services * *management* for contacting node's BMCs * *bare metal* for contacting deployment, cleaning or rescue ramdisks * TFTP and HTTP service for booting the nodes. Each ``ironic-conductor`` process has to have a matching TFTP and HTTP service. They should be exposed only to the *bare metal network* and must not be behind a load balancer. * The ``nova-compute`` process (from the Compute service). These processes work in active/active HA mode when dealing with bare metal nodes, thus they can be installed on all *controllers*. Each will handle a subset of bare metal nodes. .. note:: There is no 1-1 mapping between ``ironic-conductor`` and ``nova-compute`` processes, as they communicate only through the Bare Metal API service. * The :networking-baremetal-doc:`networking-baremetal <>` ML2 plugin should be loaded into the Networking service to assist with binding bare metal ports. The :ironic-neutron-agent-doc:`ironic-neutron-agent <>` service should be started as well. * If the Bare Metal introspection is used, its ``ironic-inspector`` process has to be installed on all *controllers*. Each such process works as both Bare Metal Introspection API and conductor service. A load balancer should be used to spread the API load between *controllers*. The API has to be served on the *control plane network*. Additionally, it has to be exposed to the *bare metal network* for the ramdisk callback API. .. TODO(dtantsur): a nice picture to illustrate the above Shared services ~~~~~~~~~~~~~~~ A *controller* also hosts two services required for the normal operation of OpenStack: * Database service (MySQL/MariaDB is typically used, but other enterprise-grade database solutions can be used as well). All Bare Metal service components need access to the database service. 
* Message queue service (RabbitMQ is typically used, but other enterprise-grade message queue brokers can be used as well). Both Bare Metal API (WSGI application or ``ironic-api`` process) and the ``ironic-conductor`` processes need access to the message queue service. The Bare Metal Introspection service does not need it. .. note:: These services are required for all OpenStack services. If you're adding the Bare Metal service to your cloud, you may reuse the existing database and messaging queue services. Bare metal nodes ---------------- Each bare metal node must be capable of booting from network, virtual media or other boot technology supported by the Bare Metal service as explained in :ref:`refarch-common-boot`. Each node must have one NIC on the *bare metal network*, and this NIC (and **only** it) must be configured to be able to boot from network. This is usually done in the *BIOS setup* or a similar firmware configuration utility. There is no need to alter the boot order, as it is managed by the Bare Metal service. Other NICs, if present, will not be managed by OpenStack. The NIC on the *bare metal network* should have untagged connectivity to it, since PXE firmware usually does not support VLANs - see :ref:`refarch-common-networking` for details. Storage ------- If your hardware **and** its bare metal :doc:`driver ` support booting from remote volumes, please check the driver documentation for information on how to enable it. It may include routing *management* and/or *bare metal* networks to the *storage network*. In case of the standard :ref:`pxe-boot`, booting from remote volumes is done via iPXE. In that case, the Volume storage backend must support iSCSI_ protocol, and the *bare metal network* has to have a route to the *storage network*. See :doc:`/admin/boot-from-volume` for more details. .. _an OpenStack installation: https://docs.openstack.org/arch-design/use-cases/use-case-general-compute.html .. 
_control plane design guide: https://docs.openstack.org/arch-design/design-control-plane.html .. _iSCSI: https://en.wikipedia.org/wiki/ISCSI
Reference Deploy Architectures ============================== This section covers the way we recommend the Bare Metal service to be deployed and managed. It is assumed that a reader has already gone through :doc:`/user/index`. It may also be useful to try :ref:`deploy_devstack` first to become more familiar with the concepts used in this guide. .. toctree:: :maxdepth: 2 common Scenarios --------- .. toctree:: :maxdepth: 2 small-cloud-trusted-tenants
.. _flavor-creation: Create flavors for use with the Bare Metal service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You'll need to create a special bare metal flavor in the Compute service. The flavor is mapped to the bare metal node through the node's ``resource_class`` field (available starting with Bare Metal API version 1.21). A flavor can request *exactly one* instance of a bare metal resource class. Note that when creating the flavor, it's useful to add the ``RAM_MB`` and ``CPU`` properties as a convenience to users, although they are not used for scheduling. The ``DISK_GB`` property is also not used for scheduling, but is still used to determine the root partition size. #. Change these to match your hardware: .. code-block:: console $ RAM_MB=1024 $ CPU=2 $ DISK_GB=100 #. Create the bare metal flavor by executing the following command: .. code-block:: console $ openstack flavor create --ram $RAM_MB --vcpus $CPU --disk $DISK_GB \ my-baremetal-flavor .. note:: You can add ``--id <flavor-id>`` to specify an ID for the flavor.
See the :python-openstackclient-doc:`docs on this command ` for other options that may be specified. After creation, associate each flavor with one custom resource class. The name of a custom resource class that corresponds to a node's resource class (in the Bare Metal service) is: * the bare metal node's resource class all upper-cased * prefixed with ``CUSTOM_`` * all punctuation replaced with an underscore For example, if the resource class is named ``baremetal-small``, associate the flavor with this custom resource class via: .. code-block:: console $ openstack flavor set --property resources:CUSTOM_BAREMETAL_SMALL=1 my-baremetal-flavor Another set of flavor properties must be used to disable scheduling based on standard properties for a bare metal flavor: .. code-block:: console $ openstack flavor set --property resources:VCPU=0 my-baremetal-flavor $ openstack flavor set --property resources:MEMORY_MB=0 my-baremetal-flavor $ openstack flavor set --property resources:DISK_GB=0 my-baremetal-flavor Example ------- If you want to define a class of nodes called ``baremetal.with-GPU``, start with tagging some nodes with it: .. code-block:: console $ openstack --os-baremetal-api-version 1.21 baremetal node set $NODE_UUID \ --resource-class baremetal.with-GPU .. warning:: It is possible to **add** a resource class to ``active`` nodes, but it is not possible to **replace** an existing resource class on them. Then you can update your flavor to request the resource class instead of the standard properties: .. code-block:: console $ openstack flavor set --property resources:CUSTOM_BAREMETAL_WITH_GPU=1 my-baremetal-flavor $ openstack flavor set --property resources:VCPU=0 my-baremetal-flavor $ openstack flavor set --property resources:MEMORY_MB=0 my-baremetal-flavor $ openstack flavor set --property resources:DISK_GB=0 my-baremetal-flavor Note how ``baremetal.with-GPU`` in the node's ``resource_class`` field becomes ``CUSTOM_BAREMETAL_WITH_GPU`` in the flavor's properties. 
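The mapping from a node's resource class to the flavor property shown above can be expressed as a small helper. This sketches the documented rule (upper-case the name, replace punctuation with underscores, prefix ``CUSTOM_``); the function name is invented for the example, and the canonical normalization lives in the Placement/os-resource-classes code rather than here.

```python
import re

def custom_resource_class(resource_class: str) -> str:
    """Translate an Ironic resource class into the Placement name.

    Upper-case the name, replace anything that is not a letter or a
    digit with an underscore, and prefix the result with CUSTOM_.
    """
    return "CUSTOM_" + re.sub(r"[^A-Z0-9]", "_", resource_class.upper())

print(custom_resource_class("baremetal-small"))     # CUSTOM_BAREMETAL_SMALL
print(custom_resource_class("baremetal.with-GPU"))  # CUSTOM_BAREMETAL_WITH_GPU
```

The flavor property is then ``resources:<result>=1``, e.g. ``resources:CUSTOM_BAREMETAL_WITH_GPU=1`` as in the example above.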
.. _scheduling-traits: Scheduling based on traits -------------------------- Starting with the Queens release, the Compute service supports scheduling based on qualitative attributes using traits. Starting with Bare Metal REST API version 1.37, it is possible to assign a list of traits to each bare metal node. Traits assigned to a bare metal node will be assigned to the corresponding resource provider in the Compute service placement API. When creating a flavor in the Compute service, required traits may be specified via flavor properties. The Compute service will then schedule instances only to bare metal nodes with all of the required traits. Traits can be either standard or custom. Standard traits are listed in the `os_traits library `_. Custom traits must meet the following requirements: * prefixed with ``CUSTOM_`` * contain only upper case characters A to Z, digits 0 to 9, or underscores * no longer than 255 characters in length A bare metal node can have a maximum of 50 traits. Example ^^^^^^^ To add the standard trait ``HW_CPU_X86_VMX`` and a custom trait ``CUSTOM_TRAIT1`` to a node: .. code-block:: console $ openstack --os-baremetal-api-version 1.37 baremetal node add trait \ $NODE_UUID CUSTOM_TRAIT1 HW_CPU_X86_VMX Then, update the flavor to require these traits: .. code-block:: console $ openstack flavor set --property trait:CUSTOM_TRAIT1=required my-baremetal-flavor $ openstack flavor set --property trait:HW_CPU_X86_VMX=required my-baremetal-flavor
Configure the Compute service to use the Bare Metal service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Compute service needs to be configured to use the Bare Metal service's driver. The configuration file for the Compute service is typically located at ``/etc/nova/nova.conf``. ..
note:: As of the Newton release, it is possible to have multiple nova-compute services running the ironic virtual driver (in nova) to provide redundancy. Bare metal nodes are mapped to the services via a hash ring. If a service goes down, the available bare metal nodes are remapped to different services. Once active, a node will stay mapped to the same nova-compute even when it goes down. The node is unable to be managed through the Compute API until the service responsible returns to an active state. The following configuration file must be modified on the Compute service's controller nodes and compute nodes. #. Change these configuration options in the Compute service configuration file (for example, ``/etc/nova/nova.conf``): .. code-block:: ini [default] # Defines which driver to use for controlling virtualization. # Enable the ironic virt driver for this compute instance. compute_driver=ironic.IronicDriver # Amount of memory in MB to reserve for the host so that it is always # available to host processes. # It is impossible to reserve any memory on bare metal nodes, so set # this to zero. reserved_host_memory_mb=0 [filter_scheduler] # Enables querying of individual hosts for instance information. # Not possible for bare metal nodes, so set it to False. track_instance_changes=False [scheduler] # This value controls how often (in seconds) the scheduler should # attempt to discover new hosts that have been added to cells. # If negative (the default), no automatic discovery will occur. # As each bare metal node is represented by a separate host, it has # to be discovered before the Compute service can deploy on it. # The value here has to be carefully chosen based on a compromise # between the enrollment speed and the load on the Compute scheduler. # The recommended value of 2 minutes matches how often the Compute # service polls the Bare Metal service for node information. discover_hosts_in_cells_interval=120 .. 
note:: The alternative to setting the ``discover_hosts_in_cells_interval`` option is to run the following command on any Compute controller node after each node is enrolled:: nova-manage cell_v2 discover_hosts --by-service #. Consider enabling the following option on controller nodes: .. code-block:: ini [filter_scheduler] # Enabling this option is beneficial as it reduces re-scheduling events # for ironic nodes when scheduling is based on resource classes, # especially for mixed hypervisor case with host_subset_size = 1. # However enabling it will also make packing of VMs on hypervisors # less dense even when scheduling weights are completely disabled. #shuffle_best_same_weighed_hosts = false #. Carefully consider the following option: .. code-block:: ini [compute] # This option will cause nova-compute to set itself to a disabled state # if a certain number of consecutive build failures occur. This will # prevent the scheduler from continuing to send builds to a compute # service that is consistently failing. In the case of bare metal # provisioning, however, a compute service is rarely the cause of build # failures. Furthermore, bare metal nodes, managed by a disabled # compute service, will be remapped to a different one. That may cause # the second compute service to also be disabled, and so on, until no # compute services are active. # If this is not the desired behavior, consider increasing this value or # setting it to 0 to disable this behavior completely. #consecutive_build_service_disable_threshold = 10 #. Change these configuration options in the ``ironic`` section. Replace: - ``IRONIC_PASSWORD`` with the password you chose for the ``ironic`` user in the Identity Service - ``IRONIC_NODE`` with the hostname or IP address of the ironic-api node - ``IDENTITY_IP`` with the IP of the Identity server .. 
code-block:: ini [ironic] # Ironic authentication type auth_type=password # Keystone API endpoint auth_url=http://IDENTITY_IP:5000/v3 # Ironic keystone project name project_name=service # Ironic keystone admin name username=ironic # Ironic keystone admin password password=IRONIC_PASSWORD # Ironic keystone project domain # or set project_domain_id project_domain_name=Default # Ironic keystone user domain # or set user_domain_id user_domain_name=Default #. On the Compute service's controller nodes, restart the ``nova-scheduler`` process: .. code-block:: console Fedora/RHEL7/CentOS7/SUSE: sudo systemctl restart openstack-nova-scheduler Ubuntu: sudo service nova-scheduler restart #. On the Compute service's compute nodes, restart the ``nova-compute`` process: .. code-block:: console Fedora/RHEL7/CentOS7/SUSE: sudo systemctl restart openstack-nova-compute Ubuntu: sudo service nova-compute restart ironic-15.0.0/doc/source/install/configure-networking.rst0000664000175000017500000001227413652514273023533 0ustar zuulzuul00000000000000.. _configure-networking: Configure the Networking service for bare metal provisioning ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You need to configure Networking so that the bare metal server can communicate with the Networking service for DHCP, PXE boot and other requirements. This section covers configuring Networking for a single flat network for bare metal provisioning. It is recommended to use the baremetal ML2 mechanism driver and L2 agent for proper integration with the Networking service. Documentation regarding installation and configuration of the baremetal mechanism driver and L2 agent is available :networking-baremetal-doc:`here `. For use with :neutron-doc:`routed networks ` the baremetal ML2 components are required. .. Note:: When the baremetal ML2 components are *not* used, ports in the Networking service will have status: ``DOWN``, and binding_vif_type: ``binding_failed``. 
This was always the status for Bare Metal service ``flat`` network interface ports prior to the introduction of the baremetal ML2 integration. For a non-routed network, bare metal servers can still be deployed and are functional, despite this port binding state in the Networking service. You will also need to provide Bare Metal service with the MAC address(es) of each node that it is provisioning; Bare Metal service in turn will pass this information to Networking service for DHCP and PXE boot configuration. An example of this is shown in the :ref:`enrollment` section. #. Install the networking-baremetal ML2 mechanism driver and L2 agent in the Networking service. #. Edit ``/etc/neutron/plugins/ml2/ml2_conf.ini`` and modify these: .. code-block:: ini [ml2] type_drivers = flat tenant_network_types = flat mechanism_drivers = openvswitch,baremetal [ml2_type_flat] flat_networks = physnet1 [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver enable_security_group = True [ovs] bridge_mappings = physnet1:br-eth2 # Replace eth2 with the interface on the neutron node which you # are using to connect to the bare metal server #. Restart the ``neutron-server`` service, to load the new configuration. #. Create and edit ``/etc/neutron/plugins/ml2/ironic_neutron_agent.ini`` and add the required configuration. For example: .. code-block:: ini [ironic] project_domain_name = Default project_name = service user_domain_name = Default password = password username = ironic auth_url = http://identity-server.example.com/identity auth_type = password region_name = RegionOne #. Make sure the ``ironic-neutron-agent`` service is started. #. If neutron-openvswitch-agent runs with ``ovs_neutron_plugin.ini`` as the input config-file, edit ``ovs_neutron_plugin.ini`` to configure the bridge mappings by adding the [ovs] section described in the previous step, and restart the neutron-openvswitch-agent. #. 
Add the integration bridge to Open vSwitch: .. code-block:: console $ ovs-vsctl add-br br-int #. Create the br-eth2 network bridge to handle communication between the OpenStack services (and the Bare Metal services) and the bare metal nodes using eth2. Replace eth2 with the interface on the network node which you are using to connect to the Bare Metal service: .. code-block:: console $ ovs-vsctl add-br br-eth2 $ ovs-vsctl add-port br-eth2 eth2 #. Restart the Open vSwitch agent: .. code-block:: console # service neutron-plugin-openvswitch-agent restart #. On restarting the Networking service Open vSwitch agent, the veth pair between the bridges br-int and br-eth2 is automatically created. Your Open vSwitch bridges should look something like this after following the above steps: .. code-block:: console $ ovs-vsctl show Bridge br-int fail_mode: secure Port "int-br-eth2" Interface "int-br-eth2" type: patch options: {peer="phy-br-eth2"} Port br-int Interface br-int type: internal Bridge "br-eth2" Port "phy-br-eth2" Interface "phy-br-eth2" type: patch options: {peer="int-br-eth2"} Port "eth2" Interface "eth2" Port "br-eth2" Interface "br-eth2" type: internal ovs_version: "2.3.0" #. Create the flat network on which you are going to launch the instances: .. code-block:: console $ openstack network create --project $TENANT_ID sharednet1 --share \ --provider-network-type flat --provider-physical-network physnet1 #. Create the subnet on the newly created network: .. code-block:: console $ openstack subnet create $SUBNET_NAME --network sharednet1 \ --subnet-range $NETWORK_CIDR --ip-version 4 --gateway $GATEWAY_IP \ --allocation-pool start=$START_IP,end=$END_IP --dhcp ironic-15.0.0/doc/source/install/setup-drivers.rst0000664000175000017500000000030513652514273022171 0ustar zuulzuul00000000000000Set up the drivers for the Bare Metal service ============================================= .. 
toctree:: :maxdepth: 1 enabling-drivers configure-pxe configure-ipmi configure-iscsi ironic-15.0.0/doc/source/install/configure-iscsi.rst0000664000175000017500000000025013652514273022445 0ustar zuulzuul00000000000000Configuring iSCSI-based drivers ------------------------------- Ensure that the ``qemu-img`` and ``iscsiadm`` tools are installed on the **ironic-conductor** host(s). ironic-15.0.0/doc/source/install/standalone.rst0000664000175000017500000002347013652514273021515 0ustar zuulzuul00000000000000 Using Bare Metal service as a standalone service ================================================ It is possible to use the Bare Metal service without other OpenStack services. You should make the following changes to ``/etc/ironic/ironic.conf``: #. To disable usage of Identity service tokens:: [DEFAULT] ... auth_strategy=noauth #. If you want to disable the Networking service, you should have your network pre-configured to serve DHCP and TFTP for machines that you're deploying. To disable it, change the following lines:: [dhcp] ... dhcp_provider=none .. note:: If you disabled the Networking service and the driver that you use is supported by at most one conductor, PXE boot will still work for your nodes without any manual config editing. This is because you know all the DHCP options that will be used for deployment and can set up your DHCP server appropriately. If you have multiple conductors per driver, it would be better to use Networking since it will do all the dynamically changing configurations for you. #. If you want to disable using a messaging broker between conductor and API processes, switch to JSON RPC instead: .. code-block:: ini [DEFAULT] rpc_transport = json-rpc If you don't use Image service, it's possible to provide images to Bare Metal service via a URL. .. note:: At the moment, only two types of URLs are acceptable instead of Image service UUIDs: HTTP(S) URLs (for example, "http://my.server.net/images/img") and file URLs (file:///images/img). 
There are however some limitations for different hardware interfaces: * If you're using :ref:`direct-deploy`, you have to provide the Bare Metal service with the MD5 checksum of your instance image. To compute it, you can use the following command:: md5sum image.qcow2 ed82def8730f394fb85aef8a208635f6 image.qcow2 * :ref:`direct-deploy` requires the instance image be accessible through a HTTP(s) URL. Steps to start a deployment are pretty similar to those when using Compute: #. To use the :python-ironicclient-doc:`openstack baremetal CLI `, set up these environment variables. Since no authentication strategy is being used, the value none must be set for OS_AUTH_TYPE. OS_ENDPOINT is the URL of the ironic-api process. For example:: export OS_AUTH_TYPE=none export OS_ENDPOINT=http://localhost:6385/ #. Create a node in Bare Metal service. At minimum, you must specify the driver name (for example, ``ipmi``). You can also specify all the required driver parameters in one command. This will return the node UUID:: openstack baremetal node create --driver ipmi \ --driver-info ipmi_address=ipmi.server.net \ --driver-info ipmi_username=user \ --driver-info ipmi_password=pass \ --driver-info deploy_kernel=file:///images/deploy.vmlinuz \ --driver-info deploy_ramdisk=http://my.server.net/images/deploy.ramdisk +--------------+--------------------------------------------------------------------------+ | Property | Value | +--------------+--------------------------------------------------------------------------+ | uuid | be94df40-b80a-4f63-b92b-e9368ee8d14c | | driver_info | {u'deploy_ramdisk': u'http://my.server.net/images/deploy.ramdisk', | | | u'deploy_kernel': u'file:///images/deploy.vmlinuz', u'ipmi_address': | | | u'ipmi.server.net', u'ipmi_username': u'user', u'ipmi_password': | | | u'******'} | | extra | {} | | driver | ipmi | | chassis_uuid | | | properties | {} | +--------------+--------------------------------------------------------------------------+ Note that here 
``deploy_kernel`` and ``deploy_ramdisk`` contain links to images instead of Image service UUIDs. #. As in the case of the Compute service, you can also provide ``capabilities`` in node properties, but they will be used only by the Bare Metal service (for example, boot mode). You don't need to add properties like ``memory_mb`` or ``cpus``, as the Bare Metal service only requires the UUID of the node you're going to deploy. #. Then create a port to inform the Bare Metal service of the network interface cards which are part of the node by creating a port with each NIC's MAC address. In this case, they're used for naming the PXE configuration files for a node:: openstack baremetal port create $MAC_ADDRESS --node $NODE_UUID #. You also need to specify image information in the node's ``instance_info`` (see :doc:`creating-images`): * ``image_source`` - URL of the whole disk or root partition image, mandatory. For :ref:`direct-deploy` only HTTP(S) links are accepted, while :ref:`iscsi-deploy` also accepts links to local files (prefixed with ``file://``). * ``root_gb`` - size of the root partition, required for partition images. .. note:: Older versions of the Bare Metal service used to require a positive integer for ``root_gb`` even for whole-disk images. You may want to set it for compatibility. * ``image_checksum`` - MD5 checksum of the image specified by ``image_source``, only required for :ref:`direct-deploy`. .. note:: Additional checksum support exists via the ``image_os_hash_algo`` and ``image_os_hash_value`` fields. They may be used instead of the ``image_checksum`` field. Starting with the Stein release of ironic-python-agent, this value can also be a URL to a checksums file, e.g. one generated with: .. code-block:: shell cd /path/to/http/root md5sum *.img > checksums * ``kernel``, ``ramdisk`` - HTTP(S) or file URLs of the kernel and initramfs of the target OS. Must be added **only** for partition images.
For example:: openstack baremetal node set $NODE_UUID \ --instance-info image_source=$IMG \ --instance-info image_checksum=$MD5HASH \ --instance-info kernel=$KERNEL \ --instance-info ramdisk=$RAMDISK \ --instance-info root_gb=10 With a whole disk image:: openstack baremetal node set $NODE_UUID \ --instance-info image_source=$IMG \ --instance-info image_checksum=$MD5HASH #. :ref:`Boot mode ` can be specified per instance:: openstack baremetal node set $NODE_UUID \ --instance-info deploy_boot_mode=uefi Otherwise, the ``boot_mode`` capability from the node's ``properties`` will be used. .. warning:: The two settings must not contradict each other. .. note:: The ``boot_mode`` capability is only used in the node's ``properties``, not in ``instance_info`` like most other capabilities. Use the separate ``instance_info/deploy_boot_mode`` field instead. #. To override the :ref:`boot option ` used for this instance, set the ``boot_option`` capability:: openstack baremetal node set $NODE_UUID \ --instance-info capabilities='{"boot_option": "local"}' #. Starting with the Ussuri release, you can set :ref:`root device hints ` per instance:: openstack baremetal node set $NODE_UUID \ --instance-info root_device='{"wwn": "0x4000cca77fc4dba1"}' This setting overrides any previous setting in ``properties`` and will be removed on undeployment. #. Validate that all parameters are correct:: openstack baremetal node validate $NODE_UUID +------------+--------+----------------------------------------------------------------+ | Interface | Result | Reason | +------------+--------+----------------------------------------------------------------+ | boot | True | | | console | False | Missing 'ipmi_terminal_port' parameter in node's driver_info. | | deploy | True | | | inspect | True | | | management | True | | | network | True | | | power | True | | | raid | True | | | storage | True | | +------------+--------+----------------------------------------------------------------+ #. 
Now you can start the deployment by running:: openstack baremetal node deploy $NODE_UUID For iLO drivers, the fields that should be provided are: * ``ilo_deploy_iso`` under ``driver_info``; * ``ilo_boot_iso``, ``image_source``, ``root_gb`` under ``instance_info``. .. note:: The Bare Metal service tracks content changes for non-Glance images by checking their modification date and time. For example, for an HTTP image, if the 'Last-Modified' header value from the response to a HEAD request to "http://my.server.net/images/deploy.ramdisk" is newer than the cached image modification time, Ironic will re-download the content. For "file://" images, the file system modification time is used. Other references ---------------- * :ref:`local-boot-without-compute` ironic-15.0.0/doc/source/install/configure-cleaning.rst0000664000175000017500000000212013652514273023111 0ustar zuulzuul00000000000000.. _configure-cleaning: Configure the Bare Metal service for cleaning ============================================= .. note:: If you configured the Bare Metal service to do :ref:`automated_cleaning` (which is enabled by default), you will need to set the ``cleaning_network`` configuration option. #. Note the network UUID (the `id` field) of the network you created in :ref:`configure-networking` or another network you created for cleaning: .. code-block:: console $ openstack network list #. Configure the cleaning network UUID via the ``cleaning_network`` option in the Bare Metal service configuration file (``/etc/ironic/ironic.conf``). In the following, replace ``NETWORK_UUID`` with the UUID you noted in the previous step: .. code-block:: ini [neutron] cleaning_network = NETWORK_UUID #. Restart the Bare Metal service's ironic-conductor: ..
code-block:: console Fedora/RHEL7/CentOS7/SUSE: sudo systemctl restart openstack-ironic-conductor Ubuntu: sudo service ironic-conductor restart ironic-15.0.0/doc/source/install/troubleshooting.rst0000664000175000017500000001767013652514273022621 0ustar zuulzuul00000000000000.. _troubleshooting-install: =============== Troubleshooting =============== Once all the services are running and configured properly, and a node has been enrolled with the Bare Metal service and is in the ``available`` provision state, the Compute service should detect the node as an available resource and expose it to the scheduler. .. note:: There is a delay, and it may take up to a minute (one periodic task cycle) for the Compute service to recognize any changes in the Bare Metal service's resources (both additions and deletions). In addition to watching ``nova-compute`` log files, you can see the available resources by looking at the list of Compute hypervisors. The resources reported therein should match the bare metal node properties, and the Compute service flavor. 
Here is an example set of commands to compare the resources in Compute service and Bare Metal service:: $ openstack baremetal node list +--------------------------------------+---------------+-------------+--------------------+-------------+ | UUID | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+---------------+-------------+--------------------+-------------+ | 86a2b1bb-8b29-4964-a817-f90031debddb | None | power off | available | False | +--------------------------------------+---------------+-------------+--------------------+-------------+ $ openstack baremetal node show 86a2b1bb-8b29-4964-a817-f90031debddb +------------------------+----------------------------------------------------------------------+ | Property | Value | +------------------------+----------------------------------------------------------------------+ | instance_uuid | None | | properties | {u'memory_mb': u'1024', u'cpu_arch': u'x86_64', u'local_gb': u'10', | | | u'cpus': u'1'} | | maintenance | False | | driver_info | { [SNIP] } | | extra | {} | | last_error | None | | created_at | 2014-11-20T23:57:03+00:00 | | target_provision_state | None | | driver | ipmi | | updated_at | 2014-11-21T00:47:34+00:00 | | instance_info | {} | | chassis_uuid | 7b49bbc5-2eb7-4269-b6ea-3f1a51448a59 | | provision_state | available | | reservation | None | | power_state | power off | | console_enabled | False | | uuid | 86a2b1bb-8b29-4964-a817-f90031debddb | +------------------------+----------------------------------------------------------------------+ $ nova hypervisor-list +--------------------------------------+--------------------------------------+-------+---------+ | ID | Hypervisor hostname | State | Status | +--------------------------------------+--------------------------------------+-------+---------+ | 584cfdc8-9afd-4fbb-82ef-9ff25e1ad3f3 | 86a2b1bb-8b29-4964-a817-f90031debddb | up | enabled | 
+--------------------------------------+--------------------------------------+-------+---------+ $ nova hypervisor-show 584cfdc8-9afd-4fbb-82ef-9ff25e1ad3f3 +-------------------------+--------------------------------------+ | Property | Value | +-------------------------+--------------------------------------+ | cpu_info | baremetal cpu | | current_workload | 0 | | disk_available_least | - | | free_disk_gb | 10 | | free_ram_mb | 1024 | | host_ip | [ SNIP ] | | hypervisor_hostname | 86a2b1bb-8b29-4964-a817-f90031debddb | | hypervisor_type | ironic | | hypervisor_version | 1 | | id | 1 | | local_gb | 10 | | local_gb_used | 0 | | memory_mb | 1024 | | memory_mb_used | 0 | | running_vms | 0 | | service_disabled_reason | - | | service_host | my-test-host | | service_id | 6 | | state | up | | status | enabled | | vcpus | 1 | | vcpus_used | 0 | +-------------------------+--------------------------------------+ .. _maintenance_mode: Maintenance mode ---------------- Maintenance mode may be used if you need to take a node out of the resource pool. Putting a node in maintenance mode will prevent Bare Metal service from executing periodic tasks associated with the node. This will also prevent Compute service from placing a tenant instance on the node by not exposing the node to the nova scheduler. Nodes can be placed into maintenance mode with the following command. :: $ openstack baremetal node maintenance set $NODE_UUID A maintenance reason may be included with the optional ``--reason`` command line option. This is a free form text field that will be displayed in the ``maintenance_reason`` section of the ``node show`` command. :: $ openstack baremetal node maintenance set $UUID --reason "Need to add ram." 
$ openstack baremetal node show $UUID +------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | target_power_state | None | | extra | {} | | last_error | None | | updated_at | 2015-04-27T15:43:58+00:00 | | maintenance_reason | Need to add ram. | | ... | ... | | maintenance | True | | ... | ... | +------------------------+--------------------------------------+ To remove maintenance mode and clear any ``maintenance_reason`` use the following command. :: $ openstack baremetal node maintenance unset $NODE_UUID ironic-15.0.0/doc/source/install/creating-images.rst0000664000175000017500000000713613652514273022425 0ustar zuulzuul00000000000000Create user images for the Bare Metal service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Bare Metal provisioning requires two sets of images: the deploy images and the user images. The :ref:`deploy images ` are used by the Bare Metal service to prepare the bare metal server for actual OS deployment. Whereas the user images are installed on the bare metal server to be used by the end user. There are two types of user images: *partition images* contain only the contents of the root partition. Additionally, two more images are used together with them: an image with a kernel and with an initramfs. .. warning:: To use partition images with local boot, Grub2 must be installed on them. *whole disk images* contain a complete partition table with one or more partitions. .. warning:: The kernel/initramfs pair must not be used with whole disk images, otherwise they'll be mistaken for partition images. Building user images ^^^^^^^^^^^^^^^^^^^^ disk-image-builder ------------------ The `disk-image-builder`_ can be used to create user images required for deployment and the actual OS which the user is going to run. - Install diskimage-builder package (use virtualenv, if you don't want to install anything globally): .. 
code-block:: console # pip install diskimage-builder - Build the image your users will run (an Ubuntu image has been taken as an example): - Partition images .. code-block:: console $ disk-image-create ubuntu baremetal dhcp-all-interfaces grub2 -o my-image - Whole disk images .. code-block:: console $ disk-image-create ubuntu vm dhcp-all-interfaces -o my-image The partition image command creates ``my-image.qcow2``, ``my-image.vmlinuz`` and ``my-image.initrd`` files. The ``grub2`` element in the partition image creation command is only needed if local boot will be used to deploy ``my-image.qcow2``, otherwise the images ``my-image.vmlinuz`` and ``my-image.initrd`` will be used for PXE booting after deploying the bare metal with ``my-image.qcow2``. For whole disk images only the main image is used. If you want to use a Fedora image, replace ``ubuntu`` with ``fedora`` in the chosen command. .. _disk-image-builder: https://docs.openstack.org/diskimage-builder/latest/ Virtual machine --------------- Virtual machine software can also be used to build user images. There are different software options available; qemu-kvm is usually a good choice on Linux platforms, as it supports emulating many devices and can even build images for architectures other than the host machine via software emulation. VirtualBox is another good choice for non-Linux hosts. The procedure varies depending on the software used, but the steps for building an image are similar: the user creates a virtual machine and installs the target system just as would be done on real hardware. The system can be highly customized, including the partition layout, drivers and shipped software. Usually libvirt and its management tools are used to make interaction with qemu-kvm easier, for example, to create a virtual machine with ``virt-install``:: $ virt-install --name centos8 --ram 4096 --vcpus=2 -f centos8.qcow2 \ > --cdrom CentOS-8-x86_64-1905-dvd1.iso A graphical frontend like ``virt-manager`` can also be utilized.
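Whichever tool is used, it can be helpful to inspect the resulting disk file before registering it with the Bare Metal service. A possible check using ``qemu-img`` (the paths below assume the libvirt default image location and are only an example):

.. code-block:: console

    $ qemu-img info /var/lib/libvirt/images/centos8.qcow2
    $ qemu-img convert -O qcow2 /var/lib/libvirt/images/centos8.qcow2 my-image.qcow2

``qemu-img info`` reports the image format and virtual size, and ``qemu-img convert`` can be used to convert a raw disk into the qcow2 format if needed.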
The disk file can be used as user image after the system is set up and powered off. The path of the disk file varies depending on the software used, usually it's stored in a user-selected part of the local file system. For qemu-kvm or GUI frontend building upon it, it's typically stored at ``/var/lib/libvirt/images``. ironic-15.0.0/doc/source/install/enabling-drivers.rst0000664000175000017500000002425213652514273022617 0ustar zuulzuul00000000000000Enabling drivers and hardware types =================================== Introduction ------------ The Bare Metal service delegates actual hardware management to **drivers**. *Drivers*, also called *hardware types*, consist of *hardware interfaces*: sets of functionality dealing with some aspect of bare metal provisioning in a vendor-specific way. There are generic **hardware types** (eg. ``redfish`` and ``ipmi``), and vendor-specific ones (eg. ``ilo`` and ``irmc``). .. note:: Starting with the Rocky release, the terminologies *driver*, *dynamic driver*, and *hardware type* have the same meaning in the scope of Bare Metal service. .. _enable-hardware-types: Enabling hardware types ----------------------- Hardware types are enabled in the configuration file of the **ironic-conductor** service by setting the ``enabled_hardware_types`` configuration option, for example: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish Due to the driver's dynamic nature, they also require configuring enabled hardware interfaces. .. note:: All available hardware types and interfaces are listed in setup.cfg_ file in the source code tree. .. _enable-hardware-interfaces: Enabling hardware interfaces ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ There are several types of hardware interfaces: bios manages configuration of the BIOS settings of a bare metal node. This interface is vendor-specific and can be enabled via the ``enabled_bios_interfaces`` option: .. 
code-block:: ini [DEFAULT] enabled_hardware_types = enabled_bios_interfaces = See :doc:`/admin/bios` for details. boot manages booting of both the deploy ramdisk and the user instances on the bare metal node. See :doc:`/admin/interfaces/boot` for details. Boot interface implementations are often vendor specific, and can be enabled via the ``enabled_boot_interfaces`` option: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,ilo enabled_boot_interfaces = pxe,ilo-virtual-media Boot interfaces with ``pxe`` in their name require :doc:`configure-pxe`. There are also a few hardware-specific boot interfaces - see :doc:`/admin/drivers` for their required configuration. console manages access to the serial console of a bare metal node. See :doc:`/admin/console` for details. deploy defines how the image gets transferred to the target disk. See :doc:`/admin/interfaces/deploy` for an explanation of the difference between supported deploy interfaces ``direct`` and ``iscsi``. The deploy interfaces can be enabled as follows: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish enabled_deploy_interfaces = iscsi,direct Additionally, * the ``iscsi`` deploy interface requires :doc:`configure-iscsi` * the ``direct`` deploy interface requires the Object Storage service or an HTTP service inspect implements fetching hardware information from nodes. Can be implemented out-of-band (via contacting the node's BMC) or in-band (via booting a ramdisk on a node). The latter implementation is called ``inspector`` and uses a separate service called :ironic-inspector-doc:`ironic-inspector <>`. Example: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,ilo,irmc enabled_inspect_interfaces = ilo,irmc,inspector See :doc:`/admin/inspection` for more details. management provides additional hardware management actions, like getting or setting boot devices. 
This interface is usually vendor-specific, and its name often matches the name of the hardware type (with ``ipmitool`` being a notable exception). For example: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish,ilo,irmc enabled_management_interfaces = ipmitool,redfish,ilo,irmc Using ``ipmitool`` requires :doc:`configure-ipmi`. See :doc:`/admin/drivers` for the required configuration of each driver. network connects/disconnects bare metal nodes to/from virtual networks. See :doc:`configure-tenant-networks` for more details. power runs power actions on nodes. Similar to the management interface, it is usually vendor-specific, and its name often matches the name of the hardware type (with ``ipmitool`` being again an exception). For example: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish,ilo,irmc enabled_power_interfaces = ipmitool,redfish,ilo,irmc Using ``ipmitool`` requires :doc:`configure-ipmi`. See :doc:`/admin/drivers` for the required configuration of each driver. raid manages building and tearing down RAID on nodes. Similar to inspection, it can be implemented either out-of-band or in-band (via ``agent`` implementation). See :doc:`/admin/raid` for details. For example: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish,ilo,irmc enabled_raid_interfaces = agent,no-raid storage manages the interaction with a remote storage subsystem, such as the Block Storage service, and helps facilitate booting from a remote volume. This interface ensures that volume target and connector information is updated during the lifetime of a deployed instance. See :doc:`/admin/boot-from-volume` for more details. This interface defaults to a ``noop`` driver as it is considered an "opt-in" interface which requires additional configuration by the operator to be usable. For example: .. 
code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,irmc enabled_storage_interfaces = cinder,noop vendor is a place for vendor extensions to be exposed in the API. See :doc:`/contributor/vendor-passthru` for details. .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish,ilo,irmc enabled_vendor_interfaces = ipmitool,no-vendor Here is a complete configuration example, enabling two generic protocols, IPMI and Redfish, with a few additional features: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish enabled_boot_interfaces = pxe enabled_console_interfaces = ipmitool-socat,no-console enabled_deploy_interfaces = iscsi,direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool,redfish enabled_network_interfaces = flat,neutron enabled_power_interfaces = ipmitool,redfish enabled_raid_interfaces = agent enabled_storage_interfaces = cinder,noop enabled_vendor_interfaces = ipmitool,no-vendor Note that some interfaces have implementations named ``no-<type>``, where ``<type>`` is the interface type. These implementations do nothing and return errors when used from the API. Hardware interfaces in multi-conductor environments ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When enabling hardware types and their interfaces, make sure that for every enabled hardware type, the whole set of enabled interfaces matches for all conductors. However, different conductors can have different hardware types enabled. For example, you can have two conductors with the following configuration respectively: ..
code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish enabled_deploy_interfaces = direct enabled_power_interfaces = ipmitool,redfish enabled_management_interfaces = ipmitool,redfish .. code-block:: ini [DEFAULT] enabled_hardware_types = redfish enabled_deploy_interfaces = iscsi enabled_power_interfaces = redfish enabled_management_interfaces = redfish This is because the ``redfish`` hardware type will have different enabled *deploy* interfaces on these conductors. It would have been fine if the second conductor had ``enabled_deploy_interfaces = direct`` instead of ``iscsi``. This situation is not detected by the Bare Metal service, but it can cause inconsistent behavior in the API, where node functionality depends on which conductor the node gets assigned to. .. note:: We don't treat this as an error, because such *temporary* inconsistency is inevitable during a rolling upgrade or a configuration update. Configuring interface defaults ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When an operator does not provide an explicit value for one of the interfaces (when creating a node or updating its driver), the default value is calculated as described in :ref:`hardware_interfaces_defaults`. It is also possible to override the defaults for any interface by setting one of the options named ``default_<interface>_interface``, where ``<interface>`` is the interface name. For example: .. code-block:: ini [DEFAULT] default_deploy_interface = direct default_network_interface = neutron This configuration forces the default *deploy* interface to be ``direct`` and the default *network* interface to be ``neutron`` for all hardware types. The defaults are calculated and set on a node when creating it or updating its hardware type. Thus, changing these configuration options has no effect on existing nodes. .. warning:: The default interface implementation must be configured the same way across all conductors in the cloud, except maybe for a short period of time during an upgrade or configuration update.
Otherwise the default implementation will depend on which conductor handles which node, and this mapping is not predictable or even persistent. .. warning:: These options should be used with care. If a hardware type does not support the provided default implementation, its users will always have to provide an explicit value for this interface when creating a node. .. _setup.cfg: https://opendev.org/openstack/ironic/src/branch/master/setup.cfg .. _configure-tenant-networks: Configure tenant networks ========================= Below is an example flow of how to set up the Bare Metal service so that node provisioning will happen in a multi-tenant environment (which means using the ``neutron`` network interface as stated above): #. Network interfaces can be enabled on ironic-conductor by adding them to the ``enabled_network_interfaces`` configuration option under the ``DEFAULT`` section of the configuration file:: [DEFAULT] ... enabled_network_interfaces=noop,flat,neutron Keep in mind that, ideally, all ironic-conductors should have the same list of enabled network interfaces, but this may not be the case during ironic-conductor upgrades. This may cause problems if one of the ironic-conductors dies and some node that is taken over is mapped to an ironic-conductor that does not support the node's network interface. Any actions that involve calling the node's driver will fail until that network interface is installed and enabled on that ironic-conductor. #. It is recommended to set the default network interface via the ``default_network_interface`` configuration option under the ``DEFAULT`` section of the configuration file:: [DEFAULT] ... default_network_interface=neutron This default value will be used for all nodes that don't have a network interface explicitly specified in the creation request.
If this configuration option is not set, the default network interface is determined by looking at the ``[dhcp]dhcp_provider`` configuration option value. If it is ``neutron``, then ``flat`` network interface becomes the default, otherwise ``noop`` is the default. #. Define a provider network in the Networking service, which we shall refer to as the "provisioning" network. Using the ``neutron`` network interface requires that ``provisioning_network`` and ``cleaning_network`` configuration options are set to valid identifiers (UUID or name) of networks in the Networking service. If these options are not set correctly, cleaning or provisioning will fail to start. There are two ways to set these values: - Under the ``neutron`` section of ironic configuration file: .. code-block:: ini [neutron] cleaning_network = $CLEAN_UUID_OR_NAME provisioning_network = $PROVISION_UUID_OR_NAME - Under ``provisioning_network`` and ``cleaning_network`` keys of the node's ``driver_info`` field as ``driver_info['provisioning_network']`` and ``driver_info['cleaning_network']`` respectively. .. note:: If these ``provisioning_network`` and ``cleaning_network`` values are not specified in node's `driver_info` then ironic falls back to the configuration in the ``neutron`` section. Please refer to :doc:`configure-cleaning` for more information about cleaning. .. warning:: Please make sure that the Bare Metal service has exclusive access to the provisioning and cleaning networks. Spawning instances by non-admin users in these networks and getting access to the Bare Metal service's control plane is a security risk. For this reason, the provisioning and cleaning networks should be configured as non-shared networks in the ``admin`` tenant. .. note:: When using the ``flat`` network interface, bare metal instances are normally spawned onto the "provisioning" network. This is not supported with the ``neutron`` interface and the deployment will fail. 
Please ensure a different network is chosen in the Networking service when a bare metal instance is booted from the Compute service. .. note:: The "provisioning" and "cleaning" networks may be the same network or distinct networks. To ensure that communication between the Bare Metal service and the deploy ramdisk works, it is important to ensure that security groups are disabled for these networks, *or* that the default security groups allow: * DHCP * TFTP * egress port used for the Bare Metal service (6385 by default) * ingress port used for ironic-python-agent (9999 by default) * if using :ref:`iscsi-deploy`, the ingress port used for iSCSI (3260 by default) * if using :ref:`direct-deploy`, the egress port used for the Object Storage service (typically 80 or 443) * if using iPXE, the egress port used for the HTTP server running on the ironic-conductor nodes (typically 80). #. This step is optional and applicable only if you want to use security groups during provisioning and/or cleaning of the nodes. If not specified, default security groups are used. #. Define security groups in the Networking service, to be used for provisioning and/or cleaning networks. #. Add the list of these security group UUIDs under the ``neutron`` section of ironic-conductor's configuration file as shown below:: [neutron] ... cleaning_network=$CLEAN_UUID_OR_NAME cleaning_network_security_groups=[$LIST_OF_CLEAN_SECURITY_GROUPS] provisioning_network=$PROVISION_UUID_OR_NAME provisioning_network_security_groups=[$LIST_OF_PROVISION_SECURITY_GROUPS] Multiple security groups may be applied to a given network, hence, they are specified as a list. The same security group(s) could be used for both provisioning and cleaning networks. .. warning:: If security groups are configured as described above, do not set the "port_security_enabled" flag to False for the corresponding Networking service's network or port. This will cause the deploy to fail. 
For example: if the ``provisioning_network_security_groups`` configuration option is used, ensure that the "port_security_enabled" flag for the provisioning network is set to True. This flag is set to True by default; make sure not to override it by manually setting it to False. #. Install and configure a compatible ML2 mechanism driver which supports bare metal provisioning for your switch. See :neutron-doc:`ML2 plugin configuration manual ` for details. #. Restart the ironic-conductor and ironic-api services after the modifications: - Fedora/RHEL7/CentOS7:: sudo systemctl restart openstack-ironic-api sudo systemctl restart openstack-ironic-conductor - Ubuntu:: sudo service ironic-api restart sudo service ironic-conductor restart #. Make sure that the ironic-conductor is reachable over the provisioning network by trying to download a file from a TFTP server on it, from some non-control-plane server in that network:: tftp $TFTP_IP -c get $FILENAME where FILENAME is the file located at the TFTP server. See :ref:`multitenancy` for required node configuration. .. _enabling-https: Enabling HTTPS -------------- .. _EnableHTTPSinSwift: Enabling HTTPS in Swift ======================= The drivers using virtual media use swift for storing boot images and node configuration information (which contains sensitive information needed by the Ironic conductor to provision bare metal hardware). By default, HTTPS is not enabled in swift. HTTPS is required to encrypt all communication between swift and the Ironic conductor, and between swift and the bare metal node (via virtual media). It can be enabled in one of the following ways: * `Using an SSL termination proxy `_ * :swift-doc:`Using native SSL support in swift ` (recommended by swift for testing purposes only). ..
_EnableHTTPSinGlance: Enabling HTTPS in Image service =============================== Ironic drivers usually use the Image service during node provisioning. By default, the Image service does not use HTTPS, but it is required for secure communication. It can be enabled by making the following changes to ``/etc/glance/glance-api.conf``: #. :glance-doc:`Configuring SSL support ` #. Restart the glance-api service:: Fedora/RHEL7/CentOS7/SUSE: sudo systemctl restart openstack-glance-api Debian/Ubuntu: sudo service glance-api restart See the :glance-doc:`Glance <>` documentation for more details on the Image service. Enabling HTTPS communication between Image service and Object storage ===================================================================== This section describes the steps needed to enable secure HTTPS communication between the Image service and Object storage when Object storage is used as the backend. To enable secure HTTPS communication between the Image service and Object storage, follow these steps: #. :ref:`EnableHTTPSinSwift` #. :glance-doc:`Configure Swift Storage Backend ` #. :ref:`EnableHTTPSinGlance` Enabling HTTPS communication between Image service and Bare Metal service ========================================================================= This section describes the steps needed to enable secure HTTPS communication between the Image service and the Bare Metal service. To enable secure HTTPS communication between the Bare Metal service and the Image service, follow these steps: #. Edit ``/etc/ironic/ironic.conf``:: [glance] ... glance_cafile=/path/to/certfile .. note:: 'glance_cafile' is an optional path to a CA certificate bundle to be used to validate the SSL certificate served by the Image service. #. If not using the keystone service catalog for Image service API endpoint discovery, also edit the ``endpoint_override`` option to point to the HTTPS URL of the Image service (replace ``<host>`` with hostname[:port][path] of the Image service endpoint):: [glance] ... endpoint_override = https://<host> #. Restart the ironic-conductor service:: Fedora/RHEL7/CentOS7/SUSE: sudo systemctl restart openstack-ironic-conductor Debian/Ubuntu: sudo service ironic-conductor restart .. _image-requirements: Add images to the Image service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Build or download the user images as described in :doc:`creating-images`. #. Add the user images to the Image service Load all the images created in the below steps into the Image service, and note the image UUIDs in the Image service for each one as it is generated. For *partition images*: - Add the kernel and ramdisk images to the Image service: .. code-block:: console $ openstack image create my-kernel --public \ --disk-format aki --container-format aki --file my-image.vmlinuz Store the image UUID obtained from the above step as ``MY_VMLINUZ_UUID``. .. code-block:: console $ openstack image create my-image.initrd --public \ --disk-format ari --container-format ari --file my-image.initrd Store the image UUID obtained from the above step as ``MY_INITRD_UUID``. - Add *my-image* to the Image service, which is going to be the OS that the user is going to run. Also associate the above created images with this OS image. These two operations can be done by executing the following command: .. code-block:: console $ openstack image create my-image --public \ --disk-format qcow2 --container-format bare --property \ kernel_id=$MY_VMLINUZ_UUID --property \ ramdisk_id=$MY_INITRD_UUID --file my-image.qcow2 For *whole disk images*, skip uploading and configuring kernel and ramdisk images completely, and proceed directly to uploading the main image: .. code-block:: console $ openstack image create my-whole-disk-image --public \ --disk-format qcow2 --container-format bare \ --file my-whole-disk-image.qcow2 ..
warning:: The kernel/initramfs pair must not be set for whole disk images, otherwise they'll be mistaken for partition images. #. Build or download the deploy images The deploy images are used initially for preparing the server (creating disk partitions) before the actual OS can be deployed. There are several methods to build or download deploy images; please read the :ref:`deploy-ramdisk` section. #. Add the deploy images to the Image service Add the deployment kernel and ramdisk images to the Image service: .. code-block:: console $ openstack image create deploy-vmlinuz --public \ --disk-format aki --container-format aki \ --file ironic-python-agent.vmlinuz Store the image UUID obtained from the above step as ``DEPLOY_VMLINUZ_UUID``. .. code-block:: console $ openstack image create deploy-initrd --public \ --disk-format ari --container-format ari \ --file ironic-python-agent.initramfs Store the image UUID obtained from the above step as ``DEPLOY_INITRD_UUID``. .. _image-store: Configure the Image service for temporary URLs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some drivers of the Baremetal service (in particular, any drivers using :ref:`direct-deploy` or :ref:`ansible-deploy` interfaces, and some virtual media drivers) require target user images to be available over a clean HTTP(S) URL with no authentication involved (neither username/password-based, nor token-based). When using the Baremetal service integrated in OpenStack, this can be achieved by specific configuration of the Image service and Object Storage service as described below. #. Configure the Image service to have object storage as a backend for storing images. For more details, please refer to the Image service configuration guide. ..
note:: When using Ceph+RadosGW for Object Storage service, images stored in Image service must be available over Object Storage service as well. #. Enable TempURLs for the Object Storage account used by the Image service for storing images in the Object Storage service. #. Check if TempURLs are enabled: .. code-block:: shell # executed under credentials of the user used by Image service # to access Object Storage service $ openstack object store account show +------------+---------------------------------------+ | Field | Value | +------------+---------------------------------------+ | Account | AUTH_bc39f1d9dcf9486899088007789ae643 | | Bytes | 536661727 | | Containers | 1 | | Objects | 19 | | properties | Temp-Url-Key='secret' | +------------+---------------------------------------+ #. If property ``Temp-Url-Key`` is set, note its value. #. If property ``Temp-Url-Key`` is not set, you have to configure it (``secret`` is used in the example below for the value): .. code-block:: shell $ openstack object store account set --property Temp-Url-Key=secret #. Optionally, configure the ironic-conductor service. The default configuration assumes that: #. the Object Storage service is implemented by :swift-doc:`swift <>`, #. the Object Storage service URL is available from the service catalog, #. the project, used by the Image service to access the Object Storage, is the same as the project, used by the Bare Metal service to access it, #. the container, used by the Image service, is called ``glance``. If any of these assumptions do not hold, you may want to change your configuration file (typically located at ``/etc/ironic/ironic.conf``), for example: .. code-block:: ini [glance] swift_endpoint_url = http://openstack/swift swift_account = AUTH_bc39f1d9dcf9486899088007789ae643 swift_container = glance swift_temp_url_key = secret #. (Re)start the ironic-conductor service. 
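The ``Temp-Url-Key`` set above is the shared secret Swift uses to validate temporary URLs. As an illustration of the mechanism (a sketch of Swift's documented TempURL scheme, not ironic's internal code; the account, container and object names below are hypothetical), a TempURL signature is an HMAC-SHA1 over the request method, the expiry timestamp and the object path:

```python
import hmac
from hashlib import sha1

def temp_url_signature(key, method, path, expires):
    """Compute a Swift TempURL signature: HMAC-SHA1 over method, expiry, path."""
    hmac_body = '\n'.join([method, str(expires), path])
    return hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()

# Example: sign a GET link for an image object, valid until the given
# Unix timestamp (values here are illustrative).
expires = 1735689600
path = '/v1/AUTH_bc39f1d9dcf9486899088007789ae643/glance/my-image-id'
sig = temp_url_signature('secret', 'GET', path, expires)
url = ('http://openstack/swift%s?temp_url_sig=%s&temp_url_expires=%d'
       % (path, sig, expires))
```

Anyone holding the resulting URL can fetch the object until the expiry, without any token; this is why the key must stay secret and why ironic only needs the key, the endpoint, the account and the container to generate such URLs.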
Configuring services for bare metal provisioning using IPv6 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Use of IPv6 addressing for baremetal provisioning requires additional configuration. This page covers the IPv6 specifics only. Please refer to :doc:`/install/configure-tenant-networks` and :doc:`/install/configure-networking` for general networking configuration. Configure ironic PXE driver for provisioning using IPv6 addressing ================================================================== The ironic PXE driver operates in either IPv4 or IPv6 mode (IPv4 is the default). To enable IPv6 mode, set the ``[pxe]/ip_version`` option in the Bare Metal Service's configuration file (``/etc/ironic/ironic.conf``) to ``6``. .. Note:: Support for dual mode IPv4 and IPv6 operations is planned for a future version of ironic. Provisioning with IPv6 stateless addressing ------------------------------------------- When using stateless addressing, DHCPv6 does not provide addresses to the client. DHCPv6 however provides other configuration via DHCPv6 options, such as the bootfile-url and bootfile-parameters. Once the PXE driver is set to operate in IPv6 mode, no further configuration is required in the Baremetal Service. Creating networks and subnets in the Networking Service ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ When creating the Baremetal Service network(s) and subnet(s) in the Networking Service, subnets should have ``ipv6-address-mode`` set to ``dhcpv6-stateless`` and ``ip-version`` set to ``6``. Depending on whether a router in the Networking Service is providing RA's (Router Advertisements) or not, the ``ipv6-ra-mode`` for the subnet(s) should either be set to ``dhcpv6-stateless`` or be left unset. ..
Note:: If ``ipv6-ra-mode`` is left unset, an external router on the network is expected to provide RA's with the appropriate flags set for automatic addressing and other configuration. Provisioning with IPv6 stateful addressing ------------------------------------------ When using stateful addressing, DHCPv6 provides both addresses and other configuration via DHCPv6 options, such as the bootfile-url and bootfile-parameters. The "identity-association" (IA) construct used by DHCPv6 is challenging when booting over the network. Firmware and ramdisks typically end up using different DUID/IAID combinations, and it is not always possible for one chain-booting stage to release its address before giving control to the next step. In case the DHCPv6 server is configured with static reservations only, the result is that booting will fail because the DHCPv6 server has no addresses available. To get past this issue, either configure the DHCPv6 server with multiple address reservations for each host, or use a dynamic range. .. Note:: Support for multiple address reservations requires dnsmasq version 2.81 or later. Some distributions may backport this feature to earlier dnsmasq versions as part of the packaging; check the distribution's release notes. If a different (not dnsmasq) DHCPv6 server backend is used with the Networking service, use of multiple address reservations might not work. Using the ``flat`` network interface ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Due to the "identity-association" challenges with DHCPv6, provisioning using the ``flat`` network interface is not recommended. When ironic operates with the ``flat`` network interface, the server instance port is used for provisioning and other operations. Ironic will not use multiple address reservations in this scenario. Because of this **it will not work in most cases**.
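To illustrate the DUID problem described above (a sketch, not ironic code): one common DHCPv6 client identifier form is DUID-LL, built from a 2-byte type code (3), a 2-byte hardware type (1 for Ethernet) and the MAC address. Firmware and the ramdisk may each generate a *different* DUID form (for example DUID-LLT additionally embeds a timestamp), so a static reservation keyed on one of them will not match requests from the other:

```python
import struct

def duid_ll(mac):
    """Build a DHCPv6 DUID-LL (RFC 8415): type 3, hw type 1 (Ethernet), MAC."""
    return struct.pack('!HH', 3, 1) + bytes.fromhex(mac.replace(':', ''))

print(duid_ll('52:54:00:12:34:56').hex())  # 00030001525400123456
```

Even with an identical MAC, a DUID-LLT generated by the next boot stage would differ from this DUID-LL, which is why multiple reservations (or a dynamic range) are needed.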
Using the ``neutron`` network interface ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ When using the ``neutron`` network interface, the Baremetal Service will allocate multiple IPv6 addresses (4 addresses per port by default) on the service networks used for provisioning, cleaning, rescue and introspection. The number of addresses allocated can be controlled via the ``[neutron]/dhcpv6_stateful_address_count`` option in the Bare Metal Service's configuration file (``/etc/ironic/ironic.conf``). Using multiple address reservations ensures that the DHCPv6 server can lease addresses to each step. To enable IPv6 provisioning on neutron *flat* provider networks with no switch management, the ``local_link_connection`` field of baremetal ports must be set to ``{'network_type': 'unmanaged'}``. The following example shows how to set the local_link_connection for operation on unmanaged networks:: openstack baremetal port set <port-uuid> \ --local-link-connection network_type=unmanaged The use of multiple IPv6 addresses must also be enabled in the Networking Service's dhcp agent configuration (``/etc/neutron/dhcp_agent.ini``) by setting the option ``[DEFAULT]/dnsmasq_enable_addr6_list`` to ``True`` (default ``False`` in the Ussuri release). .. Note:: Support for multiple IPv6 address reservations in the dnsmasq backend was added in the Networking Service's Ussuri release. It was also backported to the stable Train release. Creating networks and subnets in the Networking Service ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ When creating the ironic service network(s) and subnet(s) in the Networking Service, subnets should have ``ipv6-address-mode`` set to ``dhcpv6-stateful`` and ``ip-version`` set to ``6``. Depending on whether a router in the Networking Service is providing RA's (Router Advertisements) or not, the ``ipv6-ra-mode`` for the subnet(s) should be set to either ``dhcpv6-stateful`` or be left unset. ..
Note:: If ``ipv6-ra-mode`` is left unset, an external router on the network is expected to provide RA's with the appropriate flags set for managed addressing and other configuration. .. _deploy-ramdisk: Building or downloading a deploy ramdisk image ============================================== Ironic depends on having an image with the :ironic-python-agent-doc:`ironic-python-agent (IPA) <>` service running on it for controlling and deploying bare metal nodes. Two kinds of images are published on every commit from every branch of :ironic-python-agent-doc:`ironic-python-agent (IPA) <>`: * DIB_ images are suitable for production usage and can be downloaded from https://tarballs.openstack.org/ironic-python-agent/dib/files/. * For Train and older, use CentOS 7 images. * For Ussuri and newer, use CentOS 8 images. .. warning:: CentOS 7 master images are no longer updated and must not be used. * TinyIPA_ images are suitable for CI and testing environments and can be downloaded from https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/. Building from source -------------------- Check the ironic-python-agent-builder_ project for information on how to build ironic-python-agent ramdisks. .. _DIB: https://docs.openstack.org/ironic-python-agent-builder/latest/admin/dib.html .. _TinyIPA: https://docs.openstack.org/ironic-python-agent-builder/latest/admin/tinyipa.html .. _ironic-python-agent-builder: https://docs.openstack.org/ironic-python-agent-builder/latest/ ..
_install-obs: ============================================================ Install and configure for openSUSE and SUSE Linux Enterprise ============================================================ This section describes how to install and configure the Bare Metal service for openSUSE Leap 42.2 and SUSE Linux Enterprise Server 12 SP2. .. note:: Installation of the Bare Metal service on openSUSE and SUSE Linux Enterprise Server is not officially supported. Nevertheless, installation should be possible. .. include:: include/common-prerequisites.inc Install and configure components ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Install from packages .. code-block:: console # zypper install openstack-ironic-api openstack-ironic-conductor python-ironicclient #. Enable services .. code-block:: console # systemctl enable openstack-ironic-api openstack-ironic-conductor # systemctl start openstack-ironic-api openstack-ironic-conductor .. include:: include/common-configure.inc .. include:: include/configure-ironic-api.inc .. include:: include/configure-ironic-api-mod_wsgi.inc .. include:: include/configure-ironic-conductor.inc Configuring node web console ---------------------------- See :ref:`console`. .. TODO(dtantsur): move the installation documentation here .. _choosing_the_disk_label: Choosing the disk label ----------------------- .. note:: The term ``disk label`` is historically used in Ironic and was taken from `parted `_. Apparently everyone seems to have a different word for ``disk label`` - these are all the same thing: disk type, partition table, partition map and so on...
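As an aside on those interchangeable terms: the two common partition table types can be told apart by their on-disk signatures. An MBR ("msdos" label) ends its first 512-byte sector with ``0x55 0xAA``, while a GPT additionally places the ASCII signature ``EFI PART`` at the start of the second sector (GPT disks also carry a protective MBR, so both markers are present). A hypothetical detector, shown here purely to illustrate the distinction:

```python
def detect_disk_label(first_sectors: bytes) -> str:
    """Classify the first two 512-byte sectors as 'gpt', 'msdos' or 'unknown'."""
    has_mbr_sig = first_sectors[510:512] == b'\x55\xaa'
    has_gpt_sig = first_sectors[512:520] == b'EFI PART'
    if has_gpt_sig:
        return 'gpt'
    if has_mbr_sig:
        return 'msdos'
    return 'unknown'

# Synthetic example: a protective MBR followed by a GPT header.
disk = bytearray(1024)
disk[510:512] = b'\x55\xaa'
disk[512:520] = b'EFI PART'
print(detect_disk_label(bytes(disk)))  # gpt
```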
Ironic allows operators to choose which disk label they want their bare metal node to be deployed with when Ironic is responsible for partitioning the disk; therefore choosing the disk label does not apply when the image being deployed is a ``whole disk image``. There are some edge cases where someone may want to choose a specific disk label for the images being deployed, including but not limited to: * For machines in ``bios`` boot mode with disks larger than 2 terabytes it's recommended to use a ``gpt`` disk label. That's because a capacity beyond 2 terabytes is not addressable by using the MBR partitioning type. But, although GPT claims to be backward compatible with legacy BIOS systems `that's not always the case `_. * Operators may want to force the partitioning to be always MBR (even if the machine is deployed with boot mode ``uefi``) to avoid breakage of applications and tools running on those instances. The disk label can be configured in two ways; when Ironic is used with the Compute service or in standalone mode. The following bullet points and sections will describe both methods: * When no disk label is provided Ironic will configure it according to the boot mode (see :ref:`boot_mode_support`); ``bios`` boot mode will use ``msdos`` and ``uefi`` boot mode will use ``gpt``. * Only one disk label - either ``msdos`` or ``gpt`` - can be configured for the node. 
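The 2 terabyte figure above follows from MBR's 32-bit LBA fields: with traditional 512-byte logical sectors, at most 2^32 sectors are addressable, which is exactly 2 TiB:

```python
SECTOR_SIZE = 512          # bytes; assumes traditional 512-byte logical sectors
MAX_LBA_ENTRIES = 2 ** 32  # MBR stores sector counts and offsets as 32-bit values

max_bytes = SECTOR_SIZE * MAX_LBA_ENTRIES
print(max_bytes)                   # 2199023255552
print(max_bytes == 2 * 1024 ** 4)  # True: exactly 2 TiB
```

Anything beyond that boundary simply cannot be described by an MBR partition entry, hence the recommendation to use ``gpt`` for larger disks.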
When used with Compute service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When Ironic is used with the Compute service, the disk label should be set in the node's ``properties/capabilities`` field and also in the flavor which will request such capability, for example:: openstack baremetal node set <node-uuid> --property capabilities='disk_label:gpt' As for the flavor:: nova flavor-key baremetal set capabilities:disk_label="gpt" When used in standalone mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When used without the Compute service, the disk label should be set directly in the node's ``instance_info`` field, as below:: openstack baremetal node set <node-uuid> --instance-info capabilities='{"disk_label": "gpt"}'
The following rules apply in order when ironic manages node boot mode: * If the hardware type (or bare metal node) does not implement reading current boot mode of the bare metal node, then ironic assumes that boot mode is not set on the bare metal node * If boot mode is not set on ironic node and bare metal node boot mode is unknown (not set, can't be read etc.), ironic node boot mode is set to the value of the `[deploy]/default_boot_mode` option * If boot mode is set on a bare metal node, but is not set on ironic node, bare metal node boot mode is set on ironic node * If boot mode is set on ironic node, but is not set on the bare metal node, ironic node boot mode is attempted to be set on the bare metal node (failure to set boot mode on the bare metal node will not fail ironic node deployment) * If different boot modes appear on to be set ironic node and on the bare metal node, ironic node boot mode is attempted to be set on the bare metal node (failure to set boot mode on the bare metal node will fail ironic node deployment) .. warning:: If a bare metal node does not support setting boot mode, then the operator needs to make sure that boot mode configuration is consistent between ironic node and the bare metal node. The boot modes can be configured in the Bare Metal service in the following way: * Only one boot mode (either ``uefi`` or ``bios``) can be configured for the node. * If the operator wants a node to boot always in ``uefi`` mode or ``bios`` mode, then they may use ``capabilities`` parameter within ``properties`` field of an bare metal node. The operator must manually set the appropriate boot mode on the bare metal node. 
To configure a node in ``uefi`` mode, then set ``capabilities`` as below:: openstack baremetal node set --property capabilities='boot_mode:uefi' Nodes having ``boot_mode`` set to ``uefi`` may be requested by adding an ``extra_spec`` to the Compute service flavor:: nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi" nova boot --flavor ironic-test-3 --image test-image instance-1 If ``capabilities`` is used in ``extra_spec`` as above, nova scheduler (``ComputeCapabilitiesFilter``) will match only bare metal nodes which have the ``boot_mode`` set appropriately in ``properties/capabilities``. It will filter out rest of the nodes. The above facility for matching in the Compute service can be used in heterogeneous environments where there is a mix of ``uefi`` and ``bios`` machines, and operator wants to provide a choice to the user regarding boot modes. If the flavor doesn't contain ``boot_mode`` and ``boot_mode`` is configured for bare metal nodes, then nova scheduler will consider all nodes and user may get either ``bios`` or ``uefi`` machine. ironic-15.0.0/doc/source/install/include/kernel-boot-parameters.inc0000664000175000017500000000743613652514273025337 0ustar zuulzuul00000000000000.. _kernel-boot-parameters: Appending kernel parameters to boot instances --------------------------------------------- The Bare Metal service supports passing custom kernel parameters to boot instances to fit users' requirements. The way to append the kernel parameters is depending on how to boot instances. 
Network boot
~~~~~~~~~~~~

Currently, the Bare Metal service supports assigning unified kernel parameters to PXE booted instances by:

* Modifying the ``[pxe]/pxe_append_params`` configuration option, for example::

    [pxe]
    pxe_append_params = quiet splash

* Copying a template from shipped templates to another place, for example::

    https://opendev.org/openstack/ironic/src/branch/master/ironic/drivers/modules/pxe_config.template

  Making the modifications and pointing to the custom template via the configuration options ``[pxe]/pxe_config_template`` and ``[pxe]/uefi_pxe_config_template``.

Local boot
~~~~~~~~~~

For local boot instances, users can make use of a configuration drive (see :ref:`configdrive`) to pass a custom script that appends kernel parameters when creating an instance. This is more flexible and can vary per instance. Here is an example for grub2 with Ubuntu; users can customize it to fit their use case:

.. code:: python

    #!/usr/bin/env python
    import os

    # Default grub2 config file in Ubuntu
    grub_file = '/etc/default/grub'
    # Add parameters here to pass to instance.
    kernel_parameters = ['quiet', 'splash']
    grub_cmd = 'GRUB_CMDLINE_LINUX'
    old_grub_file = grub_file + '~'
    os.rename(grub_file, old_grub_file)
    cmdline_existed = False
    with open(grub_file, 'w') as writer, \
            open(old_grub_file, 'r') as reader:
        for line in reader:
            key = line.split('=')[0]
            if key == grub_cmd:
                # If there is already some value:
                if line.strip()[-1] == '"':
                    # Re-add the newline stripped above so following
                    # lines are not merged into this one.
                    line = (line.strip()[:-1] + ' '
                            + ' '.join(kernel_parameters) + '"\n')
                cmdline_existed = True
            writer.write(line)
        if not cmdline_existed:
            line = grub_cmd + '=' + '"' + ' '.join(kernel_parameters) + '"'
            writer.write(line)
    os.remove(old_grub_file)
    os.system('update-grub')
    os.system('reboot')

Console
~~~~~~~

To change the default console configuration in the Bare Metal service configuration file (``[pxe]`` section in ``/etc/ironic/ironic.conf``), include the serial port terminal and serial speed.
Serial speed must be the same as the serial configuration in the BIOS settings, so that the operating system boot process can be seen in the serial console or web console. The following examples show possible parameters for the serial and web console respectively.

* Node serial console. The console parameter ``console=ttyS0,115200n8`` uses ``ttyS0`` for console output at ``115200bps, 8bit, non-parity``, e.g.::

    [pxe]
    # Additional append parameters for baremetal PXE boot.
    pxe_append_params = nofb nomodeset vga=normal console=ttyS0,115200n8

* Node web console configuration is similar, with the addition of the ``ttyX`` parameter, for example::

    [pxe]
    # Additional append parameters for baremetal PXE boot.
    pxe_append_params = nofb nomodeset vga=normal console=tty0 console=ttyS0,115200n8

For detailed information on how to add consoles, see the reference documents `kernel params`_ and `serial console`_. In case of local boot the Bare Metal service is not able to control kernel boot parameters. To configure the console locally, follow the 'Local boot' section above.

.. _`kernel params`: https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
.. _`serial console`: https://www.kernel.org/doc/html/latest/admin-guide/serial-console.html

Configuring ironic-conductor service
------------------------------------

#. Replace ``HOST_IP`` with the IP of the conductor host.

   .. code-block:: ini

      [DEFAULT]

      # IP address of this host. If unset, will determine the IP
      # programmatically. If unable to do so, will use "127.0.0.1".
      # (string value)
      my_ip=HOST_IP

   .. note::
      If a conductor host has multiple IPs, ``my_ip`` should be set to the IP which is on the same network as the bare metal nodes.

#. Configure the location of the database. Ironic-conductor should use the same configuration as ironic-api.
Replace ``IRONIC_DBPASSWORD`` with the password of your ``ironic`` user, and replace DB_IP with the IP address where the DB server is located: .. code-block:: ini [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection=mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic?charset=utf8 #. Configure the ironic-conductor service to use the RabbitMQ message broker by setting the following option. Ironic-conductor should use the same configuration as ironic-api. Replace ``RPC_*`` with appropriate address details and credentials of RabbitMQ server: .. code-block:: ini [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ Alternatively, you can use JSON RPC for interactions between ironic-conductor and ironic-api. Enable it in the configuration and provide the keystone credentials to use for authenticating incoming requests (can be the same as for the API): .. code-block:: ini [DEFAULT] rpc_transport = json-rpc [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default You can optionally change the host and the port the JSON RPC service will bind to, for example: .. code-block:: ini [json_rpc] host_ip = 192.168.0.10 port = 9999 .. warning:: Hostnames of ironic-conductor machines must be resolvable by ironic-api services when JSON RPC is used. #. 
Configure credentials for accessing other OpenStack services. In order to communicate with other OpenStack services, the Bare Metal service needs to use service users to authenticate to the OpenStack Identity service when making requests to other services. These users' credentials have to be configured in each configuration file section related to the corresponding service: * ``[neutron]`` - to access the OpenStack Networking service * ``[glance]`` - to access the OpenStack Image service * ``[swift]`` - to access the OpenStack Object Storage service * ``[cinder]`` - to access the OpenStack Block Storage service * ``[inspector]`` - to access the OpenStack Bare Metal Introspection service * ``[service_catalog]`` - a special section holding credentials the Bare Metal service will use to discover its own API URL endpoint as registered in the OpenStack Identity service catalog. For simplicity, you can use the same service user for all services. For backward compatibility, this should be the same user configured in the ``[keystone_authtoken]`` section for the ironic-api service (see "Configuring ironic-api service"). However, this is not necessary, and you can create and configure separate service users for each service. Under the hood, Bare Metal service uses ``keystoneauth`` library together with ``Authentication plugin``, ``Session`` and ``Adapter`` concepts provided by it to instantiate service clients. Please refer to `Keystoneauth documentation`_ for supported plugins, their available options as well as Session- and Adapter-related options for authentication, connection and endpoint discovery respectively. In the example below, authentication information for user to access the OpenStack Networking service is configured to use: * Networking service is deployed in the Identity service region named ``RegionTwo``, with only its ``public`` endpoint interface registered in the service catalog. 
* HTTPS connection with specific CA SSL certificate when making requests * the same service user as configured for ironic-api service * dynamic ``password`` authentication plugin that will discover appropriate version of Identity service API based on other provided options - replace ``IDENTITY_IP`` with the IP of the Identity server, and replace ``IRONIC_PASSWORD`` with the password you chose for the ``ironic`` user in the Identity service .. code-block:: ini [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionTwo # List of interfaces, in order of preference, for endpoint # URL. (list value) valid_interfaces=public By default, in order to communicate with another service, the Bare Metal service will attempt to discover an appropriate endpoint for that service via the Identity service's service catalog. The relevant configuration options from that service group in the Bare Metal service configuration file are used for this purpose. If you want to use a different endpoint for a particular service, specify this via the ``endpoint_override`` configuration option of that service group, in the Bare Metal service's configuration file. Taking the previous Networking service example, this would be .. code-block:: ini [neutron] ... endpoint_override = (Replace `` with actual address of a specific Networking service endpoint.) #. 
Configure enabled drivers and hardware types as described in :doc:`/install/enabling-drivers`.

   a. If you enabled any driver that uses :ref:`direct-deploy`, the Swift backend for the Image service must be installed and configured, see :ref:`image-store`. Ceph Object Gateway (RADOS Gateway) is also supported as the Image service's backend, see :ref:`radosgw support`.

#. Configure the network for the ironic-conductor service to perform node cleaning, see :ref:`cleaning` from the admin guide.

#. Restart the ironic-conductor service:

   Fedora/RHEL7/CentOS7/SUSE::

       sudo systemctl restart openstack-ironic-conductor

   Ubuntu::

       sudo service ironic-conductor restart

.. _Keystoneauth documentation: https://docs.openstack.org/keystoneauth/latest/

The Bare Metal service is configured via its configuration file. This file is typically located at ``/etc/ironic/ironic.conf``.

Although some configuration options are mentioned here, it is recommended that you review all the :doc:`/configuration/sample-config` so that the Bare Metal service is configured for your needs.

It is possible to set up the ironic-api and ironic-conductor services on the same host or on different hosts. Users can also add new ironic-conductor hosts to deal with an increasing number of bare metal nodes. The additional ironic-conductor services should be the same version as the existing ironic-conductor services.

Configuring ironic-api service
------------------------------

#. The Bare Metal service stores information in a database. This guide uses the MySQL database that is used by other OpenStack services. Configure the location of the database via the ``connection`` option.
In the following, replace ``IRONIC_DBPASSWORD`` with the password of your ``ironic`` user, and replace ``DB_IP`` with the IP address where the DB server is located: .. code-block:: ini [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection=mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic?charset=utf8 #. Configure the ironic-api service to use the RabbitMQ message broker by setting the following option. Replace ``RPC_*`` with appropriate address details and credentials of RabbitMQ server: .. code-block:: ini [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ Alternatively, you can use JSON RPC for interactions between ironic-conductor and ironic-api. Enable it in the configuration and provide the keystone credentials to use for authentication: .. code-block:: ini [DEFAULT] rpc_transport = json-rpc [json_rpc] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default If you use port other than the default 8089 for JSON RPC, you have to configure it, for example: .. code-block:: ini [json_rpc] port = 9999 #. Configure the ironic-api service to use these credentials with the Identity service. Replace ``PUBLIC_IDENTITY_IP`` with the public IP of the Identity server, ``PRIVATE_IDENTITY_IP`` with the private IP of the Identity server and replace ``IRONIC_PASSWORD`` with the password you chose for the ``ironic`` user in the Identity service: .. 
code-block:: ini

      [DEFAULT]

      # Authentication strategy used by ironic-api: one of
      # "keystone" or "noauth". "noauth" should not be used in a
      # production environment because all authentication will be
      # disabled. (string value)
      auth_strategy=keystone

      [keystone_authtoken]

      # Authentication type to load (string value)
      auth_type=password

      # Complete public Identity API endpoint (string value)
      www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

      # Complete admin Identity API endpoint. (string value)
      auth_url=http://PRIVATE_IDENTITY_IP:5000

      # Service username. (string value)
      username=ironic

      # Service account password. (string value)
      password=IRONIC_PASSWORD

      # Service tenant name. (string value)
      project_name=service

      # Domain name containing project (string value)
      project_domain_name=Default

      # User's domain name (string value)
      user_domain_name=Default

#. Create the Bare Metal service database tables:

   .. code-block:: bash

      $ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema

#. Restart the ironic-api service:

   Fedora/RHEL7/CentOS7/SUSE::

       sudo systemctl restart openstack-ironic-api

   Ubuntu::

       sudo service ironic-api restart

.. _root-device-hints:

Specifying the disk for deployment (root device hints)
------------------------------------------------------

The Bare Metal service supports passing hints to the deploy ramdisk about which disk it should pick for the deployment. The list of supported hints is:

* model (STRING): device identifier
* vendor (STRING): device vendor
* serial (STRING): disk serial number
* size (INT): size of the device in GiB

  .. note::
     A node's 'local_gb' property is often set to a value 1 GiB less than the actual disk size to account for partitioning (this is how DevStack, TripleO and Ironic Inspector work, to name a few). However, in this case ``size`` should be the actual size.
For example, for a 128 GiB disk ``local_gb`` will be 127, but size hint will be 128. * wwn (STRING): unique storage identifier * wwn_with_extension (STRING): unique storage identifier with the vendor extension appended * wwn_vendor_extension (STRING): unique vendor storage identifier * rotational (BOOLEAN): whether it's a rotational device or not. This hint makes it easier to distinguish HDDs (rotational) and SSDs (not rotational) when choosing which disk Ironic should deploy the image onto. * hctl (STRING): the SCSI address (Host, Channel, Target and Lun), e.g '1:0:0:0' * name (STRING): the device name, e.g /dev/md0 .. warning:: The root device hint name should only be used for devices with constant names (e.g RAID volumes). For SATA, SCSI and IDE disk controllers this hint is not recommended because the order in which the device nodes are added in Linux is arbitrary, resulting in devices like /dev/sda and /dev/sdb `switching around at boot time `_. To associate one or more hints with a node, update the node's properties with a ``root_device`` key, for example:: openstack baremetal node set --property root_device='{"wwn": "0x4000cca77fc4dba1"}' That will guarantee that Bare Metal service will pick the disk device that has the ``wwn`` equal to the specified wwn value, or fail the deployment if it can not be found. .. note:: Starting with the Ussuri release, root device hints can be specified per-instance, see :doc:`/install/standalone`. The hints can have an operator at the beginning of the value string. If no operator is specified the default is ``==`` (for numerical values) and ``s==`` (for string values). The supported operators are: * For numerical values: * ``=`` equal to or greater than. 
This is equivalent to ``>=`` and is supported for `legacy reasons `_

  * ``==`` equal to
  * ``!=`` not equal to
  * ``>=`` greater than or equal to
  * ``>`` greater than
  * ``<=`` less than or equal to
  * ``<`` less than

* For strings (as python comparisons):

  * ``s==`` equal to
  * ``s!=`` not equal to
  * ``s>=`` greater than or equal to
  * ``s>`` greater than
  * ``s<=`` less than or equal to
  * ``s<`` less than
  * ```` substring

* For collections:

  * ```` all elements contained in collection
  * ```` find one of these

Examples are:

* Finding a disk larger or equal to 60 GiB and non-rotational (SSD)::

    openstack baremetal node set --property root_device='{"size": ">= 60", "rotational": false}'

* Finding a disk whose vendor is ``samsung`` or ``winsys``::

    openstack baremetal node set --property root_device='{"vendor": " samsung winsys"}'

.. note::
   If multiple hints are specified, a device must satisfy all the hints.

.. _trusted-boot:

Trusted boot with partition image
---------------------------------

The Bare Metal service supports trusted boot with partition images. This means at the end of the deployment process, when the node is rebooted with the new user image, ``trusted boot`` will be performed. It will measure the node's BIOS, boot loader, Option ROM and the Kernel/Ramdisk, to determine whether a bare metal node deployed by Ironic should be trusted.

It's important to note that in order for this to work the node being deployed **must** have Intel `TXT`_ hardware support. The image being deployed with Ironic must have ``oat-client`` installed within it.

The following will describe how to enable ``trusted boot`` and boot with PXE and Nova:

#. Create a customized user image with ``oat-client`` installed::

    disk-image-create -u fedora baremetal oat-client -o $TRUST_IMG

   For more information on creating customized images, see :ref:`image-requirements`.
#. Enable VT-x, VT-d, TXT and TPM on the node. This can be done manually through the BIOS. Depending on the platform, several reboots may be needed. #. Enroll the node and update the node capability value:: openstack baremetal node create --driver ipmi openstack baremetal node set $NODE_UUID --property capabilities={'trusted_boot':true} #. Create a special flavor:: nova flavor-key $TRUST_FLAVOR_UUID set 'capabilities:trusted_boot'=true #. Prepare `tboot`_ and mboot.c32 and put them into tftp_root or http_root directory on all nodes with the ironic-conductor processes:: Ubuntu: cp /usr/lib/syslinux/mboot.c32 /tftpboot/ Fedora: cp /usr/share/syslinux/mboot.c32 /tftpboot/ *Note: The actual location of mboot.c32 varies among different distribution versions.* tboot can be downloaded from https://sourceforge.net/projects/tboot/files/latest/download #. Install an OAT Server. An `OAT Server`_ should be running and configured correctly. #. Boot an instance with Nova:: nova boot --flavor $TRUST_FLAVOR_UUID --image $TRUST_IMG --user-data $TRUST_SCRIPT trusted_instance *Note* that the node will be measured during ``trusted boot`` and the hash values saved into `TPM`_. An example of TRUST_SCRIPT can be found in `trust script example`_. #. Verify the result via OAT Server. This is outside the scope of Ironic. At the moment, users can manually verify the result by following the `manual verify steps`_. .. _`TXT`: http://en.wikipedia.org/wiki/Trusted_Execution_Technology .. _`tboot`: https://sourceforge.net/projects/tboot .. _`TPM`: http://en.wikipedia.org/wiki/Trusted_Platform_Module .. _`OAT Server`: https://github.com/OpenAttestation/OpenAttestation/wiki .. _`trust script example`: https://wiki.openstack.org/wiki/Bare-metal-trust#Trust_Script_Example .. 
_`manual verify steps`: https://wiki.openstack.org/wiki/Bare-metal-trust#Manual_verify_result

Install and configure prerequisites
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Bare Metal service is a collection of components that provides support to manage and provision physical machines. You can configure these components to run on separate nodes or the same node. In this guide, the components run on one node, typically the Compute Service's compute node.

It assumes that the Identity, Image, Compute, and Networking services have already been set up.

Set up the database for Bare Metal
----------------------------------

The Bare Metal service stores information in a database. This guide uses the MySQL database that is used by other OpenStack services.

#. In MySQL, create an ``ironic`` database that is accessible by the ``ironic`` user. Replace ``IRONIC_DBPASSWORD`` with a suitable password:

   .. code-block:: console

      # mysql -u root -p
      mysql> CREATE DATABASE ironic CHARACTER SET utf8;
      mysql> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
          IDENTIFIED BY 'IRONIC_DBPASSWORD';
      mysql> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
          IDENTIFIED BY 'IRONIC_DBPASSWORD';

Notifications
-------------

The Bare Metal service supports the emission of notifications, which are messages sent on a message broker (like RabbitMQ or anything else supported by the `oslo messaging library `_) that indicate various events which occur, such as when a node changes power states. These can be consumed by an external service reading from the message bus. For example, `Searchlight `_ is an OpenStack service that uses notifications to index (and make searchable) resources from the Bare Metal service.
Notifications are disabled by default. For a complete list of available notifications and instructions for how to enable them, see :doc:`/admin/notifications`.

Configuring ironic-api behind mod_wsgi
--------------------------------------

The Bare Metal service comes with an example file for configuring the ``ironic-api`` service to run behind Apache with mod_wsgi.

#. Install the apache service:

   RHEL7/CentOS7::

       sudo yum install httpd

   Fedora::

       sudo dnf install httpd

   Debian/Ubuntu::

       apt-get install apache2

   SUSE::

       zypper install apache2

#. Download the ``etc/apache2/ironic`` file from the `Ironic project tree `_ and copy it to the apache sites:

   Fedora/RHEL7/CentOS7::

       sudo cp etc/apache2/ironic /etc/httpd/conf.d/ironic.conf

   Debian/Ubuntu::

       sudo cp etc/apache2/ironic /etc/apache2/sites-available/ironic.conf

   SUSE::

       sudo cp etc/apache2/ironic /etc/apache2/vhosts.d/ironic.conf

#. Edit the recently copied ``/ironic.conf``:

   #. Modify the ``WSGIDaemonProcess``, ``APACHE_RUN_USER`` and ``APACHE_RUN_GROUP`` directives to set the user and group values to an appropriate user on your server.

   #. Modify the ``WSGIScriptAlias`` directive to point to the automatically generated ``ironic-api-wsgi`` script that is located in the `IRONIC_BIN` directory.

   #. Modify the ``Directory`` directive to set the path to the Ironic API code.

   #. Modify the ``ErrorLog`` and ``CustomLog`` directives to redirect the logs to the right directory (on Red Hat systems this is usually under /var/log/httpd).

#. Enable the apache ``ironic`` site and reload:

   Fedora/RHEL7/CentOS7::

       sudo systemctl reload httpd

   Debian/Ubuntu::

       sudo a2ensite ironic
       sudo service apache2 reload

   SUSE::

       sudo systemctl reload apache2

.. note::
   The file ``ironic-api-wsgi`` is automatically generated by pbr and is available in the `IRONIC_BIN` directory. It should not be modified.
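The shipped ``etc/apache2/ironic`` file is authoritative, but as a rough sketch of what such a vhost typically contains (the user, group, paths and log locations below are illustrative assumptions to be adjusted for your distribution):

```apache
Listen 6385

<VirtualHost *:6385>
    # Run the pbr-generated WSGI script as a dedicated daemon process.
    WSGIDaemonProcess ironic user=ironic group=ironic threads=10 display-name=%{GROUP}
    WSGIScriptAlias / /usr/bin/ironic-api-wsgi
    WSGIProcessGroup ironic

    <Directory /usr/bin>
        Require all granted
    </Directory>

    # On Red Hat systems logs usually live under /var/log/httpd.
    ErrorLog /var/log/httpd/ironic_error.log
    CustomLog /var/log/httpd/ironic_access.log combined
</VirtualHost>
```

The directives shown here (``WSGIDaemonProcess``, ``WSGIScriptAlias``, ``Directory``, ``ErrorLog``, ``CustomLog``) are the same ones the editing steps above ask you to adjust.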
Configure another WSGI container
--------------------------------

A slightly different approach has to be used for WSGI containers that cannot use ``ironic-api-wsgi``. For example, for *gunicorn*:

.. code-block:: console

   gunicorn -b 0.0.0.0:6385 'ironic.api.wsgi:initialize_wsgi_app(argv=[])'

If you want to pass a configuration file, use:

.. code-block:: console

   gunicorn -b 0.0.0.0:6385 \
       'ironic.api.wsgi:initialize_wsgi_app(argv=["ironic-api", "--config-file=/path/to/_ironic.conf"])'

.. _local-boot-partition-images:

Local boot with partition images
--------------------------------

The Bare Metal service supports local boot with partition images, meaning that after the deployment the node's subsequent reboots won't happen via PXE or Virtual Media. Instead, it will boot from a local boot loader installed on the disk.

.. note::
   Whole disk images, on the contrary, support only local boot, and use it by default.

It's important to note that in order for this to work the image being deployed with the Bare Metal service **must** contain ``grub2`` installed within it.

Enabling local boot is different when the Bare Metal service is used with the Compute service and without it. The following sections will describe both methods.

.. _ironic-python-agent: https://docs.openstack.org/ironic-python-agent/latest/

Enabling local boot with Compute service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable local boot we need to set a capability on the bare metal node, for example::

    openstack baremetal node set --property capabilities="boot_option:local"

Nodes having ``boot_option`` set to ``local`` may be requested by adding an ``extra_spec`` to the Compute service flavor, for example::

    nova flavor-key baremetal set capabilities:boot_option="local"

..
note:: If the node is configured to use ``UEFI``, the Bare Metal service will create an ``EFI partition`` on the disk and switch the partition table format to ``gpt``. The ``EFI partition`` will be used later by the boot loader (which is installed from the deploy ramdisk).

.. _local-boot-without-compute:

Enabling local boot without Compute
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since adding ``capabilities`` to the node's properties is only used by the nova scheduler to perform more advanced scheduling of instances, we need a way to enable local boot when Compute is not present. To do that we can simply specify the capability via the ``instance_info`` attribute of the node, for example::

    openstack baremetal node set --instance-info capabilities='{"boot_option": "local"}'

.. _install-rdo:

=============================================================
Install and configure for Red Hat Enterprise Linux and CentOS
=============================================================

This section describes how to install and configure the Bare Metal service for Red Hat Enterprise Linux 7 and CentOS 7.

.. include:: include/common-prerequisites.inc

Install and configure components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Install from packages

   - Using ``dnf``

     .. code-block:: console

        # dnf install openstack-ironic-api openstack-ironic-conductor python-ironicclient

   - Using ``yum``

     .. code-block:: console

        # yum install openstack-ironic-api openstack-ironic-conductor python-ironicclient

#. Enable services

   .. code-block:: console

      # systemctl enable openstack-ironic-api openstack-ironic-conductor
      # systemctl start openstack-ironic-api openstack-ironic-conductor

.. include:: include/common-configure.inc

.. include:: include/configure-ironic-api.inc

.. include:: include/configure-ironic-api-mod_wsgi.inc

..
include:: include/configure-ironic-conductor.inc

Configuring IPMI support
------------------------

Installing ipmitool command
~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable one of the drivers that use the IPMI_ protocol for power and management actions (for example, ``ipmi``), the ``ipmitool`` command must be present on the service node(s) where ``ironic-conductor`` is running. On most distros, it is provided as part of the ``ipmitool`` package. Source code is available at http://ipmitool.sourceforge.net/.

.. warning::
   Certain distros, notably Mac OS X and SLES, install ``openipmi`` instead of ``ipmitool`` by default. This driver is not compatible with ``openipmi`` as it relies on error handling options not provided by this tool.

Please refer to :doc:`/admin/drivers/ipmitool` for information on how to use IPMItool-based drivers.

Validation and troubleshooting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that you can connect to, and authenticate with, the IPMI controller in your bare metal server by running ``ipmitool``::

    ipmitool -I lanplus -H -U -P chassis power status

where ```` is the IP of the IPMI controller you want to access. This is not the bare metal node's main IP. The IPMI controller should have its own unique IP.

If the above command doesn't return the power status of the bare metal server, check that:

- ``ipmitool`` is installed and is available via the ``$PATH`` environment variable.
- The IPMI controller on your bare metal server is turned on.
- The IPMI controller credentials and IP address passed in the command are correct.
- The conductor node has a route to the IPMI controller. This can be checked by just pinging the IPMI controller IP from the conductor node.
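The connectivity check above can also be scripted. The helper below is illustrative only (not part of ironic); it merely assembles the ``ipmitool`` invocation so it can be executed with ``subprocess`` or printed for copy-paste, and the BMC address and credentials shown are placeholders:

```python
import shlex

def ipmitool_power_status_cmd(bmc_ip, username, password):
    """Build the ipmitool command used to verify BMC connectivity.

    bmc_ip is the IPMI controller's own IP, not the bare metal
    node's main IP.
    """
    return ["ipmitool", "-I", "lanplus",
            "-H", bmc_ip, "-U", username, "-P", password,
            "chassis", "power", "status"]

# A copy-pasteable shell line (address and credentials are placeholders):
cmd = ipmitool_power_status_cmd("192.0.2.10", "admin", "secret")
print(shlex.join(cmd))
```

Running the resulting command from the conductor node exercises the same path ironic's power interface will use, so a failure here usually points at credentials, BMC state or routing.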
IPMI configuration
~~~~~~~~~~~~~~~~~~

If there are slow or unresponsive BMCs in the environment, the ``min_command_interval`` configuration option in the ``[ipmi]`` section may need to be raised. The default is fairly conservative, as setting this timeout too low can cause older BMCs to crash and require a hard reset.

.. _ipmi-sensor-data:

Collecting sensor data
~~~~~~~~~~~~~~~~~~~~~~

The Bare Metal service supports sending IPMI sensor data to Telemetry with certain hardware types, such as ``ipmi``, ``ilo`` and ``irmc``. By default, support for sending IPMI sensor data to Telemetry is disabled. If you want to enable it, make the following two changes in ``ironic.conf``:

.. code-block:: ini

   [conductor]
   send_sensor_data = true

   [oslo_messaging_notifications]
   driver = messagingv2

If you want to customize the sensor types which will be sent to Telemetry, change the ``send_sensor_data_types`` option. For example, the settings below will send information about temperature, fan and voltage from sensors to the Telemetry service:

.. code-block:: ini

   send_sensor_data_types=Temperature,Fan,Voltage

Supported sensor types are defined by the Telemetry service; currently these are ``Temperature``, ``Fan``, ``Voltage`` and ``Current``. The special value ``All`` (the default) designates all supported sensor types.

.. _IPMI: https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface

Configuring PXE and iPXE
========================

DHCP server setup
-----------------

A DHCP server is required by the PXE/iPXE client. You need to follow the steps below.

#. Set ``[dhcp]/dhcp_provider`` to ``neutron`` in the Bare Metal service's configuration file (``/etc/ironic/ironic.conf``):

   .. note::
      Refer to :doc:`/install/configure-tenant-networks` for details.
The ``dhcp_provider`` configuration is already set by the configuration defaults, and when you create a subnet, DHCP is also enabled unless you pass DHCP-disabling options to the "openstack subnet create" command. #. Enable DHCP in the subnet of the PXE network. #. Set the IP address range in the subnet for DHCP. .. note:: Refer to :doc:`/install/configure-networking` for details about the two preceding steps. #. Connect the OpenStack DHCP agent to the external network through the OVS bridges and the interface ``eth2``. .. note:: Refer to :doc:`/install/configure-networking` for details. You can skip this step if br-int, br-eth2 and eth2 are already connected. #. Configure the host IP on ``br-eth2``. If it is currently assigned to ``eth2``, move it as follows:: ip addr del 192.168.2.10/24 dev eth2 ip addr add 192.168.2.10/24 dev br-eth2 .. note:: Replace eth2 with the interface on the network node which you are using to connect to the Bare Metal service. TFTP server setup ----------------- In order to deploy instances via PXE, a TFTP server needs to be set up on the Bare Metal service nodes which run the ``ironic-conductor``. #. Make sure the tftp root directory exists and is writable by the user the ``ironic-conductor`` is running as. For example:: sudo mkdir -p /tftpboot sudo chown -R ironic /tftpboot #. Install the tftp server: Ubuntu:: sudo apt-get install xinetd tftpd-hpa RHEL7/CentOS7:: sudo yum install tftp-server xinetd Fedora:: sudo dnf install tftp-server xinetd SUSE:: sudo zypper install tftp xinetd #. Use xinetd to serve ``/tftpboot``. Create or edit ``/etc/xinetd.d/tftp`` as below:: service tftp { protocol = udp port = 69 socket_type = dgram wait = yes user = root server = /usr/sbin/in.tftpd server_args = -v -v -v -v -v --map-file /tftpboot/map-file /tftpboot disable = no # This is a workaround for Fedora, where TFTP will listen only on # IPv6 endpoint, if IPv4 flag is not used.
flags = IPv4 } and restart the ``xinetd`` service: Ubuntu:: sudo service xinetd restart Fedora/RHEL7/CentOS7/SUSE:: sudo systemctl restart xinetd .. note:: In certain environments the network's MTU may cause TFTP UDP packets to get fragmented. Certain PXE firmwares struggle to reconstruct the fragmented packets, which can cause a significant slowdown or even prevent the server from PXE booting. In order to avoid this, TFTPd provides an option to limit the packet size so that the packets do not get fragmented. Append this additional option to the ``server_args`` above:: --blocksize #. Create a map file in the tftp boot directory (``/tftpboot``):: echo 're ^(/tftpboot/) /tftpboot/\2' > /tftpboot/map-file echo 're ^/tftpboot/ /tftpboot/' >> /tftpboot/map-file echo 're ^(^/) /tftpboot/\1' >> /tftpboot/map-file echo 're ^([^/]) /tftpboot/\1' >> /tftpboot/map-file UEFI PXE - Grub setup --------------------- In order to deploy instances with PXE on bare metal nodes which support UEFI, perform these additional steps on the ironic conductor node to configure the PXE UEFI environment. #. Install Grub2 and shim packages: Ubuntu (16.04LTS and later):: sudo apt-get install grub-efi-amd64-signed shim-signed RHEL7/CentOS7:: sudo yum install grub2-efi shim Fedora:: sudo dnf install grub2-efi shim SUSE:: sudo zypper install grub2-x86_64-efi shim #. Copy grub and shim boot loader images to the ``/tftpboot`` directory: Ubuntu (16.04LTS and later):: sudo cp /usr/lib/shim/shim.efi.signed /tftpboot/bootx64.efi sudo cp /usr/lib/grub/x86_64-efi-signed/grubnetx64.efi.signed /tftpboot/grubx64.efi Fedora:: sudo cp /boot/efi/EFI/fedora/shim.efi /tftpboot/bootx64.efi sudo cp /boot/efi/EFI/fedora/grubx64.efi /tftpboot/grubx64.efi RHEL7/CentOS7:: sudo cp /boot/efi/EFI/centos/shim.efi /tftpboot/bootx64.efi sudo cp /boot/efi/EFI/centos/grubx64.efi /tftpboot/grubx64.efi SUSE:: sudo cp /usr/lib64/efi/shim.efi /tftpboot/bootx64.efi sudo cp /usr/lib/grub2/x86_64-efi/grub.efi /tftpboot/grubx64.efi #.
Create master grub.cfg: Ubuntu: Create grub.cfg under the ``/tftpboot/grub`` directory:: GRUB_DIR=/tftpboot/grub Fedora: Create grub.cfg under the ``/tftpboot/EFI/fedora`` directory:: GRUB_DIR=/tftpboot/EFI/fedora RHEL7/CentOS7: Create grub.cfg under the ``/tftpboot/EFI/centos`` directory:: GRUB_DIR=/tftpboot/EFI/centos SUSE: Create grub.cfg under the ``/tftpboot/boot/grub`` directory:: GRUB_DIR=/tftpboot/boot/grub Create the directory ``GRUB_DIR``:: sudo mkdir -p $GRUB_DIR This master grub.cfg redirects grub to the node-specific grub configuration file, selected according to the DHCP IP address assigned to the bare metal node. .. literalinclude:: ../../../ironic/drivers/modules/master_grub_cfg.txt Change the permission of grub.cfg:: sudo chmod 644 $GRUB_DIR/grub.cfg #. Update the bare metal node with the ``boot_mode:uefi`` capability in the node's ``properties`` field. See :ref:`boot_mode_support` for details. #. Make sure that the bare metal node is configured to boot in UEFI boot mode and that the boot device is set to network/pxe. .. note:: Some drivers, e.g. ``ilo``, ``irmc`` and ``redfish``, support automatic setting of the boot mode during deployment. This step is not required for them. Please check :doc:`../admin/drivers` for information on whether your driver requires manual UEFI configuration. Legacy BIOS - Syslinux setup ---------------------------- In order to deploy instances with PXE on bare metal using Legacy BIOS boot mode, perform these additional steps on the ironic conductor node. #. Install the syslinux package with the PXE boot images: Ubuntu (16.04LTS and later):: sudo apt-get install syslinux-common pxelinux RHEL7/CentOS7:: sudo yum install syslinux-tftpboot Fedora:: sudo dnf install syslinux-tftpboot SUSE:: sudo zypper install syslinux #. Copy the PXE image to ``/tftpboot``. The PXE image might be found at [1]_: Ubuntu (16.04LTS and later):: sudo cp /usr/lib/PXELINUX/pxelinux.0 /tftpboot RHEL7/CentOS7/SUSE:: sudo cp /usr/share/syslinux/pxelinux.0 /tftpboot #.
If whole disk images need to be deployed via PXE-netboot, copy the chain.c32 image to ``/tftpboot`` to support it: Ubuntu (16.04LTS and later):: sudo cp /usr/lib/syslinux/modules/bios/chain.c32 /tftpboot Fedora:: sudo cp /boot/extlinux/chain.c32 /tftpboot RHEL7/CentOS7/SUSE:: sudo cp /usr/share/syslinux/chain.c32 /tftpboot/ #. If the version of syslinux is **greater than** 4, we also need to make sure that we copy the library modules into the ``/tftpboot`` directory [2]_ [1]_. For example, for Ubuntu run:: sudo cp /usr/lib/syslinux/modules/*/ldlinux.* /tftpboot #. Update the bare metal node with the ``boot_mode:bios`` capability in the node's ``properties`` field. See :ref:`boot_mode_support` for details. #. Make sure that the bare metal node is configured to boot in Legacy BIOS boot mode and that the boot device is set to network/pxe. .. [1] On **Fedora/RHEL** the ``syslinux-tftpboot`` package already installs the library modules and PXE image at ``/tftpboot``. If the TFTP server is configured to listen to a different directory, you should copy the contents of ``/tftpboot`` to the configured directory. .. [2] http://www.syslinux.org/wiki/index.php/Library_modules iPXE setup ---------- If you will be using iPXE to boot instead of PXE, iPXE needs to be set up on the Bare Metal service node(s) where ``ironic-conductor`` is running. #. Make sure these directories exist and are writable by the user the ``ironic-conductor`` is running as. For example:: sudo mkdir -p /tftpboot sudo mkdir -p /httpboot sudo chown -R ironic /tftpboot sudo chown -R ironic /httpboot #. Create a map file in the tftp boot directory (``/tftpboot``):: echo 'r ^([^/]) /tftpboot/\1' > /tftpboot/map-file echo 'r ^(/tftpboot/) /tftpboot/\2' >> /tftpboot/map-file .. _HTTP server: #. Set up TFTP and HTTP servers. These servers should be running and configured to use the local /tftpboot and /httpboot directories respectively, as their root directories. (Setting up these servers is outside the scope of this install guide.)
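Although full server setup is out of scope, a quick smoke test can confirm that an HTTP root is reachable before pointing ironic at it. The sketch below is not part of the Bare Metal service; the port, temporary directory and file name are placeholder examples, and it assumes ``python3`` is available (a production deployment would serve ``/httpboot`` with Apache, nginx, or similar):

```shell
#!/bin/sh
# Smoke-test an HTTP root directory -- a throwaway check, not a production
# HTTP server setup. Port 8099 and the file name are arbitrary examples.
HTTP_ROOT=$(mktemp -d)
echo "ok" > "$HTTP_ROOT/health"

# python3's built-in server is enough to verify connectivity and paths.
python3 -m http.server 8099 --directory "$HTTP_ROOT" >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Fetch the file back (curl http://127.0.0.1:8099/health works equally well).
RESPONSE=$(python3 -c 'import urllib.request as u; print(u.urlopen("http://127.0.0.1:8099/health").read().decode().strip())' || true)
kill "$SERVER_PID" 2>/dev/null || true
echo "$RESPONSE"
```

Once your production HTTP server is serving ``/httpboot``, the same fetch against a known file under that directory verifies the configuration.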
These root directories need to be mounted locally to the ``ironic-conductor`` services, so that the services can access them. The Bare Metal service's configuration file (/etc/ironic/ironic.conf) should be edited accordingly to specify the TFTP and HTTP root directories and server addresses. For example: .. code-block:: ini [pxe] # Ironic compute node's tftp root path. (string value) tftp_root=/tftpboot # IP address of Ironic compute node's tftp server. (string # value) tftp_server=192.168.0.2 [deploy] # Ironic compute node's http root path. (string value) http_root=/httpboot # Ironic compute node's HTTP server URL. Example: # http://192.1.2.3:8080 (string value) http_url=http://192.168.0.2:8080 #. Install the iPXE package with the boot images: Ubuntu:: apt-get install ipxe RHEL7/CentOS7:: yum install ipxe-bootimgs Fedora:: dnf install ipxe-bootimgs .. note:: SUSE does not provide a package containing iPXE boot images. If you are using SUSE or if the packaged version of the iPXE boot image doesn't work, you can download a prebuilt one from http://boot.ipxe.org or build one image from source, see http://ipxe.org/download for more information. #. Copy the iPXE boot image (``undionly.kpxe`` for **BIOS** and ``ipxe.efi`` for **UEFI**) to ``/tftpboot``. The binary might be found at: Ubuntu:: cp /usr/lib/ipxe/{undionly.kpxe,ipxe.efi} /tftpboot Fedora/RHEL7/CentOS7:: cp /usr/share/ipxe/{undionly.kpxe,ipxe.efi} /tftpboot #. Enable/Configure iPXE in the Bare Metal Service's configuration file (/etc/ironic/ironic.conf): .. code-block:: ini [pxe] # Enable iPXE boot. (boolean value) ipxe_enabled=True # Neutron bootfile DHCP parameter. (string value) pxe_bootfile_name=undionly.kpxe # Bootfile DHCP parameter for UEFI boot mode. (string value) uefi_pxe_bootfile_name=ipxe.efi # Template file for PXE configuration. (string value) pxe_config_template=$pybasedir/drivers/modules/ipxe_config.template # Template file for PXE configuration for UEFI boot loader. 
# (string value) uefi_pxe_config_template=$pybasedir/drivers/modules/ipxe_config.template .. note:: The ``[pxe]ipxe_enabled`` option has been deprecated and will be removed in the T* development cycle. Users should instead consider use of the ``ipxe`` boot interface. The same default use of iPXE functionality can be achieved by setting the ``[DEFAULT]default_boot_interface`` option to ``ipxe``. #. It is possible to configure the Bare Metal service in such a way that nodes will boot into the deploy image directly from Object Storage. Doing this avoids having to cache the images on the ironic-conductor host and serving them via the ironic-conductor's `HTTP server`_. This can be done if: #. the Image Service is used for image storage; #. the images in the Image Service are internally stored in Object Storage; #. the Object Storage supports generating temporary URLs for accessing objects stored in it. Both the OpenStack Swift and RADOS Gateway provide support for this. * See :doc:`/admin/radosgw` on how to configure the Bare Metal Service with RADOS Gateway as the Object Storage. Configure this by setting the ``[pxe]/ipxe_use_swift`` configuration option to ``True`` as follows: .. code-block:: ini [pxe] # Download deploy images directly from swift using temporary # URLs. If set to false (default), images are downloaded to # the ironic-conductor node and served over its local HTTP # server. Applicable only when 'ipxe_enabled' option is set to # true. (boolean value) ipxe_use_swift=True Although the `HTTP server`_ still has to be deployed and configured (as it will serve iPXE boot script and boot configuration files for nodes), such configuration will shift some load from ironic-conductor hosts to the Object Storage service which can be scaled horizontally. Note that when SSL is enabled on the Object Storage service you have to ensure that iPXE firmware on the nodes can indeed boot from generated temporary URLs that use HTTPS protocol. #. 
Restart the ``ironic-conductor`` process: Fedora/RHEL7/CentOS7/SUSE:: sudo systemctl restart openstack-ironic-conductor Ubuntu:: sudo service ironic-conductor restart PXE multi-architecture setup ---------------------------- It is possible to deploy servers of different architecture by one conductor. To use this feature, architecture-specific boot and template files must be configured using the configuration options ``[pxe]pxe_bootfile_name_by_arch`` and ``[pxe]pxe_config_template_by_arch`` respectively, in the Bare Metal service's configuration file (/etc/ironic/ironic.conf). These two options are dictionary values; the key is the architecture and the value is the boot (or config template) file. A node's ``cpu_arch`` property is used as the key to get the appropriate boot file and template file. If the node's ``cpu_arch`` is not in the dictionary, the configuration options (in [pxe] group) ``pxe_bootfile_name``, ``pxe_config_template``, ``uefi_pxe_bootfile_name`` and ``uefi_pxe_config_template`` will be used instead. In the following example, since 'x86' and 'x86_64' keys are not in the ``pxe_bootfile_name_by_arch`` or ``pxe_config_template_by_arch`` options, x86 and x86_64 nodes will be deployed by 'pxelinux.0' or 'bootx64.efi', depending on the node's ``boot_mode`` capability ('bios' or 'uefi'). However, aarch64 nodes will be deployed by 'grubaa64.efi', and ppc64 nodes by 'bootppc64':: [pxe] # Bootfile DHCP parameter. (string value) pxe_bootfile_name=pxelinux.0 # On ironic-conductor node, template file for PXE # configuration. (string value) pxe_config_template = $pybasedir/drivers/modules/pxe_config.template # Bootfile DHCP parameter for UEFI boot mode. (string value) uefi_pxe_bootfile_name=bootx64.efi # On ironic-conductor node, template file for PXE # configuration for UEFI boot loader. (string value) uefi_pxe_config_template=$pybasedir/drivers/modules/pxe_grub_config.template # Bootfile DHCP parameter per node architecture. 
(dict value) pxe_bootfile_name_by_arch=aarch64:grubaa64.efi,ppc64:bootppc64 # On ironic-conductor node, template file for PXE # configuration per node architecture. For example: # aarch64:/opt/share/grubaa64_pxe_config.template (dict value) pxe_config_template_by_arch=aarch64:pxe_grubaa64_config.template,ppc64:pxe_ppc64_config.template .. note:: The grub implementation may vary across architectures, so you may need to tweak the PXE config template for a specific architecture. For example, the grubaa64.efi shipped with CentOS7 does not support the ``linuxefi`` and ``initrdefi`` commands; you'll need to use the ``linux`` and ``initrd`` commands instead. PXE timeouts tuning ------------------- Because of its reliance on UDP-based protocols (DHCP and TFTP), PXE is particularly vulnerable to random failures during the booting stage. If the deployment ramdisk never calls back to the bare metal conductor, the build will be aborted, and the node will be moved to the ``deploy failed`` state, after the deploy callback timeout. This timeout can be changed via the :oslo.config:option:`conductor.deploy_callback_timeout` configuration option. Starting with the Train release, the Bare Metal service can retry PXE boot if it takes too long. The timeout is defined via :oslo.config:option:`pxe.boot_retry_timeout` and must be smaller than the ``deploy_callback_timeout``, otherwise it will have no effect. For example, the following configuration sets the overall timeout to 60 minutes, allowing up to two boot retries (one every 20 minutes): .. code-block:: ini [conductor] deploy_callback_timeout = 3600 [pxe] boot_retry_timeout = 1200 ironic-15.0.0/doc/source/install/next-steps.rst0000664000175000017500000000016313652514273021471 0ustar zuulzuul00000000000000.. _next-steps: ========== Next steps ========== Your OpenStack environment now includes the Bare Metal service. ironic-15.0.0/doc/source/install/install.rst0000664000175000017500000000054313652514273021027 0ustar zuulzuul00000000000000..
_install: Install and configure the Bare Metal service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Bare Metal service, code-named ironic. Note that installation and configuration vary by distribution. .. toctree:: :maxdepth: 2 install-rdo.rst install-ubuntu.rst install-obs.rst ironic-15.0.0/doc/source/install/configure-integration.rst0000664000175000017500000000062513652514273023664 0ustar zuulzuul00000000000000========================================= Integration with other OpenStack services ========================================= .. toctree:: :maxdepth: 1 configure-identity configure-compute configure-networking configure-ipv6-networking configure-glance-swift enabling-https configure-cleaning configure-tenant-networks.rst configure-glance-images configure-nova-flavors ironic-15.0.0/doc/source/install/advanced.rst0000664000175000017500000000060013652514273021120 0ustar zuulzuul00000000000000.. _advanced: Advanced features ================= .. include:: include/local-boot-partition-images.inc .. include:: include/root-device-hints.inc .. include:: include/kernel-boot-parameters.inc .. include:: include/boot-mode.inc .. include:: include/disk-label.inc .. include:: include/trusted-boot.inc .. include:: include/notifications.inc .. include:: include/console.inc ironic-15.0.0/doc/source/install/get_started.rst0000664000175000017500000001414113652514273021665 0ustar zuulzuul00000000000000=========================== Bare Metal service overview =========================== The Bare Metal service, codenamed ``ironic``, is a collection of components that provides support to manage and provision physical machines. Bare Metal service components ----------------------------- The Bare Metal service includes the following components: ironic-api A RESTful API that processes application requests by sending them to the ironic-conductor over `remote procedure call (RPC)`_. Can be run through WSGI_ or as a separate process. 
ironic-conductor Adds/edits/deletes nodes; powers on/off nodes with IPMI or another vendor-specific protocol; provisions/deploys/cleans bare metal nodes. ironic-conductor uses :doc:`drivers ` to execute operations on hardware. ironic-python-agent A python service which is run in a temporary ramdisk to provide ironic-conductor and ironic-inspector services with remote access, in-band hardware control, and hardware introspection. Additionally, the Bare Metal service has certain external dependencies, which are very similar to other OpenStack services: - A database to store hardware information and state. You can set the database back-end type and location. A simple approach is to use the same database back end as the Compute service. Another approach is to use a separate database back-end to further isolate bare metal resources (and associated metadata) from users. - An :oslo.messaging-doc:`oslo.messaging <>` compatible queue, such as RabbitMQ. It may use the same implementation as that of the Compute service, but that is not a requirement. Used to implement RPC between ironic-api and ironic-conductor. Deployment architecture ----------------------- The Bare Metal RESTful API service is used to enroll hardware that the Bare Metal service will manage. A cloud administrator usually registers the hardware, specifying attributes such as MAC addresses and IPMI credentials. There can be multiple instances of the API service. The *ironic-conductor* process does the bulk of the work. For security reasons, it is advisable to place it on an isolated host, since it is the only service that requires access to both the data plane and IPMI control plane. There can be multiple instances of the conductor service to support various classes of drivers and also to manage failover. Instances of the conductor service should be on separate nodes. Each conductor can itself run many drivers to operate heterogeneous hardware. This is depicted in the following figure. ..
figure:: ../images/deployment_architecture_2.png :alt: Deployment Architecture The API exposes a list of supported drivers and the names of conductor hosts servicing them. Interaction with OpenStack components ------------------------------------- The Bare Metal service may, depending upon configuration, interact with several other OpenStack services. This includes: - the OpenStack Telemetry module (``ceilometer``) for consuming the IPMI metrics - the OpenStack Identity service (``keystone``) for request authentication and to locate other OpenStack services - the OpenStack Image service (``glance``) from which to retrieve images and image meta-data - the OpenStack Networking service (``neutron``) for DHCP and network configuration - the OpenStack Compute service (``nova``) works with the Bare Metal service and acts as a user-facing API for instance management, while the Bare Metal service provides the admin/operator API for hardware management. The OpenStack Compute service also provides scheduling facilities (matching flavors <-> images <-> hardware), tenant quotas, IP assignment, and other services which the Bare Metal service does not, in and of itself, provide. - the OpenStack Object Storage (``swift``) provides temporary storage for the configdrive, user images, deployment logs and inspection data. Logical architecture -------------------- The diagram below shows the logical architecture. It shows the basic components that form the Bare Metal service, the relation of the Bare Metal service with other OpenStack services and the logical flow of a boot instance request resulting in the provisioning of a physical server. .. figure:: ../images/logical_architecture.png :alt: Logical Architecture A user's request to boot an instance is passed to the Compute service via the Compute API and the Compute Scheduler. 
The Compute service uses the *ironic virt driver* to hand over this request to the Bare Metal service, where the request passes from the Bare Metal API, to the Conductor, to a Driver to successfully provision a physical server for the user. Just as the Compute service talks to various OpenStack services like Image, Network, Object Store etc to provision a virtual machine instance, here the Bare Metal service talks to the same OpenStack services for image, network and other resource needs to provision a bare metal instance. See :ref:`understanding-deployment` for a more detailed breakdown of a typical deployment process. Associated projects ------------------- Optionally, one may wish to utilize the following associated projects for additional functionality: :python-ironicclient-doc:`python-ironicclient <>` A command-line interface (CLI) and python bindings for interacting with the Bare Metal service. :ironic-ui-doc:`ironic-ui <>` Horizon dashboard, providing graphical interface (GUI) for the Bare Metal API. :ironic-inspector-doc:`ironic-inspector <>` An associated service which performs in-band hardware introspection by PXE booting unregistered hardware into the ironic-python-agent ramdisk. diskimage-builder_ A related project to help facilitate the creation of ramdisks and machine images, such as those running the ironic-python-agent. :bifrost-doc:`bifrost <>` A set of Ansible playbooks that automates the task of deploying a base image onto a set of known hardware using ironic in a standalone mode. .. _remote procedure call (RPC): https://en.wikipedia.org/wiki/Remote_procedure_call .. _WSGI: https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface .. 
_diskimage-builder: https://docs.openstack.org/diskimage-builder/latest/ ironic-15.0.0/doc/source/install/configure-identity.rst0000664000175000017500000000672713652514273023203 0ustar zuulzuul00000000000000Configure the Identity service for the Bare Metal service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Create the Bare Metal service user (for example, ``ironic``). The service uses this to authenticate with the Identity service. Use the ``service`` tenant and give the user the ``admin`` role: .. code-block:: console $ openstack user create --password IRONIC_PASSWORD \ --email ironic@example.com ironic $ openstack role add --project service --user ironic admin #. You must register the Bare Metal service with the Identity service so that other OpenStack services can locate it. To register the service: .. code-block:: console $ openstack service create --name ironic --description \ "Ironic baremetal provisioning service" baremetal #. Use the ``id`` property that is returned from the Identity service when registering the service (above), to create the endpoint, and replace ``IRONIC_NODE`` with your Bare Metal service's API node: .. code-block:: console $ openstack endpoint create --region RegionOne \ baremetal admin http://$IRONIC_NODE:6385 $ openstack endpoint create --region RegionOne \ baremetal public http://$IRONIC_NODE:6385 $ openstack endpoint create --region RegionOne \ baremetal internal http://$IRONIC_NODE:6385 #. You may delegate limited privileges related to the Bare Metal service to your Users by creating Roles with the OpenStack Identity service. By default, the Bare Metal service expects the "baremetal_admin" and "baremetal_observer" Roles to exist, in addition to the default "admin" Role. There is no negative consequence if you choose not to create these Roles. They can be created with the following commands: .. 
code-block:: console $ openstack role create baremetal_admin $ openstack role create baremetal_observer If you choose to customize the names of Roles used with the Bare Metal service, do so by changing the "is_member", "is_observer", and "is_admin" policy settings in ``/etc/ironic/policy.json``. More complete documentation on managing Users and Roles within your OpenStack deployment is outside the scope of this document, but may be found :keystone-doc:`here `. #. You can further restrict access to the Bare Metal service by creating a separate "baremetal" Project, so that Bare Metal resources (Nodes, Ports, etc) are only accessible to members of this Project: .. code-block:: console $ openstack project create baremetal At this point, you may grant read-only access to the Bare Metal service API without granting any other access by issuing the following commands: .. code-block:: console $ openstack user create \ --domain default --project-domain default --project baremetal \ --password PASSWORD USERNAME $ openstack role add \ --user-domain default --project-domain default --project baremetal \ --user USERNAME baremetal_observer #. Further documentation is available elsewhere for the ``openstack`` :python-openstackclient-doc:`command-line client ` and the :keystone-doc:`Identity ` service. A :doc:`policy.json.sample ` file, which enumerates the service's default policies, is provided for your convenience with the Bare Metal Service. ironic-15.0.0/doc/source/install/index.rst0000664000175000017500000000127513652514273020473 0ustar zuulzuul00000000000000===================================== Bare Metal Service Installation Guide ===================================== The Bare Metal service is a collection of components that provides support to manage and provision physical machines. This chapter assumes a working setup of OpenStack following the `OpenStack Installation Guides `_. It contains the following sections: ..
toctree:: :maxdepth: 2 get_started.rst refarch/index install.rst creating-images.rst deploy-ramdisk.rst configure-integration.rst setup-drivers.rst enrollment.rst standalone.rst configdrive.rst advanced.rst troubleshooting.rst next-steps.rst ironic-15.0.0/doc/source/install/enrollment.rst0000664000175000017500000010135413652514273021542 0ustar zuulzuul00000000000000.. _enrollment: Enrollment ========== After all the services have been properly configured, you should enroll your hardware with the Bare Metal service, and confirm that the Compute service sees the available hardware. The nodes will be visible to the Compute service once they are in the ``available`` provision state. .. note:: After enrolling nodes with the Bare Metal service, the Compute service will not be immediately notified of the new resources. The Compute service's resource tracker syncs periodically, and so any changes made directly to the Bare Metal service's resources will become visible in the Compute service only after the next run of that periodic task. More information is in the :ref:`troubleshooting-install` section. .. note:: Any bare metal node that is visible to the Compute service may have a workload scheduled to it, if both the ``power`` and ``management`` interfaces pass the ``validate`` check. If you wish to exclude a node from the Compute service's scheduler, for instance so that you can perform maintenance on it, you can set the node to "maintenance" mode. For more information see the :ref:`maintenance_mode` section. Choosing a driver ----------------- When enrolling a node, the most important information to supply is *driver*. See :doc:`enabling-drivers` for a detailed explanation of bare metal drivers, hardware types and interfaces. The ``driver list`` command can be used to list all drivers enabled on all hosts: .. 
code-block:: console openstack baremetal driver list +---------------------+-----------------------+ | Supported driver(s) | Active host(s) | +---------------------+-----------------------+ | ipmi | localhost.localdomain | +---------------------+-----------------------+ The specific driver to use should be picked based on actual hardware capabilities and expected features. See :doc:`/admin/drivers` for more hints on that. Each driver has a list of *driver properties* that need to be specified via the node's ``driver_info`` field, in order for the driver to operate on node. This list consists of the properties of the hardware interfaces that the driver uses. These driver properties are available with the ``driver property list`` command: .. code-block:: console $ openstack baremetal driver property list ipmi +----------------------+-------------------------------------------------------------------------------------------------------------+ | Property | Description | +----------------------+-------------------------------------------------------------------------------------------------------------+ | ipmi_address | IP address or hostname of the node. Required. | | ipmi_password | password. Optional. | | ipmi_username | username; default is NULL user. Optional. | | ... | ... | | deploy_kernel | UUID (from Glance) of the deployment kernel. Required. | | deploy_ramdisk | UUID (from Glance) of the ramdisk that is mounted at boot time. Required. | +----------------------+-------------------------------------------------------------------------------------------------------------+ The properties marked as required must be supplied either during node creation or shortly after. Some properties may only be required for certain features. Note on API versions -------------------- Starting with API version 1.11, the Bare Metal service added a new initial provision state of ``enroll`` to its state machine. 
When this or a later API version is used, new nodes get this state instead of ``available``. Existing automation tooling that uses an API version lower than 1.11 is not affected, since the initial provision state is still ``available``. However, using API version 1.11 or above may break existing automation tooling with respect to node creation. The default API version used by (the most recent) python-ironicclient is 1.9, but it may change in the future and should not be relied on. In the examples below we will use version 1.11 of the Bare Metal API. This gives us the following advantages: * Explicit power credentials validation before leaving the ``enroll`` state. * Running node cleaning before entering the ``available`` state. * Not exposing half-configured nodes to the scheduler. To set the API version for all commands, you can set the environment variable ``IRONIC_API_VERSION``. For the OpenStackClient baremetal plugin, set the ``OS_BAREMETAL_API_VERSION`` variable to the same value. For example: .. code-block:: console $ export IRONIC_API_VERSION=1.11 $ export OS_BAREMETAL_API_VERSION=1.11 Enrollment process ------------------ Creating a node ~~~~~~~~~~~~~~~ This section describes the main steps to enroll a node and make it available for provisioning. Some steps are shown separately for illustration purposes, and may be combined if desired. #. Create a node in the Bare Metal service with the ``node create`` command. At a minimum, you must specify the driver name (for example, ``ipmi``). This command returns the node UUID along with other information about the node. The node's provision state will be ``enroll``: ..
code-block:: console $ export OS_BAREMETAL_API_VERSION=1.11 $ openstack baremetal node create --driver ipmi +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | uuid | dfc6189f-ad83-4261-9bda-b27258eb1987 | | driver_info | {} | | extra | {} | | driver | ipmi | | chassis_uuid | | | properties | {} | | name | None | +--------------+--------------------------------------+ $ openstack baremetal node show dfc6189f-ad83-4261-9bda-b27258eb1987 +------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | target_power_state | None | | extra | {} | | last_error | None | | maintenance_reason | None | | provision_state | enroll | | uuid | dfc6189f-ad83-4261-9bda-b27258eb1987 | | console_enabled | False | | target_provision_state | None | | provision_updated_at | None | | maintenance | False | | power_state | None | | driver | ipmi | | properties | {} | | instance_uuid | None | | name | None | | driver_info | {} | | ... | ... | +------------------------+--------------------------------------+ A node may also be referred to by a logical name as well as its UUID. A name can be assigned to the node during its creation by adding the ``-n`` option to the ``node create`` command or by updating an existing node with the ``node set`` command. See `Logical Names`_ for examples. #. Starting with API version 1.31 (and ``python-ironicclient`` 1.13), you can pick which hardware interface to use with nodes that use hardware types. Each interface is represented by a node field called ``<IFACE>_interface``, where ``<IFACE>`` is the interface type, e.g. ``boot``. See :doc:`enabling-drivers` for details on hardware interfaces. An interface can be set either separately: ..
code-block:: console $ openstack baremetal --os-baremetal-api-version 1.31 node set $NODE_UUID \ --deploy-interface direct \ --raid-interface agent or set during node creation: .. code-block:: console $ openstack baremetal --os-baremetal-api-version 1.31 node create --driver ipmi \ --deploy-interface direct \ --raid-interface agent If no value is provided for some interfaces, `Defaults for hardware interfaces`_ are used instead. #. Update the node ``driver_info`` with the required driver properties, so that the Bare Metal service can manage the node: .. code-block:: console $ openstack baremetal node set $NODE_UUID \ --driver-info ipmi_username=$USER \ --driver-info ipmi_password=$PASS \ --driver-info ipmi_address=$ADDRESS .. note:: If IPMI is running on a port other than 623 (the default), the port must be added to ``driver_info`` by specifying the ``ipmi_port`` value. Example: .. code-block:: console $ openstack baremetal node set $NODE_UUID --driver-info ipmi_port=$PORT_NUMBER You may also specify all ``driver_info`` parameters during node creation by passing the **--driver-info** option multiple times: .. code-block:: console $ openstack baremetal node create --driver ipmi \ --driver-info ipmi_username=$USER \ --driver-info ipmi_password=$PASS \ --driver-info ipmi_address=$ADDRESS See `Choosing a driver`_ above for details on driver properties. #. Specify a deploy kernel and ramdisk compatible with the node's driver, for example: .. code-block:: console $ openstack baremetal node set $NODE_UUID \ --driver-info deploy_kernel=$DEPLOY_VMLINUZ_UUID \ --driver-info deploy_ramdisk=$DEPLOY_INITRD_UUID See :doc:`configure-glance-images` for details. #. Optionally you can specify the provisioning and/or cleaning network UUID or name in the node's ``driver_info``.
The ``neutron`` network interface requires both ``provisioning_network`` and ``cleaning_network``, while the ``flat`` network interface requires the ``cleaning_network`` to be set either in the configuration or on the nodes. For example: .. code-block:: console $ openstack baremetal node set $NODE_UUID \ --driver-info cleaning_network=$CLEAN_UUID_OR_NAME \ --driver-info provisioning_network=$PROVISION_UUID_OR_NAME See :doc:`configure-tenant-networks` for details. #. You must also inform the Bare Metal service of the network interface cards which are part of the node by creating a port with each NIC's MAC address. These MAC addresses are passed to the Networking service during instance provisioning and used to configure the network appropriately: .. code-block:: console $ openstack baremetal port create $MAC_ADDRESS --node $NODE_UUID .. _enrollment-scheduling: Adding scheduling information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Assign a *resource class* to the node. A *resource class* should represent a class of hardware in your data center that corresponds to a Compute flavor. For example, let's split hardware into these three groups: #. nodes with a lot of RAM and a powerful CPU for computational tasks, #. nodes with a powerful GPU for OpenCL computing, #. smaller nodes for development and testing. We can define three resource classes to reflect these hardware groups, named ``large-cpu``, ``large-gpu`` and ``small`` respectively. Then, for each node in each of the hardware groups, we'll set their ``resource_class`` appropriately via: .. code-block:: console $ openstack --os-baremetal-api-version 1.21 baremetal node set $NODE_UUID \ --resource-class $CLASS_NAME The ``--resource-class`` argument can also be used when creating a node: ..
code-block:: console $ openstack --os-baremetal-api-version 1.21 baremetal node create \ --driver $DRIVER --resource-class $CLASS_NAME To use resource classes for scheduling you need to update your flavors as described in :doc:`configure-nova-flavors`. .. note:: This is not required for standalone deployments, only for those using the Compute service for provisioning bare metal instances. #. Update the node's properties to match the actual hardware of the node: .. code-block:: console $ openstack baremetal node set $NODE_UUID \ --property cpus=$CPU_COUNT \ --property memory_mb=$RAM_MB \ --property local_gb=$DISK_GB As above, these can also be specified at node creation by passing the **--property** option to ``node create`` multiple times: .. code-block:: console $ openstack baremetal node create --driver ipmi \ --driver-info ipmi_username=$USER \ --driver-info ipmi_password=$PASS \ --driver-info ipmi_address=$ADDRESS \ --property cpus=$CPU_COUNT \ --property memory_mb=$RAM_MB \ --property local_gb=$DISK_GB These values can also be discovered during `Hardware Inspection`_. .. warning:: The value provided for the ``local_gb`` property must match the size of the root device you're going to deploy on. By default **ironic-python-agent** picks the smallest disk which is not smaller than 4 GiB. If you override this logic by using root device hints (see :ref:`root-device-hints`), the ``local_gb`` value should match the size of the picked target disk. #. If you wish to perform more advanced scheduling of the instances based on hardware capabilities, you may add metadata to each node that will be exposed to the Compute scheduler (see: :nova-doc:`ComputeCapabilitiesFilter `). A full explanation of this is outside the scope of this document. It can be done through the special ``capabilities`` member of node properties: ..
code-block:: console $ openstack baremetal node set $NODE_UUID \ --property capabilities=key1:val1,key2:val2 Some capabilities can also be discovered during `Hardware Inspection`_. #. If you wish to perform advanced scheduling of instances based on qualitative attributes of bare metal nodes, you may add traits to each bare metal node that will be exposed to the Compute scheduler (see: :ref:`scheduling-traits` for a more in-depth discussion of traits in the Bare Metal service). For example, to add the standard trait ``HW_CPU_X86_VMX`` and a custom trait ``CUSTOM_TRAIT1`` to a node: .. code-block:: console $ openstack baremetal node add trait $NODE_UUID \ CUSTOM_TRAIT1 HW_CPU_X86_VMX Validating node information ~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. To check if Bare Metal service has the minimum information necessary for a node's driver to be functional, you may ``validate`` it: .. code-block:: console $ openstack baremetal node validate $NODE_UUID +------------+--------+--------+ | Interface | Result | Reason | +------------+--------+--------+ | boot | True | | | console | True | | | deploy | True | | | inspect | True | | | management | True | | | network | True | | | power | True | | | raid | True | | | storage | True | | +------------+--------+--------+ If the node fails validation, each driver interface will return information as to why it failed: .. code-block:: console $ openstack baremetal node validate $NODE_UUID +------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+ | Interface | Result | Reason | +------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+ | boot | True | | | console | None | not supported | | deploy | False | Cannot validate iSCSI deploy. Some parameters were missing in node's instance_info. 
Missing are: ['root_gb', 'image_source'] | | inspect | True | | | management | False | Missing the following IPMI credentials in node's driver_info: ['ipmi_address']. | | network | True | | | power | False | Missing the following IPMI credentials in node's driver_info: ['ipmi_address']. | | raid | None | not supported | | storage | True | | +------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+ When using the Compute Service with the Bare Metal service, it is safe to ignore the deploy interface's validation error due to lack of image information. You may continue the enrollment process. This information will be set by the Compute Service just before deploying, when an instance is requested: .. code-block:: console $ openstack baremetal node validate $NODE_UUID +------------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Interface | Result | Reason | +------------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | boot | False | Cannot validate image information for node because one or more parameters are missing from its instance_info. Missing are: ['ramdisk', 'kernel', 'image_source'] | | console | True | | | deploy | False | Cannot validate image information for node because one or more parameters are missing from its instance_info. 
Missing are: ['ramdisk', 'kernel', 'image_source'] | | inspect | True | | | management | True | | | network | True | | | power | True | | | raid | None | not supported | | storage | True | | +------------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Making node available for deployment ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For nodes to be available for deploying workloads, they must be in the ``available`` provision state. To do this, nodes created with API version 1.11 and above must be moved from the ``enroll`` state to the ``manageable`` state and then to the ``available`` state. This section can be safely skipped if API version 1.10 or earlier is used (which is the case by default). After creating a node and before moving it from its initial provision state of ``enroll``, basic power and port information needs to be configured on the node. The Bare Metal service needs this information because it verifies that it is capable of controlling the node when transitioning the node from ``enroll`` to ``manageable`` state. To move a node from ``enroll`` to ``manageable`` provision state: .. code-block:: console $ openstack baremetal --os-baremetal-api-version 1.11 node manage $NODE_UUID $ openstack baremetal node show $NODE_UUID +------------------------+--------------------------------------------------------------------+ | Property | Value | +------------------------+--------------------------------------------------------------------+ | ... | ... | | provision_state | manageable | <- verify correct state | uuid | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 | | ... | ... | +------------------------+--------------------------------------------------------------------+ .. note:: Since it is an asynchronous call, the response for ``openstack baremetal node manage`` will not indicate whether the transition succeeded or not.
You can check the status of the operation via ``openstack baremetal node show``. If it was successful, ``provision_state`` will be in the desired state. If it failed, there will be information in the node's ``last_error``. When a node is moved from the ``manageable`` to ``available`` provision state, the node will go through automated cleaning if configured to do so (see :ref:`configure-cleaning`). To move a node from ``manageable`` to ``available`` provision state: .. code-block:: console $ openstack baremetal --os-baremetal-api-version 1.11 node provide $NODE_UUID $ openstack baremetal node show $NODE_UUID +------------------------+--------------------------------------------------------------------+ | Property | Value | +------------------------+--------------------------------------------------------------------+ | ... | ... | | provision_state | available | <- verify correct state | uuid | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 | | ... | ... | +------------------------+--------------------------------------------------------------------+ For more details on the Bare Metal service's state machine, see the :doc:`/contributor/states` documentation. Mapping nodes to Compute cells ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If the Compute service is used for scheduling, and the ``discover_hosts_in_cells_interval`` was not set as described in :doc:`configure-compute`, then log into any controller node and run the following command to map the new node(s) to Compute cells:: nova-manage cell_v2 discover_hosts Logical names ------------- A node may also be referred to by a logical name as well as its UUID. Names can be assigned either during node creation by adding the ``-n`` option to the ``node create`` command or by updating an existing node with the ``node set`` command. Node names must be unique, and conform to: - rfc952_ - rfc1123_ - wiki_hostname_ The node is named 'example' in the following examples: ..
code-block:: console $ openstack baremetal node create --driver ipmi --name example or .. code-block:: console $ openstack baremetal node set $NODE_UUID --name example Once assigned a logical name, a node can then be referred to by name or UUID interchangeably: .. code-block:: console $ openstack baremetal node create --driver ipmi --name example +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | uuid | 71e01002-8662-434d-aafd-f068f69bb85e | | driver_info | {} | | extra | {} | | driver | ipmi | | chassis_uuid | | | properties | {} | | name | example | +--------------+--------------------------------------+ $ openstack baremetal node show example +------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | target_power_state | None | | extra | {} | | last_error | None | | updated_at | 2015-04-24T16:23:46+00:00 | | ... | ... | | instance_info | {} | +------------------------+--------------------------------------+ .. _rfc952: https://tools.ietf.org/html/rfc952 .. _rfc1123: https://tools.ietf.org/html/rfc1123 .. _wiki_hostname: https://en.wikipedia.org/wiki/Hostname .. _hardware_interfaces_defaults: Defaults for hardware interfaces -------------------------------- For *hardware types*, users can request one of the enabled implementations when creating or updating a node as explained in `Creating a node`_. When no value is provided for a certain interface when creating a node, or changing a node's hardware type, the default value is used. You can use the driver details command to list the current enabled and default interfaces for a hardware type (for your deployment): ..
code-block:: console $ openstack baremetal --os-baremetal-api-version 1.31 driver show ipmi +-------------------------------+----------------+ | Field | Value | +-------------------------------+----------------+ | default_boot_interface | pxe | | default_console_interface | no-console | | default_deploy_interface | iscsi | | default_inspect_interface | no-inspect | | default_management_interface | ipmitool | | default_network_interface | flat | | default_power_interface | ipmitool | | default_raid_interface | no-raid | | default_vendor_interface | no-vendor | | enabled_boot_interfaces | pxe | | enabled_console_interfaces | no-console | | enabled_deploy_interfaces | iscsi, direct | | enabled_inspect_interfaces | no-inspect | | enabled_management_interfaces | ipmitool | | enabled_network_interfaces | flat, noop | | enabled_power_interfaces | ipmitool | | enabled_raid_interfaces | no-raid, agent | | enabled_vendor_interfaces | no-vendor | | hosts | ironic-host-1 | | name | ipmi | | type | dynamic | +-------------------------------+----------------+ The defaults are calculated as follows: #. If the ``default_<IFACE>_interface`` configuration option (where ``<IFACE>`` is the interface name) is set, its value is used as the default. If this implementation is not compatible with the node's hardware type, an error is returned to a user. An explicit value has to be provided for the node's ``<IFACE>_interface`` field in this case. #. Otherwise, the first supported implementation that is enabled by an operator is used as the default. A list of supported implementations is calculated by taking the intersection between the implementations supported by the node's hardware type and implementations enabled by the ``enabled_<IFACE>_interfaces`` option (where ``<IFACE>`` is the interface name). The calculation preserves the order of items, as provided by the hardware type. If the list of supported implementations is not empty, the first one is used. Otherwise, an error is returned to a user.
In this case, an explicit value has to be provided for the ``<IFACE>_interface`` field. See :doc:`enabling-drivers` for more details on configuration. Example ~~~~~~~ Consider the following configuration (shortened for simplicity): .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi,redfish enabled_console_interfaces = no-console,ipmitool-shellinabox enabled_deploy_interfaces = iscsi,direct enabled_management_interfaces = ipmitool,redfish enabled_power_interfaces = ipmitool,redfish default_deploy_interface = direct A new node is created with the ``ipmi`` driver and no interfaces specified: .. code-block:: console $ export OS_BAREMETAL_API_VERSION=1.31 $ openstack baremetal node create --driver ipmi +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | uuid | dfc6189f-ad83-4261-9bda-b27258eb1987 | | driver_info | {} | | extra | {} | | driver | ipmi | | chassis_uuid | | | properties | {} | | name | None | +--------------+--------------------------------------+ Then the defaults for the interfaces that will be used by the node in this example are calculated as follows: deploy An explicit value of ``direct`` is provided for ``default_deploy_interface``, so it is used. power No default is configured. The ``ipmi`` hardware type supports only ``ipmitool`` power. The intersection between supported power interfaces and values provided in the ``enabled_power_interfaces`` option has only one item: ``ipmitool``. It is used. console No default is configured. The ``ipmi`` hardware type supports the following console interfaces: ``ipmitool-socat``, ``ipmitool-shellinabox`` and ``no-console`` (in this order). Of these three, only two are enabled: ``no-console`` and ``ipmitool-shellinabox`` (order does not matter). The intersection contains ``ipmitool-shellinabox`` and ``no-console``. The first item is used, and it is ``ipmitool-shellinabox``.
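The ordered-intersection rule applied in the *power* and *console* cases above can be sketched in Python. This is a simplified illustration only, not ironic's actual implementation; the helper name ``pick_default`` is invented for this example:

```python
def pick_default(supported, enabled, configured_default=None):
    """Pick the default implementation for a single hardware interface.

    supported: implementations supported by the hardware type,
               in the hardware type's priority order.
    enabled:   implementations enabled by the operator.
    """
    if configured_default is not None:
        # An explicitly configured default must be compatible with the
        # hardware type, otherwise an error is reported.
        if configured_default not in supported:
            raise ValueError('%s is not compatible' % configured_default)
        return configured_default
    # Intersection that preserves the hardware type's ordering.
    for impl in supported:
        if impl in enabled:
            return impl
    raise ValueError('no enabled implementation is supported')

# The "console" case from the example above:
supported = ['ipmitool-socat', 'ipmitool-shellinabox', 'no-console']
enabled = ['no-console', 'ipmitool-shellinabox']
print(pick_default(supported, enabled))  # ipmitool-shellinabox
```

Note that the order of ``enabled`` never matters: only the hardware type's own priority order decides which enabled implementation wins.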
management Following the same calculation as *power*, the ``ipmitool`` management interface is used. Hardware Inspection ------------------- The Bare Metal service supports hardware inspection that simplifies enrolling nodes - please see :doc:`/admin/inspection` for details. Tenant Networks and Port Groups ------------------------------- See :doc:`/admin/multitenancy` and :doc:`/admin/portgroups`. ironic-15.0.0/doc/source/install/configdrive.rst .. _configdrive: Enabling the configuration drive (configdrive) ============================================== The Bare Metal service supports exposing a configuration drive image to the instances. The configuration drive is used to store instance-specific metadata and is presented to the instance as a disk partition labeled ``config-2``. The configuration drive has a maximum size of 64MB. One use case for using the configuration drive is to expose a networking configuration when you do not use DHCP to assign IP addresses to instances. The configuration drive is usually used in conjunction with the Compute service, but the Bare Metal service also offers a standalone way of using it. The following sections will describe both methods. When used with Compute service ------------------------------ To enable the configuration drive for a specific request, pass the ``--config-drive true`` parameter to the :command:`nova boot` command, for example:: nova boot --config-drive true --flavor baremetal --image test-image instance-1 It's also possible to enable the configuration drive automatically on all instances by configuring the ``OpenStack Compute service`` to always create a configuration drive by setting the following option in the ``/etc/nova/nova.conf`` file, for example:: [DEFAULT] ... force_config_drive=True In some cases, you may wish to pass a user-customized script when deploying an instance.
To do this, pass ``--user-data /path/to/file`` to the :command:`nova boot` command. When used standalone -------------------- When used without the Compute service, the operator needs to create a configuration drive and provide the file or HTTP URL to the Bare Metal service. For the format of the configuration drive, the Bare Metal service expects a ``gzipped`` and ``base64`` encoded ISO 9660 [#]_ file with a ``config-2`` label. The :python-ironicclient-doc:`openstack baremetal client ` can generate a configuration drive in the `expected format`_. Just pass a directory path containing the files that will be injected into it via the ``--config-drive`` parameter of the ``openstack baremetal node deploy`` command, for example:: openstack baremetal node deploy $node_identifier --config-drive /dir/configdrive_files Starting with the Stein release and `ironicclient` 2.7.0, you can request building a configdrive on the server side by providing a JSON with keys ``meta_data``, ``user_data`` and ``network_data`` (all optional), e.g.: .. code-block:: bash openstack baremetal node deploy $node_identifier \ --config-drive '{"meta_data": {"hostname": "server1.cluster"}}' Configuration drive storage in an object store ---------------------------------------------- Under normal circumstances, the configuration drive can be stored in the Bare Metal service when the size is less than 64KB. If the size is larger than 64KB, it can optionally be stored in a swift endpoint. Both swift and radosgw use swift-style APIs. The following option in ``/etc/ironic/ironic.conf`` enables swift as an object store backend to store the config drive. This uses the Identity service to establish a session between the Bare Metal service and the Object Storage service. :: [deploy] ... configdrive_use_object_store = True Use the following options in ``/etc/ironic/ironic.conf`` to enable radosgw.
Credentials in the swift section are needed because radosgw will not use the Identity service and relies on radosgw's username and password authentication instead. :: [deploy] ... configdrive_use_object_store = True [swift] ... username = USERNAME password = PASSWORD auth_url = http://RADOSGW_IP:8000/auth/v1 If the :ref:`direct-deploy` is being used, edit ``/etc/glance/glance-api.conf`` to store the instance images in respective object store (radosgw or swift) as well:: [glance_store] ... swift_store_user = USERNAME swift_store_key = PASSWORD swift_store_auth_address = http://RADOSGW_OR_SWIFT_IP:PORT/auth/v1 Accessing the configuration drive data -------------------------------------- When the configuration drive is enabled, the Bare Metal service will create a partition on the instance disk and write the configuration drive image onto it. The configuration drive must be mounted before use. This is performed automatically by many tools, such as cloud-init and cloudbase-init. To mount it manually on a Linux distribution that supports accessing devices by labels, simply run the following:: mkdir -p /mnt/config mount /dev/disk/by-label/config-2 /mnt/config If the guest OS doesn't support accessing devices by labels, you can use other tools such as ``blkid`` to identify which device corresponds to the configuration drive and mount it, for example:: CONFIG_DEV=$(blkid -t LABEL="config-2" -odevice) mkdir -p /mnt/config mount $CONFIG_DEV /mnt/config .. [#] A configuration drive could also be a data block with a VFAT filesystem on it instead of ISO 9660. But it's unlikely that it would be needed since ISO 9660 is widely supported across operating systems. Cloud-init integration ---------------------- The configuration drive can be especially useful when used with `cloud-init `_, but in order to use it we should follow some rules: * ``Cloud-init`` data should be organized in the `expected format`_. 
* Since the Bare Metal service uses a disk partition as the configuration drive, it will only work with `cloud-init version >= 0.7.5 `_. * ``Cloud-init`` has a collection of data source modules, so when building the image with `disk-image-builder`_ we have to define the ``DIB_CLOUD_INIT_DATASOURCES`` environment variable and set the appropriate sources to enable the configuration drive, for example:: DIB_CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack" disk-image-create -o fedora-cloud-image fedora baremetal For more information see `how to configure cloud-init data sources `_. .. _`expected format`: https://docs.openstack.org/nova/latest/user/vendordata.html .. _disk-image-builder: https://docs.openstack.org/diskimage-builder/latest/ ironic-15.0.0/doc/source/admin/drivers.rst =============================================== Drivers, Hardware Types and Hardware Interfaces =============================================== Generic Interfaces ------------------ .. toctree:: :maxdepth: 2 interfaces/boot interfaces/deploy Hardware Types -------------- .. toctree:: :maxdepth: 1 drivers/ibmc drivers/idrac drivers/ilo drivers/intel-ipmi drivers/ipmitool drivers/irmc drivers/redfish drivers/snmp drivers/xclarity Changing Hardware Types and Interfaces -------------------------------------- Hardware types and interfaces are enabled in the configuration as described in :doc:`/install/enabling-drivers`. Usually, a hardware type is configured on enrolling as described in :doc:`/install/enrollment`:: openstack baremetal node create --driver <hardware_type> Any hardware interfaces can be specified on enrollment as well:: openstack baremetal node create --driver <hardware_type> \ --deploy-interface direct --<IFACE>-interface <IMPLEMENTATION> For the remaining interfaces the default value is assigned as described in :ref:`hardware_interfaces_defaults`.
Both the hardware type and the hardware interfaces can be changed later via the node update API. Changing Hardware Interfaces ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Hardware interfaces can be changed by the following command:: openstack baremetal node set <NODE> \ --deploy-interface direct \ --<IFACE>-interface <IMPLEMENTATION> The modified interfaces must be enabled and compatible with the current node's hardware type. Changing Hardware Type ~~~~~~~~~~~~~~~~~~~~~~ Changing the node's hardware type can pose a problem. When the ``driver`` field is updated, the final result must be consistent, that is, the resulting hardware interfaces must be compatible with the new hardware type. This will not work:: openstack baremetal node create --name test --driver fake-hardware openstack baremetal node set test --driver ipmi This is because the ``fake-hardware`` hardware type defaults to ``fake`` implementations for some or all interfaces, but the ``ipmi`` hardware type is not compatible with them. There are three ways to deal with this situation: #. Provide new values for all incompatible interfaces, for example:: openstack baremetal node set test --driver ipmi \ --boot-interface pxe \ --deploy-interface iscsi \ --management-interface ipmitool \ --power-interface ipmitool #. Request resetting some of the interfaces to their new defaults by using the ``--reset-<IFACE>-interface`` family of arguments, for example:: openstack baremetal node set test --driver ipmi \ --reset-boot-interface \ --reset-deploy-interface \ --reset-management-interface \ --reset-power-interface .. note:: This feature is available starting with ironic 11.1.0 (Rocky series, API version 1.45). #. Request resetting all interfaces to their new defaults:: openstack baremetal node set test --driver ipmi --reset-interfaces You can still specify explicit values for some interfaces:: openstack baremetal node set test --driver ipmi --reset-interfaces \ --deploy-interface direct ..
note:: This feature is available starting with ironic 11.1.0 (Rocky series, API version 1.45). Unsupported drivers ------------------- The following drivers were declared as unsupported in the ironic Newton release and as of the Ocata release they have been removed from ironic: - AMT driver - available as part of ironic-staging-drivers_ - iBoot driver - available as part of ironic-staging-drivers_ - Wake-On-Lan driver - available as part of ironic-staging-drivers_ - Virtualbox drivers - SeaMicro drivers - MSFT OCS drivers The SSH drivers were removed in the Pike release. Similar functionality can be achieved either with VirtualBMC_ or using libvirt drivers from ironic-staging-drivers_. .. _ironic-staging-drivers: http://ironic-staging-drivers.readthedocs.io .. _VirtualBMC: https://opendev.org/openstack/virtualbmc ironic-15.0.0/doc/source/admin/portgroups.rst =================== Port groups support =================== The Bare Metal service supports static configuration of port groups (bonds) in the instances via configdrive. See `kernel documentation on bonding`_ to see why it may be useful and how it is set up in Linux. The sections below describe how to make use of them in the Bare Metal service. Switch-side configuration ------------------------- If port groups are desired in the ironic deployment, they need to be configured on the switches. It needs to be done manually, and the mode and properties configured on the switch have to correspond to the mode and properties that will be configured on the ironic side, as bonding mode and properties may be named differently on your switch, or have possible values different from the ones described in `kernel documentation on bonding`_. Please refer to your switch configuration documentation for more details. Provisioning and cleaning cannot make use of port groups if they need to boot the deployment ramdisk via (i)PXE.
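The correspondence between the ``mode`` and ``properties`` set on an ironic port group and the bonding options that end up on the instance side can be illustrated with a short sketch. This is a hypothetical helper for illustration only, not part of ironic; the option names (``mode``, ``miimon``, ``xmit_hash_policy``) come from the kernel bonding documentation:

```python
# A port group as configured in the examples below (802.3ad mode with
# miimon and xmit_hash_policy properties).
portgroup = {
    'mode': '802.3ad',
    'properties': {'miimon': '100', 'xmit_hash_policy': 'layer2+3'},
}

def bonding_options(pg):
    """Flatten a port group's mode and properties into the
    space-separated option string used by the kernel bonding driver."""
    opts = ['mode=%s' % pg['mode']]
    opts += ['%s=%s' % (k, v) for k, v in sorted(pg['properties'].items())]
    return ' '.join(opts)

print(bonding_options(portgroup))
# mode=802.3ad miimon=100 xmit_hash_policy=layer2+3
```

Whatever ends up in this option string on the instance must match what the switch ports are configured for, which is why the switch-side and ironic-side settings have to be kept in sync.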
If your switches or desired port group configuration do not support port group fallback, which allows port group members to be used by themselves, you need to set the port group's ``standalone_ports_supported`` value to ``False`` in ironic, as it is ``True`` by default.

Physical networks
-----------------

If any port in a port group has a physical network, then all ports in that port group must have the same physical network. In order to change the physical network of the ports in a port group, all ports must first be removed from the port group, before changing their physical networks (to the same value), then adding them back to the port group. See :ref:`physical networks <physical_networks>` for further information on using physical networks in the Bare Metal service.

Port groups configuration in the Bare Metal service
---------------------------------------------------

Port group configuration is supported starting with ironic API microversion 1.26; the CLI commands below specify it for completeness.

#. When creating a port group, the node to which it belongs must be specified, along with, optionally, its name, address, mode, properties, and whether it supports fallback to standalone ports::

    openstack --os-baremetal-api-version 1.26 baremetal port group create \
        --node $NODE_UUID --name test --address fa:ab:25:48:fd:ba --mode 802.3ad \
        --property miimon=100 --property xmit_hash_policy="layer2+3" \
        --support-standalone-ports

   A port group can also be updated with the ``openstack baremetal port group set`` command; see its help for more details.

   If an address is not specified, the port group address on the deployed instance will be the same as the address of the neutron port that is attached to the port group. If the neutron port is not attached, the port group will not be configured.

   .. note:: In standalone mode, port groups have to be configured manually. It can be done either statically inside the image, or by generating the configdrive and adding it to the node's ``instance_info``.
For more information on how to configure bonding via configdrive, refer to the cloud-init documentation and source code. cloud-init version 0.7.7 or later is required for the bonding configuration to work.

If the port group's address is not explicitly set in standalone mode, it will be set automatically by the process described in `kernel documentation on bonding`_.

During interface attachment, port groups have higher priority than ports, so they will be used first. (It is not yet possible to specify which one is desired, a port group or a port, in an interface attachment request.) Port groups that don't have any ports will be ignored.

The mode and properties values are described in the `kernel documentation on bonding`_. The default port group mode is ``active-backup``, and this default can be changed by setting the ``[DEFAULT]default_portgroup_mode`` configuration option in the ironic API service configuration file.

#. Associate ports with the created port group. It can be done on port creation::

    openstack --os-baremetal-api-version 1.26 baremetal port create \
        --node $NODE_UUID --address fa:ab:25:48:fd:ba --port-group test

   Or by updating an existing port::

    openstack --os-baremetal-api-version 1.26 baremetal port set \
        $PORT_UUID --port-group $PORT_GROUP_UUID

   When updating a port, the node associated with the port has to be in the ``enroll``, ``manageable``, or ``inspecting`` state. A port group can have the same or a different address than its individual ports.

#. Boot an instance (or node directly, in case of using standalone ironic) providing an image that has cloud-init version 0.7.7 or later and supports bonding. When the deployment is done, you can check that the port group is set up properly by running the following command in the instance::

    cat /proc/net/bonding/bondX

   where ``X`` is a number autogenerated by cloud-init for each configured port group, in no particular order. It starts with 0 and increments by 1 for every configured port group.

.. _`kernel documentation on bonding`: https://www.kernel.org/doc/Documentation/networking/bonding.txt

Link aggregation/teaming on Windows
-----------------------------------

Port groups are supported for Windows Server images, which can be created by following the :ref:`building_image_windows` instructions. You can customise an instance after it is launched by supplying a script file in the ``Configuration`` section of the ``Instance`` panel and selecting the ``Configuration Drive`` option. The ironic virt driver will then generate network metadata and add all the additional information, such as the bond mode, transmit hash policy, MII link monitoring interval, and which links the bond consists of. The information in InstanceMetadata will be used afterwards to generate the config drive.

.. _deploy-notifications:

=============
Notifications
=============

Ironic, when configured to do so, will emit notifications over a message bus that indicate different events that occur within the service. These can be consumed by any external service. Examples may include a billing or usage system, a monitoring data store, or other OpenStack services. This page describes how to enable notifications and the different kinds of notifications that ironic may emit.

The external consumer will see notifications emitted by ironic as JSON objects structured in the following manner::

    {
        "priority": <string>,
        "event_type": <string>,
        "timestamp": <string>,
        "publisher_id": <string>,
        "message_id": <string>,
        "payload": <json serialized dict>
    }

Configuration
=============

To enable notifications with ironic, there are two configuration options in ironic.conf that must be adjusted. The first option is the ``notification_level`` option in the ``[DEFAULT]`` section of the configuration file. This can be set to "debug", "info", "warning", "error", or "critical", and determines the minimum priority level for which notifications are emitted.
For example, if the option is set to "warning", all notifications with priority level "warning", "error", or "critical" are emitted, but not notifications with priority level "debug" or "info". For information about the semantics of each log level, see the OpenStack logging standards [1]_. If this option is unset, no notifications will be emitted. The priority level of each available notification is documented below.

The second option is the ``transport_url`` option in the ``[oslo_messaging_notifications]`` section of the configuration. This determines the message bus used when sending notifications. If this is unset, the default transport used for RPC is used.

All notifications are emitted on the "ironic_versioned_notifications" topic in the message bus. Generally, each type of message that traverses the message bus is associated with a topic describing what the message is about. For more information, see the documentation of your chosen message bus, such as the RabbitMQ documentation [2]_.

Note that notifications may be lossy, and there's no guarantee that a notification will make it across the message bus to a consumer.

Versioning
==========

Each notification has an associated version in the "ironic_object.version" field of the payload. Consumers are guaranteed that microversion bumps will only add new fields, while macroversion bumps are backwards-incompatible and may have fields removed.

Versioned notifications are emitted by default to the ``ironic_versioned_notifications`` topic. This can be changed with the ``versioned_notifications_topics`` config option in ironic.conf.

Available notifications
=======================

.. TODO(mariojv) Add some form of tabular formatting below

The notifications that ironic emits are described here. They are listed (alphabetically) by service first, then by event_type. All examples below show payloads before serialization to JSON.
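The minimum-level semantics of ``notification_level`` described above can be sketched as follows; this is an illustration of the documented behaviour, not ironic's internal implementation:

```python
# Priority levels in increasing order of severity, matching the values
# accepted by the ``notification_level`` option described above.
LEVELS = ["debug", "info", "warning", "error", "critical"]

def should_emit(notification_level, priority):
    """Return True if a notification of `priority` meets the configured
    minimum level. An unset (None) notification_level disables emission.
    """
    if notification_level is None:
        return False
    return LEVELS.index(priority) >= LEVELS.index(notification_level)

print(should_emit("warning", "error"))  # True
print(should_emit("warning", "info"))   # False
```

With ``notification_level = warning``, only "warning", "error", and "critical" notifications pass the check, exactly as in the example in the text.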
------------------------ ironic-api notifications ------------------------ Resources CRUD notifications ---------------------------- These notifications are emitted from API service when ironic resources are modified as part of create, update, or delete (CRUD) [3]_ procedures. All CRUD notifications are emitted at INFO level, except for "error" status that is emitted at ERROR level. List of CRUD notifications for chassis: * ``baremetal.chassis.create.start`` * ``baremetal.chassis.create.end`` * ``baremetal.chassis.create.error`` * ``baremetal.chassis.update.start`` * ``baremetal.chassis.update.end`` * ``baremetal.chassis.update.error`` * ``baremetal.chassis.delete.start`` * ``baremetal.chassis.delete.end`` * ``baremetal.chassis.delete.error`` Example of chassis CRUD notification:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"ChassisCRUDPayload", "ironic_object.version":"1.0", "ironic_object.data":{ "created_at": "2016-04-10T10:13:03+00:00", "description": "bare 28", "extra": {}, "updated_at": "2016-04-27T21:11:03+00:00", "uuid": "1910f669-ce8b-43c2-b1d8-cf3d65be815e" } }, "event_type":"baremetal.chassis.update.end", "publisher_id":"ironic-api.hostname02" } List of CRUD notifications for deploy template: * ``baremetal.deploy_template.create.start`` * ``baremetal.deploy_template.create.end`` * ``baremetal.deploy_template.create.error`` * ``baremetal.deploy_template.update.start`` * ``baremetal.deploy_template.update.end`` * ``baremetal.deploy_template.update.error`` * ``baremetal.deploy_template.delete.start`` * ``baremetal.deploy_template.delete.end`` * ``baremetal.deploy_template.delete.error`` Example of deploy template CRUD notification:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"DeployTemplateCRUDPayload", "ironic_object.version":"1.0", "ironic_object.data":{ "created_at": "2019-02-10T10:13:03+00:00", "extra": {}, "name": "CUSTOM_HYPERTHREADING_ON", "steps": [ { 
"interface": "bios", "step": "apply_configuration", "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "priority": 150 } ], "updated_at": "2019-02-27T21:11:03+00:00", "uuid": "1910f669-ce8b-43c2-b1d8-cf3d65be815e" } }, "event_type":"baremetal.deploy_template.update.end", "publisher_id":"ironic-api.hostname02" } List of CRUD notifications for node: * ``baremetal.node.create.start`` * ``baremetal.node.create.end`` * ``baremetal.node.create.error`` * ``baremetal.node.update.start`` * ``baremetal.node.update.end`` * ``baremetal.node.update.error`` * ``baremetal.node.delete.start`` * ``baremetal.node.delete.end`` * ``baremetal.node.delete.error`` Example of node CRUD notification:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"NodeCRUDPayload", "ironic_object.version":"1.8", "ironic_object.data":{ "chassis_uuid": "db0eef9d-45b2-4dc0-94a8-fc283c01171f", "clean_step": None, "conductor_group": "", "console_enabled": False, "created_at": "2016-01-26T20:41:03+00:00", "deploy_step": None, "driver": "ipmi", "driver_info": { "ipmi_address": "192.168.0.111", "ipmi_username": "root"}, "extra": {}, "inspection_finished_at": None, "inspection_started_at": None, "instance_info": {}, "instance_uuid": None, "last_error": None, "maintenance": False, "maintenance_reason": None, "fault": None, "boot_interface": "pxe", "console_interface": "no-console", "deploy_interface": "iscsi", "inspect_interface": "no-inspect", "management_interface": "ipmitool", "network_interface": "flat", "power_interface": "ipmitool", "raid_interface": "no-raid", "rescue_interface": "no-rescue", "storage_interface": "noop", "vendor_interface": "no-vendor", "name": None, "power_state": "power off", "properties": { "memory_mb": 4096, "cpu_arch": "x86_64", "local_gb": 10, "cpus": 8}, "provision_state": "deploying", "provision_updated_at": "2016-01-27T20:41:03+00:00", "resource_class": None, "target_power_state": None, "target_provision_state": 
"active", "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX"], "updated_at": "2016-01-27T20:41:03+00:00", "uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123" } }, "event_type":"baremetal.node.update.end", "publisher_id":"ironic-api.hostname02" } List of CRUD notifications for port: * ``baremetal.port.create.start`` * ``baremetal.port.create.end`` * ``baremetal.port.create.error`` * ``baremetal.port.update.start`` * ``baremetal.port.update.end`` * ``baremetal.port.update.error`` * ``baremetal.port.delete.start`` * ``baremetal.port.delete.end`` * ``baremetal.port.delete.error`` Example of port CRUD notification:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"PortCRUDPayload", "ironic_object.version":"1.3", "ironic_object.data":{ "address": "77:66:23:34:11:b7", "created_at": "2016-02-11T15:23:03+00:00", "node_uuid": "5b236cab-ad4e-4220-b57c-e827e858745a", "extra": {}, "is_smartnic": True, "local_link_connection": {}, "physical_network": "physnet1", "portgroup_uuid": "bd2f385e-c51c-4752-82d1-7a9ec2c25f24", "pxe_enabled": True, "updated_at": "2016-03-27T20:41:03+00:00", "uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123" } }, "event_type":"baremetal.port.update.end", "publisher_id":"ironic-api.hostname02" } List of CRUD notifications for port group: * ``baremetal.portgroup.create.start`` * ``baremetal.portgroup.create.end`` * ``baremetal.portgroup.create.error`` * ``baremetal.portgroup.update.start`` * ``baremetal.portgroup.update.end`` * ``baremetal.portgroup.update.error`` * ``baremetal.portgroup.delete.start`` * ``baremetal.portgroup.delete.end`` * ``baremetal.portgroup.delete.error`` Example of portgroup CRUD notification:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"PortgroupCRUDPayload", "ironic_object.version":"1.0", "ironic_object.data":{ "address": "11:44:32:87:61:e5", "created_at": "2017-01-11T11:33:03+00:00", "node_uuid": "5b236cab-ad4e-4220-b57c-e827e858745a", "extra": {}, "mode": 
"7",
            "name": "portgroup-node-18",
            "properties": {},
            "standalone_ports_supported": True,
            "updated_at": "2017-01-31T11:41:07+00:00",
            "uuid": "db033a40-bfed-4c84-815a-3db26bb268bb"
        }
    },
    "event_type":"baremetal.portgroup.update.end",
    "publisher_id":"ironic-api.hostname02"
}

List of CRUD notifications for volume connector:

* ``baremetal.volumeconnector.create.start``
* ``baremetal.volumeconnector.create.end``
* ``baremetal.volumeconnector.create.error``
* ``baremetal.volumeconnector.update.start``
* ``baremetal.volumeconnector.update.end``
* ``baremetal.volumeconnector.update.error``
* ``baremetal.volumeconnector.delete.start``
* ``baremetal.volumeconnector.delete.end``
* ``baremetal.volumeconnector.delete.error``

Example of volume connector CRUD notification::

    {
        "priority": "info",
        "payload": {
            "ironic_object.namespace": "ironic",
            "ironic_object.name": "VolumeConnectorCRUDPayload",
            "ironic_object.version": "1.0",
            "ironic_object.data": {
                "connector_id": "iqn.2017-05.org.openstack:01:d9a51732c3f",
                "created_at": "2017-05-11T05:57:36+00:00",
                "extra": {},
                "node_uuid": "4dbb4e69-99a8-4e13-b6e8-dd2ad4a20caf",
                "type": "iqn",
                "updated_at": "2017-05-11T08:28:58+00:00",
                "uuid": "19b9f3ab-4754-4725-a7a4-c43ea7e57360"
            }
        },
        "event_type": "baremetal.volumeconnector.update.end",
        "publisher_id": "ironic-api.hostname02"
    }

List of CRUD notifications for volume target:

* ``baremetal.volumetarget.create.start``
* ``baremetal.volumetarget.create.end``
* ``baremetal.volumetarget.create.error``
* ``baremetal.volumetarget.update.start``
* ``baremetal.volumetarget.update.end``
* ``baremetal.volumetarget.update.error``
* ``baremetal.volumetarget.delete.start``
* ``baremetal.volumetarget.delete.end``
* ``baremetal.volumetarget.delete.error``

Example of volume target CRUD notification::

    {
        "priority": "info",
        "payload": {
            "ironic_object.namespace": "ironic",
            "ironic_object.version": "1.0",
            "ironic_object.name": "VolumeTargetCRUDPayload",
            "ironic_object.data": {
                "boot_index": 0,
                "created_at":
"2017-05-11T09:38:59+00:00",
                "extra": {},
                "node_uuid": "4dbb4e69-99a8-4e13-b6e8-dd2ad4a20caf",
                "properties": {
                    "access_mode": "rw",
                    "auth_method": "CHAP",
                    "auth_password": "***",
                    "auth_username": "urxhQCzAKr4sjyE8DivY",
                    "encrypted": false,
                    "qos_specs": null,
                    "target_discovered": false,
                    "target_iqn": "iqn.2010-10.org.openstack:volume-f0d9b0e6-b242-9105-91d4-a20331693ad8",
                    "target_lun": 1,
                    "target_portal": "192.168.12.34:3260",
                    "volume_id": "f0d9b0e6-b042-4105-91d4-a20331693ad8"
                },
                "updated_at": "2017-05-11T09:52:04+00:00",
                "uuid": "82a45833-9c58-4ec1-943c-2091ab10e47b",
                "volume_id": "f0d9b0e6-b242-9105-91d4-a20331693ad8",
                "volume_type": "iscsi"
            }
        },
        "event_type": "baremetal.volumetarget.update.end",
        "publisher_id": "ironic-api.hostname02"
    }

Node maintenance notifications
------------------------------

These notifications are emitted by the API service when a node's maintenance mode is changed via the API.

List of maintenance notifications for a node:

* ``baremetal.node.maintenance_set.start``
* ``baremetal.node.maintenance_set.end``
* ``baremetal.node.maintenance_set.error``

"start" and "end" notifications have INFO level, "error" has ERROR.
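Event types such as ``baremetal.node.maintenance_set.start`` follow a fixed dotted structure that a notification consumer can split apart. The helper below is a sketch for illustration only; it is not part of ironic:

```python
def parse_event_type(event_type):
    """Split an ironic event type, e.g.
    'baremetal.node.maintenance_set.start', into its four components.

    Illustrative helper for notification consumers; not part of ironic.
    """
    service, resource, action, phase = event_type.split(".")
    return {"service": service, "resource": resource,
            "action": action, "phase": phase}

evt = parse_event_type("baremetal.node.maintenance_set.start")
print(evt["action"], evt["phase"])  # maintenance_set start
```

A consumer can use the "phase" component ("start", "end", "error", or "success") to decide, for example, whether a notification represents a failure that should be alerted on.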
Example of node maintenance notification:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"NodePayload", "ironic_object.version":"1.10", "ironic_object.data":{ "clean_step": None, "conductor_group": "", "console_enabled": False, "created_at": "2016-01-26T20:41:03+00:00", "driver": "ipmi", "extra": {}, "inspection_finished_at": None, "inspection_started_at": None, "instance_info": {}, "instance_uuid": None, "last_error": None, "maintenance": True, "maintenance_reason": "hw upgrade", "fault": None, "bios_interface": "no-bios", "boot_interface": "pxe", "console_interface": "no-console", "deploy_interface": "iscsi", "inspect_interface": "no-inspect", "management_interface": "ipmitool", "network_interface": "flat", "power_interface": "ipmitool", "raid_interface": "no-raid", "rescue_interface": "no-rescue", "storage_interface": "noop", "vendor_interface": "no-vendor", "name": None, "power_state": "power off", "properties": { "memory_mb": 4096, "cpu_arch": "x86_64", "local_gb": 10, "cpus": 8}, "provision_state": "available", "provision_updated_at": "2016-01-27T20:41:03+00:00", "resource_class": None, "target_power_state": None, "target_provision_state": None, "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX"], "updated_at": "2016-01-27T20:41:03+00:00", "uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123" } }, "event_type":"baremetal.node.maintenance_set.start", "publisher_id":"ironic-api.hostname02" } ------------------------------ ironic-conductor notifications ------------------------------ Node console notifications ------------------------------ These notifications are emitted by the ironic-conductor service when conductor service starts or stops console for the node. 
The notification event types for a node console are: * ``baremetal.node.console_set.start`` * ``baremetal.node.console_set.end`` * ``baremetal.node.console_set.error`` * ``baremetal.node.console_restore.start`` * ``baremetal.node.console_restore.end`` * ``baremetal.node.console_restore.error`` ``console_set`` action is used when start or stop console is initiated. The ``console_restore`` action is used when the console was already enabled, but a driver must restart the console because an ironic-conductor was restarted. This may also be sent when an ironic-conductor takes over a node that was being managed by another ironic-conductor. "start" and "end" notifications have INFO level, "error" has ERROR. Example of node console notification:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"NodePayload", "ironic_object.version":"1.10", "ironic_object.data":{ "clean_step": None, "conductor_group": "", "console_enabled": True, "created_at": "2016-01-26T20:41:03+00:00", "driver": "ipmi", "extra": {}, "inspection_finished_at": None, "inspection_started_at": None, "instance_info": {}, "instance_uuid": None, "last_error": None, "maintenance": False, "maintenance_reason": None, "fault": None, "bios_interface": "no-bios", "boot_interface": "pxe", "console_interface": "no-console", "deploy_interface": "iscsi", "inspect_interface": "no-inspect", "management_interface": "ipmitool", "network_interface": "flat", "power_interface": "ipmitool", "raid_interface": "no-raid", "rescue_interface": "no-rescue", "storage_interface": "noop", "vendor_interface": "no-vendor", "name": None, "power_state": "power off", "properties": { "memory_mb": 4096, "cpu_arch": "x86_64", "local_gb": 10, "cpus": 8}, "provision_state": "available", "provision_updated_at": "2016-01-27T20:41:03+00:00", "resource_class": None, "target_power_state": None, "target_provision_state": None, "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX"], "updated_at": 
"2016-01-27T20:41:03+00:00", "uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123" } }, "event_type":"baremetal.node.console_set.end", "publisher_id":"ironic-conductor.hostname01" } baremetal.node.power_set ------------------------ * ``baremetal.node.power_set.start`` is emitted by the ironic-conductor service when it begins a power state change. It has notification level "info". * ``baremetal.node.power_set.end`` is emitted when ironic-conductor successfully completes a power state change task. It has notification level "info". * ``baremetal.node.power_set.error`` is emitted by ironic-conductor when it fails to set a node's power state. It has notification level "error". This can occur when ironic fails to retrieve the old power state prior to setting the new one on the node, or when it fails to set the power state if a change is requested. Here is an example payload for a notification with this event type. The "to_power" payload field indicates the power state to which the ironic-conductor is attempting to change the node:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"NodeSetPowerStatePayload", "ironic_object.version":"1.10", "ironic_object.data":{ "clean_step": None, "conductor_group": "", "console_enabled": False, "created_at": "2016-01-26T20:41:03+00:00", "deploy_step": None, "driver": "ipmi", "extra": {}, "inspection_finished_at": None, "inspection_started_at": None, "instance_uuid": "d6ea00c1-1f94-4e95-90b3-3462d7031678", "last_error": None, "maintenance": False, "maintenance_reason": None, "fault": None, "boot_interface": "pxe", "console_interface": "no-console", "deploy_interface": "iscsi", "inspect_interface": "no-inspect", "management_interface": "ipmitool", "network_interface": "flat", "power_interface": "ipmitool", "raid_interface": "no-raid", "rescue_interface": "no-rescue", "storage_interface": "noop", "vendor_interface": "no-vendor", "name": None, "power_state": "power off", "properties": { "memory_mb": 4096, 
"cpu_arch": "x86_64", "local_gb": 10, "cpus": 8}, "provision_state": "available", "provision_updated_at": "2016-01-27T20:41:03+00:00", "resource_class": None, "target_power_state": None, "target_provision_state": None, "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX"], "updated_at": "2016-01-27T20:41:03+00:00", "uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123", "to_power": "power on" } }, "event_type":"baremetal.node.power_set.start", "publisher_id":"ironic-conductor.hostname01" } baremetal.node.power_state_corrected ------------------------------------ * ``baremetal.node.power_state_corrected.success`` is emitted by ironic-conductor when the power state on the baremetal hardware is different from the previous known power state of the node and the database is corrected to reflect this new power state. It has notification level "info". Here is an example payload for a notification with this event_type. The "from_power" payload field indicates the previous power state on the node, prior to the correction:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"NodeCorrectedPowerStatePayload", "ironic_object.version":"1.10", "ironic_object.data":{ "clean_step": None, "conductor_group": "", "console_enabled": False, "created_at": "2016-01-26T20:41:03+00:00", "deploy_step": None, "driver": "ipmi", "extra": {}, "inspection_finished_at": None, "inspection_started_at": None, "instance_uuid": "d6ea00c1-1f94-4e95-90b3-3462d7031678", "last_error": None, "maintenance": False, "maintenance_reason": None, "fault": None, "boot_interface": "pxe", "console_interface": "no-console", "deploy_interface": "iscsi", "inspect_interface": "no-inspect", "management_interface": "ipmitool", "network_interface": "flat", "power_interface": "ipmitool", "raid_interface": "no-raid", "rescue_interface": "no-rescue", "storage_interface": "noop", "vendor_interface": "no-vendor", "name": None, "power_state": "power off", "properties": { "memory_mb": 4096, "cpu_arch": 
"x86_64",
                "local_gb": 10,
                "cpus": 8},
            "provision_state": "available",
            "provision_updated_at": "2016-01-27T20:41:03+00:00",
            "resource_class": None,
            "target_power_state": None,
            "target_provision_state": None,
            "traits": [
                "CUSTOM_TRAIT1",
                "HW_CPU_X86_VMX"],
            "updated_at": "2016-01-27T20:41:03+00:00",
            "uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123",
            "from_power": "power on"
        }
    },
    "event_type":"baremetal.node.power_state_corrected.success",
    "publisher_id":"ironic-conductor.cond-hostname02"
}

baremetal.node.provision_set
----------------------------

* ``baremetal.node.provision_set.start`` is emitted by the ironic-conductor service when it begins a provision state transition. It has notification level INFO.
* ``baremetal.node.provision_set.end`` is emitted when ironic-conductor successfully completes a provision state transition. It has notification level INFO.
* ``baremetal.node.provision_set.success`` is emitted when ironic-conductor successfully changes the provision state instantly, without any intermediate work required (for example, AVAILABLE to MANAGEABLE). It has notification level INFO.
* ``baremetal.node.provision_set.error`` is emitted by ironic-conductor when it changes the provision state as a result of error event processing. It has notification level ERROR.

Here is an example payload for a notification with this event type.
The "previous_provision_state" and "previous_target_provision_state" payload fields indicate a node's provision states before state change, "event" is the FSM (finite state machine) event that triggered the state change:: { "priority": "info", "payload":{ "ironic_object.namespace":"ironic", "ironic_object.name":"NodeSetProvisionStatePayload", "ironic_object.version":"1.10", "ironic_object.data":{ "clean_step": None, "conductor_group": "", "console_enabled": False, "created_at": "2016-01-26T20:41:03+00:00", "deploy_step": None, "driver": "ipmi", "extra": {}, "inspection_finished_at": None, "inspection_started_at": None, "instance_info": {}, "instance_uuid": None, "last_error": None, "maintenance": False, "maintenance_reason": None, "fault": None, "boot_interface": "pxe", "console_interface": "no-console", "deploy_interface": "iscsi", "inspect_interface": "no-inspect", "management_interface": "ipmitool", "network_interface": "flat", "power_interface": "ipmitool", "raid_interface": "no-raid", "rescue_interface": "no-rescue", "storage_interface": "noop", "vendor_interface": "no-vendor", "name": None, "power_state": "power off", "properties": { "memory_mb": 4096, "cpu_arch": "x86_64", "local_gb": 10, "cpus": 8}, "provision_state": "deploying", "provision_updated_at": "2016-01-27T20:41:03+00:00", "resource_class": None, "target_power_state": None, "target_provision_state": "active", "traits": [ "CUSTOM_TRAIT1", "HW_CPU_X86_VMX"], "updated_at": "2016-01-27T20:41:03+00:00", "uuid": "1be26c0b-03f2-4d2e-ae87-c02d7f33c123", "previous_provision_state": "available", "previous_target_provision_state": None, "event": "deploy" } }, "event_type":"baremetal.node.provision_set.start", "publisher_id":"ironic-conductor.hostname01" } .. [1] https://wiki.openstack.org/wiki/LoggingStandards#Log_level_definitions .. [2] https://www.rabbitmq.com/documentation.html .. 
[3] https://en.wikipedia.org/wiki/Create,_read,_update_and_delete

.. _cleaning:

=============
Node cleaning
=============

Overview
========

Ironic provides two modes for node cleaning: ``automated`` and ``manual``.

``Automated cleaning`` is automatically performed before the first workload has been assigned to a node and when hardware is recycled from one workload to another.

``Manual cleaning`` must be invoked by the operator.

.. _automated_cleaning:

Automated cleaning
==================

When hardware is recycled from one workload to another, ironic performs automated cleaning on the node to ensure it's ready for another workload. This ensures the tenant will get a consistent bare metal node deployed every time.

Ironic implements automated cleaning by collecting a list of cleaning steps to perform on a node from the Power, Deploy, Management, BIOS, and RAID interfaces of the driver assigned to the node. These steps are then ordered by priority and executed on the node when the node is moved to the ``cleaning`` state, if automated cleaning is enabled.

With automated cleaning, nodes move to the ``cleaning`` state when moving from ``active`` -> ``available`` state (when the hardware is recycled from one workload to another). Nodes also traverse cleaning when going from ``manageable`` -> ``available`` state (before the first workload is assigned to the nodes). For a full understanding of all state transitions into cleaning, please see :ref:`states`.

Ironic added support for automated cleaning in the Kilo release.

.. _enabling-cleaning:

Enabling automated cleaning
---------------------------

To enable automated cleaning, ensure that your ironic.conf is set as follows:

.. code-block:: ini

   [conductor]
   automated_clean=true

This will enable the default set of cleaning steps, based on your hardware and the ironic hardware types used for nodes.
This includes, by default, erasing all of the previous tenant's data. You may also need to configure a `Cleaning Network`_.

Cleaning steps
--------------

Cleaning steps used for automated cleaning are ordered from higher to lower priority, where a larger integer is a higher priority. In case of a conflict between priorities across interfaces, the following resolution order is used: Power, Management, Deploy, BIOS, and RAID interfaces.

You can skip a cleaning step by setting the priority for that cleaning step to zero or 'None'. You can reorder the cleaning steps by modifying the integer priorities of the cleaning steps. See `How do I change the priority of a cleaning step?`_ for more information.

.. show-steps::
   :phase: cleaning

.. _manual_cleaning:

Manual cleaning
===============

``Manual cleaning`` is typically used to handle long running, manual, or destructive tasks that an operator wishes to perform either before the first workload has been assigned to a node or between workloads. When initiating a manual clean, the operator specifies the cleaning steps to be performed. Manual cleaning can only be performed when a node is in the ``manageable`` state. Once the manual cleaning is finished, the node will be put in the ``manageable`` state again.

Ironic added support for manual cleaning in the 4.4 (Mitaka series) release.

Setup
-----

In order for manual cleaning to work, you may need to configure a `Cleaning Network`_.

Starting manual cleaning via API
--------------------------------

Manual cleaning can only be performed when a node is in the ``manageable`` state. The REST API request to initiate it is available in API version 1.15 and higher::

    PUT /v1/nodes/<node_ident>/states/provision

(Additional information is available in the Bare Metal API reference.)

This API will allow operators to put a node directly into the ``cleaning`` provision state from the ``manageable`` state via 'target': 'clean'. The PUT will also require the argument 'clean_steps' to be specified.
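A client can sanity-check the ``clean_steps`` argument before submitting the request. The helper below only illustrates the requirements described in this section (each step needs 'interface' and 'step'; 'args' is an optional dictionary); it is not part of ironic:

```python
# Illustrative client-side validation of a 'clean_steps' list before it
# is sent in the PUT request body. Mirrors the documented API rules;
# not part of ironic itself.
def validate_clean_steps(clean_steps):
    if not isinstance(clean_steps, list) or not clean_steps:
        raise ValueError("clean_steps must be a non-empty list")
    for step in clean_steps:
        # 'interface' and 'step' are required for every cleaning step.
        for key in ("interface", "step"):
            if key not in step:
                raise ValueError("cleaning step is missing %r" % key)
        # 'args', when present, must be a dict of keyword arguments.
        if "args" in step and not isinstance(step["args"], dict):
            raise ValueError("'args' must be a dictionary")
    return True

steps = [
    {"interface": "raid", "step": "create_configuration",
     "args": {"create_nonroot_volumes": False}},
    {"interface": "deploy", "step": "erase_devices"},
]
print(validate_clean_steps(steps))  # True
```

Validating locally avoids a round trip that would otherwise end with the node in the ``clean failed`` state because of a malformed step.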
This is an ordered list of cleaning steps. A cleaning step is represented by a dictionary (JSON), in the form::

    {
        "interface": "<interface>",
        "step": "<name of cleaning step>",
        "args": {"<arg1>": "<value1>", ..., "<argn>": <valuen>}
    }

The 'interface' and 'step' keys are required for all steps. If a cleaning step method takes keyword arguments, the 'args' key may be specified. It is a dictionary of keyword variable arguments, with each keyword-argument entry being <name>: <value>.

If any step is missing a required keyword argument, manual cleaning will not be performed and the node will be put in the ``clean failed`` provision state with an appropriate error message.

If, during the cleaning process, a cleaning step determines that it has incorrect keyword arguments, all earlier steps will be performed and then the node will be put in the ``clean failed`` provision state with an appropriate error message.

An example of the request body for this API::

    {
        "target": "clean",
        "clean_steps": [
            {
                "interface": "raid",
                "step": "create_configuration",
                "args": {"create_nonroot_volumes": false}
            },
            {
                "interface": "deploy",
                "step": "erase_devices"
            }
        ]
    }

In the above example, the node's RAID interface would configure hardware RAID without non-root volumes, and then all devices would be erased (in that order).

Starting manual cleaning via "openstack baremetal" CLI
------------------------------------------------------

Manual cleaning is available via the ``openstack baremetal node clean`` command, starting with Bare Metal API version 1.15.

The argument ``--clean-steps`` must be specified. Its value is one of:

- a JSON string
- path to a JSON file whose contents are passed to the API
- '-', to read from stdin. This allows piping in the clean steps. Using '-' to signify stdin is common in Unix utilities.

The following examples assume that the Bare Metal API version was set via the ``OS_BAREMETAL_API_VERSION`` environment variable.
(The alternative is to add ``--os-baremetal-api-version 1.15`` to the command.):: export OS_BAREMETAL_API_VERSION=1.15 Examples of doing this with a JSON string:: openstack baremetal node clean \ --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]' openstack baremetal node clean \ --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]' Or with a file:: openstack baremetal node clean \ --clean-steps my-clean-steps.txt Or with stdin:: cat my-clean-steps.txt | openstack baremetal node clean \ --clean-steps - Cleaning Network ================ If you are using the Neutron DHCP provider (the default) you will also need to ensure you have configured a cleaning network. This network will be used to boot the ramdisk for in-band cleaning. You can use the same network as your tenant network. For steps to set up the cleaning network, please see :ref:`configure-cleaning`. .. _InbandvsOutOfBandCleaning: In-band vs out-of-band ====================== Ironic uses two main methods to perform actions on a node: in-band and out-of-band. Ironic supports using both methods to clean a node. In-band ------- In-band steps are performed by ironic making API calls to a ramdisk running on the node using a deploy interface. Currently, all the deploy interfaces support in-band cleaning. By default, ironic-python-agent ships with a minimal cleaning configuration, only erasing disks. However, you can add your own cleaning steps and/or override default cleaning steps with a custom Hardware Manager. Out-of-band ----------- Out-of-band are actions performed by your management controller, such as IPMI, iLO, or DRAC. Out-of-band steps will be performed by ironic using a power or management interface. Which steps are performed depends on the hardware type and hardware itself. For Out-of-Band cleaning operations supported by iLO hardware types, refer to :ref:`ilo_node_cleaning`. FAQ === How are cleaning steps ordered? 
------------------------------- For automated cleaning, cleaning steps are ordered by integer priority, where a larger integer is a higher priority. In case of a conflict between priorities across hardware interfaces, the following resolution order is used: #. Power interface #. Management interface #. Deploy interface #. BIOS interface #. RAID interface For manual cleaning, the cleaning steps should be specified in the desired order. How do I skip a cleaning step? ------------------------------ For automated cleaning, cleaning steps with a priority of 0 or None are skipped. How do I change the priority of a cleaning step? ------------------------------------------------ For manual cleaning, specify the cleaning steps in the desired order. For automated cleaning, it depends on whether the cleaning steps are out-of-band or in-band. Most out-of-band cleaning steps have an explicit configuration option for priority. Changing the priority of an in-band (ironic-python-agent) cleaning step requires use of a custom HardwareManager. The only exception is ``erase_devices``, which can have its priority set in ironic.conf. For instance, to disable erase_devices, you'd set the following configuration option:: [deploy] erase_devices_priority=0 To enable/disable the in-band disk erase using ``ilo`` hardware type, use the following configuration option:: [ilo] clean_priority_erase_devices=0 The generic hardware manager first tries to perform ATA disk erase by using ``hdparm`` utility. If ATA disk erase is not supported, it performs software based disk erase using ``shred`` utility. By default, the number of iterations performed by ``shred`` for software based disk erase is 1. To configure the number of iterations, use the following configuration option:: [deploy] erase_devices_iterations=1 What cleaning step is running? 
------------------------------ To check what cleaning step the node is performing or attempted to perform and failed, run the following command; it will return the value in the node's ``driver_internal_info`` field:: openstack baremetal node show $node_ident -f value -c driver_internal_info The ``clean_steps`` field will contain a list of all remaining steps with their priorities, and the first one listed is the step currently in progress or that the node failed before going into ``clean failed`` state. Should I disable automated cleaning? ------------------------------------ Automated cleaning is recommended for ironic deployments, however, there are some tradeoffs to having it enabled. For instance, ironic cannot deploy a new instance to a node that is currently cleaning, and cleaning can be a time consuming process. To mitigate this, we suggest using disks with support for cryptographic ATA Security Erase, as typically the erase_devices step in the deploy interface takes the longest time to complete of all cleaning steps. Why can't I power on/off a node while it's cleaning? ---------------------------------------------------- During cleaning, nodes may be performing actions that shouldn't be interrupted, such as BIOS or Firmware updates. As a result, operators are forbidden from changing power state via the ironic API while a node is cleaning. Troubleshooting =============== If cleaning fails on a node, the node will be put into ``clean failed`` state and placed in maintenance mode, to prevent ironic from taking actions on the node. Nodes in ``clean failed`` will not be powered off, as the node might be in a state such that powering it off could damage the node or remove useful information about the nature of the cleaning failure. A ``clean failed`` node can be moved to ``manageable`` state, where it cannot be scheduled by nova and you can safely attempt to fix the node. 
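The recovery path just described follows ironic's provision state machine. As a rough illustration only, a simplified subset of those transitions (this mapping is our sketch, not the authoritative state machine) can be modeled as:

```python
# Simplified subset of ironic's provision state machine, covering only the
# cleaning-related recovery path discussed above (not the full diagram).
TRANSITIONS = {
    ("clean failed", "manage"): "manageable",
    ("manageable", "provide"): "available",   # runs automated cleaning first
    ("manageable", "clean"): "cleaning",
    ("available", "manage"): "manageable",
}

def next_state(current, verb):
    """Return the target provision state for a verb, or raise if invalid."""
    try:
        return TRANSITIONS[(current, verb)]
    except KeyError:
        raise ValueError(f"verb {verb!r} not allowed in state {current!r}")

print(next_state("clean failed", "manage"))  # manageable
```

The real state machine contains many more states and verbs; the point of the sketch is that each verb is only valid from specific states, which is why a ``clean failed`` node must first be moved to ``manageable``.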
To move a node from ``clean failed`` to ``manageable``::

    openstack baremetal node manage $node_ident

You can now take actions on the node, such as replacing a bad disk drive.

Strategies for determining why a cleaning step failed include checking the
ironic conductor logs, viewing logs on the still-running ironic-python-agent
(if an in-band step failed), or performing general hardware troubleshooting on
the node.

When the node is repaired, you can move it back to the ``available`` state, to
allow it to be scheduled by nova::

    # First, move it out of maintenance mode
    openstack baremetal node maintenance unset $node_ident

    # Now, make the node available for scheduling by nova
    openstack baremetal node provide $node_ident

The node will begin automated cleaning from the start, and move to the
``available`` state when complete.

===================================
Power Sync with the Compute Service
===================================

Baremetal Power Sync
====================

Each Baremetal conductor process runs a periodic task which synchronizes the
power state of the nodes between its database and the actual hardware. If the
value of the :oslo.config:option:`conductor.force_power_state_during_sync`
option is set to ``true``, the power state in the database will be forced on
the hardware; if it is set to ``false``, the hardware state will be forced on
the database. If this periodic task is enabled, it runs at an interval defined
by the :oslo.config:option:`conductor.sync_power_state_interval` config option
for those nodes which are not in maintenance.

Compute-Baremetal Power Sync
============================

Each ``nova-compute`` process in the Compute service runs a periodic task
which synchronizes the power state of servers between its database and the
compute driver.
If enabled, it runs at an interval defined by the `sync_power_state_interval` config option on the ``nova-compute`` process. In case of the compute driver being baremetal driver, this sync will happen between the databases of the compute and baremetal services. Since the sync happens on the ``nova-compute`` process, the state in the compute database will be forced on the baremetal database in case of inconsistencies. Hence a node which was put down using the compute service API cannot be brought up through the baremetal service API since the power sync task will regard the compute service's knowledge of the power state as the source of truth. In order to get around this disadvantage of the compute-baremetal power sync, baremetal service does power state change callbacks to the compute service using external events. Power State Change Callbacks to the Compute Service --------------------------------------------------- Whenever the Baremetal service changes the power state of a node, it can issue a notification to the Compute service. The Compute service will consume this notification and update the power state of the instance in its database. By conveying all the power state changes to the compute service, the baremetal service becomes the source of truth thus preventing the compute service from forcing wrong power states on the physical instance during the compute-baremetal power sync. It also adds the possibility of bringing up/down a physical instance through the baremetal service API even if it was put down/up through the compute service API. This change requires the :oslo.config:group:`nova` section and the necessary authentication options like the :oslo.config:option:`nova.auth_url` to be defined in the configuration file of the baremetal service. 
If it is not configured, the baremetal service will not be able to send
notifications to the compute service and will fall back to the behaviour of
the compute service forcing power states on the baremetal service during the
power sync. See the :oslo.config:group:`nova` group for more details on the
available config options.

In case of baremetal standalone deployments where there is no compute service
running, the :oslo.config:option:`nova.send_power_notifications` config option
should be set to ``False`` to disable power state change callbacks to the
compute service.

.. note::
   The baremetal service sends notifications to the compute service only if
   the target power state is ``power on`` or ``power off``. Other error and
   ``None`` states will be ignored. In situations where the power state change
   originally comes from the compute service, the notification will still be
   sent by the baremetal service, and it will be a no-op on the compute
   service side, with a debug log stating that the node is already powering
   on/off.

.. note::
   Although an exclusive lock is used when sending notifications to the
   compute service, there can still be a race condition if the
   compute-baremetal power sync happens just before the power state change
   event is received from the baremetal service, in which case the power state
   from the compute service's database will be forced on the node.

.. _console:

=================================
Configuring Web or Serial Console
=================================

Overview
--------

Two types of console are available in the Bare Metal service: a web console
(`Node web console`_), which is accessible directly from a web browser, and a
serial console (`Node serial console`_).
Node web console
----------------

The web console can be configured in the Bare Metal service in the following
way:

* Install shellinabox on the ironic conductor node. For RHEL/CentOS, the
  shellinabox package is not present in the base repositories, so you must
  enable the EPEL repository; you can find more on the `FedoraProject page`_.

  .. note::
     shellinabox is no longer maintained by the original author. `This `_ is
     a fork of the project on GitHub that aims to continue with maintenance
     of the shellinabox project.

  Installation example:

  Ubuntu::

      sudo apt-get install shellinabox

  RHEL7/CentOS7::

      sudo yum install shellinabox

  Fedora::

      sudo dnf install shellinabox

  You can find more about shellinabox on the `shellinabox page`_.

  You can optionally use an SSL certificate with shellinabox. If you want to
  use an SSL certificate with shellinabox, you should install openssl and
  generate the SSL certificate.

  1. Install openssl, for example:

     Ubuntu::

         sudo apt-get install openssl

     RHEL7/CentOS7::

         sudo yum install openssl

     Fedora::

         sudo dnf install openssl

  2. Generate the SSL certificate. Here is an example; you can find more
     about openssl on the `openssl page`_::

         cd /tmp/ca
         openssl genrsa -des3 -out my.key 1024
         openssl req -new -key my.key -out my.csr
         cp my.key my.key.org
         openssl rsa -in my.key.org -out my.key
         openssl x509 -req -days 3650 -in my.csr -signkey my.key -out my.crt
         cat my.crt my.key > certificate.pem

* Customize the console section in the Bare Metal service configuration file
  (/etc/ironic/ironic.conf). If you want to use an SSL certificate with
  shellinabox, you should specify ``terminal_cert_dir``, for example::

      [console]

      #
      # Options defined in ironic.drivers.modules.console_utils
      #

      # Path to serial console terminal program. Used only by Shell
      # In A Box console. (string value)
      #terminal=shellinaboxd

      # Directory containing the terminal SSL cert (PEM) for serial
      # console access. Used only by Shell In A Box console. (string
      # value)
      terminal_cert_dir=/tmp/ca

      # Directory for holding terminal pid files. If not specified,
      # the temporary directory will be used. (string value)
      #terminal_pid_dir=

      # Time interval (in seconds) for checking the status of
      # console subprocess. (integer value)
      #subprocess_checking_interval=1

      # Time (in seconds) to wait for the console subprocess to
      # start. (integer value)
      #subprocess_timeout=10

* Append console parameters for bare metal PXE boot in the Bare Metal service
  configuration file (/etc/ironic/ironic.conf). See the reference for
  configuration in :ref:`kernel-boot-parameters`.

* Enable the ``ipmitool-shellinabox`` console interface, for example:

  .. code-block:: ini

     [DEFAULT]
     enabled_console_interfaces = ipmitool-shellinabox,no-console

* Configure the node web console.

  If the node uses a hardware type, for example ``ipmi``, set the node's
  console interface to ``ipmitool-shellinabox``::

      openstack --os-baremetal-api-version 1.31 baremetal node set <node> \
          --console-interface ipmitool-shellinabox

  Enable the web console, for example::

      openstack baremetal node set <node> \
          --driver-info <terminal_port>=<customized_port>
      openstack baremetal node console enable <node>

  Check whether the console is enabled, for example::

      openstack baremetal node validate <node>

  Disable the web console, for example::

      openstack baremetal node console disable <node>
      openstack baremetal node unset <node> --driver-info <terminal_port>

  The ``<terminal_port>`` is driver dependent. The actual name of this field
  can be checked in the driver properties, for example::

      openstack baremetal driver property list <driver>

  For the ``ipmi`` hardware type, this option is ``ipmi_terminal_port``. Give
  a customized port number to ``<customized_port>``, for example ``8023``;
  this customized port is used in the web console URL.
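The same console details can also be consumed programmatically from the REST API response of ``GET /v1/nodes/<node>/states/console``. A minimal parsing sketch (the payload values here are made up, and the helper function is ours, not part of ironic):

```python
import json

# Example response body in the shape returned by
# GET /v1/nodes/<node>/states/console (values are illustrative).
response_body = json.loads("""
{
  "console_enabled": true,
  "console_info": {"url": "http://192.0.2.10:8023", "type": "shellinabox"}
}
""")

def console_url(state):
    """Return the console URL, or None when the console is disabled."""
    if not state.get("console_enabled") or not state.get("console_info"):
        return None
    return state["console_info"]["url"]

print(console_url(response_body))  # http://192.0.2.10:8023
print(console_url({"console_enabled": False, "console_info": None}))  # None
```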
Get web console information for a node as follows:: openstack baremetal node console show +-----------------+----------------------------------------------------------------------+ | Property | Value | +-----------------+----------------------------------------------------------------------+ | console_enabled | True | | console_info | {u'url': u'http://:', u'type': u'shellinabox'} | +-----------------+----------------------------------------------------------------------+ You can open web console using above ``url`` through web browser. If ``console_enabled`` is ``false``, ``console_info`` is ``None``, web console is disabled. If you want to launch web console, see the ``Configure node web console`` part. .. _`shellinabox page`: https://code.google.com/archive/p/shellinabox/ .. _`openssl page`: https://www.openssl.org/ .. _`FedoraProject page`: https://fedoraproject.org/wiki/Infrastructure/Mirroring Node serial console ------------------- Serial consoles for nodes are implemented using `socat`_. It is supported by the ``ipmi`` and ``irmc`` hardware types. Serial consoles can be configured in the Bare Metal service as follows: * Install socat on the ironic conductor node. Also, ``socat`` needs to be in the $PATH environment variable that the ironic-conductor service uses. Installation example: Ubuntu:: sudo apt-get install socat RHEL7/CentOS7:: sudo yum install socat Fedora:: sudo dnf install socat * Append console parameters for bare metal PXE boot in the Bare Metal service configuration file. See the reference on how to configure them in :ref:`kernel-boot-parameters`. * Enable the ``ipmitool-socat`` console interface, for example: .. code-block:: ini [DEFAULT] enabled_console_interfaces = ipmitool-socat,no-console * Configure node console. 
If the node uses a hardware type, for example ``ipmi``, set the node's console interface to ``ipmitool-socat``:: openstack --os-baremetal-api-version 1.31 baremetal node set \ --console-interface ipmitool-socat Enable the serial console, for example:: openstack baremetal node set --driver-info ipmi_terminal_port= openstack baremetal node console enable Check whether the serial console is enabled, for example:: openstack baremetal node validate Disable the serial console, for example:: openstack baremetal node console disable openstack baremetal node unset --driver-info Serial console information is available from the Bare Metal service. Get serial console information for a node from the Bare Metal service as follows:: openstack baremetal node console show +-----------------+----------------------------------------------------------------------+ | Property | Value | +-----------------+----------------------------------------------------------------------+ | console_enabled | True | | console_info | {u'url': u'tcp://:', u'type': u'socat'} | +-----------------+----------------------------------------------------------------------+ If ``console_enabled`` is ``false`` or ``console_info`` is ``None`` then the serial console is disabled. If you want to launch serial console, see the ``Configure node console``. Node serial console of the Bare Metal service is compatible with the serial console of the Compute service. Hence, serial consoles to Bare Metal nodes can be seen and interacted with via the Dashboard service. In order to achieve that, you need to follow the documentation for :nova-doc:`Serial Console ` from the Compute service. Configuring HA ~~~~~~~~~~~~~~ When using Bare Metal serial console under High Availability (HA) configuration, you may consider some settings below. * If you use HAProxy, you may need to set the timeout for both client and server sides with appropriate values. Here is an example of the configuration for the timeout parameter. 
::

    frontend nova_serial_console
        bind 192.168.20.30:6083
        timeout client 10m  # This parameter is necessary
        use_backend nova_serial_console if <...>

    backend nova_serial_console
        balance source
        timeout server 10m  # This parameter is necessary
        option tcpka
        option tcplog
        server controller01 192.168.30.11:6083 check inter 2000 rise 2 fall 5
        server controller02 192.168.30.12:6083 check inter 2000 rise 2 fall 5

* The Compute service's caching feature may need to be enabled in order to
  make the Bare Metal serial console work under a HA configuration. Here is
  an example of the caching configuration in ``nova.conf``.

  .. code-block:: ini

     [cache]
     enabled = true
     backend = dogpile.cache.memcached
     memcache_servers = memcache01:11211,memcache02:11211,memcache03:11211

.. _`socat`: http://www.dest-unreach.org/socat

.. _troubleshooting:

======================
Troubleshooting Ironic
======================

Nova returns "No valid host was found" Error
============================================

Sometimes the Nova Conductor log file "nova-conductor.log" or a message
returned from the Nova API contains the following error::

    NoValidHost: No valid host was found. There are not enough hosts available.

"No valid host was found" means that the Nova Scheduler could not find a bare
metal node suitable for booting the new instance. This in turn usually means
some mismatch between the resources that Nova expects to find and the
resources that Ironic advertised to Nova. A few things should be checked in
this case:

#. Make sure that enough nodes are in the ``available`` state, not in
   maintenance mode and not already used by an existing instance. Check with
   the following command::

       openstack baremetal node list --provision-state available --no-maintenance --unassociated

   If this command does not show enough nodes, use the generic ``openstack
   baremetal node list`` to check other nodes.
For example, nodes in ``manageable`` state should be made available:: openstack baremetal node provide The Bare metal service automatically puts a node in maintenance mode if there are issues with accessing its management interface. Check the power credentials (e.g. ``ipmi_address``, ``ipmi_username`` and ``ipmi_password``) and then move the node out of maintenance mode:: openstack baremetal node maintenance unset The ``node validate`` command can be used to verify that all required fields are present. The following command should not return anything:: openstack baremetal node validate | grep -E '(power|management)\W*False' Maintenance mode will be also set on a node if automated cleaning has failed for it previously. #. Make sure that you have Compute services running and enabled:: $ openstack compute service list --service nova-compute +----+--------------+-------------+------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+--------------+-------------+------+---------+-------+----------------------------+ | 7 | nova-compute | example.com | nova | enabled | up | 2017-09-04T13:14:03.000000 | +----+--------------+-------------+------+---------+-------+----------------------------+ By default, a Compute service is disabled after 10 consecutive build failures on it. This is to ensure that new build requests are not routed to a broken Compute service. If it is the case, make sure to fix the source of the failures, then re-enable it:: openstack compute service set --enable nova-compute #. 
Starting with the Pike release, check that all your nodes have the ``resource_class`` field set using the following command:: openstack --os-baremetal-api-version 1.21 baremetal node list --fields uuid name resource_class Then check that the flavor(s) are configured to request these resource classes via their properties:: openstack flavor show -f value -c properties For example, if your node has resource class ``baremetal-large``, it will be matched by a flavor with property ``resources:CUSTOM_BAREMETAL_LARGE`` set to ``1``. See :doc:`/install/configure-nova-flavors` for more details on the correct configuration. #. If you do not use scheduling based on resource classes, then the node's properties must have been set either manually or via inspection. For each node with ``available`` state check that the ``properties`` JSON field has valid values for the keys ``cpus``, ``cpu_arch``, ``memory_mb`` and ``local_gb``. Example of valid properties:: $ openstack baremetal node show --fields properties +------------+------------------------------------------------------------------------------------+ | Property | Value | +------------+------------------------------------------------------------------------------------+ | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'41', u'cpus': u'4'} | +------------+------------------------------------------------------------------------------------+ .. warning:: If you're using exact match filters in the Nova Scheduler, make sure the flavor and the node properties match exactly. #. The Nova flavor that you are using does not match any properties of the available Ironic nodes. Use :: openstack flavor show to compare. The extra specs in your flavor starting with ``capability:`` should match ones in ``node.properties['capabilities']``. .. note:: The format of capabilities is different in Nova and Ironic. E.g. 
in Nova flavor:: $ openstack flavor show -c properties +------------+----------------------------------+ | Field | Value | +------------+----------------------------------+ | properties | capabilities:boot_option='local' | +------------+----------------------------------+ But in Ironic node:: $ openstack baremetal node show --fields properties +------------+-----------------------------------------+ | Property | Value | +------------+-----------------------------------------+ | properties | {u'capabilities': u'boot_option:local'} | +------------+-----------------------------------------+ #. After making changes to nodes in Ironic, it takes time for those changes to propagate from Ironic to Nova. Check that :: openstack hypervisor stats show correctly shows total amount of resources in your system. You can also check ``openstack hypervisor show `` to see the status of individual Ironic nodes as reported to Nova. .. TODO(dtantsur): explain inspecting the placement API #. Figure out which Nova Scheduler filter ruled out your nodes. Check the ``nova-scheduler`` logs for lines containing something like:: Filter ComputeCapabilitiesFilter returned 0 hosts The name of the filter that removed the last hosts may give some hints on what exactly was not matched. See :nova-doc:`Nova filters documentation ` for more details. #. If none of the above helped, check Ironic conductor log carefully to see if there are any conductor-related errors which are the root cause for "No valid host was found". If there are any "Error in deploy of node : [Errno 28] ..." error messages in Ironic conductor log, it means the conductor run into a special error during deployment. So you can check the log carefully to fix or work around and then try again. Patching the Deploy Ramdisk =========================== When debugging a problem with deployment and/or inspection you may want to quickly apply a change to the ramdisk to see if it helps. 
Of course you can inject your code and/or SSH keys during the ramdisk build
(it depends on how exactly you've built your ramdisk). But it's also possible
to quickly modify an already built ramdisk.

Create an empty directory and unpack the ramdisk content there::

    mkdir unpack
    cd unpack
    gzip -dc /path/to/the/ramdisk | cpio -id

The last command will result in the whole Linux file system tree being
unpacked in the current directory. Now you can modify any files you want. The
actual location of the files will depend on the way you've built the ramdisk.

.. note::
   On a systemd-based system you can use the ``systemd-nspawn`` tool (from
   the ``systemd-container`` package) to create a lightweight container from
   the unpacked filesystem tree::

       sudo systemd-nspawn --directory /path/to/unpacked/ramdisk/ /bin/bash

   This will allow you to run commands within the filesystem, e.g. use a
   package manager. If the ramdisk is also systemd-based, and you have login
   credentials set up, you can even boot a real ramdisk environment with::

       sudo systemd-nspawn --directory /path/to/unpacked/ramdisk/ --boot

After you've done the modifications, pack the whole content of the current
directory back::

    find . | cpio -H newc -o | gzip -c > /path/to/the/new/ramdisk

.. note::
   You don't need to modify the kernel (e.g. ``tinyipa-master.vmlinuz``),
   only the ramdisk part.

API Errors
==========

The `debug_tracebacks_in_api` config option may be set to return tracebacks
in the API response for all 4xx and 5xx errors.

.. _retrieve_deploy_ramdisk_logs:

Retrieving logs from the deploy ramdisk
=======================================

When troubleshooting deployments (especially in case of a deploy failure)
it's important to have access to the deploy ramdisk logs to be able to
identify the source of the problem.

By default, Ironic will retrieve the logs from the deploy ramdisk when the
deployment fails and save them on the local filesystem at
``/var/log/ironic/deploy``.
To change this behavior, operators can make the following changes to ``/etc/ironic/ironic.conf`` under the ``[agent]`` group: * ``deploy_logs_collect``: Whether Ironic should collect the deployment logs on deployment. Valid values for this option are: * ``on_failure`` (**default**): Retrieve the deployment logs upon a deployment failure. * ``always``: Always retrieve the deployment logs, even if the deployment succeed. * ``never``: Disable retrieving the deployment logs. * ``deploy_logs_storage_backend``: The name of the storage backend where the logs will be stored. Valid values for this option are: * ``local`` (**default**): Store the logs in the local filesystem. * ``swift``: Store the logs in Swift. * ``deploy_logs_local_path``: The path to the directory where the logs should be stored, used when the ``deploy_logs_storage_backend`` is configured to ``local``. By default logs will be stored at **/var/log/ironic/deploy**. * ``deploy_logs_swift_container``: The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to "swift". By default **ironic_deploy_logs_container**. * ``deploy_logs_swift_days_to_expire``: Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to "swift". By default **30** days. When the logs are collected, Ironic will store a *tar.gz* file containing all the logs according to the ``deploy_logs_storage_backend`` configuration option. All log objects will be named with the following pattern:: [_]_.tar.gz .. note:: The *instance_uuid* field is not required for deploying a node when Ironic is configured to be used in standalone mode. If present it will be appended to the name. 
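Because the archive names follow the fixed pattern above, they are easy to take apart in a script. A small sketch (the helper function and regular expression are ours, not part of ironic):

```python
import re

# Matches <node_uuid>[_<instance_uuid>]_<timestamp>.tar.gz, the naming
# pattern used for collected deploy log archives described above.
UUID = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
LOG_NAME = re.compile(
    rf"^(?P<node>{UUID})(?:_(?P<instance>{UUID}))?_(?P<timestamp>.+)\.tar\.gz$")

def parse_log_name(name):
    """Split a deploy-log archive name into node, instance and timestamp."""
    m = LOG_NAME.match(name)
    if not m:
        raise ValueError(f"not a deploy log archive: {name}")
    return m.groupdict()

parts = parse_log_name(
    "5e9258c4-cfda-40b6-86e2-e192f523d668_"
    "88595d8a-6725-4471-8cd5-c0f3106b6898_2016-08-08-13:52:12.tar.gz")
print(parts["node"])       # 5e9258c4-cfda-40b6-86e2-e192f523d668
print(parts["timestamp"])  # 2016-08-08-13:52:12
```

For standalone deployments, where the instance UUID is absent, the optional middle group simply comes back as ``None``.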
Accessing the log data
----------------------

When storing in the local filesystem
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When storing the logs in the local filesystem, the log files can be found at
the path configured in the ``deploy_logs_local_path`` configuration option.
For example, to find the logs from the node
``5e9258c4-cfda-40b6-86e2-e192f523d668``:

.. code-block:: bash

   $ ls /var/log/ironic/deploy | grep 5e9258c4-cfda-40b6-86e2-e192f523d668
   5e9258c4-cfda-40b6-86e2-e192f523d668_88595d8a-6725-4471-8cd5-c0f3106b6898_2016-08-08-13:52:12.tar.gz
   5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz

.. note::
   When saving the logs to the filesystem, operators may want to enable some
   form of rotation for the logs to avoid disk space problems.

When storing in Swift
~~~~~~~~~~~~~~~~~~~~~

When using Swift, operators can associate the objects in the container with
the nodes in Ironic and search for the logs for the node
``5e9258c4-cfda-40b6-86e2-e192f523d668`` using the **prefix** parameter.
For example:

.. code-block:: bash

   $ swift list ironic_deploy_logs_container -p 5e9258c4-cfda-40b6-86e2-e192f523d668
   5e9258c4-cfda-40b6-86e2-e192f523d668_88595d8a-6725-4471-8cd5-c0f3106b6898_2016-08-08-13:52:12.tar.gz
   5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz

To download a specific log from Swift, do:

.. code-block:: bash

   $ swift download ironic_deploy_logs_container "5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz"
   5e9258c4-cfda-40b6-86e2-e192f523d668_db87f2c5-7a9a-48c2-9a76-604287257c1b_2016-08-08-14:07:25.tar.gz [auth 0.341s, headers 0.391s, total 0.391s, 0.531 MB/s]

The contents of the log file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The log is just a ``.tar.gz`` file that can be extracted as:

.. code-block:: bash

   $ tar xvf <file path>

The contents of the file may differ slightly depending on the distribution
that the deploy ramdisk is using:

* For distributions using ``systemd`` there will be a file called
  **journal** which contains all the system logs collected via the
  ``journalctl`` command.

* For other distributions, the ramdisk will collect all the contents of the
  ``/var/log`` directory.

For all distributions, the log file will also contain the output of the
following commands (if present): ``ps``, ``df``, ``ip addr`` and
``iptables``.

Here's one example when extracting the content of a log file for a
distribution that uses ``systemd``:

.. code-block:: bash

   $ tar xvf 5e9258c4-cfda-40b6-86e2-e192f523d668_88595d8a-6725-4471-8cd5-c0f3106b6898_2016-08-08-13:52:12.tar.gz
   df
   ps
   journal
   ip_addr
   iptables

.. _troubleshooting-stp:

DHCP during PXE or iPXE is inconsistent or unreliable
=====================================================

This can be caused by the spanning tree protocol delay on some switches. The
delay prevents the switch port from moving to forwarding mode during the
node's attempts to PXE, so the packets never make it to the DHCP server.

To resolve this issue you should set the switch port that connects to your
baremetal nodes as an edge or PortFast type port. Configured in this way,
the switch port will move to forwarding mode as soon as the link is
established.

An example of how to do that for a Cisco Nexus switch is:

.. code-block:: bash

   $ config terminal
   $ (config) interface eth1/11
   $ (config-if) spanning-tree port type edge

IPMI errors
===========

When working with IPMI, several settings need to be enabled depending on
vendors.

Enable IPMI over LAN
--------------------

Machines may not have IPMI access over LAN enabled by default. This could
cause the IPMI port to be unreachable through ipmitool, as shown:

.. code-block:: bash

   $ ipmitool -I lan -H ipmi_host -U ipmi_user -P ipmi_pass chassis power status
   Error: Unable to establish LAN session

To fix this, enable the `IPMI over LAN` setting using your BMC tool or web
app.

Troubleshooting lanplus interface
---------------------------------

When working with lanplus interfaces, you may encounter the following error:

.. code-block:: bash

   $ ipmitool -I lanplus -H ipmi_host -U ipmi_user -P ipmi_pass power status
   Error in open session response message : insufficient resources for session
   Error: Unable to establish IPMI v2 / RMCP+ session

To fix that issue, enable the `RMCP+ Cipher Suite3 Configuration` setting
using your BMC tool or web app.

.. _conductor-groups:

================
Conductor Groups
================

Overview
========

Large scale operators tend to have needs that involve creating well defined
and delineated resources. In some cases, these systems may reside close by
or in far away locations. The reasoning may be simple or complex, and yet is
only known to the deployer and operator of the infrastructure.

A common case is the need for delineated high availability domains where it
would be much more efficient to manage a datacenter in Antarctica with a
conductor in Antarctica, as opposed to a conductor in New York City.

How it works
============

Starting in ironic 11.1, each node has a ``conductor_group`` field which
influences how the ironic conductor calculates (and thus allocates)
baremetal nodes under ironic's management. This calculation is performed
independently by each operating conductor, and as such, if a conductor has a
``[conductor]conductor_group`` configuration option defined in its
``ironic.conf`` configuration file, the conductor will then be limited to
only managing nodes with a matching ``conductor_group`` string.

.. note::
   Any conductor without a ``[conductor]conductor_group`` setting will only
   manage baremetal nodes without a ``conductor_group`` value set upon node
   creation. If no such conductor is present when conductor groups are
   configured, node creation will fail unless a ``conductor_group`` is
   specified upon node creation.

.. warning::
   Nodes without a ``conductor_group`` setting can only be managed when a
   conductor exists that does not have a ``[conductor]conductor_group``
   defined. If all conductors have been migrated to use a conductor group,
   such nodes are effectively "orphaned".

How to use
==========

A conductor group value may be any case insensitive string up to 255
characters long which matches the ``^[a-zA-Z0-9_\-\.]*$`` regular
expression.

#. Set the ``[conductor]conductor_group`` option in ironic.conf on one or
   more, but not all, conductors::

     [conductor]
     conductor_group = OperatorDefinedString

#. Restart the ironic-conductor service.

#. Set the conductor group on one or more nodes::

     openstack baremetal node set \
         --conductor-group "OperatorDefinedString" <node>

#. As desired and as needed, the remaining conductors can be updated with
   the first two steps. Please be mindful of the constraints covered earlier
   in the document related to the ability to manage nodes.

.. _rescue:

===========
Rescue Mode
===========

Overview
========

The Bare Metal Service supports putting nodes in rescue mode using hardware
types that support rescue interfaces. The hardware types utilizing
ironic-python-agent with ``PXE``/``Virtual Media`` based boot interfaces can
support the rescue operation when configured appropriately.

.. note::
   The rescue operation is currently supported only when tenant networks use
   DHCP to obtain IP addresses.
Rescue operation can be used to boot nodes into a rescue ramdisk, so that
the ``rescue`` user can access the node in case access to the OS is not
otherwise possible. For example, if there is a need to perform a manual
password reset or data recovery in the event of some failure, the rescue
operation can be used.

Configuring The Bare Metal Service
==================================

Configure the Bare Metal Service appropriately so that the service has the
information needed to boot the ramdisk before a user tries to initiate the
rescue operation. This will differ somewhat between different deploy
environments, but an example of how to do this is outlined below:

#. Create and configure a ramdisk that supports the rescue operation. Please
   see :doc:`/install/deploy-ramdisk` for detailed instructions to build a
   ramdisk.

#. Configure a network to use for booting nodes into the rescue ramdisk in
   neutron, and note the UUID or name of this network. This is required if
   you're using the neutron DHCP provider and have the Bare Metal Service
   managing ramdisk booting (the default). This can be the same network as
   your cleaning or tenant network (for a flat network). For an example of
   how to configure new networks with the Bare Metal Service, see the
   :doc:`/install/configure-networking` documentation.

#. Add the unique name or UUID of your rescue network to ``ironic.conf``:

   .. code-block:: ini

      [neutron]
      rescuing_network=<unique name or uuid of the rescue network>

   .. note::
      This can be set per node via ``driver_info['rescuing_network']``.

#. Restart the ironic conductor service.

#. Specify a rescue kernel and ramdisk (for a pxe based boot interface) or a
   rescue ISO (for a virtual-media based boot interface) compatible with the
   node's driver. Example for a pxe based boot interface:

   .. code-block:: console

      openstack baremetal node set $NODE_UUID \
          --driver-info rescue_ramdisk=$RESCUE_INITRD_UUID \
          --driver-info rescue_kernel=$RESCUE_VMLINUZ_UUID

   See :doc:`/install/configure-glance-images` for details.
If you are not using the Image service, it is possible to provide images to
the Bare Metal service via hrefs.

After this, the Bare Metal Service should be ready for the ``rescue``
operation. Test it out by attempting to rescue an active node and connecting
to the instance using ssh, as given below:

.. code-block:: console

   openstack baremetal node rescue $NODE_UUID \
       --rescue-password <password> --wait

   ssh rescue@$INSTANCE_IP_ADDRESS

To move a node back to active state after using rescue mode you can use
``unrescue``. Please unmount any filesystems that were manually mounted
before proceeding with unrescue. The node unrescue can be done as given
below:

.. code-block:: console

   openstack baremetal node unrescue $NODE_UUID

``rescue`` and ``unrescue`` operations can also be triggered via the Compute
Service using the following commands:

.. code-block:: console

   openstack server rescue --password <password> <server>

   openstack server unrescue <server>

.. _bios:

==================
BIOS Configuration
==================

Overview
========

The Bare Metal service supports BIOS configuration for bare metal nodes. It
allows administrators to retrieve and apply the desired BIOS settings via
CLI or REST API. The desired BIOS settings are applied during manual
cleaning.

Prerequisites
=============

Bare metal servers must be configured by the administrator to be managed via
an ironic hardware type that supports BIOS configuration.

Enabling hardware types
-----------------------

Enable a specific hardware type that supports BIOS configuration. Refer to
:doc:`/install/enabling-drivers` for how to enable a hardware type.

Enabling hardware interface
---------------------------

To enable the bios interface:

.. code-block:: ini

   [DEFAULT]
   enabled_bios_interfaces = no-bios

Append the actual bios interface names supported by the enabled hardware
types to ``enabled_bios_interfaces`` as comma separated values in
``ironic.conf``.
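For example, to enable a vendor bios interface alongside ``no-bios``, the
option could look like the following sketch. This assumes the ``ilo``
hardware type (and therefore its ``ilo`` bios interface) is enabled in your
deployment; substitute the interface name matching your hardware types:

```ini
[DEFAULT]
# "ilo" here is an assumption; use the bios interface of your hardware type
enabled_bios_interfaces = no-bios,ilo
```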
All available in-tree bios interfaces are listed in the setup.cfg file in
the source code tree, for example:

.. code-block:: ini

   ironic.hardware.interfaces.bios =
       fake = ironic.drivers.modules.fake:FakeBIOS
       no-bios = ironic.drivers.modules.noop:NoBIOS

Retrieve BIOS settings
======================

To retrieve the cached BIOS configuration from a specified node::

   $ openstack baremetal node bios setting list <node>

BIOS settings are cached on each node cleaning operation or when settings
have been applied successfully via BIOS cleaning steps. The above command
returns a table of the last cached BIOS settings from the specified node.
If ``-f json`` is appended to the command, it returns the BIOS settings as
follows::

   [
     {
       "setting name": {
         "name": "setting name",
         "value": "value"
       }
     },
     {
       "setting name": {
         "name": "setting name",
         "value": "value"
       }
     },
     ...
   ]

To get a specified BIOS setting for a node::

   $ openstack baremetal node bios setting show <node> <setting-name>

If ``-f json`` is appended to the command, it returns the BIOS setting as
follows::

   {
     "setting name": {
       "name": "setting name",
       "value": "value"
     }
   }

Configure BIOS settings
=======================

Two :ref:`manual_cleaning` steps are available for managing nodes' BIOS
settings:

Factory reset
-------------

This cleaning step resets all BIOS settings to factory default for a given
node::

   {
     "target": "clean",
     "clean_steps": [
       {
         "interface": "bios",
         "step": "factory_reset"
       }
     ]
   }

The ``factory_reset`` cleaning step does not require any arguments, as it
resets all BIOS settings to factory defaults.

Apply BIOS configuration
------------------------

This cleaning step applies a set of BIOS settings for a node::

   {
     "target": "clean",
     "clean_steps": [
       {
         "interface": "bios",
         "step": "apply_configuration",
         "args": {
           "settings": [
             {
               "name": "name",
               "value": "value"
             },
             {
               "name": "name",
               "value": "value"
             }
           ]
         }
       }
     ]
   }

The representation of the ``apply_configuration`` cleaning step follows the
same format as :ref:`manual_cleaning`.
The desired BIOS settings can be provided via the ``settings`` argument,
which contains a list of BIOS options to be applied; each BIOS option is a
dictionary with ``name`` and ``value`` keys.

To check whether the desired BIOS configuration is set properly, use the
command mentioned in the `Retrieve BIOS settings`_ section.

.. note::
   When applying BIOS settings to a node, the vendor-specific driver may
   compare the given BIOS settings with the current BIOS settings on the
   node and only apply them when there is a difference.

Upgrading to Hardware Types
===========================

Starting with the Rocky release, the Bare Metal service does not support
*classic drivers* any more. If you still use *classic drivers*, please
upgrade to *hardware types* immediately. Please see
:doc:`/install/enabling-drivers` for details on *hardware types* and
*hardware interfaces*.

Planning the upgrade
--------------------

It is necessary to figure out which hardware types and hardware interfaces
correspond to which classic drivers used in your deployment.
The following table lists the classic drivers with their corresponding
hardware types and the boot, deploy, inspect, management, and power hardware
interfaces:

===================== ============= ================== ====== ========== ========== ========
Classic Driver        Hardware Type Boot               Deploy Inspect    Management Power
===================== ============= ================== ====== ========== ========== ========
agent_ilo             ilo           ilo-virtual-media  direct ilo        ilo        ilo
agent_ipmitool        ipmi          pxe                direct inspector  ipmitool   ipmitool
agent_ipmitool_socat  ipmi          pxe                direct inspector  ipmitool   ipmitool
agent_irmc            irmc          irmc-virtual-media direct irmc       irmc       irmc
iscsi_ilo             ilo           ilo-virtual-media  iscsi  ilo        ilo        ilo
iscsi_irmc            irmc          irmc-virtual-media iscsi  irmc       irmc       irmc
pxe_drac              idrac         pxe                iscsi  idrac      idrac      idrac
pxe_drac_inspector    idrac         pxe                iscsi  inspector  idrac      idrac
pxe_ilo               ilo           ilo-pxe            iscsi  ilo        ilo        ilo
pxe_ipmitool          ipmi          pxe                iscsi  inspector  ipmitool   ipmitool
pxe_ipmitool_socat    ipmi          pxe                iscsi  inspector  ipmitool   ipmitool
pxe_irmc              irmc          irmc-pxe           iscsi  irmc       irmc       irmc
pxe_snmp              snmp          pxe                iscsi  no-inspect fake       snmp
===================== ============= ================== ====== ========== ========== ========

.. note::
   The ``inspector`` *inspect* interface was only used if explicitly enabled
   in the configuration. Otherwise, ``no-inspect`` was used.

.. note::
   ``pxe_ipmitool_socat`` and ``agent_ipmitool_socat`` use the
   ``ipmitool-socat`` *console* interface (the default for the ``ipmi``
   hardware type), while ``pxe_ipmitool`` and ``agent_ipmitool`` use
   ``ipmitool-shellinabox``. See Console_ for details.

For out-of-tree drivers you may need to reach out to their maintainers or
figure out the appropriate interfaces by researching the source code.

Configuration
-------------

You will need to enable hardware types and interfaces that correspond to
your currently enabled classic drivers.
For example, if you have the following configuration in your
``ironic.conf``:

.. code-block:: ini

   [DEFAULT]
   enabled_drivers = pxe_ipmitool,agent_ipmitool

You will have to add this configuration as well:

.. code-block:: ini

   [DEFAULT]
   enabled_hardware_types = ipmi
   enabled_boot_interfaces = pxe
   enabled_deploy_interfaces = iscsi,direct
   enabled_management_interfaces = ipmitool
   enabled_power_interfaces = ipmitool

.. note::
   For every interface type there is an option
   ``default_<type>_interface``, where ``<type>`` is the interface type
   name. For example, one can make all nodes use the ``direct`` deploy
   method by default by setting:

   .. code-block:: ini

      [DEFAULT]
      default_deploy_interface = direct

Migrating nodes
---------------

After the required items are enabled in the configuration, each node's
``driver`` field has to be updated to a new value. You may need to also set
new values for some or all interfaces:

.. code-block:: console

   export OS_BAREMETAL_API_VERSION=1.31

   for uuid in $(openstack baremetal node list --driver pxe_ipmitool -f value -c UUID); do
       openstack baremetal node set $uuid --driver ipmi --deploy-interface iscsi
   done

   for uuid in $(openstack baremetal node list --driver agent_ipmitool -f value -c UUID); do
       openstack baremetal node set $uuid --driver ipmi --deploy-interface direct
   done

See :doc:`/install/enrollment` for more details on setting hardware types
and interfaces.

.. warning::
   It is not recommended to change the interfaces for ``active`` nodes. If
   absolutely needed, the nodes have to be put in the maintenance mode
   first:

   .. code-block:: console

      openstack baremetal node maintenance set $UUID \
          --reason "Changing driver and/or hardware interfaces"
      # do the update, validate its correctness
      openstack baremetal node maintenance unset $UUID

Other interfaces
----------------

Care has to be taken to migrate from classic drivers using non-default
interfaces. This chapter covers a few of the most commonly used.
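When scripting a migration, the classic-driver-to-hardware-type mapping from
the table earlier in this document can be captured in a small shell helper.
This is only a sketch covering the in-tree drivers listed in that table;
out-of-tree drivers need to be added by hand:

```shell
# Map a classic driver name to its hardware type, per the table above.
classic_to_hw_type() {
    case "$1" in
        agent_ilo|iscsi_ilo|pxe_ilo) echo ilo ;;
        agent_ipmitool|agent_ipmitool_socat|pxe_ipmitool|pxe_ipmitool_socat) echo ipmi ;;
        agent_irmc|iscsi_irmc|pxe_irmc) echo irmc ;;
        pxe_drac|pxe_drac_inspector) echo idrac ;;
        pxe_snmp) echo snmp ;;
        *) return 1 ;;  # unknown (for example, out-of-tree) driver
    esac
}
```

For example, ``classic_to_hw_type pxe_ipmitool`` prints ``ipmi``, which can
then be passed to ``openstack baremetal node set --driver``.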
Ironic Inspector
~~~~~~~~~~~~~~~~

Some classic drivers, notably ``pxe_ipmitool``, ``agent_ipmitool`` and
``pxe_drac_inspector``, use ironic-inspector_ for their *inspect*
interface.

The same functionality is available for all hardware types, but the
appropriate ``inspect`` interface has to be enabled in the Bare Metal
service configuration file, for example:

.. code-block:: ini

   [DEFAULT]
   enabled_inspect_interfaces = inspector,no-inspect

See :doc:`/install/enabling-drivers` for more details.

.. note::
   The configuration option ``[inspector]enabled`` does not affect hardware
   types.

Then you can tell your nodes to use this interface, for example:

.. code-block:: console

   export OS_BAREMETAL_API_VERSION=1.31

   for uuid in $(openstack baremetal node list --driver ipmi -f value -c UUID); do
       openstack baremetal node set $uuid --inspect-interface inspector
   done

.. note::
   A node configured with the IPMI hardware type will use the inspector
   inspection implementation automatically if it is enabled. This is not the
   case for most of the vendor drivers.

.. _ironic-inspector: https://docs.openstack.org/ironic-inspector/

Console
~~~~~~~

Several classic drivers, notably ``pxe_ipmitool_socat`` and
``agent_ipmitool_socat``, use the socat-based serial console
implementation.

For the ``ipmi`` hardware type it is used by default, if enabled in the
configuration file:

.. code-block:: ini

   [DEFAULT]
   enabled_console_interfaces = ipmitool-socat,no-console

If you want to use the ``shellinabox`` implementation instead, it has to be
enabled as well:

.. code-block:: ini

   [DEFAULT]
   enabled_console_interfaces = ipmitool-shellinabox,no-console

Then you need to update some or all nodes to use it explicitly. For example,
to update all nodes use:

.. code-block:: console

   export OS_BAREMETAL_API_VERSION=1.31

   for uuid in $(openstack baremetal node list --driver ipmi -f value -c UUID); do
       openstack baremetal node set $uuid --console-interface ipmitool-shellinabox
   done

RAID
~~~~

Many classic drivers, including ``pxe_ipmitool`` and ``agent_ipmitool``,
use the IPA-based in-band RAID implementation by default.

For the hardware types it is not used by default. To use it, you need to
enable it in the configuration first:

.. code-block:: ini

   [DEFAULT]
   enabled_raid_interfaces = agent,no-raid

Then you can update those nodes that support in-band RAID to use the
``agent`` RAID interface. For example, to update all nodes use:

.. code-block:: console

   export OS_BAREMETAL_API_VERSION=1.31

   for uuid in $(openstack baremetal node list --driver ipmi -f value -c UUID); do
       openstack baremetal node set $uuid --raid-interface agent
   done

.. note::
   The ability of a node to use the ``agent`` RAID interface depends on the
   ramdisk (more specifically, a :ironic-python-agent-doc:`hardware manager `
   used in it), not on the driver.

Network and storage
~~~~~~~~~~~~~~~~~~~

The network and storage interfaces have always been dynamic, and thus do not
require any special treatment during upgrade.

Vendor
~~~~~~

Classic drivers are allowed to use the ``VendorMixin`` functionality to
combine and expose several node or driver vendor passthru methods from
different vendor interface implementations in one driver.

**This is no longer possible with hardware types.**

With hardware types, a vendor interface can only have a single active
implementation from the list of vendor interfaces supported by a given
hardware type.

Ironic no longer has in-tree drivers (both classic and hardware types) that
rely on this ``VendorMixin`` functionality. However, if you are using an
out-of-tree classic driver that depends on it, you'll need to do the
following in order to use vendor passthru methods from different vendor
passthru implementations:

#. While creating a new hardware type to replace your classic driver,
   specify all vendor interface implementations your classic driver was
   using to build its ``VendorMixin`` as supported vendor interfaces
   (property ``supported_vendor_interfaces`` of the Python class that
   defines your hardware type).

#. Ensure all required vendor interfaces are enabled in the ironic
   configuration file under the ``[DEFAULT]enabled_vendor_interfaces``
   option. You should also consider setting the
   ``[DEFAULT]default_vendor_interface`` option to specify the vendor
   interface for nodes that do not have one set explicitly.

#. Before invoking a specific vendor passthru method, make sure that the
   node's vendor interface is set to the interface with the desired vendor
   passthru method. For example, if you want to invoke the vendor passthru
   method ``vendor_method_foo()`` from the ``vendor_foo`` vendor interface:

   .. code-block:: shell

      # set the vendor interface to 'vendor_foo'
      openstack --os-baremetal-api-version 1.31 baremetal node set <node> --vendor-interface vendor_foo

      # invoke the vendor passthru method
      openstack baremetal node passthru call <node> vendor_method_foo

.. _security:

=================
Security Overview
=================

While the Bare Metal service is intended to be a secure application, it is
important to understand what it does and does not cover today.

Deployers must properly evaluate their use case and take the appropriate
actions to secure their environment(s). This document is intended to provide
an overview of what risks an operator of the Bare Metal service should be
aware of. It is not intended as a How-To guide for securing a data center or
an OpenStack deployment.

.. TODO: add "Security Considerations for Network Boot" section

.. TODO: add "Credential Storage and Management" section

.. TODO: add "Multi-tenancy Considerations" section

REST API: user roles and policy settings
========================================

Beginning with the Newton (6.1.0) release, the Bare Metal service allows
operators significant control over API access:

* Access may be restricted to each method (GET, PUT, etc) for each REST
  resource. Defaults are provided with the release and defined in code.
* Access may be divided between an "administrative" role with full access
  and an "observer" role with read-only access. By default, these roles are
  assigned the names ``baremetal_admin`` and ``baremetal_observer``,
  respectively.
* By default, passwords and instance secrets are hidden in ``driver_info``
  and ``instance_info``, respectively. For debugging or diagnostics, this
  behavior can be overridden by changing the policy file. To leave passwords
  in ``driver_info`` unmasked for users with administrative privileges,
  apply the following change to the policy configuration file::

    "show_password": "role:is_admin"

  Then restart the Bare Metal API service for the change to take effect.
  Please check :doc:`/configuration/policy` for more details.

Prior to the Newton (6.1.0) release, the Bare Metal service only supported
two policy options:

* API access may be secured by a simple policy rule: users with
  administrative privileges may access all API resources, whereas users
  without administrative privileges may only access public API resources.
* Passwords contained in the ``driver_info`` field may be hidden from all
  API responses with the ``show_password`` policy setting. This defaults to
  always hide passwords, regardless of the user's role. You can override it
  with policy configuration as described above.

Multi-tenancy
=============

There are two aspects of multitenancy to consider when evaluating a
deployment of the Bare Metal Service: interactions between tenants on the
network, and actions one tenant can take on a machine that will affect the
next tenant.
Network Interactions
--------------------

Interactions between tenants' workloads running simultaneously on separate
servers include, but are not limited to: IP spoofing, packet sniffing, and
network man-in-the-middle attacks.

By default, the Bare Metal service provisions all nodes on a "flat" network,
and does not take any precautions to avoid or prevent interaction between
tenants. This can be addressed by integration with the OpenStack Identity,
Compute, and Networking services, so as to provide tenant-network isolation.
Additional documentation on `network multi-tenancy `_ is available.

Lingering Effects
-----------------

Interactions between tenants placed sequentially on the same server include,
but are not limited to: changes in BIOS settings, modifications to firmware,
or files left on disk or peripheral storage devices (if these devices are
not erased between uses).

By default, the Bare Metal service will erase (clean) the local disk drives
during the "cleaning" phase, after deleting an instance. It *does not* reset
BIOS or reflash firmware or peripheral devices. This can be addressed
through customizing the utility ramdisk used during the "cleaning" phase.
See details in the `Firmware security`_ section.

Firmware security
=================

When the Bare Metal service deploys an operating system image to a server,
that image is run natively on the server without virtualization. Any user
with administrative access to the deployed instance has administrative
access to the underlying hardware.

Most servers' default settings do not prevent a privileged local user from
gaining direct access to hardware devices. Such a user could modify device
or firmware settings, and potentially flash new firmware to the device,
before deleting their instance and allowing the server to be allocated to
another user.
If the ``[conductor]/automated_clean`` configuration option is enabled (and
the ``[deploy]/erase_devices_priority`` configuration option is not zero),
the Bare Metal service will securely erase all local disk devices within a
machine during instance deletion. However, the service does not ship with
any code that will validate the integrity of, or make any modifications to,
system or device firmware or firmware settings.

Operators are encouraged to write their own hardware manager plugins for the
``ironic-python-agent`` ramdisk. This should include custom ``clean steps``
that would be run during the :ref:`cleaning` process, as part of node
de-provisioning. The ``clean steps`` would perform the specific actions
necessary within that environment to ensure the integrity of each server's
firmware.

Ideally, an operator would work with their hardware vendor to ensure that
proper firmware security measures are put in place ahead of time. This could
include:

- installing signed firmware for BIOS and peripheral devices
- using a TPM (Trusted Platform Module) to validate signatures at boot time
- booting machines in :ref:`iLO UEFI Secure Boot Support`, rather than BIOS
  mode, to validate kernel signatures
- disabling local (in-band) access from the host OS to the management
  controller (BMC)
- disabling modifications to boot settings from the host OS

Additional references:

- :ref:`cleaning`
- :ref:`trusted-boot`

Other considerations
====================

Internal networks
-----------------

Access to networks which the Bare Metal service uses internally should be
prohibited from outside. These networks are the ones used for management
(with the nodes' BMC controllers), provisioning, cleaning (if used) and
rescuing (if used).

This can be done with physical or logical network isolation, traffic
filtering, etc.

Management interface technologies
---------------------------------

Some nodes support more than one management interface technology (vendor and
IPMI for example).
If you use only one modern technology for out-of-band node access, it is
recommended that you disable IPMI, since the IPMI protocol is not secure.
If IPMI is enabled, in most cases a local OS administrator is able to work
in-band with IPMI settings without specifying any credentials, as this is a
DCMI specification requirement.

Tenant network isolation
------------------------

If you use tenant network isolation, services (TFTP or HTTP) that handle the
nodes' boot files should serve requests only from the internal networks that
are used for the nodes being deployed and cleaned.

The TFTP protocol does not support per-user access control at all. For HTTP,
there is no generic and safe way to transfer credentials to the node.

Also, tenant network isolation is not intended to work with network-booting
a node by default, once the node has been provisioned.

API endpoints for RAM disk use
------------------------------

There are `two (unauthorized) endpoints `_ in the Bare Metal API that are
intended for use by the ironic-python-agent RAM disk. They are not intended
for public use.

These endpoints can potentially cause security issues. Access to these
endpoints from external or untrusted networks should be prohibited. An easy
way to do this is to:

* set up two groups of API services: one for external requests, the second
  for deploy RAM disks' requests.
* disable unauthorized access to these endpoints in the (first) API services
  group that serves external requests; the following lines should be added
  to the :ironic-doc:`policy.yaml file `::

    # Send heartbeats from IPA ramdisk
    "baremetal:node:ipa_heartbeat": "rule:is_admin"

    # Access IPA ramdisk functions
    "baremetal:driver:ipa_lookup": "rule:is_admin"

==================
Node Multi-Tenancy
==================

This guide explains the steps needed to enable node multi-tenancy.
This feature enables non-admins to perform API actions on nodes, limited by
policy configuration. The Bare Metal service supports two kinds of non-admin
users:

* Owner: owns specific nodes and performs administrative actions on them
* Lessee: receives temporary and limited access to a node

Setting the Owner and Lessee
============================

Non-administrative access to a node is controlled through a node's ``owner``
or ``lessee`` attribute::

   openstack baremetal node set --owner 080925ee2f464a2c9dce91ee6ea354e2 node-7
   openstack baremetal node set --lessee 2a210e5ff114c8f2b6e994218f51a904 node-10

Configuring the Bare Metal Service Policy
=========================================

By default, the Bare Metal service policy is configured so that a node owner
or lessee has no access to any node APIs. However, the :doc:`policy file `
contains rules that can be used to enable node API access::

   # Owner of node
   #"is_node_owner": "project_id:%(node.owner)s"

   # Lessee of node
   #"is_node_lessee": "project_id:%(node.lessee)s"

An administrator can then modify the policy file to expose individual node
APIs as follows::

   # Change Node provision status
   # PUT /nodes/{node_ident}/states/provision
   #"baremetal:node:set_provision_state": "rule:is_admin"
   "baremetal:node:set_provision_state": "rule:is_admin or rule:is_node_owner or rule:is_node_lessee"

   # Update Node records
   # PATCH /nodes/{node_ident}
   #"baremetal:node:update": "rule:is_admin or rule:is_node_owner"

In addition, it is safe to expose the ``baremetal:node:list`` rule, as the
node list function now filters non-admins by owner and lessee::

   # Retrieve multiple Node records, filtered by owner
   # GET /nodes
   # GET /nodes/detail
   #"baremetal:node:list": "rule:baremetal:node:get"
   "baremetal:node:list": ""

Note that ``baremetal:node:list_all`` permits users to see all nodes
regardless of owner/lessee, so it should remain restricted to admins.
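As a rough illustration of how the rules above combine, with the example
``set_provision_state`` rule shown, access is granted to an admin, to the
node's owner, or to the node's lessee. The helper below mirrors that logic
for illustration only; it is a sketch, not ironic's actual policy engine:

```shell
# Decide whether a project may change a node's provision state under the
# example policy rule above: admins, the node owner, or the node lessee.
may_set_provision_state() {
    local project="$1" owner="$2" lessee="$3" is_admin="$4"
    [ "$is_admin" = "true" ] && return 0
    [ -n "$owner" ] && [ "$project" = "$owner" ] && return 0
    [ -n "$lessee" ] && [ "$project" = "$lessee" ] && return 0
    return 1
}
```

For example, with the owner set as in the commands above, project
``080925ee2f464a2c9dce91ee6ea354e2`` would be allowed to deploy ``node-7``
while an unrelated project would be denied.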
Ports
-----

Port APIs can be similarly exposed to node owners and lessees::

   # Retrieve Port records
   # GET /ports/{port_id}
   # GET /nodes/{node_ident}/ports
   # GET /nodes/{node_ident}/ports/detail
   # GET /portgroups/{portgroup_ident}/ports
   # GET /portgroups/{portgroup_ident}/ports/detail
   #"baremetal:port:get": "rule:is_admin or rule:is_observer"
   "baremetal:port:get": "rule:is_admin or rule:is_observer or rule:is_node_owner or rule:is_node_lessee"

   # Retrieve multiple Port records, filtered by owner
   # GET /ports
   # GET /ports/detail
   #"baremetal:port:list": "rule:baremetal:port:get"
   "baremetal:port:list": ""

Allocations
-----------

Allocations respect node tenancy as well. A restricted allocation creates an
allocation tied to a project that can only match nodes where that project is
the owner or lessee. Here is a sample set of allocation policy rules that
allow non-admins to use allocations effectively::

   # Retrieve Allocation records
   # GET /allocations/{allocation_id}
   # GET /nodes/{node_ident}/allocation
   #"baremetal:allocation:get": "rule:is_admin or rule:is_observer"
   "baremetal:allocation:get": "rule:is_admin or rule:is_observer or rule:is_allocation_owner"

   # Retrieve multiple Allocation records, filtered by owner
   # GET /allocations
   #"baremetal:allocation:list": "rule:baremetal:allocation:get"
   "baremetal:allocation:list": ""

   # Retrieve multiple Allocation records
   # GET /allocations
   #"baremetal:allocation:list_all": "rule:baremetal:allocation:get"

   # Create Allocation records
   # POST /allocations
   #"baremetal:allocation:create": "rule:is_admin"

   # Create Allocation records that are restricted to an owner
   # POST /allocations
   #"baremetal:allocation:create_restricted": "rule:baremetal:allocation:create"
   "baremetal:allocation:create_restricted": ""

   # Delete Allocation records
   # DELETE /allocations/{allocation_id}
   # DELETE /nodes/{node_ident}/allocation
   #"baremetal:allocation:delete": "rule:is_admin"
   "baremetal:allocation:delete": "rule:is_admin or rule:is_allocation_owner"

   # Change name and extra fields of an allocation
   # PATCH /allocations/{allocation_id}
   #"baremetal:allocation:update": "rule:is_admin"
   "baremetal:allocation:update": "rule:is_admin or rule:is_allocation_owner"

Deployment and Metalsmith
-------------------------

Provisioning a node requires a specific set of APIs to be made available.
The following policy specifications are enough to allow a node owner to use
:metalsmith-doc:`Metalsmith ` to deploy upon a node::

   "baremetal:node:get": "rule:is_admin or rule:is_observer or rule:is_node_owner"
   "baremetal:node:list": ""
   "baremetal:node:update_extra": "rule:is_admin or rule:is_node_owner"
   "baremetal:node:update_instance_info": "rule:is_admin or rule:is_node_owner"
   "baremetal:node:validate": "rule:is_admin or rule:is_node_owner"
   "baremetal:node:set_provision_state": "rule:is_admin or rule:is_node_owner"
   "baremetal:node:vif:list": "rule:is_admin or rule:is_node_owner"
   "baremetal:node:vif:attach": "rule:is_admin or rule:is_node_owner"
   "baremetal:node:vif:detach": "rule:is_admin or rule:is_node_owner"
   "baremetal:allocation:get": "rule:is_admin or rule:is_observer or rule:is_allocation_owner"
   "baremetal:allocation:list": ""
   "baremetal:allocation:create_restricted": ""
   "baremetal:allocation:delete": "rule:is_admin or rule:is_allocation_owner"
   "baremetal:allocation:update": "rule:is_admin or rule:is_allocation_owner"

.. _api-audit-support:

=================
API Audit Logging
=================

Audit middleware supports delivery of CADF audit events via the Oslo
messaging notifier capability. Based on the `notification_driver`
configuration, audit events can be routed to the messaging infrastructure
(``notification_driver = messagingv2``) or to a log file
(``[oslo_messaging_notifications]/driver = log``).

Audit middleware creates two events per REST API interaction.
The first event has information extracted from the request data and the second one has the request outcome (response).

Enabling API Audit Logging
==========================

Audit middleware is available as part of the `keystonemiddleware` (>= 1.6) library. For information regarding how audit middleware functions, refer to :keystonemiddleware-doc:`here `. Auditing can be enabled for the Bare Metal service by making the following changes to ``/etc/ironic/ironic.conf``.

#. To enable audit logging of API requests::

     [audit]
     ...
     enabled=true

#. To customize auditing of API requests, the audit middleware requires the ``audit_map_file`` setting to be defined. Update the value of the configuration setting ``audit_map_file`` to set its location. Audit map file configuration options for the Bare Metal service are included in the ``etc/ironic/ironic_api_audit_map.conf.sample`` file. To understand the CADF format specified in the ``ironic_api_audit_map.conf`` file, refer to `CADF Format. `_::

     [audit]
     ...
     audit_map_file=/etc/ironic/api_audit_map.conf

#. A comma-separated list of Ironic REST API HTTP methods to be ignored during audit. It is used only when API audit is enabled. For example::

     [audit]
     ...
     ignore_req_list=GET,POST

Sample Audit Event
==================

The following is a sample audit event for an ironic node list request.

..
code-block:: json { "event_type":"audit.http.request", "timestamp":"2016-06-15 06:04:30.904397", "payload":{ "typeURI":"http://schemas.dmtf.org/cloud/audit/1.0/event", "eventTime":"2016-06-15T06:04:30.903071+0000", "target":{ "id":"ironic", "typeURI":"unknown", "addresses":[ { "url":"http://{ironic_admin_host}:6385", "name":"admin" }, { "url":"http://{ironic_internal_host}:6385", "name":"private" }, { "url":"http://{ironic_public_host}:6385", "name":"public" } ], "name":"ironic" }, "observer":{ "id":"target" }, "tags":[ "correlation_id?value=685f1abb-620e-5d5d-b74a-b4135fb32373" ], "eventType":"activity", "initiator":{ "typeURI":"service/security/account/user", "name":"admin", "credential":{ "token":"***", "identity_status":"Confirmed" }, "host":{ "agent":"python-ironicclient", "address":"10.1.200.129" }, "project_id":"d8f52dd7d9e1475dbbf3ba47a4a83313", "id":"8c1a948bad3948929aa5d5b50627a174" }, "action":"read", "outcome":"pending", "id":"061b7aa7-5879-5225-a331-c002cf23cb6c", "requestPath":"/v1/nodes/?associated=True" }, "priority":"INFO", "publisher_id":"ironic-api", "message_id":"2f61ebaa-2d3e-4023-afba-f9fca6f21fc2" } ironic-15.0.0/doc/source/admin/gmr.rst0000664000175000017500000000427713652514273017600 0ustar zuulzuul00000000000000Bare Metal Service state report (via Guru Meditation Reports) ============================================================= The Bare Metal service contains a mechanism whereby developers and system administrators can generate a report about the state of running Bare Metal executables (ironic-api and ironic-conductor). This report is called a Guru Meditation Report (GMR for short). GMR provides useful debugging information that can be used to obtain an accurate view on the current live state of the system. For example, what threads are running, what configuration parameters are in effect, and more. 
The eventlet backdoor facility provides an interactive shell interface for any eventlet based process, allowing an administrator to telnet to a pre-defined port and execute a variety of commands.

Configuration
-------------

The GMR feature is optional and requires the oslo.reports_ package to be installed. For example, using pip::

    pip install 'oslo.reports>=1.18.0'

.. _oslo.reports: https://opendev.org/openstack/oslo.reports

Generating a GMR
----------------

A *GMR* can be generated by sending the *USR2* signal to any Bare Metal process that supports it. The *GMR* will then be output to stderr for that particular process. For example: Suppose that ``ironic-api`` has process ID ``6385``, and was run with ``2>/var/log/ironic/ironic-api-err.log``. Then, sending the *USR2* signal::

    kill -USR2 6385

will trigger the Guru Meditation report to be printed to ``/var/log/ironic/ironic-api-err.log``.

Structure of a GMR
------------------

The *GMR* consists of the following sections:

Package
  Shows information about the package to which this process belongs, including version information.

Threads
  Shows stack traces and thread IDs for each of the threads within this process.

Green Threads
  Shows stack traces for each of the green threads within this process (green threads don't have thread IDs).

Configuration
  Lists all the configuration options currently accessible via the CONF object for the current process.

.. only:: html

   Sample GMR Report
   -----------------

   Below is a sample GMR report generated for the ``ironic-api`` service:

   .. include:: report.txt
      :literal:

ironic-15.0.0/doc/source/admin/retirement.rst

.. _retirement:

===============
Node retirement
===============

Overview
========

Retiring nodes is a natural part of a server’s life cycle, for instance when the end of the warranty is reached and the physical space is needed for new deliveries to install replacement capacity.
However, depending on the type of the deployment, removing nodes from service can be a full workflow by itself as it may include steps like moving applications to other hosts, cleaning sensitive data from disks or the BMC, or tracking the dismantling of servers from their racks. Ironic provides some means to support such workflows by allowing operators to tag nodes as ``retired``, which will prevent any further scheduling of instances, but will still allow for other operations, such as cleaning, to happen (this marks an important difference from nodes which have the ``maintenance`` flag set).

How to use
==========

When it is known that a node shall be retired, set the ``retired`` flag on the node with::

    openstack baremetal node set --retired node-001

This can be done irrespective of the state the node is in, so in particular while the node is ``active``.

.. NOTE::
   An exception is made for nodes which are in ``available``. For backwards compatibility reasons, these nodes need to be moved to ``manageable`` first. Trying to set the ``retired`` flag for ``available`` nodes will result in an error.

Optionally, a reason can be specified when a node is retired, e.g.::

    openstack baremetal node set --retired node-001 \
        --retired-reason "End of warranty for delivery abc123"

Upon instance deletion, an ``active`` node with the ``retired`` flag set will not move to ``available``, but to ``manageable``. The node will hence not be eligible for scheduling of new instances. Equally, nodes with ``retired`` set to True cannot move from ``manageable`` to ``available``: the ``provide`` verb is blocked. This is to prevent accidental re-use of nodes tagged for removal from the fleet. In order to move these nodes to ``available`` nonetheless, the ``retired`` field needs to be removed first. This can be done via::

    openstack baremetal node unset --retired node-001

In order to facilitate the identification of nodes marked for retirement, e.g.
by other teams, ironic also allows listing all nodes which have the ``retired`` flag set::

    openstack baremetal node list --retired

ironic-15.0.0/doc/source/admin/metrics.rst

.. _metrics:

=========================
Emitting Software Metrics
=========================

Beginning with the Newton (6.1.0) release, the ironic services support emitting internal performance data to `statsd `_. This allows operators to graph and understand performance bottlenecks in their system. This guide assumes you have a statsd server set up. For information on using and configuring statsd, please see the `statsd `_ README and documentation. These performance measurements, herein referred to as "metrics", can be emitted from the Bare Metal service, including ironic-api, ironic-conductor, and ironic-python-agent. By default, none of the services will emit metrics.

Configuring the Bare Metal Service to Enable Metrics
====================================================

Enabling metrics in ironic-api and ironic-conductor
---------------------------------------------------

The ironic-api and ironic-conductor services can be configured to emit metrics to statsd by adding the following to the ironic configuration file, usually located at ``/etc/ironic/ironic.conf``::

    [metrics]
    backend = statsd

If a statsd daemon is installed and configured on every host running an ironic service, listening on the default UDP port (8125), no further configuration is needed. If you are using a remote statsd server, you must also supply connection information in the ironic configuration file::

    [metrics_statsd]
    # Point this at your environment's statsd host
    statsd_host = 192.0.2.1
    statsd_port = 8125

Enabling metrics in ironic-python-agent
---------------------------------------

The ironic-python-agent process receives its configuration in the response from the initial lookup request to the ironic-api service.
This means to configure ironic-python-agent to emit metrics, you must enable the agent metrics backend in your ironic configuration file on all ironic-conductor hosts:: [metrics] agent_backend = statsd In order to reliably emit metrics from the ironic-python-agent, you must provide a statsd server that is reachable from both the configured provisioning and cleaning networks. The agent statsd connection information is configured in the ironic configuration file as well:: [metrics_statsd] # Point this at a statsd host reachable from the provisioning and cleaning nets agent_statsd_host = 198.51.100.2 agent_statsd_port = 8125 Types of Metrics Emitted ======================== The Bare Metal service emits timing metrics for every API method, as well as for most driver methods. These metrics measure how long a given method takes to execute. A deployer with metrics enabled should expect between 100 and 500 distinctly named data points to be emitted from the Bare Metal service. This will increase if the metrics.preserve_host option is set to true or if multiple drivers are used in the Bare Metal deployment. This estimate may be used to determine if a deployer needs to scale their metrics backend to handle the additional load before enabling metrics. To see which metrics have changed names or have been removed between releases, refer to the `ironic release notes `_. .. note:: With the default statsd configuration, each timing metric may create additional metrics due to how statsd handles timing metrics. For more information, see statds documentation on `metric types `_. The ironic-python-agent ramdisk emits timing metrics for every API method. Deployers who use custom HardwareManagers can emit custom metrics for their hardware. For more information on custom HardwareManagers, and emitting metrics from them, please see the :ironic-python-agent-doc:`ironic-python-agent documentation <>`. 
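The timing metrics described above reach statsd in its plain-text wire format, ``<name>:<value>|ms``, sent over UDP. As a rough illustration of that mechanism only — ironic itself emits metrics through the ironic-lib metrics library, and the metric name and helper functions below are made up for this sketch — a timing metric could be produced like this:

```python
import socket
import time
from contextlib import contextmanager


def format_timing(name, duration_ms):
    """Render a timing metric in statsd's plain-text wire format."""
    return '%s:%d|ms' % (name, duration_ms)


@contextmanager
def timed(name, host='127.0.0.1', port=8125):
    """Time a block of code and report the elapsed time to statsd."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed_ms = int((time.monotonic() - start) * 1000)
        payload = format_timing(name, elapsed_ms).encode('ascii')
        # UDP is fire-and-forget: nothing breaks if no statsd daemon
        # is actually listening on the target host/port.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.sendto(payload, (host, port))
        finally:
            sock.close()


# Hypothetical metric name, for illustration only.
with timed('ironic.api.node_get'):
    time.sleep(0.01)
```

Because the protocol is plain UDP datagrams, emitting metrics adds very little overhead to the timed code path, which is why it is safe to instrument every API and driver method.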
Adding New Metrics ================== If you're a developer, and would like to add additional metrics to ironic, please see the :ironic-lib-doc:`ironic-lib developer documentation <>` for details on how to use the metrics library. A release note should also be created each time a metric is changed or removed to alert deployers of the change. ironic-15.0.0/doc/source/admin/raid.rst0000664000175000017500000003656313652514273017735 0ustar zuulzuul00000000000000.. _raid: ================== RAID Configuration ================== Overview ======== Ironic supports RAID configuration for bare metal nodes. It allows operators to specify the desired RAID configuration via the OpenStackClient CLI or REST API. The desired RAID configuration is applied on the bare metal during manual cleaning. The examples described here use the OpenStackClient CLI; please see the `REST API reference `_ for their corresponding REST API requests. Prerequisites ============= The bare metal node needs to use a hardware type that supports RAID configuration. RAID interfaces may implement RAID configuration either in-band or out-of-band. Software RAID is supported on all hardware, although with some caveats - see `Software RAID`_ for details. In-band RAID configuration (including software RAID) is done using the Ironic Python Agent ramdisk. For in-band hardware RAID configuration, a hardware manager which supports RAID should be bundled with the ramdisk. Whether a node supports RAID configuration could be found using the CLI command ``openstack baremetal node validate ``. In-band RAID is usually implemented by the ``agent`` RAID interface. Build agent ramdisk which supports RAID configuration ===================================================== For doing in-band hardware RAID configuration, Ironic needs an agent ramdisk bundled with a hardware manager which supports RAID configuration for your hardware. For example, the :ref:`DIB_raid_support` should be used for HPE Proliant Servers. .. 
note:: For in-band software RAID, the agent ramdisk does not need to be bundled with a hardware manager as the generic hardware manager in the Ironic Python Agent already provides (basic) support for software RAID. RAID configuration JSON format ============================== The desired RAID configuration and current RAID configuration are represented in JSON format. Target RAID configuration ------------------------- This is the desired RAID configuration on the bare metal node. Using the OpenStackClient CLI (or REST API), the operator sets ``target_raid_config`` field of the node. The target RAID configuration will be applied during manual cleaning. Target RAID configuration is a dictionary having ``logical_disks`` as the key. The value for the ``logical_disks`` is a list of JSON dictionaries. It looks like:: { "logical_disks": [ {}, {}, ... ] } If the ``target_raid_config`` is an empty dictionary, it unsets the value of ``target_raid_config`` if the value was set with previous RAID configuration done on the node. Each dictionary of logical disk contains the desired properties of logical disk supported by the hardware type. These properties are discoverable by:: openstack baremetal driver raid property list Mandatory properties ^^^^^^^^^^^^^^^^^^^^ These properties must be specified for each logical disk and have no default values: - ``size_gb`` - Size (Integer) of the logical disk to be created in GiB. ``MAX`` may be specified if the logical disk should use all of the remaining space available. This can be used only when backing physical disks are specified (see below). - ``raid_level`` - RAID level for the logical disk. Ironic supports the following RAID levels: 0, 1, 2, 5, 6, 1+0, 5+0, 6+0. Optional properties ^^^^^^^^^^^^^^^^^^^ These properties have default values and they may be overridden in the specification of any logical disk. None of these options are supported for software RAID. - ``volume_name`` - Name of the volume. 
Should be unique within the Node. If not specified, volume name will be auto-generated. - ``is_root_volume`` - Set to ``true`` if this is the root volume. At most one logical disk can have this set to ``true``; the other logical disks must have this set to ``false``. The ``root device hint`` will be saved, if the RAID interface is capable of retrieving it. This is ``false`` by default. Backing physical disk hints ^^^^^^^^^^^^^^^^^^^^^^^^^^^ These hints are specified for each logical disk to let Ironic find the desired disks for RAID configuration. This is machine-independent information. This serves the use-case where the operator doesn't want to provide individual details for each bare metal node. None of these options are supported for software RAID. - ``share_physical_disks`` - Set to ``true`` if this logical disk can share physical disks with other logical disks. The default value is ``false``, except for software RAID which always shares disks. - ``disk_type`` - ``hdd`` or ``ssd``. If this is not specified, disk type will not be a criterion to find backing physical disks. - ``interface_type`` - ``sata`` or ``scsi`` or ``sas``. If this is not specified, interface type will not be a criterion to find backing physical disks. - ``number_of_physical_disks`` - Integer, number of disks to use for the logical disk. Defaults to minimum number of disks required for the particular RAID level, except for software RAID which always spans all disks. Backing physical disks ^^^^^^^^^^^^^^^^^^^^^^ These are the actual machine-dependent information. This is suitable for environments where the operator wants to automate the selection of physical disks with a 3rd-party tool based on a wider range of attributes (eg. S.M.A.R.T. status, physical location). The values for these properties are hardware dependent. - ``controller`` - The name of the controller as read by the RAID interface. 
In order to trigger the setup of a Software RAID via the Ironic Python Agent, the value of this property needs to be set to ``software``. - ``physical_disks`` - A list of physical disks to use as read by the RAID interface. For software RAID ``physical_disks`` is a list of device hints in the same format as used for :ref:`root-device-hints`. The number of provided hints must match the expected number of backing devices (repeat the same hint if necessary). .. note:: If properties from both "Backing physical disk hints" or "Backing physical disks" are specified, they should be consistent with each other. If they are not consistent, then the RAID configuration will fail (because the appropriate backing physical disks could not be found). .. _raid-config-examples: Examples for ``target_raid_config`` ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ *Example 1*. Single RAID disk of RAID level 5 with all of the space available. Make this the root volume to which Ironic deploys the image: .. code-block:: json { "logical_disks": [ { "size_gb": "MAX", "raid_level": "5", "is_root_volume": true } ] } *Example 2*. Two RAID disks. One with RAID level 5 of 100 GiB and make it root volume and use SSD. Another with RAID level 1 of 500 GiB and use HDD: .. code-block:: json { "logical_disks": [ { "size_gb": 100, "raid_level": "5", "is_root_volume": true, "disk_type": "ssd" }, { "size_gb": 500, "raid_level": "1", "disk_type": "hdd" } ] } *Example 3*. Single RAID disk. I know which disks and controller to use: .. code-block:: json { "logical_disks": [ { "size_gb": 100, "raid_level": "5", "controller": "Smart Array P822 in Slot 3", "physical_disks": ["6I:1:5", "6I:1:6", "6I:1:7"], "is_root_volume": true } ] } *Example 4*. Using backing physical disks: .. 
code-block:: json { "logical_disks": [ { "size_gb": 50, "raid_level": "1+0", "controller": "RAID.Integrated.1-1", "volume_name": "root_volume", "is_root_volume": true, "physical_disks": [ "Disk.Bay.0:Encl.Int.0-1:RAID.Integrated.1-1", "Disk.Bay.1:Encl.Int.0-1:RAID.Integrated.1-1" ] }, { "size_gb": 100, "raid_level": "5", "controller": "RAID.Integrated.1-1", "volume_name": "data_volume", "physical_disks": [ "Disk.Bay.2:Encl.Int.0-1:RAID.Integrated.1-1", "Disk.Bay.3:Encl.Int.0-1:RAID.Integrated.1-1", "Disk.Bay.4:Encl.Int.0-1:RAID.Integrated.1-1" ] } ] } *Example 5*. Software RAID with two RAID devices: .. code-block:: json { "logical_disks": [ { "size_gb": 100, "raid_level": "1", "controller": "software" }, { "size_gb": "MAX", "raid_level": "0", "controller": "software" } ] } *Example 6*. Software RAID, limiting backing block devices to exactly two devices with the size exceeding 100 GiB: .. code-block:: json { "logical_disks": [ { "size_gb": "MAX", "raid_level": "0", "controller": "software", "physical_disks": [ {"size": "> 100"}, {"size": "> 100"} ] } ] } Current RAID configuration -------------------------- After target RAID configuration is applied on the bare metal node, Ironic populates the current RAID configuration. This is populated in the ``raid_config`` field in the Ironic node. This contains the details about every logical disk after they were created on the bare metal node. It contains details like RAID controller used, the backing physical disks used, WWN of each logical disk, etc. It also contains information about each physical disk found on the bare metal node. To get the current RAID configuration:: openstack baremetal node show Workflow ======== * Operator configures the bare metal node with a hardware type that has a ``RAIDInterface`` other than ``no-raid``. For instance, for Software RAID, this would be ``agent``. 
* For in-band RAID configuration, operator builds an agent ramdisk which supports RAID configuration by bundling the hardware manager with the ramdisk. See `Build agent ramdisk which supports RAID configuration`_ for more information. * Operator prepares the desired target RAID configuration as mentioned in `Target RAID configuration`_. The target RAID configuration is set on the Ironic node:: openstack baremetal node set \ --target-raid-config The CLI command can accept the input from standard input also:: openstack baremetal node set \ --target-raid-config - * Create a JSON file with the RAID clean steps for manual cleaning. Add other clean steps as desired:: [{ "interface": "raid", "step": "delete_configuration" }, { "interface": "raid", "step": "create_configuration" }] .. note:: 'create_configuration' doesn't remove existing disks. It is recommended to add 'delete_configuration' before 'create_configuration' to make sure that only the desired logical disks exist in the system after manual cleaning. * Bring the node to ``manageable`` state and do a ``clean`` action to start cleaning on the node:: openstack baremetal node clean \ --clean-steps * After manual cleaning is complete, the current RAID configuration is reported in the ``raid_config`` field when running:: openstack baremetal node show Software RAID ============= Building Linux software RAID in-band (via the Ironic Python Agent ramdisk) is supported starting with the Train release. It is requested by using the ``agent`` RAID interface and RAID configuration with all controllers set to ``software``. You can find a software RAID configuration example in :ref:`raid-config-examples`. There are certain limitations to be aware of: * Only the mandatory properties (plus the required ``controller`` property) from `Target RAID configuration`_ are currently supported. * The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. 
If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure.

* Building RAID will fail if the target disks are already partitioned. Wipe the disks using e.g. the ``erase_devices_metadata`` clean step before building RAID::

    [{
      "interface": "raid",
      "step": "delete_configuration"
    },
    {
      "interface": "deploy",
      "step": "erase_devices_metadata"
    },
    {
      "interface": "raid",
      "step": "create_configuration"
    }]

* If local boot is going to be used, the final instance image must have the ``mdadm`` utility installed and needs to be able to detect software RAID devices at boot time (which is usually done by having the RAID drivers embedded in the image's initrd).

* Regular cleaning will not remove RAID configuration (similarly to hardware RAID). To destroy RAID run the ``delete_configuration`` manual clean step.

* There is no support for partition images, only whole-disk images are supported with Software RAID. See :doc:`/install/configure-glance-images`.

Image requirements
------------------

Since Ironic needs to perform additional steps when deploying nodes with software RAID, there are some requirements the deployed images need to fulfill. Up to and including the Train release, the image needs to have its root file system on the first partition. Starting with Ussuri, the image can also have additional metadata to point Ironic to the partition with the root file system: for this, the image needs to set the ``rootfs_uuid`` property with the file system UUID of the root file system. The pre-Ussuri approach, i.e. to have the root file system on the first partition, is kept as a fallback and hence allows software RAID deployments where Ironic does not have access to any image metadata (e.g. Ironic stand-alone).
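The software RAID constraints described above (one or two logical disks, a RAID-1 first device, all controllers set to ``software``) can be sanity-checked client-side before setting ``target_raid_config`` on a node. The following is an illustrative sketch under those assumptions — it is not part of Ironic, and the function name is hypothetical:

```python
ALLOWED_SECOND_LEVELS = {'0', '1', '1+0'}


def validate_software_raid(target_raid_config):
    """Return a list of constraint violations (empty if none are found)."""
    errors = []
    disks = target_raid_config.get('logical_disks', [])
    if not 1 <= len(disks) <= 2:
        errors.append('software RAID requires 1 or 2 logical disks')
    for disk in disks:
        if disk.get('controller') != 'software':
            errors.append('every "controller" must be set to "software"')
    # The first device is the deployment device, so it must be RAID-1.
    if disks and disks[0].get('raid_level') != '1':
        errors.append('the first (deployment) device must be RAID-1')
    if len(disks) == 2 and disks[1].get('raid_level') not in ALLOWED_SECOND_LEVELS:
        errors.append('the second device must be RAID 0, 1 or 1+0')
    return errors


# Example 5 from this document: a RAID-1 root device plus a RAID-0 device.
example = {
    "logical_disks": [
        {"size_gb": 100, "raid_level": "1", "controller": "software"},
        {"size_gb": "MAX", "raid_level": "0", "controller": "software"},
    ]
}
assert validate_software_raid(example) == []
```

Running such a check before ``openstack baremetal node set --target-raid-config`` catches configurations that would otherwise only fail later, during manual cleaning.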
Using RAID in nova flavor for scheduling
========================================

The operator can specify the `raid_level` capability in the nova flavor for the node to be selected for scheduling::

    openstack flavor set my-baremetal-flavor --property capabilities:raid_level="1+0"

Developer documentation
=======================

In-band RAID configuration is done using the IPA ramdisk. The IPA ramdisk supports pluggable hardware managers, which can be used to extend the functionality it offers via stevedore plugins. For more information, see the Ironic Python Agent :ironic-python-agent-doc:`Hardware Manager ` documentation. A hardware manager that supports RAID configuration should do the following:

#. Implement a method named ``create_configuration``. This method creates the RAID configuration as given in ``target_raid_config``. After successful RAID configuration, it returns the current RAID configuration information, which ironic uses to set ``node.raid_config``.

#. Implement a method named ``delete_configuration``. This method deletes all the RAID disks on the bare metal.

#. Return these two clean steps in the ``get_clean_steps`` method with priority as 0.
Example:: return [{'step': 'create_configuration', 'interface': 'raid', 'priority': 0}, {'step': 'delete_configuration', 'interface': 'raid', 'priority': 0}] ironic-15.0.0/doc/source/admin/report.txt0000664000175000017500000004457513652514273020342 0ustar zuulzuul00000000000000/usr/local/lib/python2.7/dist-packages/pecan/__init__.py:122: RuntimeWarning: `static_root` is only used when `debug` is True, ignoring RuntimeWarning ======================================================================== ==== Guru Meditation ==== ======================================================================== |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| ======================================================================== ==== Package ==== ======================================================================== product = None vendor = None version = None ======================================================================== ==== Threads ==== ======================================================================== ------ Thread #140512155997952 ------ /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py:346 in run `self.wait(sleep_time)` /usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py:82 in wait `sleep(seconds)` ======================================================================== ==== Green Threads ==== ======================================================================== ------ Green Thread ------ /usr/local/bin/ironic-api:10 in `sys.exit(main())` /opt/stack/ironic/ironic/cmd/api.py:48 in main `launcher.wait()` /usr/local/lib/python2.7/dist-packages/oslo_service/service.py:586 in wait `self._respawn_children()` /usr/local/lib/python2.7/dist-packages/oslo_service/service.py:570 in _respawn_children `eventlet.greenthread.sleep(self.wait_interval)` /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py:34 in sleep `hub.switch()` /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py:294 in switch `return 
self.greenlet.switch()` ------ Green Thread ------ No Traceback! ======================================================================== ==== Processes ==== ======================================================================== Process 124840 (under 48114) [ run by: ubuntu (1000), state: running ] Process 124849 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124850 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124851 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124852 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124853 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124854 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124855 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124856 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124857 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124858 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124859 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124860 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124861 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124862 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124863 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124864 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124865 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124866 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124867 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124868 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124869 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124870 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124871 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124872 
(under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124873 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124874 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124875 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124876 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124877 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124878 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124879 (under 124840) [ run by: ubuntu (1000), state: sleeping ] Process 124880 (under 124840) [ run by: ubuntu (1000), state: sleeping ] ======================================================================== ==== Configuration ==== ======================================================================== agent: agent_api_version = v1 deploy_logs_collect = always deploy_logs_local_path = /home/ubuntu/ironic-bm-logs/deploy_logs deploy_logs_storage_backend = local deploy_logs_swift_container = ironic_deploy_logs_container deploy_logs_swift_days_to_expire = 30 manage_agent_boot = True memory_consumed_by_agent = 0 post_deploy_get_power_state_retries = 6 post_deploy_get_power_state_retry_interval = 5 stream_raw_images = True api: api_workers = None enable_ssl_api = False host_ip = 0.0.0.0 max_limit = 1000 port = 6385 public_endpoint = None ramdisk_heartbeat_timeout = 30 restrict_lookup = True audit: audit_map_file = /etc/ironic/api_audit_map.conf enabled = False ignore_req_list = namespace = openstack audit_middleware_notifications: driver = None topics = None transport_url = *** conductor: api_url = http://10.223.197.220:6385 automated_clean = True check_provision_state_interval = 60 clean_callback_timeout = 1800 configdrive_swift_container = ironic_configdrive_container configdrive_use_swift = False deploy_callback_timeout = 1800 force_power_state_during_sync = True heartbeat_interval = 10 heartbeat_timeout = 60 inspect_timeout = 1800 node_locked_retry_attempts = 3 
node_locked_retry_interval = 1 periodic_max_workers = 8 power_state_sync_max_retries = 3 send_sensor_data = False send_sensor_data_interval = 600 send_sensor_data_types = ALL sync_local_state_interval = 180 sync_power_state_interval = 60 workers_pool_size = 100 console: subprocess_checking_interval = 1 subprocess_timeout = 10 terminal = shellinaboxd terminal_cert_dir = None terminal_pid_dir = None cors: allow_credentials = True allow_headers = allow_methods = DELETE GET HEAD OPTIONS PATCH POST PUT TRACE allowed_origin = None expose_headers = max_age = 3600 cors.subdomain: allow_credentials = True allow_headers = allow_methods = DELETE GET HEAD OPTIONS PATCH POST PUT TRACE allowed_origin = None expose_headers = max_age = 3600 database: backend = sqlalchemy connection = *** connection_debug = 0 connection_trace = False db_inc_retry_interval = True db_max_retries = 20 db_max_retry_interval = 10 db_retry_interval = 1 idle_timeout = 3600 max_overflow = 50 max_pool_size = 5 max_retries = 10 min_pool_size = 1 mysql_engine = InnoDB mysql_sql_mode = TRADITIONAL pool_timeout = None retry_interval = 10 slave_connection = *** sqlite_synchronous = True use_db_reconnect = False default: api_paste_config = api-paste.ini auth_strategy = keystone bindir = /opt/stack/ironic/ironic/bin client_socket_timeout = 900 config-dir = config-file = /etc/ironic/ironic.conf control_exchange = ironic debug = True debug_tracebacks_in_api = False default_boot_interface = None default_console_interface = None default_deploy_interface = None default_inspect_interface = None default_log_levels = amqp=WARNING amqplib=WARNING eventlet.wsgi.server=INFO glanceclient=WARNING iso8601=WARNING keystoneauth.session=INFO keystonemiddleware.auth_token=INFO neutronclient=WARNING oslo_messaging=INFO paramiko=WARNING qpid.messaging=INFO requests=WARNING sqlalchemy=WARNING stevedore=INFO urllib3.connectionpool=WARNING default_management_interface = None default_network_interface = None default_portgroup_mode = 
active-backup default_power_interface = None default_raid_interface = None default_vendor_interface = None enabled_boot_interfaces = pxe enabled_console_interfaces = no-console enabled_deploy_interfaces = direct iscsi enabled_hardware_types = ipmi redfish enabled_inspect_interfaces = no-inspect enabled_management_interfaces = ipmitool redfish enabled_network_interfaces = flat noop enabled_power_interfaces = ipmitool redfish enabled_raid_interfaces = agent no-raid enabled_vendor_interfaces = no-vendor fatal_exception_format_errors = False force_raw_images = True graceful_shutdown_timeout = 60 grub_config_template = /opt/stack/ironic/ironic/common/grub_conf.template hash_partition_exponent = 5 hash_ring_reset_interval = 180 host = ubuntu instance_format = [instance: %(uuid)s] instance_uuid_format = [instance: %(uuid)s] isolinux_bin = /usr/lib/syslinux/isolinux.bin isolinux_config_template = /opt/stack/ironic/ironic/common/isolinux_config.template log-config-append = None log-date-format = %Y-%m-%d %H:%M:%S log-dir = None log-file = None log_options = True logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(project_name)s %(user_name)s%(color)s] %(instance)s%(color)s%(message)s logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s max_header_line = 16384 my_ip = 10.223.197.220 notification_level = None parallel_image_downloads = False pecan_debug = False publish_errors = False pybasedir = /opt/stack/ironic/ironic rate_limit_burst = 0 rate_limit_except_level = CRITICAL rate_limit_interval = 0 rootwrap_config = /etc/ironic/rootwrap.conf rpc_backend = rabbit 
rpc_response_timeout = 60 state_path = /var/lib/ironic syslog-log-facility = LOG_USER tcp_keepidle = 600 tempdir = /tmp transport_url = *** use-journal = False use-syslog = False use_stderr = False watch-log-file = False wsgi_default_pool_size = 100 wsgi_keep_alive = True wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f deploy: continue_if_disk_secure_erase_fails = False default_boot_option = netboot erase_devices_metadata_priority = None erase_devices_priority = 0 http_root = /opt/stack/data/ironic/httpboot http_url = http://10.223.197.220:3928 power_off_after_deploy_failure = True shred_final_overwrite_with_zeros = True shred_random_overwrite_iterations = 1 dhcp: dhcp_provider = neutron disk_partitioner: check_device_interval = 1 check_device_max_retries = 20 disk_utils: bios_boot_partition_size = 1 dd_block_size = 1M efi_system_partition_size = 200 iscsi_verify_attempts = 3 drac: query_raid_config_job_status_interval = 120 glance: allowed_direct_url_schemes = auth_section = None auth_strategy = keystone auth_type = password cafile = /opt/stack/data/ca-bundle.pem certfile = None glance_api_insecure = False glance_api_servers = None glance_cafile = None glance_num_retries = 0 insecure = False keyfile = None swift_account = AUTH_cb13c4492d124b01b4659a97d627955c swift_api_version = v1 swift_container = glance swift_endpoint_url = http://10.223.197.220:8080 swift_store_multiple_containers_seed = 0 swift_temp_url_cache_enabled = False swift_temp_url_duration = 3600 swift_temp_url_expected_download_start_delay = 0 swift_temp_url_key = *** timeout = None ilo: ca_file = None clean_priority_clear_secure_boot_keys = 0 clean_priority_erase_devices = None clean_priority_reset_bios_to_default = 10 clean_priority_reset_ilo = 0 clean_priority_reset_ilo_credential = 30 clean_priority_reset_secure_boot_keys_to_default = 20 client_port = 443 client_timeout = 60 default_boot_mode = auto power_retry = 6 
power_wait = 2 swift_ilo_container = ironic_ilo_container swift_object_expiry_timeout = 900 use_web_server_for_images = False inspector: auth_section = None auth_type = password cafile = /opt/stack/data/ca-bundle.pem certfile = None enabled = False insecure = False keyfile = None service_url = None status_check_period = 60 timeout = None ipmi: min_command_interval = 5 retry_timeout = 60 irmc: auth_method = basic client_timeout = 60 port = 443 remote_image_server = None remote_image_share_name = share remote_image_share_root = /remote_image_share_root remote_image_share_type = CIFS remote_image_user_domain = remote_image_user_name = None remote_image_user_password = *** sensor_method = ipmitool snmp_community = public snmp_port = 161 snmp_security = None snmp_version = v2c ironic_lib: fatal_exception_format_errors = False root_helper = sudo ironic-rootwrap /etc/ironic/rootwrap.conf iscsi: portal_port = 3260 keystone: region_name = RegionOne keystone_authtoken: admin_password = *** admin_tenant_name = admin admin_token = *** admin_user = None auth-url = http://10.223.197.220/identity_admin auth_admin_prefix = auth_host = 127.0.0.1 auth_port = 5000 auth_protocol = https auth_section = None auth_type = password www_authenticate_uri = http://10.223.197.220/identity auth_version = None cache = None cafile = /opt/stack/data/ca-bundle.pem certfile = None check_revocations_for_cached = False default-domain-id = None default-domain-name = None delay_auth_decision = False domain-id = None domain-name = None enforce_token_bind = permissive hash_algorithms = md5 http_connect_timeout = None http_request_max_retries = 3 identity_uri = None include_service_catalog = True insecure = False keyfile = None memcache_pool_conn_get_timeout = 10 memcache_pool_dead_retry = 300 memcache_pool_maxsize = 10 memcache_pool_socket_timeout = 3 memcache_pool_unused_timeout = 60 memcache_secret_key = *** memcache_security_strategy = None memcache_use_advanced_pool = False memcached_servers = 
10.223.197.220:11211 password = *** project-domain-id = None project-domain-name = Default project-id = None project-name = service region_name = None revocation_cache_time = 10 service_token_roles = service service_token_roles_required = False signing_dir = /var/cache/ironic/api token_cache_time = 300 trust-id = None user-domain-id = None user-domain-name = Default user-id = None username = ironic metrics: agent_backend = noop agent_global_prefix = None agent_prepend_host = False agent_prepend_host_reverse = True agent_prepend_uuid = False backend = noop global_prefix = None prepend_host = False prepend_host_reverse = True metrics_statsd: agent_statsd_host = localhost agent_statsd_port = 8125 statsd_host = localhost statsd_port = 8125 neutron: auth_section = None auth_strategy = keystone auth_type = password cafile = /opt/stack/data/ca-bundle.pem certfile = None cleaning_network = private cleaning_network_security_groups = insecure = False keyfile = None port_setup_delay = 15 provisioning_network = None provisioning_network_security_groups = retries = 3 timeout = None url = None url_timeout = 30 oslo_concurrency: disable_process_locking = False lock_path = None oslo_messaging_notifications: driver = topics = notifications transport_url = *** oslo_messaging_rabbit: amqp_auto_delete = False amqp_durable_queues = False conn_pool_min_size = 2 conn_pool_ttl = 1200 fake_rabbit = False heartbeat_rate = 2 heartbeat_timeout_threshold = 60 kombu_compression = None kombu_failover_strategy = round-robin kombu_missing_consumer_retry_timeout = 60 kombu_reconnect_delay = 1.0 rabbit_ha_queues = False rabbit_host = localhost rabbit_hosts = localhost:5672 rabbit_interval_max = 30 rabbit_login_method = AMQPLAIN rabbit_password = *** rabbit_port = 5672 rabbit_qos_prefetch_count = 0 rabbit_retry_backoff = 2 rabbit_retry_interval = 1 rabbit_transient_queues_ttl = 1800 rabbit_userid = guest rabbit_virtual_host = / rpc_conn_pool_size = 30 ssl = False ssl_ca_file = ssl_cert_file = 
ssl_key_file = ssl_version = oslo_versionedobjects: fatal_exception_format_errors = False pxe: default_ephemeral_format = ext4 image_cache_size = 20480 image_cache_ttl = 10080 images_path = /var/lib/ironic/images/ instance_master_path = /var/lib/ironic/master_images ip_version = 4 ipxe_boot_script = /opt/stack/ironic/ironic/drivers/modules/boot.ipxe ipxe_enabled = True ipxe_timeout = 0 ipxe_use_swift = False pxe_append_params = nofb nomodeset vga=normal console=ttyS0 systemd.journald.forward_to_console=yes pxe_bootfile_name = undionly.kpxe pxe_bootfile_name_by_arch: pxe_config_template = /opt/stack/ironic/ironic/drivers/modules/ipxe_config.template pxe_config_template_by_arch: tftp_master_path = /opt/stack/data/ironic/tftpboot/master_images tftp_root = /opt/stack/data/ironic/tftpboot tftp_server = 10.223.197.220 uefi_pxe_bootfile_name = ipxe.efi uefi_pxe_config_template = /opt/stack/ironic/ironic/drivers/modules/ipxe_config.template seamicro: action_timeout = 10 max_retry = 3 service_catalog: auth_section = None auth_type = password cafile = /opt/stack/data/ca-bundle.pem certfile = None insecure = False keyfile = None timeout = None snmp: power_timeout = 10 reboot_delay = 0 swift: auth_section = None auth_type = password cafile = /opt/stack/data/ca-bundle.pem certfile = None insecure = False keyfile = None swift_max_retries = 2 timeout = None virtualbox: port = 18083 ironic-15.0.0/doc/source/admin/interfaces/0000775000175000017500000000000013652514443020371 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/admin/interfaces/boot.rst0000664000175000017500000000516013652514273022071 0ustar zuulzuul00000000000000=============== Boot interfaces =============== The boot interface manages booting of both the deploy ramdisk and the user instances on the bare metal node. The `PXE boot`_ interface is generic and works with all hardware that supports booting from network. Alternatively, several vendors provide *virtual media* implementations of the boot interface. 
They work by pushing an ISO image to the node's `management controller`_, and do not require either PXE or iPXE. Check your driver documentation at :doc:`../drivers` for details. .. _pxe-boot: PXE boot -------- The ``pxe`` boot interface uses PXE_ or iPXE_ to deliver the target kernel/ramdisk pair. PXE uses the relatively slow and unreliable TFTP protocol for transfer, while iPXE uses HTTP. The downside of iPXE is that it's less common, and usually requires bootstrapping using PXE first. The ``pxe`` boot interface works by preparing a PXE/iPXE environment for a node on the file system, then instructing the DHCP provider (for example, the Networking service) to boot the node from it. See :ref:`iscsi-deploy-example` and :ref:`direct-deploy-example` for a better understanding of the whole deployment process. .. note:: Both PXE and iPXE are configured differently when UEFI boot is used instead of conventional BIOS boot. This is particularly important for CPU architectures that do not have BIOS support at all. The ``pxe`` boot interface is used by default for many hardware types, including ``ipmi``. Some hardware types, notably ``ilo`` and ``irmc``, have their own implementations of the PXE boot interface. Additional configuration is required for this boot interface - see :doc:`/install/configure-pxe` for details. Enable persistent boot device for deploy/clean operation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, ironic uses non-persistent boot for the cleaning and deploying phases in the PXE interface. For some drivers, a persistent change is far more costly than a non-persistent one, so this can bring performance improvements. Set the flag ``force_persistent_boot_device`` to ``True`` in the node's ``driver_info``:: $ openstack baremetal node set --driver-info force_persistent_boot_device=True .. note:: It's recommended to check that the node's state has not changed, as there is no way of locking the node between these commands.
Once the flag is present, the next cleaning and deploy steps will be done with persistent boot for that node. .. _PXE: https://en.wikipedia.org/wiki/Preboot_Execution_Environment .. _iPXE: https://en.wikipedia.org/wiki/IPXE .. _management controller: https://en.wikipedia.org/wiki/Out-of-band_management ironic-15.0.0/doc/source/admin/interfaces/deploy.rst0000664000175000017500000001363513652514273022430 0ustar zuulzuul00000000000000================= Deploy Interfaces ================= A *deploy* interface plays a critical role in the provisioning process. It orchestrates the whole deployment and defines how the image gets transferred to the target disk. .. _iscsi-deploy: iSCSI deploy ============ With ``iscsi`` deploy interface, the deploy ramdisk publishes the node's hard drive as an iSCSI_ share. The ironic-conductor then copies the image to this share. See :ref:`iSCSI deploy diagram ` for a detailed explanation of how this deploy interface works. This interface is used by default, if enabled (see :ref:`enable-hardware-interfaces`). You can specify it explicitly when creating or updating a node:: openstack baremetal node create --driver ipmi --deploy-interface iscsi openstack baremetal node set --deploy-interface iscsi .. _iSCSI: https://en.wikipedia.org/wiki/ISCSI .. _direct-deploy: Direct deploy ============= With ``direct`` deploy interface, the deploy ramdisk fetches the image from an HTTP location. It can be an object storage (swift or RadosGW) temporary URL or a user-provided HTTP URL. The deploy ramdisk then copies the image to the target disk. See :ref:`direct deploy diagram ` for a detailed explanation of how this deploy interface works. You can specify this deploy interface when creating or updating a node:: openstack baremetal node create --driver ipmi --deploy-interface direct openstack baremetal node set --deploy-interface direct .. note:: For historical reasons the ``direct`` deploy interface is sometimes called ``agent``. 
This is because before the Kilo release **ironic-python-agent** used to only support this deploy interface. Deploy with custom HTTP servers ------------------------------- The ``direct`` deploy interface can also be configured to use custom HTTP servers set up on ironic conductor nodes; images will be cached locally and made accessible by the HTTP server. To use this deploy interface with a custom HTTP server, set ``image_download_source`` to ``http`` in the ``[agent]`` section. .. code-block:: ini [agent] ... image_download_source = http ... You need to set up a workable HTTP server at each conductor node which has the ``direct`` deploy interface enabled, and check the HTTP-related options in the ironic configuration file to match the HTTP server configuration. .. code-block:: ini [deploy] http_url = http://example.com http_root = /httpboot Each HTTP server should be configured to follow symlinks so that images are accessible from the HTTP service. Please refer to the configuration option ``FollowSymLinks`` if you are using the Apache HTTP server, or ``disable_symlinks`` if the Nginx HTTP server is in use. .. _ansible-deploy: Ansible deploy ============== This interface is similar to ``direct`` in the sense that the image is downloaded by the ramdisk directly from the image store (not from the ironic-conductor host), but the logic of provisioning the node is held in a set of Ansible playbooks that are applied by the ``ironic-conductor`` service handling the node. While somewhat more complex to set up, this deploy interface provides greater flexibility in terms of advanced node preparation during provisioning. This interface is supported by most but not all hardware types declared in ironic. However, this deploy interface is not enabled by default. To enable it, add ``ansible`` to the list of enabled deploy interfaces in the ``enabled_deploy_interfaces`` option in the ``[DEFAULT]`` section of ironic's configuration file: .. code-block:: ini [DEFAULT] ...
enabled_deploy_interfaces = iscsi,direct,ansible ... Once enabled, you can specify this deploy interface when creating or updating a node: .. code-block:: shell openstack baremetal node create --driver ipmi --deploy-interface ansible openstack baremetal node set --deploy-interface ansible For more information about this deploy interface, its features and how to use it, see :doc:`Ansible deploy interface <../drivers/ansible>`. .. toctree:: :hidden: ../drivers/ansible .. _ramdisk-deploy: Ramdisk deploy ============== The ramdisk interface is intended to provide a mechanism to "deploy" an instance where the item to be deployed is in reality a ramdisk. Most commonly this is performed when an instance is booted via PXE, iPXE or Virtual Media, with the only local storage contents being those in memory. It is supported by the ``pxe`` and ``ilo-virtual-media`` boot interfaces. As with most non-default interfaces, it must be enabled and set for a node to be utilized: .. code-block:: ini [DEFAULT] ... enabled_deploy_interfaces = iscsi,direct,ramdisk ... Once enabled and the conductor(s) have been restarted, the interface can be set upon creation of a new node or when updating a pre-existing node: .. code-block:: shell openstack baremetal node create --driver ipmi \ --deploy-interface ramdisk \ --boot-interface pxe openstack baremetal node set --deploy-interface ramdisk The intended use case is for advanced scientific and ephemeral workloads where the step of writing an image to the local storage is not required or desired. As such, this interface does come with several caveats: * Configuration drives are not supported. * Disk image contents are not written to the bare metal node. * Users and Operators who intend to leverage this interface should expect to leverage a metadata service, custom ramdisk images, or the ``instance_info/ramdisk_kernel_arguments`` parameter to add options to the kernel boot command line.
* Bare metal nodes must continue to have network access to PXE and iPXE network resources. This is contrary to most tenant-networking-enabled configurations, where this access is restricted to the provisioning and cleaning networks. * As with all deployment interfaces, automatic cleaning of the node will still occur with the contents of any local storage being wiped between deployments. ironic-15.0.0/doc/source/admin/deploy-steps.rst0000664000175000017500000000016313652514273021431 0ustar zuulzuul00000000000000============ Deploy Steps ============ The deploy steps section has moved to :ref:`node-deployment-deploy-steps`. ironic-15.0.0/doc/source/admin/radosgw.rst0000664000175000017500000000447613652514273020462 0ustar zuulzuul00000000000000.. _radosgw support: =========================== Ceph Object Gateway support =========================== Overview ======== The Ceph project is a powerful distributed storage system. It contains an object store and provides a RADOS Gateway Swift API which is compatible with the OpenStack Swift API. Ironic added support for RADOS Gateway temporary URLs in the Mitaka release. Configure Ironic and Glance with RADOS Gateway ============================================== #. Install Ceph storage with RADOS Gateway. See `Ceph documentation `_. #. Configure RADOS Gateway to use keystone for authentication. See `Integrating with OpenStack Keystone `_ #. Register the RADOS Gateway endpoint in the keystone catalog, with the same format swift uses, as the ``object-store`` service. URL example: ``http://rados.example.com:8080/swift/v1/AUTH_$(project_id)s``. In the ceph configuration, make sure radosgw is configured with the following value:: rgw swift account in url = True #. Configure the Glance API service to use the RADOS Swift API as its backend.
Edit the configuration file for the Glance API service (typically located at ``/etc/glance/glance-api.conf``):: [glance_store] stores = file, http, swift default_store = swift default_swift_reference=ref1 swift_store_config_file=/etc/glance/glance-swift-creds.conf swift_store_container = glance swift_store_create_container_on_put = True In the file referenced in the ``swift_store_config_file`` option, add the following:: [ref1] user = : key = user_domain_id = default project_domain_id = default auth_version = 3 auth_address = http://keystone.example.com/identity The values for the user and key options correspond to the keystone credentials for the RADOS Gateway service user. Note: RADOS Gateway uses the FastCGI protocol for interacting with the HTTP server. Read your HTTP server documentation if you want to enable HTTPS support. #. Restart the Glance API service and upload all needed images. #. If you're using a custom container name in RADOS, change the Ironic configuration file on the conductor host(s) as follows:: [glance] swift_container = glance #. Restart the Ironic conductor service(s). ironic-15.0.0/doc/source/admin/drivers/0000775000175000017500000000000013652514443017724 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/admin/drivers/ilo.rst0000664000175000017500000024525213652514273021244 0ustar zuulzuul00000000000000.. _ilo: ========== iLO driver ========== Overview ======== The iLO driver enables you to take advantage of the features of the iLO management engine in HPE ProLiant servers. The ``ilo`` hardware type is targeted for HPE ProLiant Gen8 and Gen9 systems which have the `iLO 4 management engine`_. From the **Pike** release, the ``ilo`` hardware type supports ProLiant Gen10 systems which have the `iLO 5 management engine`_. iLO5 conforms to the `Redfish`_ API and hence the hardware type ``redfish`` (see :doc:`redfish`) is also an option for this kind of hardware, but it lacks the iLO-specific features.
For more details and for up-to-date information (like tested platforms, known issues, etc), please check the `iLO driver wiki page `_. For enabling Gen10 systems and getting detailed information on Gen10 feature support in Ironic, please check this `Gen10 wiki section`_. Hardware type ============= ProLiant hardware is primarily supported by the ``ilo`` hardware type. The ``ilo5`` hardware type is only supported on ProLiant Gen10 and later systems. Both hardware types can be used with the reference hardware types ``ipmi`` (see :doc:`ipmitool`) and ``redfish`` (see :doc:`redfish`). For information on how to enable the ``ilo`` and ``ilo5`` hardware types, see :ref:`enable-hardware-types`. .. note:: Only HPE ProLiant Gen10 servers support the hardware type ``redfish``. The hardware type ``ilo`` supports the following HPE server features: * `Boot mode support`_ * `UEFI Secure Boot Support`_ * `Node Cleaning Support`_ * `Node Deployment Customization`_ * `Hardware Inspection Support`_ * `Swiftless deploy for intermediate images`_ * `HTTP(S) Based Deploy Support`_ * `Support for iLO driver with Standalone Ironic`_ * `RAID Support`_ * `Disk Erase Support`_ * `Initiating firmware update as manual clean step`_ * `Smart Update Manager (SUM) based firmware update`_ * `Activating iLO Advanced license as manual clean step`_ * `Firmware based UEFI iSCSI boot from volume support`_ * `Certificate based validation in iLO`_ * `Rescue mode support`_ * `Inject NMI support`_ * `Soft power operation support`_ * `BIOS configuration support`_ * `IPv6 support`_ Apart from the above features, the hardware type ``ilo5`` also supports the following features: * `Out of Band RAID Support`_ * `Out of Band Sanitize Disk Erase Support`_ Hardware interfaces ^^^^^^^^^^^^^^^^^^^ The ``ilo`` hardware type supports the following hardware interfaces: * bios Supports ``ilo`` and ``no-bios``. The default is ``ilo``. They can be enabled by using the ``[DEFAULT]enabled_bios_interfaces`` option in ``ironic.conf`` as given below: ..
code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_bios_interfaces = ilo,no-bios * boot Supports ``ilo-virtual-media``, ``ilo-pxe`` and ``ilo-ipxe``. The default is ``ilo-virtual-media``. The ``ilo-virtual-media`` interface provides security enhanced PXE-less deployment by using iLO virtual media to boot up the bare metal node. The ``ilo-pxe`` and ``ilo-ipxe`` interfaces use PXE and iPXE respectively for deployment(just like :ref:`pxe-boot`). These interfaces do not require iLO Advanced license. They can be enabled by using the ``[DEFAULT]enabled_boot_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_boot_interfaces = ilo-virtual-media,ilo-pxe,ilo-ipxe * console Supports ``ilo`` and ``no-console``. The default is ``ilo``. They can be enabled by using the ``[DEFAULT]enabled_console_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_console_interfaces = ilo,no-console .. note:: To use ``ilo`` console interface you need to enable iLO feature 'IPMI/DCMI over LAN Access' on `iLO4 `_ and `iLO5 `_ management engine. * inspect Supports ``ilo`` and ``inspector``. The default is ``ilo``. They can be enabled by using the ``[DEFAULT]enabled_inspect_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_inspect_interfaces = ilo,inspector .. note:: :ironic-inspector-doc:`Ironic Inspector <>` needs to be configured to use ``inspector`` as the inspect interface. * management Supports only ``ilo``. It can be enabled by using the ``[DEFAULT]enabled_management_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_management_interfaces = ilo * power Supports only ``ilo``. It can be enabled by using the ``[DEFAULT]enabled_power_interfaces`` option in ``ironic.conf`` as given below: .. 
code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_power_interfaces = ilo * raid Supports ``agent`` and ``no-raid``. The default is ``no-raid``. They can be enabled by using the ``[DEFAULT]enabled_raid_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_raid_interfaces = agent,no-raid * storage Supports ``cinder`` and ``noop``. The default is ``noop``. They can be enabled by using the ``[DEFAULT]enabled_storage_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_storage_interfaces = cinder,noop .. note:: The storage interface ``cinder`` is supported only when the corresponding boot interface of the ``ilo`` hardware type based node is ``ilo-pxe`` or ``ilo-ipxe``. Please refer to :doc:`/admin/boot-from-volume` for configuring ``cinder`` as a storage interface. * rescue Supports ``agent`` and ``no-rescue``. The default is ``no-rescue``. They can be enabled by using the ``[DEFAULT]enabled_rescue_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo enabled_rescue_interfaces = agent,no-rescue The ``ilo5`` hardware type supports all the ``ilo`` interfaces described above, except for the ``raid`` interface. The details of the ``raid`` interface are as follows: * raid Supports ``ilo5`` and ``no-raid``. The default is ``ilo5``. They can be enabled by using the ``[DEFAULT]enabled_raid_interfaces`` option in ``ironic.conf`` as given below: .. code-block:: ini [DEFAULT] enabled_hardware_types = ilo5 enabled_raid_interfaces = ilo5,no-raid The ``ilo`` and ``ilo5`` hardware types support all standard ``deploy`` and ``network`` interface implementations, see :ref:`enable-hardware-interfaces` for details. The following command can be used to enroll a ProLiant node with ``ilo`` hardware type: ..
code-block:: console openstack baremetal node create --os-baremetal-api-version=1.38 \ --driver ilo \ --deploy-interface direct \ --raid-interface agent \ --rescue-interface agent \ --driver-info ilo_address= \ --driver-info ilo_username= \ --driver-info ilo_password= \ --driver-info ilo_deploy_iso= \ --driver-info ilo_rescue_iso= The following command can be used to enroll a ProLiant node with ``ilo5`` hardware type: .. code-block:: console openstack baremetal node create \ --driver ilo5 \ --deploy-interface direct \ --raid-interface ilo5 \ --rescue-interface agent \ --driver-info ilo_address= \ --driver-info ilo_username= \ --driver-info ilo_password= \ --driver-info ilo_deploy_iso= \ --driver-info ilo_rescue_iso= Please refer to :doc:`/install/enabling-drivers` for a detailed explanation of hardware types. Node configuration ^^^^^^^^^^^^^^^^^^ * Each node is configured for the ``ilo`` and ``ilo5`` hardware types by setting the following ironic node object's properties in ``driver_info``: - ``ilo_address``: IP address or hostname of the iLO. - ``ilo_username``: Username for the iLO with administrator privileges. - ``ilo_password``: Password for the above iLO user. - ``client_port``: (optional) Port to be used for iLO operations if you are using a custom port on the iLO. The default port used is 443. - ``client_timeout``: (optional) Timeout for iLO operations. The default timeout is 60 seconds. - ``ca_file``: (optional) CA certificate file to validate iLO. - ``console_port``: (optional) Node's UDP port for console access. Any unused port on the ironic conductor node may be used. This is required only when the ``ilo-console`` interface is used. * The following properties are also required in the node object's ``driver_info`` if the ``ilo-virtual-media`` boot interface is used: - ``ilo_deploy_iso``: The glance UUID of the deploy ramdisk ISO image. - ``instance_info/ilo_boot_iso`` property to be either a boot ISO Glance UUID or a HTTP(S) URL.
This is an optional property and is used when ``boot_option`` is set to ``netboot`` or ``ramdisk``. .. note:: When ``boot_option`` is set to ``ramdisk``, the ironic node must be configured to use the ``ramdisk`` deploy interface. See :ref:`ramdisk-deploy` for details. - ``ilo_rescue_iso``: The glance UUID of the rescue ISO image. This is an optional property and is used when the ``rescue`` interface is set to ``agent``. * The following properties are also required in the node object's ``driver_info`` if the ``ilo-pxe`` or ``ilo-ipxe`` boot interface is used: - ``deploy_kernel``: The glance UUID or a HTTP(S) URL of the deployment kernel. - ``deploy_ramdisk``: The glance UUID or a HTTP(S) URL of the deployment ramdisk. - ``rescue_kernel``: The glance UUID or a HTTP(S) URL of the rescue kernel. This is an optional property and is used when the ``rescue`` interface is set to ``agent``. - ``rescue_ramdisk``: The glance UUID or a HTTP(S) URL of the rescue ramdisk. This is an optional property and is used when the ``rescue`` interface is set to ``agent``. * The following parameters are mandatory in ``driver_info`` if the ``ilo-inspect`` inspect interface is used and SNMPv3 inspection (`SNMPv3 Authentication` in `HPE iLO4 User Guide`_) is desired: * ``snmp_auth_user`` : The SNMPv3 user. * ``snmp_auth_prot_password`` : The auth protocol pass phrase. * ``snmp_auth_priv_password`` : The privacy protocol pass phrase. The following parameters are optional for SNMPv3 inspection: * ``snmp_auth_protocol`` : The Auth Protocol. The valid values are "MD5" and "SHA". The iLO default value is "MD5". * ``snmp_auth_priv_protocol`` : The Privacy protocol. The valid values are "AES" and "DES". The iLO default value is "DES". .. note:: If configuration values for ``ca_file``, ``client_port`` and ``client_timeout`` are not provided in the ``driver_info`` of the node, the corresponding config variables defined under the ``[ilo]`` section in ironic.conf will be used.
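As an illustration of the node configuration above, the SNMPv3 inspection parameters can be supplied to an existing node in the same way as any other ``driver_info`` property. The node name and all values below are placeholders, not defaults:

```console
$ openstack baremetal node set example-node \
    --driver-info snmp_auth_user=inspection-user \
    --driver-info snmp_auth_prot_password=auth-passphrase \
    --driver-info snmp_auth_priv_password=priv-passphrase \
    --driver-info snmp_auth_protocol=SHA \
    --driver-info snmp_auth_priv_protocol=AES
```

The two optional protocol values shown here override the iLO defaults ("MD5" and "DES") described above.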
Prerequisites
=============

* `proliantutils `_ is a Python package which contains a set of modules for
  managing HPE ProLiant hardware. Install the ``proliantutils`` module on
  the ironic conductor node. The minimum version required is 2.8.0::

      $ pip install "proliantutils>=2.8.0"

* The ``ipmitool`` command must be present on the service node(s) where
  ``ironic-conductor`` is running. On most distros, this is provided as
  part of the ``ipmitool`` package. Please refer to
  `Hardware Inspection Support`_ for more information on the recommended
  version.

Different configuration for ilo hardware type
=============================================

Glance Configuration
^^^^^^^^^^^^^^^^^^^^

1. :glance-doc:`Configure Glance image service with its storage backend as
   Swift `.

2. Set a temp-url key for the Glance user in Swift. For example, if you
   have configured Glance with user ``glance-swift`` and tenant as
   ``service``, then run the below command::

      swift --os-username=service:glance-swift post -m temp-url-key:mysecretkeyforglance

3. Fill in the required parameters in the ``[glance]`` section in
   ``/etc/ironic/ironic.conf``. Normally you would be required to fill in
   the following details::

      [glance]
      swift_temp_url_key=mysecretkeyforglance
      swift_endpoint_url=https://10.10.1.10:8080
      swift_api_version=v1
      swift_account=AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
      swift_container=glance

   The details can be retrieved by running the below command:

   .. code-block:: bash

      $ swift --os-username=service:glance-swift stat -v | grep -i url
      StorageURL: http://10.10.1.10:8080/v1/AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
      Meta Temp-Url-Key: mysecretkeyforglance

4. Swift must be accessible with the same admin credentials configured in
   Ironic. For example, if Ironic is configured with the below credentials
   in ``/etc/ironic/ironic.conf``::

      [keystone_authtoken]
      admin_password = password
      admin_user = ironic
      admin_tenant_name = service

   Ensure ``auth_version`` in ``[keystone_authtoken]`` is set to 2. Then
   the below command should work: ..
code-block:: bash

   $ swift --os-username ironic --os-password password --os-tenant-name service --auth-version 2 stat
   Account: AUTH_22af34365a104e4689c46400297f00cb
   Containers: 2
   Objects: 18
   Bytes: 1728346241
   Objects in policy "policy-0": 18
   Bytes in policy "policy-0": 1728346241
   Meta Temp-Url-Key: mysecretkeyforglance
   X-Timestamp: 1409763763.84427
   X-Trans-Id: tx51de96a28f27401eb2833-005433924b
   Content-Type: text/plain; charset=utf-8
   Accept-Ranges: bytes

5. Restart the Ironic conductor service::

      $ service ironic-conductor restart

Web server configuration on conductor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* The HTTP(S) web server can be configured in many ways. For the apache web
  server on Ubuntu, refer `here `_.

* The following config variables need to be set in
  ``/etc/ironic/ironic.conf``:

  * ``use_web_server_for_images`` in the ``[ilo]`` section::

       [ilo]
       use_web_server_for_images = True

  * ``http_url`` and ``http_root`` in the ``[deploy]`` section::

       [deploy]
       # Ironic compute node's http root path. (string value)
       http_root=/httpboot
       # Ironic compute node's HTTP server URL. Example:
       # http://192.1.2.3:8080 (string value)
       http_url=http://192.168.0.2:8080

``use_web_server_for_images``: If the variable is set to ``false``, the
``ilo-virtual-media`` boot interface uses swift containers to host the
intermediate floppy image and the boot ISO. If the variable is set to
``true``, it uses the local web server for hosting the intermediate files.
The default value for ``use_web_server_for_images`` is False.

``http_url``: The value of this variable is prefixed to the names of the
generated intermediate files to form the URL which is attached to the
virtual media.

``http_root``: The directory location to which the ironic conductor copies
the intermediate floppy image and the boot ISO.

.. note::
   HTTPS is strongly recommended over an HTTP web server configuration for
   enhanced security.
   The ``ilo-virtual-media`` boot interface will send the instance's
   configdrive over an encrypted channel if the web server is HTTPS
   enabled.

Enable driver
=============

1. Build a deploy ISO (and kernel and ramdisk) image; see
   :ref:`deploy-ramdisk`.

2. See `Glance Configuration`_ for configuring the glance image service
   with its storage backend as ``swift``.

3. Upload this image to Glance::

      glance image-create --name deploy-ramdisk.iso --disk-format iso --container-format bare < deploy-ramdisk.iso

4. Enable the hardware type and hardware interfaces in
   ``/etc/ironic/ironic.conf``::

      [DEFAULT]
      enabled_hardware_types = ilo
      enabled_bios_interfaces = ilo
      enabled_boot_interfaces = ilo-virtual-media,ilo-pxe,ilo-ipxe
      enabled_power_interfaces = ilo
      enabled_console_interfaces = ilo
      enabled_raid_interfaces = agent
      enabled_management_interfaces = ilo
      enabled_inspect_interfaces = ilo
      enabled_rescue_interfaces = agent

5. Restart the ironic conductor service::

      $ service ironic-conductor restart

Optional functionalities for the ``ilo`` hardware type
======================================================

Boot mode support
^^^^^^^^^^^^^^^^^

The hardware type ``ilo`` supports automatic detection and setting of the
boot mode (Legacy BIOS or UEFI).

* When the boot mode capability is not configured:

  - If the config variable ``default_boot_mode`` in the ``[ilo]`` section
    of the ironic configuration file is set to either 'bios' or 'uefi',
    then the iLO driver uses that boot mode for provisioning the baremetal
    ProLiant servers.

  - If the pending boot mode is set on the node, then the iLO driver uses
    that boot mode for provisioning the baremetal ProLiant servers.

  - If the pending boot mode is not set on the node, then the iLO driver
    uses 'uefi' boot mode for UEFI capable servers and 'bios' when UEFI is
    not supported.

* When the boot mode capability is configured, the driver sets the pending
  boot mode to the configured value.

* Only one boot mode (either ``uefi`` or ``bios``) can be configured for
  the node.
* If the operator wants a node to always boot in ``uefi`` or ``bios`` mode,
  then they may use the ``capabilities`` parameter within the
  ``properties`` field of an ironic node. To configure a node in ``uefi``
  mode, set ``capabilities`` as below::

      openstack baremetal node set --property capabilities='boot_mode:uefi'

  Nodes having ``boot_mode`` set to ``uefi`` may be requested by adding an
  ``extra_spec`` to the nova flavor::

      nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi"
      nova boot --flavor ironic-test-3 --image test-image instance-1

  If ``capabilities`` is used in ``extra_spec`` as above, the nova
  scheduler (``ComputeCapabilitiesFilter``) will match only ironic nodes
  which have ``boot_mode`` set appropriately in
  ``properties/capabilities``. It will filter out the rest of the nodes.

  The above facility for matching in nova can be used in heterogeneous
  environments where there is a mix of ``uefi`` and ``bios`` machines, and
  the operator wants to provide a choice to the user regarding boot modes.
  If the flavor doesn't contain ``boot_mode``, then the nova scheduler will
  not consider boot mode as a placement criterion, hence the user may get
  either a BIOS or UEFI machine that matches the user-specified flavor.

The automatic boot ISO creation for UEFI boot mode has been enabled since
Kilo. The manual creation of a boot ISO for UEFI boot mode is also
supported. For the latter, the boot ISO for the deploy image needs to be
built separately, and the deploy image's ``boot_iso`` property in glance
should contain the glance UUID of the boot ISO. To build the boot ISO, add
the ``iso`` element to the diskimage-builder command. For example::

    disk-image-create ubuntu baremetal iso

.. _`iLO UEFI Secure Boot Support`:

UEFI Secure Boot Support
^^^^^^^^^^^^^^^^^^^^^^^^

The hardware type ``ilo`` supports secure boot deploy.
UEFI secure boot can be configured in ironic by adding the ``secure_boot``
parameter to the ``capabilities`` parameter within the ``properties`` field
of an ironic node. ``secure_boot`` is a boolean parameter and takes a value
of ``true`` or ``false``.

To enable ``secure_boot`` on a node, add it to ``capabilities`` as below::

    openstack baremetal node set --property capabilities='secure_boot:true'

Alternatively, see `Hardware Inspection Support`_ to know how to
automatically populate the secure boot capability.

Nodes having ``secure_boot`` set to ``true`` may be requested by adding an
``extra_spec`` to the nova flavor::

    nova flavor-key ironic-test-3 set capabilities:secure_boot="true"
    nova boot --flavor ironic-test-3 --image test-image instance-1

If ``capabilities`` is used in ``extra_spec`` as above, the nova scheduler
(``ComputeCapabilitiesFilter``) will match only ironic nodes which have
``secure_boot`` set appropriately in ``properties/capabilities``. It will
filter out the rest of the nodes.

The above facility for matching in nova can be used in heterogeneous
environments where there is a mix of machines supporting and not supporting
UEFI secure boot, and the operator wants to provide a choice to the user
regarding secure boot. If the flavor doesn't contain ``secure_boot``, then
the nova scheduler will not consider secure boot as a placement criterion,
hence the user may get a secure-boot-capable machine that matches the
user-specified flavor, but the deployment would not use its secure boot
capability. Secure boot deploy happens only when it is explicitly specified
through the flavor.

Use the ``ubuntu-signed`` or ``fedora`` element to build a signed deploy
ISO and user images with `diskimage-builder `_. Please refer to
:ref:`deploy-ramdisk` for more information on building the deploy ramdisk.
The below command creates the files cloud-image-boot.iso,
cloud-image.initrd, cloud-image.vmlinuz and cloud-image.qcow2 in the
current working directory::

    cd 
    ./bin/disk-image-create -o cloud-image ubuntu-signed baremetal iso

.. note::
   In UEFI secure boot, the digitally signed bootloader should be able to
   validate the digital signatures of the kernel during the boot process.
   This requires that the bootloader contains the digital signatures of the
   kernel.

For the ``ilo-virtual-media`` boot interface, it is recommended that the
``boot_iso`` property for the user image contain the glance UUID of the
boot ISO. If the ``boot_iso`` property is not updated in glance for the
user image, the driver creates the ``boot_iso`` using the bootloader from
the deploy ISO. This ``boot_iso`` will be able to boot the user image in a
UEFI secure boot environment only if the bootloader is signed and can
validate the digital signatures of the user image kernel.

Ensure the public key of the signed image is loaded into the bare metal
node to deploy signed images. For HPE ProLiant Gen9 servers, one can enroll
the public key using the iLO System Utilities UI. Please refer to the
section ``Accessing Secure Boot options`` in the
`HP UEFI System Utilities User Guide `_. One can also refer to the white
paper on `Secure Boot for Linux on HP ProLiant servers `_ for additional
details.

For more up-to-date information, refer to the `iLO driver wiki page `_.

.. _ilo_node_cleaning:

Node Cleaning Support
^^^^^^^^^^^^^^^^^^^^^

The hardware type ``ilo`` supports node cleaning. For more information on
node cleaning, see :ref:`cleaning`.

Supported **Automated** Cleaning Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* The automated cleaning operations supported are:

  * ``reset_bios_to_default``: Resets system ROM settings to default. By
    default, enabled with priority 10. This clean step is supported only on
    Gen9 and above servers.
  * ``reset_secure_boot_keys_to_default``: Resets secure boot keys to the
    manufacturer's defaults. This step is supported only on Gen9 and above
    servers.
By default, enabled with priority 20.

  * ``reset_ilo_credential``: Resets the iLO password, if
    ``ilo_change_password`` is specified as part of the node's driver_info.
    By default, enabled with priority 30.
  * ``clear_secure_boot_keys``: Clears all secure boot keys. This step is
    supported only on Gen9 and above servers. By default, this step is
    disabled.
  * ``reset_ilo``: Resets the iLO. By default, this step is disabled.
  * ``erase_devices``: An inband clean step that performs disk erase on all
    the disks, including the disks visible to the OS as well as the raw
    disks visible to Smart Storage Administrator (SSA). This step supports
    erasing of the raw disks visible to SSA in ProLiant servers only with a
    ramdisk created using diskimage-builder from the Ocata release. By
    default, this step is disabled. See `Disk Erase Support`_ for more
    details.

* For supported in-band cleaning operations, see
  :ref:`InbandvsOutOfBandCleaning`.

* All the automated cleaning steps have an explicit configuration option
  for priority. In order to disable or change the priority of an automated
  clean step, the respective configuration option for its priority should
  be updated in ironic.conf.

* Updating a clean step's priority to 0 will disable that particular clean
  step; it will not run during automated cleaning.

* Configuration options for the automated clean steps are listed under the
  ``[ilo]`` and ``[deploy]`` sections in ironic.conf::

      [ilo]
      clean_priority_reset_ilo=0
      clean_priority_reset_bios_to_default=10
      clean_priority_reset_secure_boot_keys_to_default=20
      clean_priority_clear_secure_boot_keys=0
      clean_priority_reset_ilo_credential=30

      [deploy]
      erase_devices_priority=0

For more information on node automated cleaning, see
:ref:`automated_cleaning`.

Supported **Manual** Cleaning Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* The manual cleaning operations supported are:

  ``activate_license``: Activates the iLO Advanced license.
  This is an out-of-band manual cleaning step associated with the
  ``management`` interface. See
  `Activating iLO Advanced license as manual clean step`_ for user guidance
  on usage.

  Please note that this operation cannot be performed using the
  ``ilo-virtual-media`` boot interface, as it requires this type of
  advanced license to already be active in order to boot via virtual media
  and start the cleaning operation. Virtual media is an advanced feature.
  If an advanced license is already active and the user wants to overwrite
  the current license key, for example in case of a multi-server activation
  key delivered with a flexible-quantity kit or after completing an
  Activation Key Agreement (AKA), then the driver can still be used for
  executing this cleaning step.

  ``update_firmware``: Updates the firmware of the devices. Also an
  out-of-band step associated with the ``management`` interface. See
  `Initiating firmware update as manual clean step`_ for user guidance on
  usage. The supported devices for firmware update are: ``ilo``, ``cpld``,
  ``power_pic``, ``bios`` and ``chassis``. Please refer to the table below
  for their commonly used descriptions.

  .. csv-table::
     :header: "Device", "Description"
     :widths: 30, 80

     "``ilo``", "BMC for HPE ProLiant servers"
     "``cpld``", "System programmable logic device"
     "``power_pic``", "Power management controller"
     "``bios``", "HPE ProLiant System ROM"
     "``chassis``", "System chassis device"

  The firmware of some devices cannot be updated via this method, for
  example: storage controllers, host bus adapters, disk drive firmware,
  network interfaces and Onboard Administrator (OA).

  ``update_firmware_sum``: Updates all or a list of user-specified firmware
  components on the node using Smart Update Manager (SUM). It is an inband
  step associated with the ``management`` interface. See
  `Smart Update Manager (SUM) based firmware update`_ for more information
  on usage.

* iLO firmware version 1.5 is the minimum required to support all the
  operations.
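For reference, a manual clean step is passed to the node as JSON while the
node is in the ``manage`` state. The sketch below activates an iLO Advanced
license; the node name and license key are placeholders, and the
``ilo_license_key`` argument name is an assumption to be checked against
`Activating iLO Advanced license as manual clean step`_:

.. code-block:: console

   openstack baremetal node manage ilo-node-1
   openstack baremetal node clean ilo-node-1 \
       --clean-steps '[{"interface": "management", "step": "activate_license", "args": {"ilo_license_key": "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"}}]'
   openstack baremetal node provide ilo-node-1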
For more information on node manual cleaning, see :ref:`manual_cleaning`.

Node Deployment Customization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The hardware type ``ilo`` supports customization of node deployment via
deploy templates; see :ref:`node-deployment-deploy-steps`.

The supported deploy steps are:

* ``apply_configuration``: Applies given BIOS settings on the node. See
  `BIOS configuration support`_. This step is part of the ``bios``
  interface.
* ``factory_reset``: Resets the BIOS settings on the node to factory
  defaults. See `BIOS configuration support`_. This step is part of the
  ``bios`` interface.
* ``reset_bios_to_default``: Resets system ROM settings to default. This
  step is supported only on Gen9 and above servers. This step is part of
  the ``management`` interface.
* ``reset_secure_boot_keys_to_default``: Resets secure boot keys to the
  manufacturer's defaults. This step is supported only on Gen9 and above
  servers. This step is part of the ``management`` interface.
* ``reset_ilo_credential``: Resets the iLO password. The password needs to
  be specified in the ``ilo_password`` argument of the step. This step is
  part of the ``management`` interface.
* ``clear_secure_boot_keys``: Clears all secure boot keys. This step is
  supported only on Gen9 and above servers. This step is part of the
  ``management`` interface.
* ``reset_ilo``: Resets the iLO. This step is part of the ``management``
  interface.
* ``update_firmware``: Updates the firmware of the devices. This step is
  part of the ``management`` interface. See
  `Initiating firmware update as manual clean step`_ for user guidance on
  usage. The supported devices for firmware update are: ``ilo``, ``cpld``,
  ``power_pic``, ``bios`` and ``chassis``. Please refer to the table below
  for their commonly used descriptions.

  ..
csv-table::
     :header: "Device", "Description"
     :widths: 30, 80

     "``ilo``", "BMC for HPE ProLiant servers"
     "``cpld``", "System programmable logic device"
     "``power_pic``", "Power management controller"
     "``bios``", "HPE ProLiant System ROM"
     "``chassis``", "System chassis device"

  The firmware of some devices cannot be updated via this method, for
  example: storage controllers, host bus adapters, disk drive firmware,
  network interfaces and Onboard Administrator (OA).

* ``apply_configuration``: Applies RAID configuration on the node. See
  :ref:`raid` for more information. This step is part of the ``raid``
  interface.
* ``delete_configuration``: Deletes RAID configuration on the node. See
  :ref:`raid` for more information. This step is part of the ``raid``
  interface.

Example of using deploy template with the Compute service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create a deploy template with a single step:

.. code-block:: console

   openstack baremetal deploy template create \
       CUSTOM_HYPERTHREADING_ON \
       --steps '[{"interface": "bios", "step": "apply_configuration", "args": {"settings": [{"name": "ProcHyperthreading", "value": "Enabled"}]}, "priority": 150}]'

Add the trait ``CUSTOM_HYPERTHREADING_ON`` to the node represented by
``$node_ident``:

.. code-block:: console

   openstack baremetal node add trait $node_ident CUSTOM_HYPERTHREADING_ON

Update the flavor ``bm-hyperthreading-on`` in the Compute service with the
following property:

.. code-block:: console

   openstack flavor set --property trait:CUSTOM_HYPERTHREADING_ON=required bm-hyperthreading-on

Creating a Compute instance with this flavor will ensure that the instance
is scheduled only to Bare Metal nodes with the ``CUSTOM_HYPERTHREADING_ON``
trait. When an instance is created using the ``bm-hyperthreading-on``
flavor, the deploy steps of the deploy template ``CUSTOM_HYPERTHREADING_ON``
will be executed during the deployment of the scheduled node, causing
Hyperthreading to be enabled in the node's BIOS configuration.

..
_ilo-inspection:

Hardware Inspection Support
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The hardware type ``ilo`` supports hardware inspection.

.. note::
   * The disk size is returned by RIBCL/RIS only when RAID is preconfigured
     on the storage. If the storage is Direct Attached Storage, then
     RIBCL/RIS fails to get the disk size.
   * The SNMPv3 inspection gets the disk size for all types of storage. If
     RIBCL/RIS is unable to get the disk size and SNMPv3 inspection is
     requested, proliantutils performs SNMPv3 inspection to get the disk
     size. If proliantutils is unable to get the disk size, it raises an
     error. This feature is available in proliantutils release version
     >= 2.2.0.
   * The iLO must be updated with SNMPv3 authentication details. Please
     refer to the section `SNMPv3 Authentication` in `HPE iLO4 User Guide`_
     for setting up authentication details on the iLO.

     The following parameters are mandatory to be given in driver_info for
     SNMPv3 inspection:

     * ``snmp_auth_user`` : The SNMPv3 user.
     * ``snmp_auth_prot_password`` : The auth protocol pass phrase.
     * ``snmp_auth_priv_password`` : The privacy protocol pass phrase.

     The following parameters are optional for SNMPv3 inspection:

     * ``snmp_auth_protocol`` : The Auth Protocol. The valid values are
       "MD5" and "SHA". The iLO default value is "MD5".
     * ``snmp_auth_priv_protocol`` : The Privacy protocol. The valid values
       are "AES" and "DES". The iLO default value is "DES".

The inspection process will discover the following essential properties
(properties required for scheduling deployment):

* ``memory_mb``: memory size
* ``cpus``: number of cpus
* ``cpu_arch``: cpu architecture
* ``local_gb``: disk size

Inspection can also discover the following extra capabilities for the iLO
driver:

* ``ilo_firmware_version``: iLO firmware version
* ``rom_firmware_version``: ROM firmware version
* ``secure_boot``: whether secure boot is supported or not. The possible
  values are 'true' or 'false'. The value is returned as 'true' if secure
  boot is supported by the server.
* ``server_model``: server model
* ``pci_gpu_devices``: number of GPU devices connected to the bare metal.
* ``nic_capacity``: the max speed of the embedded NIC adapter.
* ``sriov_enabled``: true, if the server has an SR-IOV supporting NIC.
* ``has_rotational``: true, if the server has an HDD disk.
* ``has_ssd``: true, if the server has an SSD disk.
* ``has_nvme_ssd``: true, if the server has an NVMe SSD disk.
* ``cpu_vt``: true, if the server supports CPU virtualization.
* ``hardware_supports_raid``: true, if RAID can be configured on the server
  using a RAID controller.
* ``nvdimm_n``: true, if the server has NVDIMM_N type of persistent memory.
* ``persistent_memory``: true, if the server has persistent memory.
* ``logical_nvdimm_n``: true, if the server has a logical NVDIMM_N
  configured.
* ``rotational_drive__rpm``: The capabilities
  ``rotational_drive_4800_rpm``, ``rotational_drive_5400_rpm``,
  ``rotational_drive_7200_rpm``, ``rotational_drive_10000_rpm`` and
  ``rotational_drive_15000_rpm`` are set to true if the server has HDD
  drives with speeds of 4800, 5400, 7200, 10000 and 15000 rpm respectively.
* ``logical_raid_level_``: The capabilities ``logical_raid_level_0``,
  ``logical_raid_level_1``, ``logical_raid_level_2``,
  ``logical_raid_level_5``, ``logical_raid_level_6``,
  ``logical_raid_level_10``, ``logical_raid_level_50`` and
  ``logical_raid_level_60`` are set to true if any of the RAID levels among
  0, 1, 2, 5, 6, 10, 50 and 60 are configured on the system.

.. note::
   * The capability ``nic_capacity`` can only be discovered if ipmitool
     version >= 1.8.15 is used on the conductor. The latest version can be
     downloaded from `here `__.
   * The iLO firmware version needs to be 2.10 or above for
     ``nic_capacity`` to be discovered.
   * To discover IPMI based attributes you need to enable the iLO feature
     'IPMI/DCMI over LAN Access' on the `iLO4 `_ and `iLO5 `_ management
     engine.
   * The proliantutils returns only active NICs for Gen10 ProLiant HPE
     servers.
     The user would need to delete the ironic ports corresponding to
     inactive NICs for Gen8 and Gen9 servers, as proliantutils returns all
     the discovered (active and otherwise) NICs for Gen8 and Gen9 servers,
     and ironic ports are created for all of them. Inspection logs a
     warning if the node under inspection is Gen8 or Gen9.

The operator can specify these capabilities in the nova flavor for the node
to be selected during scheduling::

    nova flavor-key my-baremetal-flavor set capabilities:server_model=" Gen8"
    nova flavor-key my-baremetal-flavor set capabilities:nic_capacity="10Gb"
    nova flavor-key my-baremetal-flavor set capabilities:ilo_firmware_version=" 2.10"
    nova flavor-key my-baremetal-flavor set capabilities:has_ssd="true"

See :ref:`capabilities-discovery` for more details and examples.

Swiftless deploy for intermediate images
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The hardware type ``ilo`` with ``ilo-virtual-media`` as the boot interface
can deploy and boot the server with and without ``swift`` being used for
hosting the intermediate temporary floppy image (holding metadata for the
deploy kernel and ramdisk) and the boot ISO. A local HTTP(S) web server on
each conductor node needs to be configured. Please refer to
`Web server configuration on conductor`_ for more information. The HTTPS
web server needs to be enabled (instead of the HTTP web server) in order to
send management information and images over an encrypted HTTPS channel.

.. note::
   This feature assumes that the user inputs are on Glance, which uses
   swift as its backend. If the swift dependency has to be eliminated,
   please refer to `HTTP(S) Based Deploy Support`_ also.

Deploy Process
~~~~~~~~~~~~~~

Please refer to `Netboot in swiftless deploy for intermediate images`_ for
partition image support and
`Localboot in swiftless deploy for intermediate images`_ for whole disk
image support.
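Taken together with `Web server configuration on conductor`_, a minimal
swiftless setup in ``/etc/ironic/ironic.conf`` could look as below; the
HTTPS endpoint and root path are placeholder values::

    [ilo]
    use_web_server_for_images = True

    [deploy]
    http_root = /httpboot
    http_url = https://192.168.0.2:8443

Restart the ``ironic-conductor`` service after changing these options.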
HTTP(S) Based Deploy Support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The user input for the images given in ``driver_info`` like
``ilo_deploy_iso``, ``deploy_kernel`` and ``deploy_ramdisk``, and in
``instance_info`` like ``image_source``, ``kernel``, ``ramdisk`` and
``ilo_boot_iso``, may also be given as HTTP(S) URLs.

The HTTP(S) web server can be configured in many ways. For the Apache web
server on Ubuntu, refer `here `_. The web server may reside on a different
system than the conductor nodes, but its URL must be reachable by the
conductor and the bare metal nodes.

Deploy Process
~~~~~~~~~~~~~~

Please refer to `Netboot with HTTP(S) based deploy`_ for partition image
boot and `Localboot with HTTP(S) based deploy`_ for whole disk image boot.

Support for iLO driver with Standalone Ironic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use ironic as a standalone service without other
OpenStack services. The ``ilo`` hardware type can be used in standalone
ironic. This feature is referred to as ``iLO driver with standalone
ironic`` in this document.

Configuration
~~~~~~~~~~~~~

The HTTP(S) web server needs to be configured as described in
`HTTP(S) Based Deploy Support`_, and
`Web server configuration on conductor`_ needs to be configured for hosting
intermediate images on the conductor as described in
`Swiftless deploy for intermediate images`_.

Deploy Process
==============

Netboot with glance and swift
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

..
seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Download user image"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> Swift [label = "Uploads the boot ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> Swift [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition 
and continues booting from disk"]; } Localboot with glance and swift for partition images ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Glance -> Conductor [label = "Returns the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Swift [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to root partition"]; IPA -> IPA [label = "Installs boot loader"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Localboot with glance and swift ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Glance -> Conductor [label = "Returns the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Swift [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Netboot in swiftless deploy for intermediate images ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Download user image"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver [label = "Uploads the FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> ConductorWebserver [label = "Uploads the boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> ConductorWebserver [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition and continues booting from disk"]; } Localboot in swiftless deploy for intermediate images 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. seqdiag:: :scale: 80 diagram { Glance; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Glance [label = "Get the metadata for deploy ISO"]; Glance -> Conductor [label = "Returns the metadata for deploy ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for deploy ISO"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver [label = "Uploads the FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Swift [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Swift [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> Baremetal [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Netboot with HTTP(S) based deploy ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Webserver [label = "Download user image"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> Swift [label = "Uploads the boot ISO"]; Conductor -> Conductor [label = "Generates swift tempURL for boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO swift tempURL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> Swift [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition and continues booting from disk"]; } Localboot with HTTP(S) based deploy ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; Swift; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Conductor [label = "Creates the FAT32 image containing ironic API URL and driver name"]; Conductor -> Swift [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates swift tempURL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image swift tempURL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Webserver [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> Baremetal [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Netboot in standalone ironic ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Webserver [label = "Download user image"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver[label = "Uploads the FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Exposes the disk over iSCSI"]; Conductor -> Conductor [label = "Connects to bare metal's disk over iSCSI and writes image"]; Conductor -> Conductor [label = "Generates the boot ISO"]; Conductor -> ConductorWebserver [label = "Uploads the boot ISO"]; Conductor -> iLO [label = "Attaches boot ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets boot device to CDROM"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> iLO [label = "Power on the node"]; iLO -> ConductorWebserver [label = "Downloads boot ISO"]; iLO -> Baremetal [label = "Boots the instance kernel/ramdisk from iLO virtual media CDROM"]; Baremetal -> Baremetal [label = "Instance kernel finds root partition and continues booting from disk"]; } Localboot in standalone ironic ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
seqdiag:: :scale: 80 diagram { Webserver; Conductor; Baremetal; ConductorWebserver; IPA; iLO; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> iLO [label = "Powers off the node"]; Conductor -> Conductor [label = "Creates the FAT32 image containing Ironic API URL and driver name"]; Conductor -> ConductorWebserver [label = "Uploads the FAT32 image"]; Conductor -> Conductor [label = "Generates URL for FAT32 image"]; Conductor -> iLO [label = "Attaches the FAT32 image URL as virtual media floppy"]; Conductor -> iLO [label = "Attaches the deploy ISO URL as virtual media CDROM"]; Conductor -> iLO [label = "Sets one time boot to CDROM"]; Conductor -> iLO [label = "Reboot the node"]; iLO -> Webserver [label = "Downloads deploy ISO"]; Baremetal -> iLO [label = "Boots deploy kernel/ramdisk from iLO virtual media CDROM"]; IPA -> Conductor [label = "Lookup node"]; Conductor -> IPA [label = "Provides node UUID"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> IPA [label = "Sends the user image HTTP(S) URL"]; IPA -> Webserver [label = "Retrieves the user image on bare metal"]; IPA -> IPA [label = "Writes user image to disk"]; IPA -> Conductor [label = "Heartbeat"]; Conductor -> Baremetal [label = "Sets boot device to disk"]; Conductor -> IPA [label = "Power off the node"]; Conductor -> Baremetal [label = "Power on the node"]; Baremetal -> Baremetal [label = "Boot user image from disk"]; } Activating iLO Advanced license as manual clean step ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ iLO driver can activate the iLO Advanced license key as a manual cleaning step. Any manual cleaning step can only be initiated when a node is in the ``manageable`` state. Once the manual cleaning is finished, the node will be put in the ``manageable`` state again. User can follow steps from :ref:`manual_cleaning` to initiate manual cleaning operation on a node. 
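Clean steps are passed to ironic as a JSON document, and a quick way to avoid shell-quoting mistakes is to build that document programmatically. The sketch below is illustrative only — the license key is a placeholder — and produces a string that can be passed to ``openstack baremetal node clean <node> --clean-steps``:

```python
import json

# Placeholder iLO Advanced license key -- substitute a real key.
clean_steps = [{
    "interface": "management",
    "step": "activate_license",
    "args": {"ilo_license_key": "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"},
}]

# Serialized form suitable for the --clean-steps command-line argument.
payload = json.dumps(clean_steps)
print(payload)
```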
An example of a manual clean step with ``activate_license`` as the only clean step could be:: "clean_steps": [{ "interface": "management", "step": "activate_license", "args": { "ilo_license_key": "ABC12-XXXXX-XXXXX-XXXXX-YZ345" } }] The different attributes of ``activate_license`` clean step are as follows: .. csv-table:: :header: "Attribute", "Description" :widths: 30, 120 "``interface``", "Interface of clean step, here ``management``" "``step``", "Name of clean step, here ``activate_license``" "``args``", "Keyword-argument entry (: ) being passed to clean step" "``args.ilo_license_key``", "iLO Advanced license key to activate enterprise features. This is mandatory." Initiating firmware update as manual clean step ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ iLO driver can invoke secure firmware update as a manual cleaning step. Any manual cleaning step can only be initiated when a node is in the ``manageable`` state. Once the manual cleaning is finished, the node will be put in the ``manageable`` state again. A user can follow steps from :ref:`manual_cleaning` to initiate manual cleaning operation on a node. An example of a manual clean step with ``update_firmware`` as the only clean step could be:: "clean_steps": [{ "interface": "management", "step": "update_firmware", "args": { "firmware_update_mode": "ilo", "firmware_images":[ { "url": "file:///firmware_images/ilo/1.5/CP024444.scexe", "checksum": "a94e683ea16d9ae44768f0a65942234d", "component": "ilo" }, { "url": "swift://firmware_container/cpld2.3.rpm", "checksum": "", "component": "cpld" }, { "url": "http://my_address:port/firmwares/bios_vLatest.scexe", "checksum": "", "component": "bios" }, { "url": "https://my_secure_address_url/firmwares/chassis_vLatest.scexe", "checksum": "", "component": "chassis" }, { "url": "file:///home/ubuntu/firmware_images/power_pic/pmc_v3.0.bin", "checksum": "", "component": "power_pic" } ] } }] The different attributes of ``update_firmware`` clean step are as follows: .. 
csv-table:: :header: "Attribute", "Description" :widths: 30, 120 "``interface``", "Interface of clean step, here ``management``" "``step``", "Name of clean step, here ``update_firmware``" "``args``", "Keyword-argument entry (: ) being passed to clean step" "``args.firmware_update_mode``", "Mode (or mechanism) of out-of-band firmware update. Supported value is ``ilo``. This is mandatory." "``args.firmware_images``", "Ordered list of dictionaries of images to be flashed. This is mandatory." Each firmware image block is represented by a dictionary (JSON), in the form:: { "url": "", "checksum": "", "component": "" } All the fields in the firmware image block are mandatory. * The different types of firmware url schemes supported are: ``file``, ``http``, ``https`` and ``swift``. .. note:: This feature assumes that while using ``file`` url scheme the file path is on the conductor controlling the node. .. note:: The ``swift`` url scheme assumes the swift account of the ``service`` project. The ``service`` project (tenant) is a special project created in the Keystone system designed for the use of the core OpenStack services. When Ironic makes use of Swift for storage purpose, the account is generally ``service`` and the container is generally ``ironic`` and ``ilo`` driver uses a container named ``ironic_ilo_container`` for their own purpose. .. note:: While using firmware files with a ``.rpm`` extension, make sure the commands ``rpm2cpio`` and ``cpio`` are present on the conductor, as they are utilized to extract the firmware image from the package. * The firmware components that can be updated are: ``ilo``, ``cpld``, ``power_pic``, ``bios`` and ``chassis``. * The firmware images will be updated in the order given by the operator. If there is any error during processing of any of the given firmware images provided in the list, none of the firmware updates will occur. The processing error could happen during image download, image checksum verification or image extraction. 
The logic is to process each of the firmware files and update them on the devices only if all the files are processed successfully. If, during the update (uploading and flashing) process, an update fails, the remaining updates in the list, if any, are aborted. It is recommended to triage and fix the failure, then re-attempt the manual clean step ``update_firmware`` for the aborted ``firmware_images``. Devices whose firmware was updated successfully start functioning with their newly updated firmware. * For troubleshooting the complete process, check the Ironic conductor logs carefully for any firmware processing or update related errors; these help in root-cause analysis and in understanding where the process stopped or failed. You can then fix or work around the issue and try again. A common cause of update failure is an HPE Secure Digital Signature check failure for the firmware image file. * To compute the ``md5`` checksum of your image file, you can use the following command:: $ md5sum image.rpm 66cdb090c80b71daa21a67f06ecd3f33 image.rpm Smart Update Manager (SUM) based firmware update ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The firmware update based on `SUM`_ is an inband clean step supported by the iLO driver. The firmware update is performed on all, or a user-specified list of, firmware components on the node. Refer to the `SUM User Guide`_ for more information on SUM based firmware update. The ``update_firmware_sum`` clean step requires the agent ramdisk with ``Proliant Hardware Manager`` from proliantutils version 2.5.0 or higher. See `DIB support for Proliant Hardware Manager`_ to create the agent ramdisk with ``Proliant Hardware Manager``. The attributes of the ``update_firmware_sum`` clean step are as follows: ..
csv-table:: :header: "Attribute", "Description" :widths: 30, 120 "``interface``", "Interface of the clean step, here ``management``" "``step``", "Name of the clean step, here ``update_firmware_sum``" "``args``", "Keyword-argument entry (: ) being passed to the clean step" The keyword arguments used for the clean step are as follows: * ``url``: URL of SPP (Service Pack for Proliant) ISO. It is mandatory. The URL schemes supported are ``http``, ``https`` and ``swift``. * ``checksum``: MD5 checksum of SPP ISO to verify the image. It is mandatory. * ``components``: List of filenames of the firmware components to be flashed. It is optional. If not provided, the firmware update is performed on all the firmware components. The clean step performs an update on all or a list of firmware components and returns the SUM log files. The log files include ``hpsum_log.txt`` and ``hpsum_detail_log.txt`` which holds the information about firmware components, firmware version for each component and their update status. The log object will be named with the following pattern:: [_]_update_firmware_sum_.tar.gz Refer to :ref:`retrieve_deploy_ramdisk_logs` for more information on enabling and viewing the logs returned from the ramdisk. An example of ``update_firmware_sum`` clean step: .. code-block:: json { "interface": "management", "step": "update_firmware_sum", "args": { "url": "http://my_address:port/SPP.iso", "checksum": "abcdefxyz", "components": ["CP024356.scexe", "CP008097.exe"] } } The clean step fails if there is any error in the processing of clean step arguments. The processing error could happen during validation of components' file extension, image download, image checksum verification or image extraction. In case of a failure, check Ironic conductor logs carefully to see if there are any validation or firmware processing related errors which may help in root cause analysis or gaining an understanding of where things were left off or where things failed. 
You can then fix or work around the issue and try again. .. warning:: This feature is officially supported only with RHEL and SUSE based IPA ramdisks. Refer to `SUM`_ for the OS versions supported by a specific SUM version. .. note:: Refer to `Guidelines for SPP ISO`_ for steps to get the SPP (Service Pack for ProLiant) ISO. RAID Support ^^^^^^^^^^^^ Inband RAID functionality is supported by the iLO driver. See :ref:`raid` for more information. The Bare Metal service updates the node with the following information after successful RAID configuration: * Node ``properties/local_gb`` is set to the size of the root volume. * Node ``properties/root_device`` is filled with the ``wwn`` details of the root volume. It is used by the iLO driver as a root device hint during provisioning. * The RAID level of the root volume is added as a ``raid_level`` capability to the node's ``capabilities`` parameter within the ``properties`` field. The operator can specify the ``raid_level`` capability in the nova flavor for the node to be selected for scheduling:: nova flavor-key ironic-test set capabilities:raid_level="1+0" nova boot --flavor ironic-test --image test-image instance-1 .. _DIB_raid_support: DIB support for Proliant Hardware Manager ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Install ``ironic-python-agent-builder`` following the guide [1]_. To create an agent ramdisk with ``Proliant Hardware Manager``, use the ``proliant-tools`` element in DIB:: ironic-python-agent-builder -o proliant-agent-ramdisk -e proliant-tools fedora Disk Erase Support ^^^^^^^^^^^^^^^^^^ ``erase_devices`` is an inband clean step supported by the iLO driver. It erases all disks, including disks visible to the OS as well as raw disks visible to the Smart Storage Administrator (SSA). This inband clean step requires the ``ssacli`` utility, version ``2.60-19.0`` or later, to perform the erase on physical disks. See the `ssacli documentation`_ for more information on the ssacli utility and the different erase methods supported by SSA.
Disk erasure via ``shred`` is used to erase disks visible to the OS; its implementation is available in Ironic Python Agent. The raw disks connected to the Smart Storage Controller are erased using Sanitize erase, an ssacli-supported erase method. If Sanitize erase is not supported on the Smart Storage Controller, the disks are erased using One-pass erase (overwrite with zeros). This clean step is supported when the agent ramdisk contains ``Proliant Hardware Manager`` from proliantutils version 2.3.0 or higher. This clean step is performed as part of automated cleaning and is disabled by default. See :ref:`InbandvsOutOfBandCleaning` for more information on enabling/disabling a clean step. Install ``ironic-python-agent-builder`` following the guide [1]_. To create an agent ramdisk with ``Proliant Hardware Manager``, use the ``proliant-tools`` element in DIB:: ironic-python-agent-builder -o proliant-agent-ramdisk -e proliant-tools fedora See `proliant-tools`_ for more information on creating an agent ramdisk with the ``proliant-tools`` element in DIB. Firmware based UEFI iSCSI boot from volume support ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ With Gen9 (UEFI firmware version 1.40 or higher) and Gen10 HPE ProLiant servers, the driver supports firmware based UEFI boot of an iSCSI cinder volume. This feature requires the node to be configured to boot in ``UEFI`` boot mode, the user image to be ``UEFI`` bootable, and ``PortFast`` to be enabled in the switch configuration for an immediate spanning tree forwarding state, so that setting the iSCSI target as a persistent device does not take long. The driver does not support this functionality in ``bios`` boot mode. If the node is configured with ``ilo-pxe`` or ``ilo-ipxe`` as the boot interface and the boot mode configured on the bare metal is ``bios``, the iSCSI boot from volume is performed using iPXE. See :doc:`/admin/boot-from-volume` for more details.
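As a sketch of the prerequisite configuration for UEFI boot from volume (the node and flavor names here are illustrative, not prescribed by ironic), the boot mode can be declared on the node's capabilities and requested via the flavor:

```shell
# Declare that the node boots in UEFI mode (node name is illustrative).
openstack baremetal node set node-0 --property capabilities='boot_mode:uefi'

# Have the scheduling flavor request UEFI-capable nodes (flavor name is illustrative).
openstack flavor set ironic-test --property capabilities:boot_mode="uefi"
```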
To use this feature, configure the boot mode of the bare metal to ``uefi`` and configure the corresponding ironic node using the steps given in :doc:`/admin/boot-from-volume`. In a cloud environment with nodes configured in both ``bios`` and ``uefi`` boot modes, note that the virtual media driver supports only the ``uefi`` boot mode, and attempting to use iSCSI boot from volume with a node in ``bios`` boot mode will result in an error. BIOS configuration support ^^^^^^^^^^^^^^^^^^^^^^^^^^ The ``ilo`` and ``ilo5`` hardware types support the ``ilo`` BIOS interface. The support includes the manual clean steps *apply_configuration* and *factory_reset* to manage supported BIOS settings on the node. See :ref:`bios` for more details and examples. .. note:: Prior to the Stein release, the user was required to reboot the node manually in order for the settings to take effect. Starting with the Stein release, the iLO drivers reboot the node after running clean steps related to the BIOS configuration. The BIOS settings are cached, and the clean step is marked as successful only if all the requested settings are applied without any failure. If the application of any of the settings fails, the clean step is marked as failed and the settings are not cached. Configuration ~~~~~~~~~~~~~ The following are the supported BIOS settings with a brief description of each. For a detailed description, please refer to the `HPE Integrated Lights-Out REST API Documentation `_. - ``AdvancedMemProtection``: Configure additional memory protection with ECC (Error Checking and Correcting). Allowed values are ``AdvancedEcc``, ``OnlineSpareAdvancedEcc``, ``MirroredAdvancedEcc``. - ``AutoPowerOn``: Configure the server to automatically power on when AC power is applied to the system. Allowed values are ``AlwaysPowerOn``, ``AlwaysPowerOff``, ``RestoreLastState``. - ``BootMode``: Select the boot mode of the system.
Allowed values are ``Uefi``, ``LegacyBios`` - ``BootOrderPolicy``: Configure how the system attempts to boot devices per the Boot Order when no bootable device is found. Allowed values are ``RetryIndefinitely``, ``AttemptOnce``, ``ResetAfterFailed``. - ``CollabPowerControl``: Enables the Operating System to request processor frequency changes even if the Power Regulator option on the server configured for Dynamic Power Savings Mode. Allowed values are ``Enabled``, ``Disabled``. - ``DynamicPowerCapping``: Configure when the System ROM executes power calibration during the boot process. Allowed values are ``Enabled``, ``Disabled``, ``Auto``. - ``DynamicPowerResponse``: Enable the System BIOS to control processor performance and power states depending on the processor workload. Allowed values are ``Fast``, ``Slow``. - ``IntelligentProvisioning``: Enable or disable the Intelligent Provisioning functionality. Allowed values are ``Enabled``, ``Disabled``. - ``IntelPerfMonitoring``: Exposes certain chipset devices that can be used with the Intel Performance Monitoring Toolkit. Allowed values are ``Enabled``, ``Disabled``. - ``IntelProcVtd``: Hypervisor or operating system supporting this option can use hardware capabilities provided by Intel's Virtualization Technology for Directed I/O. Allowed values are ``Enabled``, ``Disabled``. - ``IntelQpiFreq``: Set the QPI Link frequency to a lower speed. Allowed values are ``Auto``, ``MinQpiSpeed``. - ``IntelTxt``: Option to modify Intel TXT support. Allowed values are ``Enabled``, ``Disabled``. - ``PowerProfile``: Set the power profile to be used. Allowed values are ``BalancedPowerPerf``, ``MinPower``, ``MaxPerf``, ``Custom``. - ``PowerRegulator``: Determines how to regulate the power consumption. Allowed values are ``DynamicPowerSavings``, ``StaticLowPower``, ``StaticHighPerf``, ``OsControl``. - ``ProcAes``: Enable or disable the Advanced Encryption Standard Instruction Set (AES-NI) in the processor. 
Allowed values are ``Enabled``, ``Disabled``. - ``ProcCoreDisable``: Disable processor cores using Intel's Core Multi-Processing (CMP) Technology. Allowed values are Integers ranging from ``0`` to ``24``. - ``ProcHyperthreading``: Enable or disable Intel Hyperthreading. Allowed values are ``Enabled``, ``Disabled``. - ``ProcNoExecute``: Protect your system against malicious code and viruses. Allowed values are ``Enabled``, ``Disabled``. - ``ProcTurbo``: Enables the processor to transition to a higher frequency than the processor's rated speed using Turbo Boost Technology if the processor has available power and is within temperature specifications. Allowed values are ``Enabled``, ``Disabled``. - ``ProcVirtualization``: Enables or Disables a hypervisor or operating system supporting this option to use hardware capabilities provided by Intel's Virtualization Technology. Allowed values are ``Enabled``, ``Disabled``. - ``SecureBootStatus``: The current state of Secure Boot configuration. Allowed values are ``Enabled``, ``Disabled``. .. note:: This setting is read-only and can't be modified with ``apply_configuration`` clean step. - ``Sriov``: If enabled, SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. If enabled, the BIOS allocates additional resources to PCI-express devices. Allowed values are ``Enabled``, ``Disabled``. - ``ThermalConfig``: select the fan cooling solution for the system. Allowed values are ``OptimalCooling``, ``IncreasedCooling``, ``MaxCooling`` - ``ThermalShutdown``: Control the reaction of the system to caution level thermal events. Allowed values are ``Enabled``, ``Disabled``. - ``TpmState``: Current TPM device state. Allowed values are ``NotPresent``, ``PresentDisabled``, ``PresentEnabled``. .. note:: This setting is read-only and can't be modified with ``apply_configuration`` clean step. - ``TpmType``: Current TPM device type. 
Allowed values are ``NoTpm``, ``Tpm12``, ``Tpm20``, ``Tm10``. .. note:: This setting is read-only and can't be modified with the ``apply_configuration`` clean step. - ``UefiOptimizedBoot``: Enables or disables the System BIOS boot using native UEFI graphics drivers. Allowed values are ``Enabled``, ``Disabled``. - ``WorkloadProfile``: Change the Workload Profile to accommodate your desired workload. Allowed values are ``GeneralPowerEfficientCompute``, ``GeneralPeakFrequencyCompute``, ``GeneralThroughputCompute``, ``Virtualization-PowerEfficient``, ``Virtualization-MaxPerformance``, ``LowLatency``, ``MissionCritical``, ``TransactionalApplicationProcessing``, ``HighPerformanceCompute``, ``DecisionSupport``, ``GraphicProcessing``, ``I/OThroughput``, ``Custom`` .. note:: This setting is only applicable to ProLiant Gen10 servers with iLO 5 management systems. Certificate based validation in iLO ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The driver supports validation of certificates on HPE ProLiant servers. The path to the certificate file needs to be set in ``ca_file`` in the node's ``driver_info``. To update SSL certificates in iLO, refer to the `HPE Integrated Lights-Out Security Technology Brief `_. Use the iLO hostname or IP address as the 'Common Name (CN)' while generating the Certificate Signing Request (CSR). Use the same value as `ilo_address` while enrolling the node to the Bare Metal service, to avoid SSL certificate validation errors related to hostname mismatch. Rescue mode support ^^^^^^^^^^^^^^^^^^^ The hardware type ``ilo`` supports rescue functionality. A rescue operation can be used to boot nodes into a rescue ramdisk so that the ``rescue`` user can access the node. Please refer to :doc:`/admin/rescue` for a detailed explanation of the rescue feature. Inject NMI support ^^^^^^^^^^^^^^^^^^ The management interface ``ilo`` supports injecting a non-maskable interrupt (NMI) into a bare metal node. The following command can be used to inject NMI on a server: ..
code-block:: console openstack baremetal node inject nmi The following command can be used to inject NMI via the Compute service: .. code-block:: console openstack server dump create .. note:: This feature is supported on HPE ProLiant Gen9 servers and beyond. Soft power operation support ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The power interface ``ilo`` supports soft power off and soft reboot operations on a bare metal node. The following commands can be used to perform soft power operations on a server: .. code-block:: console openstack baremetal node reboot --soft \ [--power-timeout ] openstack baremetal node power off --soft \ [--power-timeout ] .. note:: The configuration option ``[conductor]soft_power_off_timeout`` is used as the default timeout value when no timeout is provided while invoking hard or soft power operations. .. note:: The server POST state is used to track the power status of HPE ProLiant Gen9 servers and beyond. Out of Band RAID Support ^^^^^^^^^^^^^^^^^^^^^^^^ With Gen10 and later HPE ProLiant servers, the ``ilo5`` hardware type supports firmware based RAID configuration as a clean step. This feature requires the node to be configured with the ``ilo5`` hardware type and its RAID interface set to ``ilo5``. See :ref:`raid` for more information. After a successful RAID configuration, the Bare Metal service will update the node with the following information: * Node ``properties/local_gb`` is set to the size of the root volume. * Node ``properties/root_device`` is filled with the ``wwn`` details of the root volume. It is used by the iLO driver as a root device hint during provisioning. Later, the RAID level of the root volume can be added to a resource class such as ``baremetal-with-RAID10`` (RAID10 for RAID level 10).
Consequently, the flavor needs to be updated to request the resource class so that the server is created using the selected node:: openstack baremetal node set test_node --resource-class \ baremetal-with-RAID10 openstack flavor set --property \ resources:CUSTOM_BAREMETAL_WITH_RAID10=1 test-flavor openstack server create --flavor test-flavor --image test-image instance-1 .. note:: Supported RAID levels for the ``ilo5`` hardware type are: 0, 1, 5, 6, 10, 50, 60. IPv6 support ^^^^^^^^^^^^ With the IPv6 support in ``proliantutils>=2.8.0``, nodes can be enrolled into the baremetal service using iLO IPv6 addresses. .. code-block:: console openstack baremetal node create --driver ilo --deploy-interface direct \ --driver-info ilo_address=2001:0db8:85a3:0000:0000:8a2e:0370:7334 \ --driver-info ilo_username=test-user \ --driver-info ilo_password=test-password \ --driver-info ilo_deploy_iso=test-iso \ --driver-info ilo_rescue_iso=test-iso .. note:: No configuration changes (in e.g. ironic.conf) are required in order to support IPv6. Out of Band Sanitize Disk Erase Support ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ With Gen10 HPE ProLiant servers and later, the ``ilo5`` hardware type supports firmware-based sanitize disk erase as a clean step. This feature requires the node to be configured with the ``ilo5`` hardware type and its ``management`` interface set to ``ilo5``. The supported erase patterns are: * For HDD - 'overwrite', 'zero', 'crypto' * For SSD - 'block', 'zero', 'crypto' The default erase pattern is 'overwrite' for HDD and 'block' for SSD. .. note:: On average, a 300GB HDD with the default pattern "overwrite" takes approximately 9 hours, and a 300GB SSD with the default pattern "block" takes approximately 30 seconds to complete the erase. .. _`ssacli documentation`: https://support.hpe.com/hpsc/doc/public/display?docId=c03909334 .. _`proliant-tools`: https://docs.openstack.org/diskimage-builder/latest/elements/proliant-tools/README.html ..
_`HPE iLO4 User Guide`: https://h20566.www2.hpe.com/hpsc/doc/public/display?docId=c03334051 .. _`iLO 4 management engine`: https://www.hpe.com/us/en/servers/integrated-lights-out-ilo.html .. _`iLO 5 management engine`: https://www.hpe.com/us/en/servers/integrated-lights-out-ilo.html#innovations .. _`Redfish`: https://www.dmtf.org/standards/redfish .. _`Gen10 wiki section`: https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/master#Enabling_ProLiant_Gen10_systems_in_Ironic .. _`Guidelines for SPP ISO`: https://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/spp .. _`SUM`: https://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/hpsum/index.aspx .. _`SUM User Guide`: https://h20565.www2.hpe.com/hpsc/doc/public/display?docId=c05210448 .. [1] `ironic-python-agent-builder`: https://docs.openstack.org/ironic-python-agent-builder/latest/install/index.html ================= Intel IPMI driver ================= Overview ======== The ``intel-ipmi`` hardware type is the same as the :doc:`ipmitool` hardware type except for its support of the Intel Speed Select Performance Profile (Intel SST-PP_) feature. Intel SST-PP allows a server to run different workloads by configuring the CPU to run at 3 distinct operating points or profiles. Intel SST-PP supports three configuration levels: * 0 - Intel SST-PP Base Config * 1 - Intel SST-PP Config 1 * 2 - Intel SST-PP Config 2 The following table shows the list of active cores and their base frequency at different SST-PP config levels: ============== ========= =================== Config Cores Base Freq (GHz) ============== ========= =================== Base 24 2.4 Config 1 20 2.5 Config 2 16 2.7 ============== ========= =================== This configuration is managed by the management interface ``intel-ipmitool`` for IntelIPMI hardware.
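The table above can be captured as a small lookup for tooling that sanity-checks a requested profile before applying it. This is a sketch only; the core counts and frequencies are the illustrative values from the table and vary by CPU SKU:

```python
# Illustrative SST-PP profile table (values from the table above; actual
# core counts and base frequencies depend on the CPU SKU).
SST_PP_PROFILES = {
    0: {"name": "Base", "cores": 24, "base_ghz": 2.4},
    1: {"name": "Config 1", "cores": 20, "base_ghz": 2.5},
    2: {"name": "Config 2", "cores": 16, "base_ghz": 2.7},
}


def profile_for(level):
    """Return the profile description for an SST-PP config level (0-2)."""
    if level not in SST_PP_PROFILES:
        raise ValueError("unknown SST-PP config level: %r" % level)
    return SST_PP_PROFILES[level]
```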
IntelIPMI manages nodes by using IPMI_ (Intelligent Platform Management Interface) protocol versions 2.0 or 1.5. It uses the IPMItool_ utility, which is an open-source command-line interface (CLI) for controlling IPMI-enabled devices. Glossary ======== * IPMI - Intelligent Platform Management Interface. * Intel SST-PP - Intel Speed Select Performance Profile. Enabling the IntelIPMI hardware type ==================================== Please see :doc:`/install/configure-ipmi` for the required dependencies. #. To enable ``intel-ipmi`` hardware, add the following configuration to your ``ironic.conf``: .. code-block:: ini [DEFAULT] enabled_hardware_types = intel-ipmi enabled_management_interfaces = intel-ipmitool #. Restart the Ironic conductor service:: sudo service ironic-conductor restart # Or, for RDO: sudo systemctl restart openstack-ironic-conductor Registering a node with the IntelIPMI driver ============================================ Nodes configured to use the IntelIPMI drivers should have the ``driver`` field set to ``intel-ipmi``. All configuration values required for IntelIPMI are the same as for the IPMI hardware type, except the management interface, which is ``intel-ipmitool``. Refer to :doc:`ipmitool` for details. The ``openstack baremetal node create`` command can be used to enroll a node with an IntelIPMI driver. For example:: openstack baremetal node create --driver intel-ipmi \ --driver-info ipmi_address=<address>
\ --driver-info ipmi_username=<username> \ --driver-info ipmi_password=<password> Features of the ``intel-ipmi`` hardware type ============================================ Intel SST-PP ^^^^^^^^^^^^^ A node with Intel SST-PP can be configured to use it via the ``configure_intel_speedselect`` deploy step. This deploy step accepts: * ``intel_speedselect_config``: Hexadecimal code of the Intel SST-PP configuration. Accepted values are '0x00', '0x01', '0x02'. These values correspond to `Intel SST-PP Config Base`, `Intel SST-PP Config 1`, `Intel SST-PP Config 2` respectively. The input value must be a string. * ``socket_count``: Number of sockets in the node. The input value must be a positive integer (1 by default). The deploy step issues an IPMI command with the raw code for each socket in the node to set the requested configuration. A reboot is required to reflect the changes. Each configuration profile is mapped to traits that Ironic understands. Please note that these names are used for example purposes only. Any name can be used. Only the parameter value should match the deploy step ``configure_intel_speedselect``. * 0 - ``CUSTOM_INTEL_SPEED_SELECT_CONFIG_BASE`` * 1 - ``CUSTOM_INTEL_SPEED_SELECT_CONFIG_1`` * 2 - ``CUSTOM_INTEL_SPEED_SELECT_CONFIG_2`` Now, to configure a node with Intel SST-PP while provisioning, create deploy templates for each profile in Ironic. ..
code-block:: console openstack baremetal deploy template create \ CUSTOM_INTEL_SPEED_SELECT_CONFIG_BASE \ --steps '[{"interface": "management", "step": "configure_intel_speedselect", "args": {"intel_speedselect_config": "0x00", "socket_count": 2}, "priority": 150}]' openstack baremetal deploy template create \ CUSTOM_INTEL_SPEED_SELECT_CONFIG_1 \ --steps '[{"interface": "management", "step": "configure_intel_speedselect", "args": {"intel_speedselect_config": "0x01", "socket_count": 2}, "priority": 150}]' openstack baremetal deploy template create \ CUSTOM_INTEL_SPEED_SELECT_CONFIG_2 \ --steps '[{"interface": "management", "step": "configure_intel_speedselect", "args": {"intel_speedselect_config": "0x02", "socket_count": 2}, "priority": 150}]' All Intel SST-PP capable nodes should have these traits associated. .. code-block:: console openstack baremetal node add trait node-0 \ CUSTOM_INTEL_SPEED_SELECT_CONFIG_BASE \ CUSTOM_INTEL_SPEED_SELECT_CONFIG_1 \ CUSTOM_INTEL_SPEED_SELECT_CONFIG_2 To trigger the Intel SST-PP configuration during node provisioning, one of the traits can be added to the flavor. .. code-block:: console openstack flavor set baremetal --property trait:CUSTOM_INTEL_SPEED_SELECT_CONFIG_1=required Finally, create a server with the ``baremetal`` flavor to provision a baremetal node with Intel SST-PP profile *Config 1*. .. _IPMI: https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface .. _IPMItool: https://sourceforge.net/projects/ipmitool/ .. _SST-PP: https://www.intel.com/content/www/us/en/architecture-and-technology/speed-select-technology-article.html =========== IPMI driver =========== Overview ======== The ``ipmi`` hardware type manages nodes by using IPMI_ (Intelligent Platform Management Interface) protocol versions 2.0 or 1.5.
It uses the IPMItool_ utility, which is an open-source command-line interface (CLI) for controlling IPMI-enabled devices. Glossary ======== * IPMI_ - Intelligent Platform Management Interface. * IPMB - Intelligent Platform Management Bus/Bridge. * BMC_ - Baseboard Management Controller. * RMCP - Remote Management Control Protocol. Enabling the IPMI hardware type =============================== Please see :doc:`/install/configure-ipmi` for the required dependencies. #. The ``ipmi`` hardware type is enabled by default starting with the Ocata release. To enable it explicitly, add the following to your ``ironic.conf``: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi enabled_management_interfaces = ipmitool,noop enabled_power_interfaces = ipmitool Optionally, enable the :doc:`vendor passthru interface ` and either or both :doc:`console interfaces `: .. code-block:: ini [DEFAULT] enabled_hardware_types = ipmi enabled_console_interfaces = ipmitool-socat,ipmitool-shellinabox,no-console enabled_management_interfaces = ipmitool,noop enabled_power_interfaces = ipmitool enabled_vendor_interfaces = ipmitool,no-vendor #. Restart the Ironic conductor service. Please see :doc:`/install/enabling-drivers` for more details. Registering a node with the IPMI driver ======================================= Nodes configured to use the IPMItool drivers should have the ``driver`` field set to ``ipmi``. The following configuration value is required and has to be added to the node's ``driver_info`` field: - ``ipmi_address``: The IP address or hostname of the BMC. Other options may be needed to match the configuration of the BMC. The following options are optional, but in most cases it's considered good practice to have them set: - ``ipmi_username``: The username to access the BMC; defaults to the *NULL* user. - ``ipmi_password``: The password to access the BMC; defaults to *NULL*. - ``ipmi_port``: The remote IPMI RMCP port. By default ipmitool will use port *623*. ..
note:: It is highly recommended that you set up a username and password for your BMC. The ``openstack baremetal node create`` command can be used to enroll a node with an IPMItool-based driver. For example:: openstack baremetal node create --driver ipmi \ --driver-info ipmi_address=<address>
\ --driver-info ipmi_username=<username> \ --driver-info ipmi_password=<password> Advanced configuration ====================== When a simple configuration such as providing the ``address``, ``username`` and ``password`` is not enough, the IPMItool driver contains many other options that can be used to address special usages. Single/Double bridging functionality ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: IPMItool version 1.8.12 or higher is required to use the bridging functionality. There are two different bridging functionalities supported by the IPMItool-based drivers: *single* bridge and *dual* bridge. The following configuration values need to be added to the node's ``driver_info`` field so bridging can be used: - ``ipmi_bridging``: The bridging type; default is *no*; other supported values are *single* for single bridge or *dual* for double bridge. - ``ipmi_local_address``: The local IPMB address for bridged requests. Required only if ``ipmi_bridging`` is set to *single* or *dual*. This configuration is optional; if not specified, it will be auto-discovered by IPMItool. - ``ipmi_target_address``: The destination address for bridged requests. Required only if ``ipmi_bridging`` is set to *single* or *dual*. - ``ipmi_target_channel``: The destination channel for bridged requests. Required only if ``ipmi_bridging`` is set to *single* or *dual*. Double bridge specific options: - ``ipmi_transit_address``: The transit address for bridged requests. Required only if ``ipmi_bridging`` is set to *dual*. - ``ipmi_transit_channel``: The transit channel for bridged requests. Required only if ``ipmi_bridging`` is set to *dual*. The parameter ``ipmi_bridging`` should specify the type of bridging required: *single* or *dual* to access the bare metal node. If the parameter is not specified, the default value will be set to *no*. The ``openstack baremetal node set`` command can be used to set the required bridging information to the Ironic node enrolled with the IPMItool driver.
For example: * Single Bridging:: openstack baremetal node set <node> \ --driver-info ipmi_local_address=<address>
\ --driver-info ipmi_bridging=single \ --driver-info ipmi_target_channel=<channel> \ --driver-info ipmi_target_address=<address> * Double Bridging:: openstack baremetal node set <node> \ --driver-info ipmi_local_address=<address>
\ --driver-info ipmi_bridging=dual \ --driver-info ipmi_transit_channel=<channel> \ --driver-info ipmi_transit_address=<address> \ --driver-info ipmi_target_channel=<channel> \ --driver-info ipmi_target_address=<address> Changing the version of the IPMI protocol ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The IPMItool-based drivers work with versions *2.0* and *1.5* of the IPMI protocol. By default, version *2.0* is used. In order to change the IPMI protocol version for a bare metal node, the following option needs to be set in the node's ``driver_info`` field: - ``ipmi_protocol_version``: The version of the IPMI protocol; default is *2.0*. Supported values are *1.5* or *2.0*. The ``openstack baremetal node set`` command can be used to set the desired protocol version:: openstack baremetal node set <node> --driver-info ipmi_protocol_version=<version> .. warning:: Version *1.5* of the IPMI protocol does not support encryption. Therefore, it is highly recommended that version 2.0 is used. Static boot order configuration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some hardware is known to misbehave when changing the boot device through the IPMI protocol. To work around it, you can use the ``noop`` management interface implementation with the ``ipmi`` hardware type. In this case the Bare Metal service will not change the boot device for you, leaving the pre-configured boot order. For example, in case of the :ref:`pxe-boot`: #. Via any available means configure the boot order on the node as follows: #. Boot from PXE/iPXE on the provisioning NIC. .. warning:: If it is not possible to limit network boot to only the provisioning NIC, make sure that no other DHCP/PXE servers are accessible by the node. #. Boot from hard drive. #. Make sure the ``noop`` management interface is enabled, see example in `Enabling the IPMI hardware type`_. #. Change the node to use the ``noop`` management interface:: openstack baremetal node set <node> --management-interface noop .. TODO(lucasagomes): Write about privilege level ..
TODO(lucasagomes): Write about force boot device .. _IPMItool: https://sourceforge.net/projects/ipmitool/ .. _IPMI: https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface .. _BMC: https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface#Baseboard_management_controller =================== Ironic Python Agent =================== Overview ======== *Ironic Python Agent* (also often called *IPA* or just *agent*) is a Python-based agent which handles *ironic* bare metal nodes in a variety of actions such as inspect, configure, clean and deploy images. IPA is delivered to nodes inside a ramdisk and runs on a node once that ramdisk is booted. For more information see the :ironic-python-agent-doc:`ironic-python-agent documentation <>`. Drivers ======= Starting with the Kilo release, all deploy interfaces (except for fake ones) use IPA. There are two types: * For nodes using the :ref:`iscsi-deploy` interface, IPA exposes the root hard drive as an iSCSI share and calls back to the ironic conductor. The conductor mounts the share and copies an image there. It then signals back to IPA for post-installation actions like setting up a bootloader for local boot support. * For nodes using the :ref:`direct-deploy` interface, the conductor prepares a swift temporary URL for an image. IPA then handles the whole deployment process: downloading an image from swift, putting it on the machine and doing any post-deploy actions. Which one to choose depends on your environment. :ref:`iscsi-deploy` puts a higher load on conductors, while :ref:`direct-deploy` currently requires the whole image to fit in the node's memory, except when using raw images. It also requires :doc:`/install/configure-glance-swift`. .. todo: other differences?
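As an illustration of the memory constraint mentioned above, a rough pre-flight check for direct deploy might look like the following. This is a hypothetical helper, not part of ironic's code:

```python
def direct_deploy_fits(image_size_bytes, node_memory_mb, disk_format):
    """Rough check for the direct deploy memory constraint.

    A non-raw image is downloaded into the ramdisk's memory before being
    written out, so it must fit in RAM; raw images can be streamed
    directly to disk and are exempt from the check.
    """
    if disk_format == "raw":
        return True
    return image_size_bytes <= node_memory_mb * 1024 * 1024
```

A deployment with 4 GiB of node memory would, by this heuristic, reject an 8 GiB qcow2 image but accept the same image in raw format.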
Requirements ------------ Using IPA requires it to be present and configured on the deploy ramdisk, see :ref:`deploy-ramdisk` Using proxies for image download ================================ Overview -------- When using the :ref:`direct-deploy`, IPA supports using proxies for downloading the user image. For example, this could be used to speed up download by using a caching proxy. Steps to enable proxies ----------------------- #. Configure the proxy server of your choice (for example `Squid `_, `Apache Traffic Server `_). This will probably require you to configure the proxy server to cache the content even if the requested URL contains a query, and to raise the maximum cached file size as images can be pretty big. If you have HTTPS enabled in swift (see :swift-doc:`swift deployment guide `), it is possible to configure the proxy server to talk to swift via HTTPS to download the image, store it in the cache unencrypted and return it to the node via HTTPS again. Because the image will be stored unencrypted in the cache, this approach is recommended for images that do not contain sensitive information. Refer to your proxy server's documentation to complete this step. #. Set ``[glance]swift_temp_url_cache_enabled`` in the ironic conductor config file to ``True``. The conductor will reuse the cached swift temporary URLs instead of generating new ones each time an image is requested, so that the proxy server does not create new cache entries for the same image, based on the query part of the URL (as it contains some query parameters that change each time it is regenerated). #. Set ``[glance]swift_temp_url_expected_download_start_delay`` option in the ironic conductor config file to the value appropriate for your hardware. This is the delay (in seconds) from the time of the deploy request (when the swift temporary URL is generated) to when the URL is used for the image download. 
You can think of it as roughly the time needed for the IPA ramdisk to start up and begin the download. This value is used to check if the swift temporary URL duration is large enough to let the image download begin. Also, if temporary URL caching is enabled, this will determine if a cached entry will still be valid when the download starts. It is used only if ``[glance]swift_temp_url_cache_enabled`` is ``True``. #. Increase the ``[glance]swift_temp_url_duration`` option in the ironic conductor config file, as only non-expired links to images will be returned from the swift temporary URLs cache. This means that if ``swift_temp_url_duration=1200`` then after 20 minutes a new image will be cached by the proxy server, as the query in its URL will change. The value of this option must be greater than or equal to ``[glance]swift_temp_url_expected_download_start_delay``. #. Add one or more of ``image_http_proxy``, ``image_https_proxy``, ``image_no_proxy`` to the driver_info properties in each node that will use the proxy. Advanced configuration ====================== Out-of-band vs. in-band power off on deploy ------------------------------------------- After deploying an image onto the node's hard disk, Ironic will reboot the machine into the new image. By default this power action happens ``in-band``, meaning that the ironic-conductor will instruct the IPA ramdisk to power itself off. Some hardware may have a problem with the default approach and would require Ironic to talk directly to the management controller to switch the power off and on again. In order to tell Ironic to do that, you have to update the node's ``driver_info`` field and set the ``deploy_forces_oob_reboot`` parameter to **True**. For example, the below command sets this configuration in a specific node:: openstack baremetal node set <node> --driver-info deploy_forces_oob_reboot=True ..
_irmc: =========== iRMC driver =========== Overview ======== The iRMC driver enables control of FUJITSU PRIMERGY servers via ServerView Common Command Interface (SCCI). Support for FUJITSU PRIMERGY servers consists of the ``irmc`` hardware type and a few hardware interfaces specific to that hardware type. Prerequisites ============= * Install `python-scciclient `_ and `pysnmp `_ packages:: $ pip install "python-scciclient>=0.7.2" pysnmp Hardware Type ============= The ``irmc`` hardware type is available for FUJITSU PRIMERGY servers. For information on how to enable the ``irmc`` hardware type, see :ref:`enable-hardware-types`. Hardware interfaces ^^^^^^^^^^^^^^^^^^^ The ``irmc`` hardware type overrides the selection of the following hardware interfaces: * bios Supports ``irmc`` and ``no-bios``. The default is ``irmc``. * boot Supports ``irmc-virtual-media``, ``irmc-pxe``, and ``pxe``. The default is ``irmc-virtual-media``. The ``irmc-virtual-media`` boot interface enables virtual-media-based deploy with IPA (Ironic Python Agent). .. warning:: We deprecated the ``pxe`` boot interface when used with the ``irmc`` hardware type. Support for this interface will be removed in the future. Instead, use ``irmc-pxe``. The ``irmc-pxe`` boot interface was introduced in Pike. * console Supports ``ipmitool-socat``, ``ipmitool-shellinabox``, and ``no-console``. The default is ``ipmitool-socat``. * inspect Supports ``irmc``, ``inspector``, and ``no-inspect``. The default is ``irmc``. .. note:: :ironic-inspector-doc:`Ironic Inspector <>` needs to be present and configured to use ``inspector`` as the inspect interface. * management Supports only ``irmc``. * power Supports ``irmc``, which enables power control via ServerView Common Command Interface (SCCI), by default. Also supports ``ipmitool``. * raid Supports ``irmc``, ``no-raid`` and ``agent``. The default is ``no-raid``.
For more details about the hardware interfaces and how to enable the desired ones, see :ref:`enable-hardware-interfaces`. Here is a complete configuration example with most of the supported hardware interfaces enabled for ``irmc`` hardware type. .. code-block:: ini [DEFAULT] enabled_hardware_types = irmc enabled_bios_interfaces = irmc enabled_boot_interfaces = irmc-virtual-media,irmc-pxe enabled_console_interfaces = ipmitool-socat,ipmitool-shellinabox,no-console enabled_deploy_interfaces = iscsi,direct enabled_inspect_interfaces = irmc,inspector,no-inspect enabled_management_interfaces = irmc enabled_network_interfaces = flat,neutron enabled_power_interfaces = irmc enabled_raid_interfaces = no-raid,irmc enabled_storage_interfaces = noop,cinder enabled_vendor_interfaces = no-vendor,ipmitool Here is a command example to enroll a node with ``irmc`` hardware type. .. code-block:: console openstack baremetal node create \ --bios-interface irmc \ --boot-interface irmc-pxe \ --deploy-interface direct \ --inspect-interface irmc \ --raid-interface irmc Node configuration ^^^^^^^^^^^^^^^^^^ * Each node is configured for ``irmc`` hardware type by setting the following ironic node object's properties: - ``driver_info/irmc_address`` property to be ``IP address`` or ``hostname`` of the iRMC. - ``driver_info/irmc_username`` property to be ``username`` for the iRMC with administrator privileges. - ``driver_info/irmc_password`` property to be ``password`` for irmc_username. - ``properties/capabilities`` property to be ``boot_mode:uefi`` if UEFI boot is required. - ``properties/capabilities`` property to be ``secure_boot:true`` if UEFI Secure Boot is required. Please refer to `UEFI Secure Boot Support`_ for more information. * The following properties are also required if ``irmc-virtual-media`` boot interface is used: - ``driver_info/irmc_deploy_iso`` property to be either deploy iso file name, Glance UUID, or Image Service URL. 
- ``instance_info/irmc_boot_iso`` property to be either boot iso file name, Glance UUID, or Image Service URL. This property is optional when ``boot_option`` is set to ``netboot``. * All of the nodes are configured by setting the following configuration options in the ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``port``: Port to be used for iRMC operations; either 80 or 443. The default value is 443. Optional. - ``auth_method``: Authentication method for iRMC operations; either ``basic`` or ``digest``. The default value is ``basic``. Optional. - ``client_timeout``: Timeout (in seconds) for iRMC operations. The default value is 60. Optional. - ``sensor_method``: Sensor data retrieval method; either ``ipmitool`` or ``scci``. The default value is ``ipmitool``. Optional. * The following options are required if the ``irmc-virtual-media`` boot interface is enabled: - ``remote_image_share_root``: Ironic conductor node's ``NFS`` or ``CIFS`` root path. The default value is ``/remote_image_share_root``. - ``remote_image_server``: IP of the remote image server. - ``remote_image_share_type``: Share type of virtual media, either ``NFS`` or ``CIFS``. The default is ``CIFS``. - ``remote_image_share_name``: Share name of ``remote_image_server``. The default value is ``share``. - ``remote_image_user_name``: User name of ``remote_image_server``. - ``remote_image_user_password``: Password of ``remote_image_user_name``. - ``remote_image_user_domain``: Domain name of ``remote_image_user_name``. * The following options are required if the ``irmc`` inspect interface is enabled: - ``snmp_version``: SNMP protocol version; either ``v1``, ``v2c`` or ``v3``. The default value is ``v2c``. Optional. - ``snmp_port``: SNMP port. The default value is ``161``. Optional. - ``snmp_community``: SNMP community required for versions ``v1`` and ``v2c``. The default value is ``public``. Optional. - ``snmp_security``: SNMP security name required for version ``v3``. Optional.
* Each node can be further configured by setting the following ironic node object's properties which override the parameter values in ``[irmc]`` section of ``/etc/ironic/ironic.conf``: - ``driver_info/irmc_port`` property overrides ``port``. - ``driver_info/irmc_auth_method`` property overrides ``auth_method``. - ``driver_info/irmc_client_timeout`` property overrides ``client_timeout``. - ``driver_info/irmc_sensor_method`` property overrides ``sensor_method``. - ``driver_info/irmc_snmp_version`` property overrides ``snmp_version``. - ``driver_info/irmc_snmp_port`` property overrides ``snmp_port``. - ``driver_info/irmc_snmp_community`` property overrides ``snmp_community``. - ``driver_info/irmc_snmp_security`` property overrides ``snmp_security``. Optional functionalities for the ``irmc`` hardware type ======================================================= UEFI Secure Boot Support ^^^^^^^^^^^^^^^^^^^^^^^^ The hardware type ``irmc`` supports secure boot deploy. .. warning:: Secure boot feature is not supported with ``pxe`` boot interface. The UEFI secure boot can be configured by adding ``secure_boot`` parameter, which is a boolean value. Enabling the secure boot is different when Bare Metal service is used with Compute service or without Compute service. The following sections describe both methods: * Enabling secure boot with Compute service: To enable secure boot we need to set a capability on the bare metal node and the bare metal flavor, for example:: openstack baremetal node set --property capabilities='secure_boot:true' openstack flavor set FLAVOR-NAME --property capabilities:secure_boot="true" * Enabling secure boot without Compute service: Since adding capabilities to the node's properties is only used by the nova scheduler to perform more advanced scheduling of instances, we need to enable secure boot without nova, for example:: openstack baremetal node set --instance-info capabilities='{"secure_boot": "true"}' .. 
_irmc_node_cleaning: Node Cleaning Support ^^^^^^^^^^^^^^^^^^^^^ The ``irmc`` hardware type supports node cleaning. For more information on node cleaning, see :ref:`cleaning`. Supported **Automated** Cleaning Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The automated cleaning operations supported are: * ``restore_irmc_bios_config``: Restores BIOS settings on a baremetal node from backup data. If this clean step is enabled, the BIOS settings of a baremetal node will be backed up automatically before the deployment. By default, this clean step is disabled with priority ``0``. Set its priority to a positive integer to enable it. The recommended value is ``10``. .. warning:: The ``pxe`` boot interface, when used with the ``irmc`` hardware type, does not support this clean step. If the ``irmc`` hardware type is used, select ``irmc-pxe`` or ``irmc-virtual-media`` as the boot interface in order to make this clean step work. Configuration options for the automated cleaning steps are listed under the ``[irmc]`` section in ironic.conf:: clean_priority_restore_irmc_bios_config = 0 For more information on node automated cleaning, see :ref:`automated_cleaning`. Boot from Remote Volume ^^^^^^^^^^^^^^^^^^^^^^^ The ``irmc`` hardware type supports the generic iPXE-based remote volume booting when using the following boot interfaces: * ``irmc-pxe`` * ``pxe`` In addition, the ``irmc`` hardware type supports remote volume booting without iPXE. This is available when using the ``irmc-virtual-media`` boot interface. This feature configures a node to boot from a remote volume by using the API of iRMC. It supports iSCSI and FibreChannel. Configuration ~~~~~~~~~~~~~ In addition to the configuration for generic drivers to :ref:`remote volume boot `, the iRMC driver requires the following configuration: * It is necessary to set physical port IDs to network ports and volume connectors. All cards, including those not used for volume boot, should be registered.
The format of a physical port ID is: ``<Card Type><Slot Idx>-<Port Idx>`` where: - ``<Card Type>``: could be ``LAN``, ``FC`` or ``CNA`` - ``<Slot Idx>``: 0 indicates onboard slot. Use 1 to 9 for add-on slots. - ``<Port Idx>``: A port number starting from 1. These IDs are specified in a node's ``driver_info[irmc_pci_physical_ids]``. This value is a dictionary. The key is the UUID of a resource (Port or Volume Connector) and its value is the physical port ID. For example:: { "1ecd14ee-c191-4007-8413-16bb5d5a73a2":"LAN0-1", "87f6c778-e60e-4df2-bdad-2605d53e6fc0":"CNA1-1" } It can be set with the following command:: openstack baremetal node set $NODE_UUID \ --driver-info irmc_pci_physical_ids={} \ --driver-info irmc_pci_physical_ids/$PORT_UUID=LAN0-1 \ --driver-info irmc_pci_physical_ids/$VOLUME_CONNECTOR_UUID=CNA1-1 * For iSCSI boot, volume connectors with both types ``iqn`` and ``ip`` are required. The configuration with DHCP is not supported yet. * For iSCSI, the size of the storage network is needed. This value should be specified in a node's ``driver_info[irmc_storage_network_size]``. It must be a positive integer < 32. For example, if the storage network is 10.2.0.0/22, use the following command:: openstack baremetal node set $NODE_UUID --driver-info irmc_storage_network_size=22 Supported hardware ~~~~~~~~~~~~~~~~~~ The driver supports the PCI controllers, Fibre Channel cards and Converged Network Adapters supported by `Fujitsu ServerView Virtual-IO Manager `_. Hardware Inspection Support ^^^^^^^^^^^^^^^^^^^^^^^^^^^ The ``irmc`` hardware type provides iRMC-specific hardware inspection with the ``irmc`` inspect interface. .. note:: SNMP must be enabled in the ServerView® iRMC S4 Web Server (Network Settings\SNMP section). Configuration ~~~~~~~~~~~~~ The Hardware Inspection Support in the iRMC driver requires the following configuration: * It is necessary to set the ironic configuration with the ``gpu_ids`` and ``fpga_ids`` options in the ``[irmc]`` section.
``gpu_ids`` and ``fpga_ids`` are lists of ``<vendorID>/<deviceID>`` where:

- ``<vendorID>``: 4 hexadecimal digits starting with '0x'.
- ``<deviceID>``: 4 hexadecimal digits starting with '0x'.

Here are sample values for ``gpu_ids`` and ``fpga_ids``::

  gpu_ids = 0x1000/0x0079,0x2100/0x0080
  fpga_ids = 0x1000/0x005b,0x1100/0x0180

* The python-scciclient package requires pyghmi version >= 1.0.22 and pysnmp
  version >= 4.2.3. They are used by the conductor service on the conductor
  node. The latest version of pyghmi can be downloaded from `here `__ and
  pysnmp can be downloaded from `here `__.

Supported properties
~~~~~~~~~~~~~~~~~~~~

The inspection process will discover the following essential properties
(properties required for scheduling deployment):

* ``memory_mb``: memory size
* ``cpus``: number of cpus
* ``cpu_arch``: cpu architecture
* ``local_gb``: disk size

Inspection can also discover the following extra capabilities for the iRMC
driver:

* ``irmc_firmware_version``: iRMC firmware version
* ``rom_firmware_version``: ROM firmware version
* ``trusted_boot``: A flag indicating whether TPM (Trusted Platform Module) is
  supported by the server. The possible values are 'True' or 'False'.
* ``server_model``: server model
* ``pci_gpu_devices``: number of GPU devices connected to the bare metal.

Inspection can also set/unset a node's traits with the following CPU type for
the iRMC driver:

* ``CUSTOM_CPU_FPGA``: The bare metal node contains an FPGA CPU type.

.. note::
   * The disk size is returned only when the eLCM License for FUJITSU PRIMERGY
     servers is activated. If the license is not activated, Hardware
     Inspection will fail to get this value.
   * Before inspecting, if the server is powered off, it will be turned on
     automatically. The system will wait a few seconds before starting
     inspection. After inspection, the power status will be restored to its
     previous state.
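As an illustration of how the configured ID pairs relate to inspection
results, the sketch below counts discovered PCI devices whose vendor/device
pair appears in a ``gpu_ids``-style list. This is a simplified, hypothetical
example and not the actual python-scciclient implementation:

```python
# Hypothetical sketch: count inspected PCI devices whose vendor/device ID
# pair appears in a gpu_ids-style configuration value. This is NOT the
# actual python-scciclient logic, only an illustration of the ID format.
def count_matching_devices(discovered, id_list):
    # id_list is a comma-separated string such as "0x1000/0x0079,0x2100/0x0080"
    wanted = set()
    for entry in id_list.split(","):
        vendor, device = entry.strip().split("/")
        wanted.add((int(vendor, 16), int(device, 16)))
    # discovered is an iterable of (vendor_id, device_id) integer pairs
    return sum(1 for pair in discovered if pair in wanted)

gpu_ids = "0x1000/0x0079,0x2100/0x0080"
# Two of the three discovered devices below match the configured GPU IDs.
discovered = [(0x1000, 0x0079), (0x2100, 0x0080), (0x8086, 0x1521)]
print(count_matching_devices(discovered, gpu_ids))
```

A matching count like this is what ends up reported as ``pci_gpu_devices``
after inspection.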
The operator can specify these capabilities in the compute service flavor, for
example::

  openstack flavor set baremetal-flavor-name --property capabilities:irmc_firmware_version="iRMC S4-8.64F"
  openstack flavor set baremetal-flavor-name --property capabilities:server_model="TX2540M1F5"
  openstack flavor set baremetal-flavor-name --property capabilities:pci_gpu_devices="1"

See :ref:`capabilities-discovery` for more details and examples.

The operator can also add a trait to a node, for example::

  openstack baremetal node add trait $NODE_UUID CUSTOM_CPU_FPGA

A valid trait must be no longer than 255 characters. Standard traits are
defined in the os_traits library. A custom trait must start with the prefix
``CUSTOM_`` and use the following characters: A-Z, 0-9 and _.

RAID configuration Support
^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``irmc`` hardware type provides iRMC RAID configuration with the ``irmc``
raid interface.

.. note::
   * The RAID implementation for the ``irmc`` hardware type is based on the
     eLCM license and SDCard. Otherwise, the SP (Service Platform) in
     lifecycle management must be available.
   * RAID configuration is only supported for RAIDAdapter 0 in Fujitsu
     servers.

Configuration
~~~~~~~~~~~~~

The RAID configuration support in the iRMC drivers requires the following
configuration:

* It is necessary to set the target RAID configuration on the node from a
  JSON file::

    $ openstack baremetal node set $NODE_UUID \
        --target-raid-config <JSON file containing target RAID configuration>

  Here are some sample values for the JSON file::

    {
      "logical_disks": [
        {
          "size_gb": 1000,
          "raid_level": "1"
        }
      ]
    }

  or::

    {
      "logical_disks": [
        {
          "size_gb": 1000,
          "raid_level": "1",
          "controller": "FTS RAID Ctrl SAS 6G 1GB (D3116C) (0)",
          "physical_disks": [
            "0",
            "1"
          ]
        }
      ]
    }

.. note::
   RAID 1+0 and 5+0 in the iRMC driver do not yet support the
   ``physical_disks`` property in ``target_raid_config`` when creating a RAID
   configuration. See the following example::

     {
       "logical_disks": [
         {
           "size_gb": "MAX",
           "raid_level": "1+0"
         }
       ]
     }

See :ref:`raid` for more details and examples.
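To make the constraints around ``target_raid_config`` concrete, the sketch
below checks a logical disk against the rules stated in this section
(mandatory ``size_gb`` and ``raid_level``, the RAID levels the iRMC server
accepts, and the 1+0/5+0 restriction on ``physical_disks``). The helper name
is hypothetical; ironic performs its own validation:

```python
# Illustrative check of the iRMC target_raid_config constraints described in
# this section. The helper is hypothetical and not part of ironic itself.
IRMC_RAID_LEVELS = {"0", "1", "5", "6", "1+0", "5+0"}

def check_logical_disk(disk):
    if "size_gb" not in disk or "raid_level" not in disk:
        raise ValueError("size_gb and raid_level are mandatory")
    if disk["raid_level"] not in IRMC_RAID_LEVELS:
        raise ValueError("unsupported RAID level: %s" % disk["raid_level"])
    # RAID 1+0 and 5+0 do not yet support explicit physical_disks on iRMC.
    if disk["raid_level"] in ("1+0", "5+0") and "physical_disks" in disk:
        raise ValueError("physical_disks not supported for RAID %s"
                         % disk["raid_level"])
    return True

print(check_logical_disk({"size_gb": 1000, "raid_level": "1"}))
```

A disk that combines ``"raid_level": "1+0"`` with ``physical_disks`` would be
rejected by this check, mirroring the note above.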
Supported properties
~~~~~~~~~~~~~~~~~~~~

The RAID configuration using the iRMC driver supports the following parameters
in the JSON file:

* ``size_gb``: mandatory property in Ironic.
* ``raid_level``: mandatory property in Ironic. Currently, the iRMC server
  supports the following RAID levels: 0, 1, 5, 6, 1+0 and 5+0.
* ``controller``: the name of the controller as read by the RAID interface.
* ``physical_disks``: specific values for each RAID array in the LogicalDrive
  which the operator wants to set along with ``raid_level``.

The RAID configuration is supported as a manual cleaning step.

.. note::
   * The iRMC server will power on after the create/delete RAID configuration
     is applied. FGI (Foreground Initialization) will then process the RAID
     configuration on the iRMC server, so the operation completes upon
     power-on and power-off when RAID is created on the iRMC server.

See :ref:`raid` for more details and examples.

BIOS configuration Support
^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``irmc`` hardware type provides iRMC BIOS configuration with the ``irmc``
bios interface.

.. warning::
   The ``irmc`` bios interface does not support ``factory_reset``.

Configuration
~~~~~~~~~~~~~

The BIOS configuration in the iRMC driver supports the following settings:

- ``boot_option_filter``: Specifies which drives can be booted from. This
  supports the following options: ``UefiAndLegacy``, ``LegacyOnly``,
  ``UefiOnly``.
- ``check_controllers_health_status_enabled``: The UEFI FW checks the
  controller health status. This supports the following options: ``true``,
  ``false``.
- ``cpu_active_processor_cores``: The number of active processor cores 1...n.
  Option 0 indicates that all available processor cores are active.
- ``cpu_adjacent_cache_line_prefetch_enabled``: The processor loads the
  requested cache line and the adjacent cache line. This supports the
  following options: ``true``, ``false``.
- ``cpu_vt_enabled``: Supports the virtualization of platform hardware and
  several software environments, based on Virtual Machine Extensions to
  support the use of several software environments using virtual computers.
  This supports the following options: ``true``, ``false``.
- ``flash_write_enabled``: The system BIOS can be written. Flash BIOS update
  is possible. This supports the following options: ``true``, ``false``.
- ``hyper_threading_enabled``: Hyper-threading technology allows a single
  physical processor core to appear as several logical processors. This
  supports the following options: ``true``, ``false``.
- ``keep_void_boot_options_enabled``: Boot Options will not be removed from
  the "Boot Option Priority" list. This supports the following options:
  ``true``, ``false``.
- ``launch_csm_enabled``: Specifies whether the Compatibility Support Module
  (CSM) is executed. This supports the following options: ``true``, ``false``.
- ``os_energy_performance_override_enabled``: Prevents the OS from overruling
  any energy efficiency policy setting of the setup. This supports the
  following options: ``true``, ``false``.
- ``pci_aspm_support``: Active State Power Management (ASPM) is used to
  power-manage the PCI Express links, thus consuming less power. This
  supports the following options: ``Disabled``, ``Auto``, ``L0Limited``,
  ``L1only``, ``L0Force``.
- ``pci_above_4g_decoding_enabled``: Specifies if memory resources above the
  4GB address boundary can be assigned to PCI devices. This supports the
  following options: ``true``, ``false``.
- ``power_on_source``: Specifies whether the switch-on sources for the system
  are managed by the BIOS or the ACPI operating system. This supports the
  following options: ``BiosControlled``, ``AcpiControlled``.
- ``single_root_io_virtualization_support_enabled``: Single Root IO
  Virtualization Support is active. This supports the following options:
  ``true``, ``false``.

The BIOS configuration is supported as a manual cleaning step. See
:ref:`bios` for more details and examples.
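Since the BIOS configuration is applied as a manual cleaning step, a settings
payload can be assembled as below. This is a hedged sketch: the clean-step
shape follows the generic ``apply_configuration`` step, and the chosen
setting names are taken from the list above:

```python
import json

# Sketch of a manual clean-step document applying two of the BIOS settings
# listed above via the generic apply_configuration step (illustrative only).
clean_steps = [{
    "interface": "bios",
    "step": "apply_configuration",
    "args": {
        "settings": [
            {"name": "hyper_threading_enabled", "value": "true"},
            {"name": "launch_csm_enabled", "value": "false"},
        ]
    },
}]
payload = json.dumps(clean_steps)
print(payload)
```

The resulting JSON string is the kind of document passed to
``openstack baremetal node clean --clean-steps``.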
Supported platforms
===================

This driver supports FUJITSU PRIMERGY BX S4 or RX S8 servers and above.

- PRIMERGY BX920 S4
- PRIMERGY BX924 S4
- PRIMERGY RX300 S8

When the ``irmc`` power interface is used, Soft Reboot (Graceful Reset) and
Soft Power Off (Graceful Power Off) are only available if
`ServerView agents `_ are installed. See `iRMC S4 Manual `_ for more details.

The RAID configuration feature supports FUJITSU PRIMERGY servers with the
RAID-Ctrl-SAS-6G-1GB(D3116C) controller and above. For details on supported
controllers with OOB RAID configuration, please see
`the whitepaper for iRMC RAID configuration `_.

ironic-15.0.0/doc/source/admin/drivers/xclarity.rst

===============
XClarity driver
===============

Overview
========

The ``xclarity`` driver is targeted for IMM 2.0 and IMM 3.0 managed Lenovo
servers. The xclarity hardware type enables the user to take advantage of
`XClarity Manager`_ by using the `XClarity Python Client`_.

Prerequisites
=============

* The XClarity Client library should be installed on the ironic conductor
  node(s). For example, it can be installed with ``pip``::

    sudo pip install python-xclarityclient

Enabling the XClarity driver
============================

#. Add ``xclarity`` to the list of ``enabled_hardware_types``,
   ``enabled_power_interfaces`` and ``enabled_management_interfaces`` in
   ``/etc/ironic/ironic.conf``. For example::

    [DEFAULT]
    ...
    enabled_hardware_types = ipmi,xclarity
    enabled_power_interfaces = ipmitool,xclarity
    enabled_management_interfaces = ipmitool,xclarity

#. Restart the ironic conductor service::

    sudo service ironic-conductor restart

    # Or, for RDO:
    sudo systemctl restart openstack-ironic-conductor

Registering a node with the XClarity driver
===========================================

Nodes configured to use the driver should have the ``driver`` property set to
``xclarity``.
The following properties are specified in the node's ``driver_info`` field
and are required:

- ``xclarity_manager_ip``: The IP address of the XClarity Controller.
- ``xclarity_username``: User account with admin/server-profile access
  privilege to the XClarity Controller.
- ``xclarity_password``: User account password corresponding to the
  xclarity_username to the XClarity Controller.
- ``xclarity_hardware_id``: The hardware ID of the XClarity managed server.

The ``openstack baremetal node create`` command can be used to enroll a node
with the ``xclarity`` driver. For example:

.. code-block:: bash

   openstack baremetal node create --driver xclarity \
     --driver-info xclarity_manager_ip=https://10.240.217.101 \
     --driver-info xclarity_username=admin \
     --driver-info xclarity_password=password \
     --driver-info xclarity_hardware_id=hardware_id

For more information about enrolling nodes see :ref:`enrollment` in the
install guide.

.. _`XClarity Manager`: http://www3.lenovo.com/us/en/data-center/software/systems-management/xclarity/
.. _`XClarity Python Client`: http://pypi.org/project/python-xclarityclient/

ironic-15.0.0/doc/source/admin/drivers/idrac.rst

============
iDRAC driver
============

Overview
========

The integrated Dell Remote Access Controller (iDRAC_) is an out-of-band
management platform on Dell EMC servers, and is supported directly by the
``idrac`` hardware type. This driver uses the Dell Web Services for Management
(WSMAN) protocol and the standard Distributed Management Task Force (DMTF)
Redfish protocol to perform all of its functions. iDRAC_ hardware is also
supported by the generic ``ipmi`` and ``redfish`` hardware types, though with
smaller feature sets.
Key features of the Dell iDRAC driver include: * Out-of-band node inspection * Boot device management * Power management * RAID controller management and RAID volume configuration * BIOS settings configuration Ironic Features --------------- The ``idrac`` hardware type supports the following Ironic interfaces: * `BIOS Interface`_: BIOS management * `Inspect Interface`_: Hardware inspection * Management Interface: Boot device management * Power Interface: Power management * `RAID Interface`_: RAID controller and disk management * `Vendor Interface`_: BIOS management Prerequisites ------------- The ``idrac`` hardware type requires the ``python-dracclient`` library to be installed on the ironic conductor node(s) if an Ironic node is configured to use an ``idrac-wsman`` interface implementation, for example:: sudo pip install 'python-dracclient>=3.1.0' Additionally, the ``idrac`` hardware type requires the ``sushy`` library to be installed on the ironic conductor node(s) if an Ironic node is configured to use an ``idrac-redfish`` interface implementation, for example:: sudo pip install 'python-dracclient>=3.1.0' 'sushy>=2.0.0' Enabling -------- The iDRAC driver supports WSMAN for the bios, inspect, management, power, raid, and vendor interfaces. In addition, it supports Redfish for the inspect, management, and power interfaces. The iDRAC driver allows you to mix and match WSMAN and Redfish interfaces. The ``idrac-wsman`` implementation must be enabled to use WSMAN for an interface. The ``idrac-redfish`` implementation must be enabled to use Redfish for an interface. .. NOTE:: Redfish is supported for only the inspect, management, and power interfaces at the present time. To enable the ``idrac`` hardware type with the minimum interfaces, all using WSMAN, add the following to your ``/etc/ironic/ironic.conf``: .. 
code-block:: ini [DEFAULT] enabled_hardware_types=idrac enabled_management_interfaces=idrac-wsman enabled_power_interfaces=idrac-wsman To enable all optional features (BIOS, inspection, RAID, and vendor passthru) using Redfish where it is supported and WSMAN where not, use the following configuration: .. code-block:: ini [DEFAULT] enabled_hardware_types=idrac enabled_bios_interfaces=idrac-wsman enabled_inspect_interfaces=idrac-redfish enabled_management_interfaces=idrac-redfish enabled_power_interfaces=idrac-redfish enabled_raid_interfaces=idrac-wsman enabled_vendor_interfaces=idrac-wsman Below is the list of supported interface implementations in priority order: ================ =================================================== Interface Supported Implementations ================ =================================================== ``bios`` ``idrac-wsman``, ``no-bios`` ``boot`` ``ipxe``, ``pxe`` ``console`` ``no-console`` ``deploy`` ``iscsi``, ``direct``, ``ansible``, ``ramdisk`` ``inspect`` ``idrac-wsman``, ``idrac``, ``idrac-redfish``, ``inspector``, ``no-inspect`` ``management`` ``idrac-wsman``, ``idrac``, ``idrac-redfish`` ``network`` ``flat``, ``neutron``, ``noop`` ``power`` ``idrac-wsman``, ``idrac``, ``idrac-redfish`` ``raid`` ``idrac-wsman``, ``idrac``, ``no-raid`` ``rescue`` ``no-rescue``, ``agent`` ``storage`` ``noop``, ``cinder``, ``external`` ``vendor`` ``idrac-wsman``, ``idrac``, ``no-vendor`` ================ =================================================== .. NOTE:: ``idrac`` is the legacy name of the WSMAN interface. It has been deprecated in favor of ``idrac-wsman`` and may be removed in a future release. Protocol-specific Properties ---------------------------- The WSMAN and Redfish protocols require different properties to be specified in the Ironic node's ``driver_info`` field to communicate with the bare metal system's iDRAC. 
The WSMAN protocol requires the following properties: * ``drac_username``: The WSMAN user name to use when communicating with the iDRAC. Usually ``root``. * ``drac_password``: The password for the WSMAN user to use when communicating with the iDRAC. * ``drac_address``: The IP address of the iDRAC. The Redfish protocol requires the following properties: * ``redfish_username``: The Redfish user name to use when communicating with the iDRAC. Usually ``root``. * ``redfish_password``: The password for the Redfish user to use when communicating with the iDRAC. * ``redfish_address``: The URL address of the iDRAC. It must include the authority portion of the URL, and can optionally include the scheme. If the scheme is missing, https is assumed. * ``redfish_system_id``: The Redfish ID of the server to be managed. This should always be: ``/redfish/v1/Systems/System.Embedded.1``. For other Redfish protocol parameters see :doc:`/admin/drivers/redfish`. If using only interfaces which use WSMAN (``idrac-wsman``), then only the WSMAN properties must be supplied. If using only interfaces which use Redfish (``idrac-redfish``), then only the Redfish properties must be supplied. If using a mix of interfaces, where some use WSMAN and others use Redfish, both the WSMAN and Redfish properties must be supplied. Enrolling --------- The following command enrolls a bare metal node with the ``idrac`` hardware type using WSMAN for all interfaces: .. code-block:: bash openstack baremetal node create --driver idrac \ --driver-info drac_username=user \ --driver-info drac_password=pa$$w0rd \ --driver-info drac_address=drac.host The following command enrolls a bare metal node with the ``idrac`` hardware type using Redfish for all interfaces: .. 
code-block:: bash

   openstack baremetal node create --driver idrac \
     --driver-info redfish_username=user \
     --driver-info redfish_password=pa$$w0rd \
     --driver-info redfish_address=drac.host \
     --driver-info redfish_system_id=/redfish/v1/Systems/System.Embedded.1 \
     --inspect-interface idrac-redfish \
     --management-interface idrac-redfish \
     --power-interface idrac-redfish \
     --raid-interface no-raid \
     --vendor-interface no-vendor

The following command enrolls a bare metal node with the ``idrac`` hardware
type assuming a mix of Redfish and WSMAN interfaces are used:

.. code-block:: bash

   openstack baremetal node create --driver idrac \
     --driver-info drac_username=user \
     --driver-info drac_password=pa$$w0rd \
     --driver-info drac_address=drac.host \
     --driver-info redfish_username=user \
     --driver-info redfish_password=pa$$w0rd \
     --driver-info redfish_address=drac.host \
     --driver-info redfish_system_id=/redfish/v1/Systems/System.Embedded.1 \
     --inspect-interface idrac-redfish \
     --management-interface idrac-redfish \
     --power-interface idrac-redfish

.. NOTE::
   If using WSMAN for the management interface, then WSMAN must be used for
   the power interface. The same applies to Redfish. It is currently not
   possible to use Redfish for one and WSMAN for the other.

BIOS Interface
==============

The BIOS interface implementation for ``idrac-wsman`` allows the BIOS to be
configured with the standard clean/deploy step approach.

Example
-------

A clean step to enable ``Virtualization`` and ``SRIOV`` in the BIOS of an
iDRAC BMC would be as follows::

  {
    "target":"clean",
    "clean_steps": [
      {
        "interface": "bios",
        "step": "apply_configuration",
        "args": {
          "settings": [
            {
              "name": "ProcVirtualization",
              "value": "Enabled"
            },
            {
              "name": "SriovGlobalEnable",
              "value": "Enabled"
            }
          ]
        }
      }
    ]
  }

To see all the available BIOS parameters on a node with an iDRAC BMC, and
also for additional details of BIOS configuration, see :doc:`/admin/bios`.
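The settings list in a clean-step document like the one above maps one-to-one
onto name/value pairs. As a quick illustration, it can be flattened into a
plain dictionary; the helper below is hypothetical and not part of ironic:

```python
# Flatten the settings list of a BIOS clean-step document into a plain
# {name: value} dictionary (hypothetical helper, not part of ironic).
clean_step = {
    "interface": "bios",
    "step": "apply_configuration",
    "args": {
        "settings": [
            {"name": "ProcVirtualization", "value": "Enabled"},
            {"name": "SriovGlobalEnable", "value": "Enabled"},
        ]
    },
}

def settings_as_dict(step):
    return {s["name"]: s["value"] for s in step["args"]["settings"]}

print(settings_as_dict(clean_step))
```

Such a flattened view is convenient when comparing a desired configuration
against the current BIOS settings reported by the BMC.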
Inspect Interface ================= The Dell iDRAC out-of-band inspection process catalogs all the same attributes of the server as the IPMI driver. Unlike IPMI, it does this without requiring the system to be rebooted, or even to be powered on. Inspection is performed using the Dell WSMAN or Redfish protocol directly without affecting the operation of the system being inspected. The inspection discovers the following properties: * ``cpu_arch``: cpu architecture * ``cpus``: number of cpus * ``local_gb``: disk size in gigabytes * ``memory_mb``: memory size in megabytes Extra capabilities: * ``boot_mode``: UEFI or BIOS boot mode. It also creates baremetal ports for each NIC port detected in the system. The ``idrac-wsman`` inspect interface discovers which NIC ports are configured to PXE boot and sets ``pxe_enabled`` to ``True`` on those ports. The ``idrac-redfish`` inspect interface does not currently set ``pxe_enabled`` on the ports. The user should ensure that ``pxe_enabled`` is set correctly on the ports following inspection with the ``idrac-redfish`` inspect interface. RAID Interface ============== See :doc:`/admin/raid` for more information on Ironic RAID support. The following properties are supported by the iDRAC WSMAN raid interface implementation, ``idrac-wsman``: Mandatory properties -------------------- * ``size_gb``: Size in gigabytes (integer) for the logical disk. Use ``MAX`` as ``size_gb`` if this logical disk is supposed to use the rest of the space available. * ``raid_level``: RAID level for the logical disk. Valid values are ``0``, ``1``, ``5``, ``6``, ``1+0``, ``5+0`` and ``6+0``. .. NOTE:: ``JBOD`` and ``2`` are not supported, and will fail with reason: 'Cannot calculate spans for RAID level.' Optional properties ------------------- * ``is_root_volume``: Optional. Specifies whether this disk is a root volume. By default, this is ``False``. * ``volume_name``: Optional. Name of the volume to be created. 
If this is not specified, it will be auto-generated. Backing physical disk hints --------------------------- See :doc:`/admin/raid` for more information on backing disk hints. These are machine-independent information. The hints are specified for each logical disk to help Ironic find the desired disks for RAID configuration. * ``disk_type`` * ``interface_type`` * ``share_physical_disks`` * ``number_of_physical_disks`` Backing physical disks ---------------------- These are Dell RAID controller-specific values and must match the names provided by the iDRAC. * ``controller``: Mandatory. The name of the controller to use. * ``physical_disks``: Optional. The names of the physical disks to use. .. NOTE:: ``physical_disks`` is a mandatory parameter if the property ``size_gb`` is set to ``MAX``. Examples -------- Creation of RAID ``1+0`` logical disk with six disks on one controller: .. code-block:: json { "logical_disks": [ { "controller": "RAID.Integrated.1-1", "is_root_volume": "True", "physical_disks": [ "Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1", "Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1", "Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1", "Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1", "Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1", "Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1"], "raid_level": "1+0", "size_gb": "MAX"}]} Manual RAID Invocation ---------------------- The following command can be used to delete any existing RAID configuration. It deletes all virtual disks/RAID volumes, unassigns all global and dedicated hot spare physical disks, and clears foreign configuration: .. code-block:: bash openstack baremetal node clean --clean-steps \ '[{"interface": "raid", "step": "delete_configuration"}]' ${node_uuid} The following command shows an example of how to set the target RAID configuration: .. 
code-block:: bash openstack baremetal node set --target-raid-config '{ "logical_disks": [ { "controller": "RAID.Integrated.1-1", "is_root_volume": true, "physical_disks": [ "Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1", "Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1"], "raid_level": "0", "size_gb": "MAX"}]}' ${node_uuid} The following command can be used to create a RAID configuration: .. code-block:: bash openstack baremetal node clean --clean-steps \ '[{"interface": "raid", "step": "create_configuration"}]' ${node_uuid} When the physical disk names or controller names are not known, the following Python code example shows how the ``python-dracclient`` can be used to fetch the information directly from the Dell bare metal: .. code-block:: python import dracclient.client client = dracclient.client.DRACClient( host="192.168.1.1", username="root", password="calvin") controllers = client.list_raid_controllers() print(controllers) physical_disks = client.list_physical_disks() print(physical_disks) Vendor Interface ================ Dell iDRAC BIOS management is available through the Ironic vendor passthru interface. ======================== ============ ====================================== Method Name HTTP Method Description ======================== ============ ====================================== ``abandon_bios_config`` ``DELETE`` Abandon a BIOS configuration job. ``commit_bios_config`` ``POST`` Commit a BIOS configuration job submitted through ``set_bios_config``. Required argument: ``reboot`` - indicates whether a reboot job should be automatically created with the config job. Returns a dictionary containing the ``job_id`` key with the ID of the newly created config job, and the ``reboot_required`` key indicating whether the node needs to be rebooted to execute the config job. ``get_bios_config`` ``GET`` Returns a dictionary containing the node's BIOS settings. 
``list_unfinished_jobs`` ``GET``      Returns a dictionary containing the
                                      key ``unfinished_jobs``; its value is
                                      a list of dictionaries. Each
                                      dictionary represents an unfinished
                                      config job object.
``set_bios_config``      ``POST``     Change the BIOS configuration on a
                                      node. Required argument: a dictionary
                                      of {``AttributeName``: ``NewValue``}.
                                      Returns a dictionary containing the
                                      ``is_commit_required`` key indicating
                                      whether ``commit_bios_config`` needs
                                      to be called to apply the changes and
                                      the ``is_reboot_required`` value
                                      indicating whether the server must
                                      also be rebooted. Possible values are
                                      ``true`` and ``false``.
======================== ============ ======================================

Examples
--------

Get BIOS Config
~~~~~~~~~~~~~~~

.. code-block:: bash

   openstack baremetal node passthru call --http-method GET ${node_uuid} get_bios_config

Snippet of output showing virtualization enabled:

.. code-block:: json

   {
     "ProcVirtualization": {
       "current_value": "Enabled",
       "instance_id": "BIOS.Setup.1-1:ProcVirtualization",
       "name": "ProcVirtualization",
       "pending_value": null,
       "possible_values": [
         "Enabled",
         "Disabled"
       ],
       "read_only": false
     }
   }

There are a number of items to note from the above snippet:

* ``name``: this is the name to use in a call to ``set_bios_config``.
* ``current_value``: the current state of the setting.
* ``pending_value``: if the value has been set, but not yet committed, the
  new value is shown here. The change can either be committed or abandoned.
* ``possible_values``: shows a list of valid values which can be used in a
  call to ``set_bios_config``.
* ``read_only``: indicates if the value is capable of being changed.

Set BIOS Config
~~~~~~~~~~~~~~~

.. code-block:: bash

   openstack baremetal node passthru call ${node_uuid} set_bios_config --arg "name=value"

Walkthrough of performing a BIOS configuration change:

The following section demonstrates how to change BIOS configuration settings,
detect that a commit and reboot are required, and act on them accordingly.
The two properties that are being changed are: * Enable virtualization technology of the processor * Globally enable SR-IOV .. code-block:: bash openstack baremetal node passthru call ${node_uuid} set_bios_config \ --arg "ProcVirtualization=Enabled" \ --arg "SriovGlobalEnable=Enabled" This returns a dictionary indicating what actions are required next: .. code-block:: json { "is_reboot_required": true, "is_commit_required": true } Commit BIOS Changes ~~~~~~~~~~~~~~~~~~~ The next step is to commit the pending change to the BIOS. Note that in this example, the ``reboot`` argument is set to ``true``. The response indicates that a reboot is no longer required as it has been scheduled automatically by the ``commit_bios_config`` call. If the reboot argument is not supplied, the job is still created, however it remains in the ``scheduled`` state until a reboot is performed. The reboot can be initiated through the Ironic power API. .. code-block:: bash openstack baremetal node passthru call ${node_uuid} commit_bios_config \ --arg "reboot=true" .. code-block:: json { "job_id": "JID_499377293428", "reboot_required": false } The state of any executing job can be queried: .. code-block:: bash openstack baremetal node passthru call --http-method GET ${node_uuid} list_unfinished_jobs .. code-block:: json {"unfinished_jobs": [{"status": "Scheduled", "name": "ConfigBIOS:BIOS.Setup.1-1", "until_time": "TIME_NA", "start_time": "TIME_NOW", "message": "Task successfully scheduled.", "percent_complete": "0", "id": "JID_499377293428"}]} Abandon BIOS Changes ~~~~~~~~~~~~~~~~~~~~ Instead of committing, a pending change can be abandoned: .. code-block:: bash openstack baremetal node passthru call --http-method DELETE ${node_uuid} abandon_bios_config The abandon command does not provide a response body. Change Boot Mode ---------------- The boot mode of the iDRAC can be changed to: * BIOS - Also called legacy or traditional boot mode. 
The BIOS initializes the system’s processors, memory, bus controllers, and I/O devices. After initialization is complete, the BIOS passes control to operating system (OS) software. The OS loader uses basic services provided by the system BIOS to locate and load OS modules into system memory. After booting the system, the BIOS and embedded management controllers execute system management algorithms, which monitor and optimize the condition of the underlying hardware. BIOS configuration settings enable fine-tuning of the performance, power management, and reliability features of the system. * UEFI - The Unified Extensible Firmware Interface does not change the traditional purposes of the system BIOS. To a large extent, a UEFI-compliant BIOS performs the same initialization, boot, configuration, and management tasks as a traditional BIOS. However, UEFI does change the interfaces and data structures the BIOS uses to interact with I/O device firmware and operating system software. The primary intent of UEFI is to eliminate shortcomings in the traditional BIOS environment, enabling system firmware to continue scaling with industry trends. The UEFI boot mode offers: * Improved partitioning scheme for boot media * Support for media larger than 2 TB * Redundant partition tables * Flexible handoff from BIOS to OS * Consolidated firmware user interface * Enhanced resource allocation for boot device firmware The boot mode can be changed via the vendor passthru interface as follows: .. code-block:: bash openstack baremetal node passthru call ${node_uuid} set_bios_config \ --arg "BootMode=Uefi" openstack baremetal node passthru call ${node_uuid} commit_bios_config \ --arg "reboot=true" .. 
code-block:: bash

   openstack baremetal node passthru call ${node_uuid} set_bios_config \
     --arg "BootMode=Bios"

   openstack baremetal node passthru call ${node_uuid} commit_bios_config \
     --arg "reboot=true"

Known Issues
============

Nodes go into maintenance mode
------------------------------

After some period of time, nodes managed by the ``idrac`` hardware type may
go into maintenance mode in Ironic. This issue can be worked around by
changing the Ironic power state poll interval to 70 seconds. See
``[conductor]sync_power_state_interval`` in ``/etc/ironic/ironic.conf``.

.. _Ironic_RAID: https://docs.openstack.org/ironic/latest/admin/raid.html
.. _iDRAC: https://www.dell.com/idracmanuals

Vendor passthru timeout
-----------------------

When the iDRAC is not ready and vendor passthru commands are executed, they
take more time while waiting for the iDRAC to become ready again, and then
time out, for example:

.. code-block:: bash

   openstack baremetal node passthru call --http-method GET \
     aed58dca-1b25-409a-a32f-3a817d59e1e0 list_unfinished_jobs
   Timed out waiting for a reply to message ID 547ce7995342418c99ef1ea4a0054572 (HTTP 500)

To avoid this, increase the timeout for messaging in
``/etc/ironic/ironic.conf`` and restart the Ironic API service:

.. code-block:: ini

   [DEFAULT]
   rpc_response_timeout = 600

Timeout when powering off
-------------------------

Some servers might be slow when soft powering off and time out. The default
retry count is 6, resulting in a 30 second timeout (the default retry
interval set by ``post_deploy_get_power_state_retry_interval`` is 5 seconds).
To resolve this issue, increase the timeout to 90 seconds by setting the
retry count to 18 as follows:

..
code-block:: ini

   [agent]
   post_deploy_get_power_state_retries = 18

Redfish management interface failure to set boot device
-------------------------------------------------------

When using the ``idrac-redfish`` management interface with certain iDRAC
firmware versions (at least versions 2.70.70.70, 4.00.00.00, and 4.10.10.10)
and attempting to set the boot device on a baremetal server that is
configured to UEFI boot, the iDRAC will return the following error::

   Unable to Process the request because the value entered for the parameter
   Continuous is not supported by the implementation.

To work around this issue, set the ``force_persistent_boot_device`` parameter
in ``driver-info`` on the node to ``Never`` by running the following command
from the command line:

.. code-block:: bash

   openstack baremetal node set --driver-info \
     force_persistent_boot_device=Never ${node_uuid}

ironic-15.0.0/doc/source/admin/drivers/ansible.rst

========================
Ansible deploy interface
========================

`Ansible`_ is a mature and popular automation tool, written in Python and
requiring no agents running on the node being configured. All communications
with the node are by default performed over secure SSH transport.

The ``ansible`` deploy interface uses Ansible playbooks to define the
deployment logic. It is not based on
:ironic-python-agent-doc:`Ironic Python Agent (IPA) <>` and does not
generally need IPA to be running in the deploy ramdisk.

Overview
========

The main advantage of this deploy interface is extended flexibility in
regards to changing and adapting node deployment logic for specific use
cases, via Ansible tooling that is already familiar to operators.
It can be used to shorten the usual feature development cycle of * implementing logic in ironic, * implementing logic in IPA, * rebuilding deploy ramdisk, * uploading deploy ramdisk to Glance/HTTP storage, * reassigning deploy ramdisk to nodes, * restarting ironic-conductor service(s) and * running a test deployment by using a "stable" deploy ramdisk and not requiring ironic-conductor restarts (see `Extending playbooks`_). The main disadvantage of this deploy interface is the synchronous manner of performing deployment/cleaning tasks. A separate ``ansible-playbook`` process is spawned for each node being provisioned or cleaned, which consumes one thread from the thread pool available to the ``ironic-conductor`` process and blocks this thread until the node provisioning or cleaning step is finished or fails. This has to be taken into account when planning an ironic deployment that enables this deploy interface. Each action (deploy, clean) is described by a single playbook with roles, which is run whole during deployment, or tag-wise during cleaning. Control of cleaning steps is through tags and auxiliary clean steps file. The playbooks for actions can be set per-node, as can the clean steps file. Features -------- Similar to deploy interfaces relying on :ironic-python-agent-doc:`Ironic Python Agent (IPA) <>`, this deploy interface also depends on the deploy ramdisk calling back to ironic API's ``heartbeat`` endpoint. However, the driver is currently synchronous, so only the first heartbeat is processed and is used as a signal to start ``ansible-playbook`` process. User images ~~~~~~~~~~~ Supports whole-disk images and partition images: - compressed images are downloaded to RAM and converted to disk device; - raw images are streamed to disk directly. For partition images the driver will create root partition, and, if requested, ephemeral and swap partitions as set in node's ``instance_info`` by the Compute service or operator. 
The created partition table will be of ``msdos`` type by default; the node's ``disk_label`` capability is honored if set in the node's ``instance_info`` (see also :ref:`choosing_the_disk_label`). Configdrive partition ~~~~~~~~~~~~~~~~~~~~~ Creating a configdrive partition is supported for both whole disk and partition images, on both ``msdos`` and ``GPT`` labeled disks. Root device hints ~~~~~~~~~~~~~~~~~ Root device hints are currently supported in their basic form only, with exact matches (see :ref:`root-device-hints` for more details). If no root device hint is provided for the node, the first device returned as part of the ``ansible_devices`` fact is used as the root device to create partitions on or write the whole disk image to. Node cleaning ~~~~~~~~~~~~~ Cleaning is supported, both automated and manual. The driver has two default clean steps: - wiping device metadata - disk shredding Their priority can be overridden via ``[deploy]\erase_devices_metadata_priority`` and ``[deploy]\erase_devices_priority`` options, respectively, in the ironic configuration file. Since in the case of this driver all cleaning steps are known to the ironic-conductor service, booting the deploy ramdisk is completely skipped when there are no cleaning steps to perform. .. note:: Aborting cleaning steps is not supported. Logging ~~~~~~~ Logging is implemented as a custom Ansible callback module that makes use of the ``oslo.log`` and ``oslo.config`` libraries and can re-use the logging configuration defined in the main ironic configuration file to set up logging for Ansible events, or use a separate file for this purpose. It works best when ``journald`` support for logging is enabled. 
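As an illustration of the clean-step priority options mentioned above, the following snippet (example values, not defaults) disables the disk shredding step by giving it priority 0 while keeping metadata wiping enabled:

.. code-block:: ini

   # /etc/ironic/ironic.conf -- example values only
   [deploy]
   # A priority of 0 disables the disk shredding clean step.
   erase_devices_priority = 0
   # Keep the device metadata wiping step enabled.
   erase_devices_metadata_priority = 10
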
Requirements ============ Ansible: tested with, and targets, Ansible 2.5.x Bootstrap image requirements ---------------------------- - password-less sudo permissions for the user used by Ansible - python 2.7.x - openssh-server - GNU coreutils - util-linux - parted - gdisk - qemu-utils - python-requests (for ironic callback and streaming image download) - python-netifaces (for ironic callback) A set of scripts to build a suitable deploy ramdisk based on TinyCore Linux and the ``tinyipa`` ramdisk, and an element for ``diskimage-builder``, can be found in the ironic-staging-drivers_ project but will eventually be migrated to the new ironic-python-agent-builder_ project. Setting up your environment =========================== #. Install ironic (either as part of OpenStack or standalone) - If using ironic as part of OpenStack, ensure that the Image service is configured to use the Object Storage service as backend, and the Bare Metal service is configured accordingly, see :doc:`Configure the Image service for temporary URLs <../../install/configure-glance-swift>`. #. Install the Ansible version specified in the ``ironic/driver-requirements.txt`` file #. Edit the ironic configuration file A. Add ``ansible`` to the list of deploy interfaces defined in the ``[DEFAULT]\enabled_deploy_interfaces`` option. B. Ensure that a hardware type supporting the ``ansible`` deploy interface is enabled in the ``[DEFAULT]\enabled_hardware_types`` option. C. Modify options in the ``[ansible]`` section of ironic's configuration file if needed (see `Configuration file`_). #. (Re)start the ironic-conductor service #. Build suitable deploy kernel and ramdisk images #. Upload them to Glance or put them in your HTTP storage #. Create new or update existing nodes to use the enabled driver of your choice and populate `Driver properties for the Node`_ when different from defaults. #. Deploy the node as usual. 
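The configuration edits in steps 3.A-3.C might look like the following sketch (the interface and hardware type lists are examples; adjust them to your deployment):

.. code-block:: ini

   # /etc/ironic/ironic.conf -- illustrative values
   [DEFAULT]
   # Step 3.A: add "ansible" to the enabled deploy interfaces.
   enabled_deploy_interfaces = direct,ansible
   # Step 3.B: enable a hardware type that supports the ansible deploy interface.
   enabled_hardware_types = ipmi

   [ansible]
   # Step 3.C (optional): tune driver defaults, e.g. the default SSH user name.
   default_username = ansible
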
Ansible-deploy options ---------------------- Configuration file ~~~~~~~~~~~~~~~~~~~ Driver options are configured in ``[ansible]`` section of ironic configuration file, for their descriptions and default values please see `configuration file sample <../../configuration/config.html#ansible>`_. Driver properties for the Node ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Set them per-node via ``openstack baremetal node set`` command, for example: .. code-block:: shell openstack baremetal node set \ --deploy-interface ansible \ --driver-info ansible_username=stack \ --driver-info ansible_key_file=/etc/ironic/id_rsa ansible_username User name to use for Ansible to access the node. Default is taken from ``[ansible]/default_username`` option of the ironic configuration file (defaults to ``ansible``). ansible_key_file Private SSH key used to access the node. Default is taken from ``[ansible]/default_key_file`` option of the ironic configuration file. If neither is set, the default private SSH keys of the user running the ``ironic-conductor`` process will be used. ansible_deploy_playbook Playbook to use when deploying this node. Default is taken from ``[ansible]/default_deploy_playbook`` option of the ironic configuration file (defaults to ``deploy.yaml``). ansible_shutdown_playbook Playbook to use to gracefully shutdown the node in-band. Default is taken from ``[ansible]/default_shutdown_playbook`` option of the ironic configuration file (defaults to ``shutdown.yaml``). ansible_clean_playbook Playbook to use when cleaning the node. Default is taken from ``[ansible]/default_clean_playbook`` option of the ironic configuration file (defaults to ``clean.yaml``). ansible_clean_steps_config Auxiliary YAML file that holds description of cleaning steps used by this node, and defines playbook tags in ``ansible_clean_playbook`` file corresponding to each cleaning step. 
Default is taken from ``[ansible]/default_clean_steps_config`` option of the ironic configuration file (defaults to ``clean_steps.yaml``). ansible_python_interpreter Absolute path to the python interpreter on the managed machine. Default is taken from ``[ansible]/default_python_interpreter`` option of the ironic configuration file. Ansible uses ``/usr/bin/python`` by default. Customizing the deployment logic ================================ Expected playbooks directory layout ----------------------------------- The ``[ansible]\playbooks_path`` option in the ironic configuration file is expected to have a standard layout for an Ansible project with some additions:: | \_ inventory \_ add-ironic-nodes.yaml \_ roles \_ role1 \_ role2 \_ ... | \_callback_plugins \_ ... | \_ library \_ ... The extra files relied by this driver are: inventory Ansible inventory file containing a single entry of ``conductor ansible_connection=local``. This basically defines an alias to ``localhost``. Its purpose is to make logging for tasks performed by Ansible locally and referencing the localhost in playbooks more intuitive. This also suppresses warnings produced by Ansible about ``hosts`` file being empty. add-ironic-nodes.yaml This file contains an Ansible play that populates in-memory Ansible inventory with access information received from the ansible-deploy interface, as well as some per-node variables. Include it in all your custom playbooks as the first play. The default ``deploy.yaml`` playbook is using several smaller roles that correspond to particular stages of deployment process: - ``discover`` - e.g. set root device and image target - ``prepare`` - if needed, prepare system, for example create partitions - ``deploy`` - download/convert/write user image and configdrive - ``configure`` - post-deployment steps, e.g. 
installing the bootloader Some more included roles are: - ``shutdown`` - used to gracefully power the node off in-band - ``clean`` - defines cleaning procedure, with each clean step defined as separate playbook tag. Extending playbooks ------------------- Most probably you'd start experimenting like this: #. Create a copy of ``deploy.yaml`` playbook *in the same folder*, name it distinctively. #. Create Ansible roles with your customized logic in ``roles`` folder. A. In your custom deploy playbook, replace the ``prepare`` role with your own one that defines steps to be run *before* image download/writing. This is a good place to set facts overriding those provided/omitted by the driver, like ``ironic_partitions`` or ``ironic_root_device``, and create custom partitions or (software) RAIDs. B. In your custom deploy playbook, replace the ``configure`` role with your own one that defines steps to be run *after* image is written to disk. This is a good place for example to configure the bootloader and add kernel options to avoid additional reboots. C. Use those new roles in your new playbook. #. Assign the custom deploy playbook you've created to the node's ``driver_info/ansible_deploy_playbook`` field. #. Run deployment. A. No ironic-conductor restart is necessary. B. A new deploy ramdisk must be built and assigned to nodes only when you want to use a command/script/package not present in the current deploy ramdisk and you can not or do not want to install those at runtime. Variables you have access to ---------------------------- This driver will pass the single JSON-ified extra var argument to Ansible (as in ``ansible-playbook -e ..``). Those values are then accessible in your plays as well (some of them are optional and might not be defined): .. 
code-block:: yaml ironic: nodes: - ip: "" name: "" user: "" extra: "" image: url: "" disk_format: "" container_format: "" checksum: "" mem_req: "" tags: "" properties: "" configdrive: type: "" location: "" partition_info: label: "" preserve_ephemeral: "" ephemeral_format: "" partitions: "" raid_config: "" ``ironic.nodes`` List of dictionaries (currently of only one element) that will be used by ``add-ironic-nodes.yaml`` play to populate in-memory inventory. It also contains a copy of node's ``extra`` field so you can access it in the playbooks. The Ansible's host is set to node's UUID. ``ironic.image`` All fields of node's ``instance_info`` that start with ``image_`` are passed inside this variable. Some extra notes and fields: - ``mem_req`` is calculated from image size (if available) and config option ``[ansible]extra_memory``. - if ``checksum`` is not in the form ``:``, hashing algorithm is assumed to be ``md5`` (default in Glance). - ``validate_certs`` - boolean (``yes/no``) flag that turns validating image store SSL certificate on or off (default is 'yes'). Governed by ``[ansible]image_store_insecure`` option in ironic configuration file. - ``cafile`` - custom CA bundle to use for validating image store SSL certificate. Takes value of ``[ansible]image_store_cafile`` if that is defined. Currently is not used by default playbooks, as Ansible has no way to specify the custom CA bundle to use for single HTTPS actions, however you can use this value in your custom playbooks to for example upload and register this CA in the ramdisk at deploy time. - ``client_cert`` - cert file for client-side SSL authentication. Takes value of ``[ansible]image_store_certfile`` option if defined. Currently is not used by default playbooks, however you can use this value in your custom playbooks. - ``client_key`` - private key file for client-side SSL authentication. Takes value of ``[ansible]image_store_keyfile`` option if defined. 
Currently is not used by default playbooks, however you can use this value in your custom playbooks. ``ironic.partition_info.partitions`` Optional. List of dictionaries defining partitions to create on the node in the form: .. code-block:: yaml partitions: - name: "" unit: "" size: "" type: "" align: "" format: "" flags: flag_name: "" The driver will populate this list from ``root_gb``, ``swap_mb`` and ``ephemeral_gb`` fields of ``instance_info``. The driver will also prepend the ``bios_grub``-labeled partition when deploying on GPT-labeled disk, and pre-create a 64 MiB partition for configdrive if it is set in ``instance_info``. Please read the documentation included in the ``ironic_parted`` module's source for more info on the module and its arguments. ``ironic.partition_info.ephemeral_format`` Optional. Taken from ``instance_info``, it defines file system to be created on the ephemeral partition. Defaults to the value of ``[pxe]\default_ephemeral_format`` option in ironic configuration file. ``ironic.partition_info.preserve_ephemeral`` Optional. Taken from the ``instance_info``, it specifies if the ephemeral partition must be preserved or rebuilt. Defaults to ``no``. ``ironic.raid_config`` Taken from the ``target_raid_config`` if not empty, it specifies the RAID configuration to apply. As usual for Ansible playbooks, you also have access to standard Ansible facts discovered by ``setup`` module. Included custom Ansible modules ------------------------------- The provided ``playbooks_path/library`` folder includes several custom Ansible modules used by default implementation of ``deploy`` and ``prepare`` roles. You can use these modules in your playbooks as well. ``stream_url`` Streaming download from HTTP(S) source to the disk device directly, tries to be compatible with Ansible's ``get_url`` module in terms of module arguments. Due to the low level of such operation it is not idempotent. 
``ironic_parted`` creates partition tables and partitions with ``parted`` utility. Due to the low level of such operation it is not idempotent. Please read the documentation included in the module's source for more information about this module and its arguments. The name is chosen so that the ``parted`` module included in Ansible is not shadowed. .. _Ansible: https://docs.ansible.com/ansible/latest/index.html .. _ironic-staging-drivers: https://opendev.org/x/ironic-staging-drivers/src/branch/stable/pike/imagebuild .. _ironic-python-agent-builder: https://opendev.org/openstack/ironic-python-agent-builder ironic-15.0.0/doc/source/admin/drivers/snmp.rst0000664000175000017500000001637613652514273021451 0ustar zuulzuul00000000000000=========== SNMP driver =========== The SNMP hardware type enables control of power distribution units of the type frequently found in data centre racks. PDUs frequently have a management ethernet interface and SNMP support enabling control of the power outlets. The SNMP power interface works with the :ref:`pxe-boot` interface for network deployment and network-configured boot. .. note:: Unlike most of the other power interfaces, the SNMP power interface does not have a corresponding management interface. The SNMP hardware type uses the ``noop`` management interface instead. List of supported devices ========================= This is a non-exhaustive list of supported devices. Any device not listed in this table could possibly work using a similar driver. Please report any device status. ============== ========== ========== ===================== Manufacturer Model Supported? 
Driver name ============== ========== ========== ===================== APC AP7920 Yes apc_masterswitch APC AP9606 Yes apc_masterswitch APC AP9225 Yes apc_masterswitchplus APC AP7155 Yes apc_rackpdu APC AP7900 Yes apc_rackpdu APC AP7901 Yes apc_rackpdu APC AP7902 Yes apc_rackpdu APC AP7911a Yes apc_rackpdu APC AP7921 Yes apc_rackpdu APC AP7922 Yes apc_rackpdu APC AP7930 Yes apc_rackpdu APC AP7931 Yes apc_rackpdu APC AP7932 Yes apc_rackpdu APC AP7940 Yes apc_rackpdu APC AP7941 Yes apc_rackpdu APC AP7951 Yes apc_rackpdu APC AP7960 Yes apc_rackpdu APC AP7990 Yes apc_rackpdu APC AP7998 Yes apc_rackpdu APC AP8941 Yes apc_rackpdu APC AP8953 Yes apc_rackpdu APC AP8959 Yes apc_rackpdu APC AP8961 Yes apc_rackpdu APC AP8965 Yes apc_rackpdu Aten all? Yes aten CyberPower all? Untested cyberpower EatonPower all? Untested eatonpower Teltronix all? Yes teltronix BayTech MRP27 Yes baytech_mrp27 ============== ========== ========== ===================== Software Requirements ===================== - The PySNMP package must be installed, variously referred to as ``pysnmp`` or ``python-pysnmp`` Enabling the SNMP Hardware Type =============================== #. Add ``snmp`` to the list of ``enabled_hardware_types`` in ``ironic.conf``. Also update ``enabled_management_interfaces`` and ``enabled_power_interfaces`` in ``ironic.conf`` as shown below: .. code-block:: ini [DEFAULT] enabled_hardware_types = snmp enabled_management_interfaces = noop enabled_power_interfaces = snmp #. To set the default boot option, update ``default_boot_option`` in ``ironic.conf``: .. code-block:: ini [DEFAULT] default_boot_option = netboot .. note:: Currently the default value of ``default_boot_option`` is ``netboot`` but it will be changed to ``local`` in the future. It is recommended to set an explicit value for this option. .. note:: It is important to set ``boot_option`` to ``netboot`` as SNMP hardware type does not support setting of boot devices. 
One can also configure a node to boot using ``netboot`` by setting its ``capabilities`` and updating the Nova flavor as described below (``<node>`` is the node name or UUID): .. code-block:: console openstack baremetal node set --property capabilities="boot_option:netboot" <node> openstack flavor set --property "capabilities:boot_option"="netboot" ironic-flavor #. Restart the Ironic conductor service. .. code-block:: bash service ironic-conductor restart Ironic Node Configuration ========================= Nodes configured to use the SNMP hardware type should have the ``driver`` field set to the hardware type ``snmp``. The following property values have to be added to the node's ``driver_info`` field: - ``snmp_driver``: PDU manufacturer driver name, or ``auto`` to automatically choose the ironic snmp driver based on the ``SNMPv2-MIB::sysObjectID`` value as reported by the PDU. - ``snmp_address``: the IPv4 address of the PDU controlling this node. - ``snmp_port``: (optional) A non-standard UDP port to use for SNMP operations. If not specified, the default port (161) is used. - ``snmp_outlet``: The power outlet on the PDU (1-based indexing). - ``snmp_version``: (optional) SNMP protocol version (permitted values ``1``, ``2c`` or ``3``). If not specified, SNMPv1 is chosen. - ``snmp_community``: (Required for SNMPv1/SNMPv2c unless ``snmp_community_read`` and/or ``snmp_community_write`` properties are present, in which case the latter take precedence) SNMP community name parameter for reads and writes to the PDU. - ``snmp_community_read``: SNMP community name parameter for reads to the PDU. Takes precedence over the ``snmp_community`` property. - ``snmp_community_write``: SNMP community name parameter for writes to the PDU. Takes precedence over the ``snmp_community`` property. - ``snmp_user``: (Required for SNMPv3) SNMPv3 User-based Security Model (USM) user name. Synonym for the now-obsolete ``snmp_security`` parameter. - ``snmp_auth_protocol``: SNMPv3 message authentication protocol ID. 
Valid values include: ``none``, ``md5``, ``sha`` for all pysnmp versions and additionally ``sha224``, ``sha256``, ``sha384``, ``sha512`` for pysnmp versions 4.4.1 and later. Default is ``none`` unless ``snmp_auth_key`` is provided. In the latter case ``md5`` is the default. - ``snmp_auth_key``: SNMPv3 message authentication key. Must be 8+ characters long. Required when message authentication is used. - ``snmp_priv_protocol``: SNMPv3 message privacy (encryption) protocol ID. Valid values include: ``none``, ``des``, ``3des``, ``aes``, ``aes192``, ``aes256`` for all pysnmp versions and additionally ``aes192blmt``, ``aes256blmt`` for pysnmp versions 4.4.3+. Note that message privacy requires using message authentication. Default is ``none`` unless ``snmp_priv_key`` is provided. In the latter case ``des`` is the default. - ``snmp_priv_key``: SNMPv3 message privacy (encryption) key. Must be 8+ characters long. Required when message encryption is used. - ``snmp_context_engine_id``: SNMPv3 context engine ID. Default is the value of the authoritative engine ID. - ``snmp_context_name``: SNMPv3 context name. Default is an empty string. The following command can be used to enroll a node with the ``snmp`` hardware type (angle-bracketed values are placeholders): .. code-block:: bash openstack baremetal node create --os-baremetal-api-version=1.31 \ --driver snmp --driver-info snmp_driver=<driver> \ --driver-info snmp_address=<ip_address> \ --driver-info snmp_outlet=<outlet_index> \ --driver-info snmp_community=<community_string> \ --properties capabilities=boot_option:netboot ironic-15.0.0/doc/source/admin/drivers/redfish.rst0000664000175000017500000001700113652514273022102 0ustar zuulzuul00000000000000============== Redfish driver ============== Overview ======== The ``redfish`` driver enables managing servers compliant with the Redfish_ protocol. Prerequisites ============= * The Sushy_ library should be installed on the ironic conductor node(s). For example, it can be installed with ``pip``:: sudo pip install sushy Enabling the Redfish driver =========================== #. 
Add ``redfish`` to the list of ``enabled_hardware_types``, ``enabled_power_interfaces``, ``enabled_management_interfaces`` and ``enabled_inspect_interfaces`` as well as ``redfish-virtual-media`` to ``enabled_boot_interfaces`` in ``/etc/ironic/ironic.conf``. For example:: [DEFAULT] ... enabled_hardware_types = ipmi,redfish enabled_boot_interfaces = ipmitool,redfish-virtual-media enabled_power_interfaces = ipmitool,redfish enabled_management_interfaces = ipmitool,redfish enabled_inspect_interfaces = inspector,redfish #. Restart the ironic conductor service:: sudo service ironic-conductor restart # Or, for RDO: sudo systemctl restart openstack-ironic-conductor Registering a node with the Redfish driver =========================================== Nodes configured to use the driver should have the ``driver`` property set to ``redfish``. The following properties are specified in the node's ``driver_info`` field: - ``redfish_address``: The URL address to the Redfish controller. It must include the authority portion of the URL, and can optionally include the scheme. If the scheme is missing, https is assumed. For example: https://mgmt.vendor.com. This is required. - ``redfish_system_id``: The canonical path to the ComputerSystem resource that the driver will interact with. It should include the root service, version and the unique resource path to the ComputerSystem. This property is only required if target BMC manages more than one ComputerSystem. Otherwise ironic will pick the only available ComputerSystem automatically. For example: /redfish/v1/Systems/1. - ``redfish_username``: User account with admin/server-profile access privilege. Although not required, it is highly recommended. - ``redfish_password``: User account password. Although not required, it is highly recommended. - ``redfish_verify_ca``: If redfish_address has the **https** scheme, the driver will use a secure (TLS_) connection when talking to the Redfish controller. 
By default (if this is not set or set to True), the driver will try to verify the host certificates. This can be set to the path of a certificate file or directory with trusted certificates that the driver will use for verification. To disable verifying TLS_, set this to False. This is optional. - ``redfish_auth_type``: Redfish HTTP client authentication method. Can be "basic", "session" or "auto". The "auto" mode first tries "session" and falls back to "basic" if session authentication is not supported by the Redfish BMC. Default is set in ironic config as ``[redfish]auth_type``. The ``openstack baremetal node create`` command can be used to enroll a node with the ``redfish`` driver. For example: .. code-block:: bash openstack baremetal node create --driver redfish --driver-info \ redfish_address=https://example.com --driver-info \ redfish_system_id=/redfish/v1/Systems/CX34R87 --driver-info \ redfish_username=admin --driver-info redfish_password=password \ --name node-0 For more information about enrolling nodes see :ref:`enrollment` in the install guide. Features of the ``redfish`` hardware type ========================================= Boot mode support ^^^^^^^^^^^^^^^^^ The ``redfish`` hardware type can read current boot mode from the bare metal node as well as set it to either Legacy BIOS or UEFI. .. note:: Boot mode management is the optional part of the Redfish specification. Not all Redfish-compliant BMCs might implement it. In that case it remains the responsibility of the operator to configure proper boot mode to their bare metal nodes. Out-Of-Band inspection ^^^^^^^^^^^^^^^^^^^^^^ The ``redfish`` hardware type can inspect the bare metal node by querying Redfish compatible BMC. This process is quick and reliable compared to the way the ``inspector`` hardware type works i.e. booting bare metal node into the introspection ramdisk. .. note:: The ``redfish`` inspect interface relies on the optional parts of the Redfish specification. 
Not all Redfish-compliant BMCs might serve the required information, in which case bare metal node inspection will fail. .. note:: The ``local_gb`` property cannot always be discovered, for example, when a node does not have local storage or the Redfish implementation does not support the required schema. In this case the property will be set to 0. Virtual media boot ^^^^^^^^^^^^^^^^^^ The idea behind virtual media boot is that the BMC gets hold of the boot image one way or the other (e.g. by HTTP GET; other methods are defined in the standard), then "inserts" it into the node's virtual drive as if it were burnt on a physical CD/DVD. The node can then boot from that virtual drive into the operating system residing on the image. The major advantage of the virtual media boot feature is that the potentially unreliable TFTP image transfer phase of the PXE protocol suite is fully eliminated. Hardware types based on ``redfish`` fully support booting deploy/rescue and user images over virtual media. Ironic builds bootable ISO images, for either UEFI or BIOS (Legacy) boot modes, at the moment of node deployment, out of the kernel and ramdisk images associated with the ironic node. To boot a node managed by the ``redfish`` hardware type over virtual media using BIOS boot mode, it suffices to set the ironic boot interface to ``redfish-virtual-media``, as opposed to ``ipmitool``. .. code-block:: bash openstack baremetal node set --boot-interface redfish-virtual-media node-0 If UEFI boot mode is desired, the user should additionally supply an EFI System Partition image (ESP_) via the ``[driver_info]/bootloader`` ironic node property or the ironic configuration file, in the form of a Glance image UUID or a URL. .. code-block:: bash openstack baremetal node set --driver-info bootloader=<glance-uuid-or-url> node-0 If the ``[driver_info]/config_via_floppy`` boolean property of the node is set to ``true``, ironic will create a file with runtime configuration parameters, place it on a FAT image, then insert the image into the node's virtual floppy drive. 
When booting over PXE or virtual media, and user instance requires some specific kernel configuration, ``[instance_info]/kernel_append_params`` property can be used to pass user-specified kernel command line parameters. For ramdisk kernel, ``[instance_info]/kernel_append_params`` property serves the same purpose. .. _Redfish: http://redfish.dmtf.org/ .. _Sushy: https://opendev.org/openstack/sushy .. _TLS: https://en.wikipedia.org/wiki/Transport_Layer_Security .. _ESP: https://wiki.ubuntu.com/EFIBootLoaders#Booting_from_EFI ironic-15.0.0/doc/source/admin/drivers/ibmc.rst0000664000175000017500000000712213652514273021373 0ustar zuulzuul00000000000000=============== iBMC driver =============== .. warning:: The ``ibmc`` driver has been deprecated due to a lack of a functioning third party CI and will be removed in the Victoria development cycle. Overview ======== The ``ibmc`` driver is targeted for Huawei V5 series rack server such as 2288H V5, CH121 V5. The iBMC hardware type enables the user to take advantage of features of `Huawei iBMC`_ to control Huawei server. Prerequisites ============= The `HUAWEI iBMC Client library`_ should be installed on the ironic conductor node(s). For example, it can be installed with ``pip``:: sudo pip install python-ibmcclient Enabling the iBMC driver ============================ #. Add ``ibmc`` to the list of ``enabled_hardware_types``, ``enabled_power_interfaces``, ``enabled_vendor_interfaces`` and ``enabled_management_interfaces`` in ``/etc/ironic/ironic.conf``. For example:: [DEFAULT] ... enabled_hardware_types = ibmc,ipmi enabled_power_interfaces = ibmc,ipmitool enabled_management_interfaces = ibmc,ipmitool enabled_vendor_interfaces = ibmc #. 
Restart the ironic conductor service:: sudo service ironic-conductor restart # Or, for RDO: sudo systemctl restart openstack-ironic-conductor Registering a node with the iBMC driver =========================================== Nodes configured to use the driver should have the ``driver`` property set to ``ibmc``. The following properties are specified in the node's ``driver_info`` field: - ``ibmc_address``: The URL address to the ibmc controller. It must include the authority portion of the URL, and can optionally include the scheme. If the scheme is missing, https is assumed. For example: https://ibmc.example.com. This is required. - ``ibmc_username``: User account with admin/server-profile access privilege. This is required. - ``ibmc_password``: User account password. This is required. - ``ibmc_verify_ca``: If ibmc_address has the **https** scheme, the driver will use a secure (TLS_) connection when talking to the ibmc controller. By default (if this is set to True), the driver will try to verify the host certificates. This can be set to the path of a certificate file or directory with trusted certificates that the driver will use for verification. To disable verifying TLS_, set this to False. This is optional. The ``openstack baremetal node create`` command can be used to enroll a node with the ``ibmc`` driver. For example: .. code-block:: bash openstack baremetal node create --driver ibmc --driver-info ibmc_address=https://example.com \ --driver-info ibmc_username=admin \ --driver-info ibmc_password=password For more information about enrolling nodes see :ref:`enrollment` in the install guide. Features of the ``ibmc`` hardware type ========================================= Query boot up sequence ^^^^^^^^^^^^^^^^^^^^^^ The ``ibmc`` hardware type can query current boot up sequence from the bare metal node .. 
code-block:: bash openstack baremetal node passthru call --http-method GET \ <node> boot_up_seq PXE Boot and iSCSI Deploy Process with Ironic Standalone Environment ==================================================================== .. figure:: ../../images/ironic_standalone_with_ibmc_driver.svg :width: 960px :align: left :alt: Ironic standalone with iBMC driver node .. _Huawei iBMC: https://e.huawei.com/en/products/cloud-computing-dc/servers/accessories/ibmc .. _TLS: https://en.wikipedia.org/wiki/Transport_Layer_Security .. _HUAWEI iBMC Client library: https://pypi.org/project/python-ibmcclient/ ironic-15.0.0/doc/source/admin/adoption.rst0000664000175000017500000002002213652514273020612 0ustar zuulzuul00000000000000.. _adoption: ============= Node adoption ============= Overview ======== As part of hardware inventory lifecycle management, it is a legitimate need to be able to add hardware that should be considered "in-use" by the Bare Metal service, hardware that may have been deployed by another Bare Metal service installation or via other means. As such, the node adoption feature allows a user to define a node as ``active`` while skipping the ``available`` and ``deploying`` states, which will prevent the node from being seen by the Compute service as ready for use. This feature is leveraged as part of the state machine workflow, where a node in ``manageable`` can be moved to ``active`` state via the provision_state verb ``adopt``. To view the state transition capabilities, please see :ref:`states`. How it works ============ A node initially enrolled begins in the ``enroll`` state. An operator must then move the node to ``manageable`` state, which causes the node's ``power`` interface to be validated. Once in ``manageable`` state, an operator can then explicitly choose to adopt a node. Adoption of a node results in the validation of its ``boot`` interface, and upon success the process leverages what is referred to as the "takeover" logic. 
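The transitions described above can be summarized in a tiny sketch (illustrative only; ironic's real provision state machine is considerably larger and lives in the ``ironic.common.states`` module):

```python
# Simplified sketch of the provision-state transitions relevant to adoption.
# (current state, verb) -> resulting state
TRANSITIONS = {
    ('enroll', 'manage'): 'manageable',      # validates the power interface
    ('manageable', 'adopt'): 'active',       # validates the boot interface
    ('manageable', 'provide'): 'available',  # the normal, non-adoption path
}


def apply_verb(state, verb):
    """Return the state reached by applying a provision verb, if allowed."""
    try:
        return TRANSITIONS[(state, verb)]
    except KeyError:
        raise ValueError('verb %r is not allowed in state %r' % (verb, state))


# Adoption skips the "available" and "deploying" states entirely:
assert apply_verb(apply_verb('enroll', 'manage'), 'adopt') == 'active'
```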
The takeover process is intended for conductors to take over the management of nodes for a conductor that has failed. The takeover process involves the deploy interface's ``prepare`` and ``take_over`` methods being called. These steps take specific actions, such as downloading and staging the deployment kernel and ramdisk, ISO image, any required boot image, or boot ISO image, and then place any PXE or virtual media configuration necessary for the node, should it be required. The adoption process makes no changes to the physical node, with the exception of operator-supplied configurations where virtual media is used to boot the node under normal circumstances. An operator should ensure that any supplied configuration defining the node is sufficient for the continued operation of the node moving forward. For example, if the node is configured to network boot via instance_info/boot_option="netboot", then the appropriate driver-specific node configuration should be set to support this capability. Possible Risk ============= The main risk with this feature is that supplied configuration may ultimately be incorrect or invalid, which could result in potential operational issues: * ``rebuild`` verb - Rebuild is intended to allow a user to re-deploy the node to a fresh state. The risk with adoption is that the image defined when an operator adopts the node may not be a valid image for the pre-existing configuration. If this feature is utilized for a migration from one deployment to another, and pristine original images are loaded and provided, then ultimately the risk is the same as with any normal use of the ``rebuild`` feature: the server is effectively wiped. * When deleting a node, the deletion or cleaning processes may fail if the incorrect deployment image is supplied in the configuration, as the node may NOT have been deployed with the supplied image and driver, or compatibility issues may exist as a result.
Operators will need to be cognizant of that possibility and should plan accordingly to ensure that deployment images are known to be compatible with the hardware in their environment. * Networking - Adoption will assert no new networking configuration to the newly adopted node, as that would be considered modifying the node. Operators will need to plan accordingly and have network configuration such that the nodes will be able to network boot. How to use ========== .. NOTE:: The power state that the ironic-conductor observes upon the first successful power state check, as part of the transition to the ``manageable`` state, will be enforced for a node that has been adopted. This means a node that is in ``power off`` state will, by default, have the power state enforced as ``power off`` moving forward, unless an administrator actively changes the power state using the Bare Metal service. Requirements ------------ Requirements for use are essentially the same as those to deploy a node: * Sufficient driver information to allow for a successful power management validation. * Sufficient instance_info to pass deploy interface preparation. Each driver may have additional requirements dependent upon the configuration that is supplied. An example of this would be defining a node to always boot from the network, which will cause the conductor to attempt to retrieve the pertinent files. Inability to do so will result in the adoption failing, and the node being placed in the ``adopt failed`` state. Example ------- This is an example of creating a new node, named ``testnode``, with sufficient information to pass basic validation in order to be taken from the ``manageable`` state to ``active`` state:: # Explicitly set the client API version environment variable to # 1.17, which introduces the adoption capability.
export OS_BAREMETAL_API_VERSION=1.17 openstack baremetal node create --name testnode \ --driver ipmi \ --driver-info ipmi_address= \ --driver-info ipmi_username= \ --driver-info ipmi_password= \ --driver-info deploy_kernel= \ --driver-info deploy_ramdisk= openstack baremetal port create --node openstack baremetal node set testnode \ --instance-info image_source="http://localhost:8080/blankimage" \ --instance-info capabilities="{\"boot_option\": \"local\"}" openstack baremetal node manage testnode --wait openstack baremetal node adopt testnode --wait .. NOTE:: In the above example, the image_source setting must reference a valid image or file; however, that image or file can ultimately be empty. .. NOTE:: The above example utilizes a capability that defines the boot operation to be local. It is recommended to define the node as such unless network booting is desired. .. NOTE:: The above example will fail a re-deployment, as a fake image is defined and no instance_info/image_checksum value is defined. As such, any actual attempt to write the image out will fail, as the image_checksum value is only validated at the time of an actual deployment operation. .. NOTE:: A user may wish to assign an instance_uuid to a node, which could be used to match an instance in the Compute service. Doing so is not required for the proper operation of the Bare Metal service. openstack baremetal node set --instance-uuid .. NOTE:: In Newton, coupled with API version 1.20, the concept of a network_interface was introduced. A user of this feature may wish to add new nodes with a network_interface of ``noop`` and then change the interface at a later point in time. Troubleshooting =============== Should an adoption operation fail for a node, the error that caused the failure will be logged in the node's ``last_error`` field when viewing the node. This error, in the case of node adoption, will largely be due to failure of a validation step.
Validation steps are dependent upon what driver is selected for the node. Any node that is in the ``adopt failed`` state can have the ``adopt`` verb re-attempted. Example:: openstack baremetal node adopt If a user wishes to abort their attempt at adopting, they can then move the node back to ``manageable`` from ``adopt failed`` state by issuing the ``manage`` verb. Example:: openstack baremetal node manage If all else fails, the hardware node can be removed from the Bare Metal service. The ``node delete`` command, which is **not** the same as setting the provision state to ``deleted``, can be used while the node is in ``adopt failed`` state. This will delete the node without cleaning occurring, to preserve the node's current state. Example:: openstack baremetal node delete .. _upgrade-guide: ================================ Bare Metal Service Upgrade Guide ================================ This document outlines various steps and notes for operators to consider when upgrading their ironic-driven clouds from previous versions of OpenStack. The Bare Metal (ironic) service is tightly coupled with the ironic driver that is shipped with the Compute (nova) service. Some special considerations must be taken into account when upgrading your cloud. Both offline and rolling upgrades are supported. Plan your upgrade ================= * Rolling upgrades are available starting with the Pike release; that is, when upgrading from Ocata. This means that it is possible to do an upgrade with minimal to no downtime of the Bare Metal API. * Upgrades are only supported between two consecutive named releases. This means that you cannot upgrade Ocata directly into Queens; you need to upgrade into Pike first. * The `release notes `_ should always be read carefully when upgrading the Bare Metal service. Specific upgrade steps and considerations are documented there.
* The Bare Metal service should always be upgraded before the Compute service. .. note:: The ironic virt driver in nova always uses a specific version of the ironic REST API. This API version may be one that was introduced in the same development cycle, so upgrading nova first may result in nova being unable to use the Bare Metal API. * Make a backup of your database. Ironic does not support downgrading of the database. Hence, in case of upgrade failure, restoring the database from a backup is the only choice. * Before starting your upgrade, it is best to ensure that all nodes have reached, or are in, a stable ``provision_state``. Nodes in states with long-running processes, such as deploying or cleaning, may fail and may require manual intervention to return them to the available hardware pool. This is most likely in cases where a timeout has occurred or a service was terminated abruptly. For a visual diagram detailing states and possible state transitions, please see :ref:`states`. Offline upgrades ================ In an offline (or cold) upgrade, the Bare Metal service is not available during the upgrade, because all the services have to be taken down. When upgrading the Bare Metal service, the following steps should always be taken in this order: #. upgrade the ironic-python-agent image #. update ironic code, without restarting services #. run database schema migrations via ``ironic-dbsync upgrade`` #. restart ironic-conductor and ironic-api services Once the above is done, do the following: * update any applicable configuration options to stop using any deprecated features or options, and perform any required work to transition to alternatives. All the deprecated features and options will be supported for one release cycle, so should be removed before your next upgrade is performed.
* upgrade python-ironicclient along with any other services connecting to the Bare Metal service as a client, such as nova-compute * run the ``ironic-dbsync online_data_migrations`` command to make sure that data migrations are applied. The command lets you limit the impact of the data migrations with the ``--max-count`` option, which limits the number of migrations executed in one run. You should complete all of the migrations as soon as possible after the upgrade. .. warning:: You will not be able to start an upgrade to the release after this one until this has been completed for the current release. For example, as part of upgrading from Ocata to Pike, you need to complete Pike's data migrations. If this is not done, you will not be able to upgrade to Queens -- it will not be possible to execute Queens' database schema updates. Rolling upgrades ================ To reduce downtime, the services can be upgraded in a rolling fashion, meaning to upgrade one or a few services at a time to minimize impact. Rolling upgrades are available starting with the Pike release. This feature makes it possible to upgrade between releases, such as Ocata to Pike, with minimal to no downtime of the Bare Metal API. Requirements ------------ To facilitate an upgrade in a rolling fashion, you need to have a highly-available deployment consisting of at least two ironic-api and two ironic-conductor services. Use of a load balancer to balance requests across the ironic-api services is recommended, as it allows for a minimal impact to end users. Concepts -------- There are four aspects of the rolling upgrade process to keep in mind: * API and RPC version pinning, and versioned object backports * online data migrations * graceful service shutdown * API load balancer draining API & RPC version pinning and versioned object backports ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Through careful RPC versioning, newer services are able to talk to older services (and vice-versa).
The ``[DEFAULT]/pin_release_version`` configuration option is used for this. It should be set (pinned) to the release version that the older services are using. The newer services will backport RPC calls and objects to their appropriate versions from the pinned release. If the ``IncompatibleObjectVersion`` exception occurs, it is most likely due to an incorrect or unspecified ``[DEFAULT]/pin_release_version`` configuration value. For example, when ``[DEFAULT]/pin_release_version`` is not set to the older release version, no conversion will happen during the upgrade. For the ironic-api service, the API version is pinned via the same ``[DEFAULT]/pin_release_version`` configuration option as above. When pinned, the new ironic-api services will not service any API requests with Bare Metal API versions that are higher than what the old ironic-api services support. HTTP status code 406 is returned for such requests. This prevents new features (available in new API versions) from being used until after the upgrade has been completed. Online data migrations ~~~~~~~~~~~~~~~~~~~~~~ To make database schema migrations less painful to execute, we have implemented process changes to facilitate upgrades. * All data migrations are banned from schema migration scripts. * Schema migration scripts only update the database schema. * Data migrations must be done at the end of the rolling upgrade process, after the schema migration and after the services have been upgraded to the latest release. All data migrations are performed using the ``ironic-dbsync online_data_migrations`` command. It can be run as a background process so that it does not interrupt running services; however it must be run to completion for a cold upgrade if the intent is to make use of new features immediately. (You would also execute the same command with services turned off if you are doing a cold upgrade). This data migration must be completed. If not, you will not be able to upgrade to future releases. 
For example, if you had upgraded from Ocata to Pike but did not do the data migrations, you will not be able to upgrade from Pike to Queens. (More precisely, you will not be able to apply Queens' schema migrations.) Graceful conductor service shutdown ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The ironic-conductor service is a Python process listening for messages on a message queue. When the operator sends the SIGTERM signal to the process, the service stops consuming messages from the queue, so that no additional work is picked up. It completes any outstanding work and then terminates. During this process, messages can be left on the queue and will be processed after the Python process starts back up. This gives us a way to shut down a service using older code and start up a service using newer code with minimal impact. .. note:: This was tested with the RabbitMQ messaging backend and may vary with other backends. Nodes that are being acted upon by an ironic-conductor process, which are not in a stable state, may encounter failures. Node failures that occur during an upgrade are likely due to timeouts, resulting from delays involving messages being processed and acted upon by a conductor during long-running, multi-step processes such as deployment or cleaning. API load balancer draining ~~~~~~~~~~~~~~~~~~~~~~~~~~ If you are using a load balancer for the ironic-api services, we recommend that you redirect requests to the new API services and drain off of the ironic-api services that have not yet been upgraded. Rolling upgrade process ----------------------- Before maintenance window ~~~~~~~~~~~~~~~~~~~~~~~~~ * Upgrade the ironic-python-agent image * Using the new release (ironic code), execute the required database schema updates by running the database upgrade command: ``ironic-dbsync upgrade``. These schema change operations should have minimal or no effect on performance, and should not cause any operations to fail (but please check the release notes).
You can: * install the new release on an existing system * install the new release in a new virtualenv or a container At this point, new columns and tables may exist in the database. These database schema changes are done in a way that both the old and new (N and N+1) releases can perform operations against the same schema. .. note:: Ironic bases its API, RPC and object storage format versions on the ``[DEFAULT]/pin_release_version`` configuration option. It is advisable to automate the deployment of changes in configuration files to make the process less error prone and repeatable. During maintenance window ~~~~~~~~~~~~~~~~~~~~~~~~~ #. All ironic-conductor services should be upgraded first. Ensure that at least one ironic-conductor service is running at all times. For every ironic-conductor, either one by one or a few at a time: * shut down the service. Messages from the ironic-api services to the conductors are load-balanced by the message queue and a hash-ring, so the only thing you need to worry about is to shut the service down gracefully (using ``SIGTERM`` signal) to make sure it will finish all the requests being processed before shutting down. * upgrade the installed version of ironic and dependencies * set the ``[DEFAULT]/pin_release_version`` configuration option value to the version you are upgrading from (that is, the old version). Based on this setting, the new ironic-conductor services will downgrade any RPC communication and data objects to conform to the old service. For example, if you are upgrading from Ocata to Pike, set this value to ``ocata``. * start the service #. The next service to upgrade is ironic-api. Ensure that at least one ironic-api service is running at all times. You may want to start another temporary instance of the older ironic-api to handle the load while you are upgrading the original ironic-api services. 
For every ironic-api service, either one by one or a few at a time: * in an HA deployment you are typically running them behind a load balancer (for example, HAProxy), so you need to take the service instance out of the balancer * shut it down * upgrade the installed version of ironic and dependencies * set the ``[DEFAULT]/pin_release_version`` configuration option value to the version you are upgrading from (that is, the old version). Based on this setting, the new ironic-api services will downgrade any RPC communication and data objects to conform to the old service. In addition, the new services will return HTTP status code 406 for any requests with newer API versions that the old services did not support. This prevents new features (available in new API versions) from being used until after the upgrade has been completed. For example, if you are upgrading from Ocata to Pike, set this value to ``ocata``. * restart the service * add it back into the load balancer After upgrading all the ironic-api services, the Bare Metal service is running in the new version but with downgraded RPC communication and database object storage formats. New features (in new API versions) are not supported, because they could fail when objects are in the downgraded object formats and some internal RPC API functions may still not be available. #. For all the ironic-conductor services, one at a time: * remove the ``[DEFAULT]/pin_release_version`` configuration option setting * restart the ironic-conductor service #. For all the ironic-api services, one at a time: * remove the ``[DEFAULT]/pin_release_version`` configuration option setting * restart the ironic-api service After maintenance window ~~~~~~~~~~~~~~~~~~~~~~~~ Now that all the services are upgraded, the system is able to use the latest version of the RPC protocol and able to access all the features of the new release.
* Update any applicable configuration options to stop using any deprecated features or options, and perform any required work to transition to alternatives. All the deprecated features and options will be supported for one release cycle, so should be removed before your next upgrade is performed. * Upgrade ``python-ironicclient`` along with other services connecting to the Bare Metal service as a client, such as ``nova-compute``. .. warning:: A ``nova-compute`` instance tries to attach VIFs to all active instances on startup. Make sure that for all active nodes there is at least one running ``ironic-conductor`` process to manage them. Otherwise, the instances will be moved to the ``ERROR`` state on ``nova-compute`` startup. * Run the ``ironic-dbsync online_data_migrations`` command to make sure that data migrations are applied. The command lets you limit the impact of the data migrations with the ``--max-count`` option, which limits the number of migrations executed in one run. You should complete all of the migrations as soon as possible after the upgrade. .. warning:: Note that you will not be able to start an upgrade to the next release after this one until this has been completed for the current release. For example, as part of upgrading from Ocata to Pike, you need to complete Pike's data migrations. If this is not done, you will not be able to upgrade to Queens -- it will not be possible to execute Queens' database schema updates. Upgrading from Ocata to Pike ============================ #. Use the ``ironic-dbsync online_data_migrations`` command from the 9.1.1 (or newer) release. The one from older (9.0.0 - 9.1.0) releases could cause a port's physical_network information to be deleted from the database. #. It is required to set the ``resource_class`` field for nodes registered with the Bare Metal service *before* using the Pike version of the Compute service. See :ref:`enrollment` for details. #.
It is recommended to move from old-style classic drivers to the new hardware types after the upgrade to Pike. We expect the classic drivers to be deprecated in the Queens release and removed in the Rocky release. See :doc:`upgrade-to-hardware-types` for the details on the migration. Other upgrade instructions are in the `Pike release notes `_. .. toctree:: :maxdepth: 1 upgrade-to-hardware-types.rst Upgrading from Newton to Ocata ============================== There are no specific upgrade instructions other than the `Ocata release notes `_. Upgrading from Mitaka to Newton =============================== There are no specific upgrade instructions other than the `Newton release notes `_. Upgrading from Liberty to Mitaka ================================ There are no specific upgrade instructions other than the `Mitaka release notes `_. Upgrading from Kilo to Liberty ============================== In-band Inspection ------------------ If you used in-band inspection with **ironic-discoverd**, it is highly recommended that you switch to using **ironic-inspector**, which is a newer (and compatible on API level) version of the same service. You have to install **python-ironic-inspector-client** during the upgrade. This package contains a client module for the in-band inspection service, which was previously part of the **ironic-discoverd** package. Ironic Liberty supports the **ironic-discoverd** service, but does not support its in-tree client module. Please refer to :ironic-inspector-doc:`ironic-inspector version support matrix ` for details on which ironic versions are compatible with which **ironic-inspector**/**ironic-discoverd** versions. The discoverd to inspector upgrade procedure is as follows: * Install **ironic-inspector** on the machine where you have **ironic-discoverd** (usually the same as conductor). 
* Update the **ironic-inspector** configuration file to stop using deprecated configuration options, as marked by the comments in the :ironic-inspector-doc:`example.conf `. It is recommended you move the configuration file to ``/etc/ironic-inspector/inspector.conf``. * Shut down **ironic-discoverd**, and start **ironic-inspector**. * During upgrade of each conductor instance: #. Shut down the conductor. #. Uninstall **ironic-discoverd**, install **python-ironic-inspector-client**. #. Update the conductor. #. Update ``ironic.conf`` to use the ``[inspector]`` section instead of ``[discoverd]`` (option names are the same). #. Start the conductor. Upgrading from Juno to Kilo =========================== When upgrading a cloud from Juno to Kilo, users must ensure the nova service is upgraded prior to upgrading the ironic service. Additionally, users need to set a special config flag in nova prior to upgrading to ensure the newer version of nova is not attempting to take advantage of new ironic features until the ironic service has been upgraded. The steps for upgrading your nova and ironic services are as follows: - Edit nova.conf and ensure force_config_drive=False is set in the [DEFAULT] group. Restart nova-compute if necessary. - Install new nova code, run database migrations. - Install new python-ironicclient code. - Restart nova services. - Install new ironic code, run database migrations, restart ironic services. - Edit nova.conf and set force_config_drive to your liking, restarting nova-compute if necessary. Note that during the period between nova's upgrade and ironic's upgrades, instances can still be provisioned to nodes. However, any attempt by users to specify a config drive for an instance will cause an error until ironic's upgrade has completed. Cleaning -------- A new feature starting from the Kilo cycle is support for the automated cleaning of nodes between workloads to ensure the node is ready for another workload.
This can include erasing the hard drives, updating firmware, and other steps. For more information, see :ref:`automated_cleaning`. If ironic is configured with automated cleaning enabled (defaults to True) and neutron is set as the DHCP provider (also the default), you will need to set the `cleaning_network_uuid` option in the ironic configuration file before starting the ironic service. See :ref:`configure-cleaning` for information on how to set up the cleaning network for ironic. =============== Node Deployment =============== .. contents:: :depth: 2 .. _node-deployment-deploy-steps: Overview ======== Node deployment is performed by the Bare Metal service to prepare a node for use by a workload. The exact workflow used depends on a number of factors, including the hardware type and interfaces assigned to a node. Deploy Steps ============ The Bare Metal service implements deployment by collecting a list of deploy steps to perform on a node from the Power, Deploy, Management, BIOS, and RAID interfaces of the driver assigned to the node. These steps are then ordered by priority and executed on the node when the node is moved to the ``deploying`` state. Nodes move to the ``deploying`` state when attempting to move to the ``active`` state (when the hardware is prepared for use by a workload). For a full understanding of all state transitions into deployment, please see :doc:`../contributor/states`. The Bare Metal service added support for deploy steps in the Rocky release. Order of execution ------------------ Deploy steps are ordered from higher to lower priority, where a larger integer is a higher priority. If the same priority is used by deploy steps on different interfaces, the following resolution order is used: Power, Management, Deploy, BIOS, and RAID interfaces. ..
_node-deployment-core-steps: Core steps ---------- Certain default deploy steps are designated as 'core' deploy steps. The following deploy steps are core: ``deploy.deploy`` In this step the node is booted using a provisioning image, and the user image is written to the node's disk. It has a priority of 100. Writing a Deploy Step --------------------- Please refer to :doc:`/contributor/deploy-steps`. FAQ --- What deploy step is running? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ To check what deploy step the node is performing or attempted to perform and failed, run the following command; it will return the value in the node's ``driver_internal_info`` field:: openstack baremetal node show $node_ident -f value -c driver_internal_info The ``deploy_steps`` field will contain a list of all remaining steps with their priorities, and the first one listed is the step currently in progress or that the node failed before going into ``deploy failed`` state. Troubleshooting --------------- If deployment fails on a node, the node will be put into the ``deploy failed`` state until the node is deprovisioned. A deprovisioned node is moved to the ``available`` state after the cleaning process has been performed successfully. Strategies for determining why a deploy step failed include checking the ironic conductor logs, checking logs from the ironic-python-agent that have been stored on the ironic conductor, or performing general hardware troubleshooting on the node. Deploy Templates ================ Starting with the Stein release, with Bare Metal API version 1.55, deploy templates offer a way to define a set of one or more deploy steps to be executed with particular sets of arguments and priorities. Each deploy template has a name, which must be a valid trait. Traits can be either standard or custom. Standard traits are listed in the :os-traits-doc:`os_traits library <>`. 
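Since a deploy template name must be a valid trait, a name intended as a custom trait can be checked mechanically. The following is only an illustrative sketch, assuming the usual trait syntax rules (``CUSTOM_`` prefix, upper-case letters, digits and underscores, at most 255 characters); ``is_valid_custom_trait`` is our own helper, not an ironic API:

```python
import re

# A custom trait starts with CUSTOM_, uses only A-Z, 0-9 and underscores
# after the prefix, and is at most 255 characters long overall.
_CUSTOM_TRAIT_RE = re.compile(r"^CUSTOM_[A-Z0-9_]+$")


def is_valid_custom_trait(name):
    """Sketch of validating a deploy template name as a custom trait."""
    return len(name) <= 255 and bool(_CUSTOM_TRAIT_RE.match(name))


is_valid_custom_trait("CUSTOM_HYPERTHREADING_ON")   # True
is_valid_custom_trait("custom_hyperthreading_on")   # False: lower case
```

A check like this is useful in tooling that generates template names, since an invalid name will be rejected by the API.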
Custom traits must meet the following requirements: * prefixed with ``CUSTOM_`` * contain only upper case characters A to Z, digits 0 to 9, or underscores * no longer than 255 characters in length Deploy step format ------------------ An invocation of a deploy step is defined in a deploy template as follows:: { "interface": "", "step": "", "args": { "": "", "": "" }, "priority": } A deploy template contains a list of one or more such steps. Each combination of `interface` and `step` may only be specified once in a deploy template. Matching deploy templates ------------------------- During deployment, if any of the traits in a node's ``instance_info.traits`` field match the name of a deploy template, then the steps from that deploy template will be added to the list of steps to be executed by the node. When using the Compute service, any traits in the instance's flavor properties or image properties are stored in ``instance_info.traits`` during deployment. See :ref:`scheduling-traits` for further information on how traits are used for scheduling when the Bare Metal service is used with the Compute service. Note that there is no ongoing relationship between a node and any templates that are matched during deployment. The set of matching deploy templates is checked at deployment time. Any subsequent updates to or deletion of those templates will not be reflected in the node's configuration unless it is redeployed or rebuilt. Similarly, if a node is rebuilt and the set of matching deploy templates has changed since the initial deployment, then the resulting configuration of the node may be different from the initial deployment. Overriding default deploy steps ------------------------------- A deploy step is enabled by default if it has a non-zero default priority. A default deploy step may be overridden in a deploy template. If the step's priority is a positive integer it will be executed with the specified priority and arguments. 
If the step's priority is zero, the step will not be executed. If a `core deploy step `_ is included in a deploy template, it can only be assigned a priority of zero to disable it. Creating a deploy template via API ---------------------------------- A deploy template can be created using the Bare Metal API:: POST /v1/deploy_templates Here is an example of the body of a request to create a deploy template with a single step: .. code-block:: json { "name": "CUSTOM_HYPERTHREADING_ON", "steps": [ { "interface": "bios", "step": "apply_configuration", "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "priority": 150 } ] } Further information on this API is available `here `__. Creating a deploy template via "openstack baremetal" client ----------------------------------------------------------- A deploy template can be created via the ``openstack baremetal deploy template create`` command, starting with ``python-ironicclient`` 2.7.0. The argument ``--steps`` must be specified. Its value is one of: - a JSON string - path to a JSON file whose contents are passed to the API - '-', to read from stdin. This allows piping in the deploy steps. Example of creating a deploy template with a single step using a JSON string: .. code-block:: console openstack baremetal deploy template create \ CUSTOM_HYPERTHREADING_ON \ --steps '[{"interface": "bios", "step": "apply_configuration", "args": {"settings": [{"name": "LogicalProc", "value": "Enabled"}]}, "priority": 150}]' Or with a file: .. code-block:: console openstack baremetal deploy template create \ CUSTOM_HYPERTHREADING_ON \ --steps my-deploy-steps.txt Or with stdin: .. code-block:: console cat my-deploy-steps.txt | openstack baremetal deploy template create \ CUSTOM_HYPERTHREADING_ON \ --steps - Example of use with the Compute service --------------------------------------- .. note:: The deploy steps used in this example are for example purposes only.
In the following example, we first add the trait ``CUSTOM_HYPERTHREADING_ON`` to the node represented by ``$node_ident``: .. code-block:: console openstack baremetal node add trait $node_ident CUSTOM_HYPERTHREADING_ON We also update the flavor ``bm-hyperthreading-on`` in the Compute service with the following property: .. code-block:: console openstack flavor set --property trait:CUSTOM_HYPERTHREADING_ON=required bm-hyperthreading-on Creating a Compute instance with this flavor will ensure that the instance is scheduled only to Bare Metal nodes with the ``CUSTOM_HYPERTHREADING_ON`` trait. We could then create a Bare Metal deploy template with the name ``CUSTOM_HYPERTHREADING_ON`` and a deploy step that enables Hyperthreading: .. code-block:: json { "name": "CUSTOM_HYPERTHREADING_ON", "steps": [ { "interface": "bios", "step": "apply_configuration", "args": { "settings": [ { "name": "LogicalProc", "value": "Enabled" } ] }, "priority": 150 } ] } When an instance is created using the ``bm-hyperthreading-on`` flavor, then the deploy steps of deploy template ``CUSTOM_HYPERTHREADING_ON`` will be executed during the deployment of the scheduled node, causing Hyperthreading to be enabled in the node's BIOS configuration. To make this example more dynamic, let's add a second trait ``CUSTOM_HYPERTHREADING_OFF`` to the node: .. code-block:: console openstack baremetal node add trait $node_ident CUSTOM_HYPERTHREADING_OFF We could also update a second flavor, ``bm-hyperthreading-off``, with the following property: .. code-block:: console openstack flavor set --property trait:CUSTOM_HYPERTHREADING_OFF=required bm-hyperthreading-off Finally, we create a deploy template with the name ``CUSTOM_HYPERTHREADING_OFF`` and a deploy step that disables Hyperthreading: .. 
code-block:: json

   {
       "name": "CUSTOM_HYPERTHREADING_OFF",
       "steps": [
           {
               "interface": "bios",
               "step": "apply_configuration",
               "args": {
                   "settings": [
                       {
                           "name": "LogicalProc",
                           "value": "Disabled"
                       }
                   ]
               },
               "priority": 150
           }
       ]
   }

Creating a Compute instance with the ``bm-hyperthreading-off`` flavor will
cause the scheduled node to have Hyperthreading disabled in the BIOS during
deployment.

We now have a way to create Compute instances with different configurations,
by choosing between different Compute flavors, supported by a single Bare
Metal node that is dynamically configured during deployment.

ironic-15.0.0/doc/source/admin/index.rst0000664000175000017500000000275513652514273020115 0ustar zuulzuul00000000000000Administrator's Guide
=====================

If you are a system administrator running Ironic, this section contains
information that may help you understand how to operate and upgrade the
services.

.. toctree::
   :maxdepth: 1

   Drivers, Hardware Types and Hardware Interfaces
   Ironic Python Agent
   Node Hardware Inspection
   Node Deployment
   Node Cleaning
   Node Adoption
   Node Retirement
   RAID Configuration
   BIOS Settings
   Node Rescuing
   Configuring to boot from volume
   Multi-tenant Networking
   Port Groups
   Configuring Web or Serial Console
   Enabling Notifications
   Ceph Object Gateway
   Emitting Software Metrics
   Auditing API Traffic
   Service State Reporting
   Conductor Groups
   Upgrade Guide
   Security
   Windows Images
   Troubleshooting FAQ
   Power Sync with the Compute Service
   Agent Token
   Node Multi-Tenancy

.. toctree::
   :hidden:

   deploy-steps

Dashboard Integration
---------------------

A plugin for the OpenStack Dashboard (horizon) service is under development.
Documentation for that can be found within the ironic-ui project.

* :ironic-ui-doc:`Dashboard (horizon) plugin <>`

ironic-15.0.0/doc/source/admin/agent-token.rst0000664000175000017500000001071713652514273021223 0ustar zuulzuul00000000000000..
_agent_token:

===========
Agent Token
===========

Purpose
=======

The concept of agent tokens is to provide a mechanism by which the
relationship between an operating deployment of the Bare Metal Service and an
instance of the ``ironic-python-agent`` is verified. In a sense, this token
can be viewed as a session identifier or authentication token.

.. warning::
   This functionality does not remove the risk of a man-in-the-middle attack
   that could occur from connection intercept or when TLS is not used for
   all communication.

This becomes useful in the case of deploying an "edge" node where
intermediate networks are not trustworthy.

How it works
============

These tokens are provided in one of two ways to the running agent.

1. A pre-generated token which is embedded into virtual media ISOs.
2. A one-time generated token that is provided upon the first "lookup"
   of the node.

In both cases, the token is a randomly generated string of 128 characters.
Once the token has been provided, the token cannot be retrieved or accessed.
It remains available to the conductors, and is stored in memory of the
``ironic-python-agent``.

.. note::
   In the case of the token being embedded with virtual media, it is read
   from a configuration file within the image. Ideally this should be paired
   with Swift temporary URLs.

With the token available in memory in the agent, the token is embedded in
``heartbeat`` operations to the ironic API endpoint. This enables the API to
authenticate the heartbeat request, and refuse "heartbeat" requests that do
not carry the valid token for the node. With the ``Ussuri`` release, the
configuration option ``[DEFAULT]require_agent_token`` can be set to ``True``
to explicitly require token use.

.. warning::
   If the Bare Metal Service is updated, the version of
   ``ironic-python-agent`` should also be updated to enable this feature.
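For illustration, a random 128-character token of the sort described above can be generated with Python's ``secrets`` module. This is only a sketch, not ironic's actual implementation:

```python
import secrets

def generate_agent_token(length=128):
    """Return a random token of ``length`` hexadecimal characters."""
    # token_hex(n) yields 2*n hexadecimal characters.
    return secrets.token_hex(length // 2)

token = generate_agent_token()
# len(token) == 128; each call yields a new, unpredictable value
```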
In addition to heartbeats being verified, commands from the ``ironic-conductor`` service to the ``ironic-python-agent`` also include the token, allowing the agent to authenticate the caller. With Virtual Media ------------------ .. seqdiag:: :scale: 80 diagram { API; Conductor; Baremetal; Swift; IPA; activation = none; span_height = 1; edge_length = 250; default_note_color = white; default_fontsize = 14; Conductor -> Conductor [label = "Generates a random token"]; Conductor -> Conductor [label = "Generates configuration for IPA ramdisk"]; Conductor -> Swift [label = "IPA image, with configuration is uploaded"]; Conductor -> Baremetal [label = "Attach IPA virtual media in Swift as virtual CD"]; Conductor -> Baremetal [label = "Conductor turns power on"]; Baremetal -> Swift [label = "Baremetal reads virtual media"]; Baremetal -> Baremetal [label = "Boots IPA virtual media image"]; Baremetal -> Baremetal [label = "IPA is started"]; IPA -> Baremetal [label = "IPA loads configuration and agent token into memory"]; IPA -> API [label = "Lookup node"]; API -> IPA [label = "API responds with node UUID and token value of '******'"]; IPA -> API [label = "Heartbeat with agent token"]; } With PXE/iPXE/etc. ------------------ .. 
seqdiag::
   :scale: 80

   diagram {
      API; Conductor; Baremetal; iPXE; IPA;
      activation = none;
      span_height = 1;
      edge_length = 250;
      default_note_color = white;
      default_fontsize = 14;

      Conductor -> Baremetal [label = "Conductor turns power on"];
      Baremetal -> iPXE [label = "Baremetal reads kernel/ramdisk and starts boot"];
      Baremetal -> Baremetal [label = "Boots IPA virtual media image"];
      Baremetal -> Baremetal [label = "IPA is started"];
      IPA -> Baremetal [label = "IPA loads configuration"];
      IPA -> API [label = "Lookup node"];
      API -> Conductor [label = "API requests conductor to generate a random token"];
      API -> IPA [label = "API responds with node UUID and token value"];
      IPA -> API [label = "Heartbeat with agent token"];
   }

Agent Configuration
===================

An additional setting which may be leveraged with the ``ironic-python-agent``
is an ``agent_token_required`` setting. Under normal circumstances, this
setting can be asserted via the configuration supplied from the Bare Metal
service deployment upon the ``lookup`` action, but it can also be asserted
via the embedded configuration for the agent in the ramdisk. This setting is
also available via the kernel command line as ``ipa-agent-token-required``.

ironic-15.0.0/doc/source/admin/boot-from-volume.rst0000664000175000017500000002060313652514273022213 0ustar zuulzuul00000000000000.. _boot-from-volume:

================
Boot From Volume
================

Overview
========

The Bare Metal service supports booting from a Cinder iSCSI volume as of the
Pike release. This guide will primarily deal with this use case, but will be
updated as more paths for booting from a volume, such as FCoE, are
introduced.

Booting from volume is supported in both legacy BIOS and UEFI (iPXE binary
for EFI booting) boot modes. Suitable images need to be created with the
diskimage-builder tool.
Prerequisites ============= Currently booting from a volume requires: - Bare Metal service version 9.0.0 - Bare Metal API microversion 1.33 or later - A driver that utilizes the :doc:`PXE boot mechanism `. Currently booting from a volume is supported by the reference drivers that utilize PXE boot mechanisms when iPXE is enabled. - iPXE is an explicit requirement, as it provides the mechanism that attaches and initiates booting from an iSCSI volume. - Metadata services need to be configured and available for the instance images to obtain configuration such as keys. Configuration drives are not supported due to minimum disk extension sizes. Conductor Configuration ======================= In ironic.conf, you can specify a list of enabled storage interfaces. Check ``[DEFAULT]enabled_storage_interfaces`` in your ironic.conf to ensure that your desired interface is enabled. For example, to enable the ``cinder`` and ``noop`` storage interfaces:: [DEFAULT] enabled_storage_interfaces = cinder,noop If you want to specify a default storage interface rather than setting the storage interface on a per node basis, set ``[DEFAULT]default_storage_interface`` in ironic.conf. The ``default_storage_interface`` will be used for any node that doesn't have a storage interface defined. Node Configuration ================== Storage Interface ----------------- You will need to specify what storage interface the node will use to handle storage operations. For example, to set the storage interface to ``cinder`` on an existing node:: openstack --os-baremetal-api-version 1.33 baremetal node set \ --storage-interface cinder $NODE_UUID A default storage interface can be specified in ironic.conf. See the `Conductor Configuration`_ section for details. iSCSI Configuration ------------------- In order for a bare metal node to boot from an iSCSI volume, the ``iscsi_boot`` capability for the node must be set to ``True``. 
For example, if you want to update an existing node to boot from volume::

    openstack --os-baremetal-api-version 1.33 baremetal node set \
        --property capabilities=iscsi_boot:True $NODE_UUID

You will also need to create a volume connector for the node, so the storage
interface will know how to communicate with the node for storage operations.
In the case of iSCSI, you will need to provide an iSCSI Qualified Name (IQN)
that is unique to your SAN. For example, to create a volume connector for
iSCSI::

    openstack --os-baremetal-api-version 1.33 baremetal volume connector create \
        --node $NODE_UUID --type iqn --connector-id iqn.2017-08.org.openstack.$NODE_UUID

Image Creation
==============

We use ``disk-image-create`` from the diskimage-builder tool to create images
for the boot from volume feature. The required elements for the corresponding
boot modes are as follows:

- Legacy BIOS boot mode: ``iscsi-boot`` element.
- UEFI boot mode: ``iscsi-boot`` and ``block-device-efi`` elements.

An example below::

    export IMAGE_NAME=
    export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack"
    disk-image-create centos7 vm cloud-init-datasources dhcp-all-interfaces iscsi-boot dracut-regenerate block-device-efi -o $IMAGE_NAME

.. note::
   * For CentOS images, the dependent element ``dracut-regenerate`` must be
     added during image creation. Otherwise, the image creation will fail
     with an error.
   * For Ubuntu images, only the ``iscsi-boot`` element is supported, without
     the ``dracut-regenerate`` element, during image creation.

Advanced Topics
===============

Use without the Compute Service
-------------------------------

As discussed in other sections, the Bare Metal service has a concept of a
`connector` that is used to represent an interface that is intended to be
utilized to attach the remote volume. In addition to the connectors, we have
a concept of a `target` that can be defined via the API.
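Returning to the volume connector example above, the connector ID follows the conventional IQN layout, ``iqn.<yyyy-mm>.<naming-authority>`` plus a unique suffix (here the node UUID, joined with a ``.`` as in the command shown). A tiny helper to build such IDs might look like the following; the helper name and the example UUID are our own, not part of the client:

```python
def build_connector_iqn(node_uuid, date="2017-08",
                        naming_authority="org.openstack"):
    """Build an iSCSI Qualified Name for a node's volume connector."""
    return "iqn.{}.{}.{}".format(date, naming_authority, node_uuid)

# Example node UUID for illustration only:
iqn = build_connector_iqn("1be26c0b-03f2-4d2e-ae87-c02d7f33c123")
# -> "iqn.2017-08.org.openstack.1be26c0b-03f2-4d2e-ae87-c02d7f33c123"
```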
While a user of this feature through the Compute service would automatically
have a new target record created for them, it is not explicitly required, and
can be performed manually.

A target record can be created using a command similar to the example below::

    openstack --os-baremetal-api-version 1.33 baremetal volume target create \
        --node $NODE_UUID --type iscsi --boot-index 0 --volume $VOLUME_UUID

.. Note:: A ``boot-index`` value of ``0`` represents the boot volume for a
          node. As the ``boot-index`` is per-node in sequential order, only
          one boot volume is permitted for each node.

Use Without Cinder
------------------

In the Rocky release, an ``external`` storage interface is available that can
be utilized without a Block Storage Service installation.

Under normal circumstances the ``cinder`` storage interface interacts with
the Block Storage Service to orchestrate and manage attachment and detachment
of volumes from the underlying block service system. The ``external`` storage
interface contains the logic to allow the Bare Metal service to determine if
the Bare Metal node has been requested with a remote storage volume for
booting. This is in contrast to the default ``noop`` storage interface which
does not contain logic to determine if the node should or could boot from a
remote volume.

It must be noted that minimal configuration or value validation occurs with
the ``external`` storage interface. The ``cinder`` storage interface contains
more extensive validation, which is likely unnecessary in an ``external``
scenario.
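Given that minimal validation occurs with the ``external`` interface, an operator script might sanity-check the target properties itself before creating the volume target. The property names below are the ones passed via ``--property`` in the commands that follow; the helper itself is our own sketch:

```python
REQUIRED_ISCSI_TARGET_PROPERTIES = ("target_iqn", "target_lun", "target_portal")

def check_iscsi_target_properties(properties):
    """Return a list of required iSCSI target properties that are missing."""
    return [key for key in REQUIRED_ISCSI_TARGET_PROPERTIES
            if key not in properties]

props = {
    "target_iqn": "iqn.2010-10.com.example:vol-X",
    "target_lun": "0",
    "target_portal": "192.168.0.123:3260",
    "auth_method": "CHAP",
}
missing = check_iscsi_target_properties(props)
# missing == [] -- all required properties are present
```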
Setting the external storage interface::

    openstack baremetal node set --storage-interface external $NODE_UUID

Setting a volume::

    openstack baremetal volume target create --node $NODE_UUID \
        --type iscsi --boot-index 0 --volume-id $VOLUME_UUID \
        --property target_iqn="iqn.2010-10.com.example:vol-X" \
        --property target_lun="0" \
        --property target_portal="192.168.0.123:3260" \
        --property auth_method="CHAP" \
        --property auth_username="ABC" \
        --property auth_password="XYZ"

Ensure that no image_source is defined::

    openstack baremetal node unset \
        --instance-info image_source $NODE_UUID

Deploy the node::

    openstack baremetal node deploy $NODE_UUID

Upon deploy, the boot interface for the baremetal node will attempt to either
create an iPXE configuration or set boot parameters out-of-band via the
management controller. Such action is boot interface specific and may not
support all forms of volume target configuration. As of the Rocky release,
the bare metal service does not support writing an Operating System image to
a remote boot from volume target, so that also must be ensured by the user in
advance.

Records of volume targets are removed upon the node being undeployed, and as
such are not persistent across deployments.

Cinder Multi-attach
-------------------

Volume multi-attach is a function that is commonly performed in computing
clusters where dedicated storage subsystems are utilized. For some time now,
the Block Storage service has supported the concept of multi-attach. However,
the Compute service, as of the Pike release, does not yet have support to
leverage multi-attach. Concurrently, multi-attach requires the backend volume
driver running as part of the Block Storage service to contain support for
multi-attach volumes.
When support for storage interfaces was added to the Bare Metal service,
specifically for the ``cinder`` storage interface, the concept of volume
multi-attach was accounted for; however, it has not been fully tested, and is
unlikely to be fully tested until there is Compute service integration as
well as volume driver support. The data model for storage of volume targets
in the Bare Metal service places no constraint on the same volume being
utilized as a target more than once. When interacting with the Block Storage
service, the Bare Metal service will prevent the use of volumes that are
being reported as ``in-use`` if they do not explicitly support multi-attach.

ironic-15.0.0/doc/source/admin/multitenancy.rst0000664000175000017500000002427213652514273021524 0ustar zuulzuul00000000000000.. _multitenancy:

=======================================
Multi-tenancy in the Bare Metal service
=======================================

Overview
========

It is possible to use dedicated tenant networks for provisioned nodes, which
extends the current Bare Metal service capabilities of providing flat
networks. This works in conjunction with the Networking service to allow
provisioning of nodes in a separate provisioning network. The result of this
is that multiple tenants can use nodes in an isolated fashion. However, this
configuration does not support trunk ports belonging to multiple networks.

Concepts
========

.. _network-interfaces:

Network interfaces
------------------

Network interface is one of the driver interfaces that manages network
switching for nodes. There are three network interfaces available in the
Bare Metal service:

- ``noop`` interface is used for standalone deployments, and does not perform
  any network switching;

- ``flat`` interface places all nodes into a single provider network that is
  pre-configured on the Networking service and physical equipment. Nodes
  remain physically connected to this network during their entire life cycle.
- ``neutron`` interface provides tenant-defined networking through the Networking service, separating tenant networks from each other and from the provisioning and cleaning provider networks. Nodes will move between these networks during their life cycle. This interface requires Networking service support for the switches attached to the baremetal servers so they can be programmed. Local link connection --------------------- The Bare Metal service allows ``local_link_connection`` information to be associated with Bare Metal ports. This information is provided to the Networking service's ML2 driver when a Virtual Interface (VIF) is attached. The ML2 driver uses the information to plug the specified port to the tenant network. .. list-table:: ``local_link_connection`` fields :header-rows: 1 * - Field - Description * - ``switch_id`` - Required. Identifies a switch and can be a MAC address or an OpenFlow-based ``datapath_id``. * - ``port_id`` - Required. Port ID on the switch/Smart NIC, for example, Gig0/1, rep0-0. * - ``switch_info`` - Optional. Used to distinguish different switch models or other vendor-specific identifier. Some ML2 plugins may require this field. * - ``hostname`` - Required in case of a Smart NIC port. Hostname of Smart NIC device. .. note:: This isn't applicable to Infiniband ports because the network topology is discoverable by the Infiniband Subnet Manager. If specified, local_link_connection information will be ignored. If port is Smart NIC port then: 1. ``port_id`` is the representor port name on the Smart NIC. 2. ``switch_id`` is not mandatory. .. _multitenancy-physnets: Physical networks ----------------- A Bare Metal port may be associated with a physical network using its ``physical_network`` field. The Bare Metal service uses this information when mapping between virtual ports in the Networking service and physical ports and port groups in the Bare Metal service. 
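The field requirements in the ``local_link_connection`` table above can be expressed as a small check. This is only a sketch — ironic performs its own validation server-side, and the helper name is ours:

```python
def check_local_link_connection(llc, is_smartnic=False):
    """Return a list of problems with a local_link_connection dict."""
    problems = []
    if is_smartnic:
        # Smart NIC ports require hostname and port_id (the representor
        # port name); switch_id is not mandatory in this case.
        for field in ("hostname", "port_id"):
            if field not in llc:
                problems.append("missing " + field)
    else:
        # Regular ports require switch_id and port_id; switch_info is
        # optional (though some ML2 plugins may require it).
        for field in ("switch_id", "port_id"):
            if field not in llc:
                problems.append("missing " + field)
    return problems

ok = check_local_link_connection(
    {"switch_id": "0a:1b:2c:3d:4e:5f", "port_id": "Gig0/1"})
# ok == [] -- a valid regular-port local_link_connection
```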
A port's physical network field is optional, and if not set then any virtual port may be mapped to that port, provided that no free Bare Metal port with a suitable physical network assignment exists. The physical network of a port group is defined by the physical network of its constituent ports. The Bare Metal service ensures that all ports in a port group have the same value in their physical network field. When attaching a virtual interface (VIF) to a node, the following ordered criteria are used to select a suitable unattached port or port group: * Require ports or port groups to not have a physical network or to have a physical network that matches one of the VIF's allowed physical networks. * Prefer ports and port groups that have a physical network to ports and port groups that do not have a physical network. * Prefer port groups to ports. Prefer ports with PXE enabled. Configuring the Bare Metal service ================================== See the :ref:`configure-tenant-networks` section in the installation guide for the Bare Metal service. Configuring nodes ================= #. Ensure that your python-ironicclient version and requested API version are sufficient for your requirements. * Multi-tenancy support was added in API version 1.20, and is supported by python-ironicclient version 1.5.0 or higher. * Physical network support for ironic ports was added in API version 1.34, and is supported by python-ironicclient version 1.15.0 or higher. * Smart NIC support for ironic ports was added in API version 1.53, and is supported by python-ironicclient version 2.7.0 or higher. The following examples assume you are using python-ironicclient version 2.7.0 or higher. Export the following variable:: export OS_BAREMETAL_API_VERSION= #. The node's ``network_interface`` field should be set to a valid network interface. Valid interfaces are listed in the ``[DEFAULT]/enabled_network_interfaces`` configuration option in the ironic-conductor's configuration file. 
Set it to ``neutron`` to use the Networking service's ML2 driver:: openstack baremetal node create --network-interface neutron --driver ipmi .. note:: If the ``[DEFAULT]/default_network_interface`` configuration option is set, the ``--network-interface`` option does not need to be specified when creating the node. #. To update an existing node's network interface to ``neutron``, use the following commands:: openstack baremetal node set $NODE_UUID_OR_NAME \ --network-interface neutron #. Create a port as follows:: openstack baremetal port create $HW_MAC_ADDRESS --node $NODE_UUID \ --local-link-connection switch_id=$SWITCH_MAC_ADDRESS \ --local-link-connection switch_info=$SWITCH_HOSTNAME \ --local-link-connection port_id=$SWITCH_PORT \ --pxe-enabled true \ --physical-network physnet1 An Infiniband port requires client ID, while local link connection information will be populated by Infiniband Subnet Manager. The client ID consists of <12-byte vendor prefix>:<8 byte port GUID>. There is no standard process for deriving the port's MAC address ($HW_MAC_ADDRESS); it is vendor specific. For example, Mellanox ConnectX Family Devices prefix is ff:00:00:00:00:00:02:00:00:02:c9:00. If port GUID was f4:52:14:03:00:38:39:81 the client ID would be ff:00:00:00:00:00:02:00:00:02:c9:00:f4:52:14:03:00:38:39:81. Mellanox ConnectX Family Device's HW_MAC_ADDRESS consists of 6 bytes; the port GUID's lower 3 and higher 3 bytes. In this example it would be f4:52:14:38:39:81. Putting it all together, create an Infiniband port as follows:: openstack baremetal port create $HW_MAC_ADDRESS --node $NODE_UUID \ --pxe-enabled true \ --extra client-id=$CLIENT_ID \ --physical-network physnet1 #. 
Create a Smart NIC port as follows:: openstack baremetal port create $HW_MAC_ADDRESS --node $NODE_UUID \ --local-link-connection hostname=$HOSTNAME \ --local-link-connection port_id=$REP_NAME \ --pxe-enabled true \ --physical-network physnet1 \ --is-smartnic A Smart NIC port requires ``hostname`` which is the hostname of the Smart NIC, and ``port_id`` which is the representor port name within the Smart NIC. #. Check the port configuration:: openstack baremetal port show $PORT_UUID After these steps, the provisioning of the created node will happen in the provisioning network, and then the node will be moved to the tenant network that was requested. Configuring the Networking service ================================== In addition to configuring the Bare Metal service some additional configuration of the Networking service is required to ensure ports for bare metal servers are correctly programmed. This configuration will be determined by the Bare Metal service network interfaces you have enabled and which top of rack switches you have in your environment. ``flat`` network interface -------------------------- In order for Networking service ports to correctly operate with the Bare Metal service ``flat`` network interface the ``baremetal`` ML2 mechanism driver from `networking-baremetal `_ needs to be loaded into the Networking service configuration. This driver understands that the switch should be already configured by the admin, and will mark the networking service ports as successfully bound as nothing else needs to be done. #. Install the ``networking-baremetal`` library .. code-block:: console $ pip install networking-baremetal #. Enable the ``baremetal`` driver in the Networking service ML2 configuration file .. code-block:: ini [ml2] mechanism_drivers = ovs,baremetal ``neutron`` network interface ----------------------------- The ``neutron`` network interface allows the Networking service to program the physical top of rack switches for the bare metal servers. 
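Stepping back to the Infiniband port example in the node-configuration steps above, the client ID and MAC address derivation for Mellanox ConnectX Family Devices can be sketched as follows (helper names are ours; the vendor prefix and GUID are the ones from the example):

```python
MLX_CONNECTX_PREFIX = "ff:00:00:00:00:00:02:00:00:02:c9:00"

def infiniband_client_id(port_guid, vendor_prefix=MLX_CONNECTX_PREFIX):
    """Client ID = <12-byte vendor prefix>:<8-byte port GUID>."""
    return vendor_prefix + ":" + port_guid

def infiniband_mac(port_guid):
    """MAC = the port GUID's higher 3 bytes followed by its lower 3 bytes."""
    octets = port_guid.split(":")
    return ":".join(octets[:3] + octets[-3:])

guid = "f4:52:14:03:00:38:39:81"
# infiniband_client_id(guid)
#   -> "ff:00:00:00:00:00:02:00:00:02:c9:00:f4:52:14:03:00:38:39:81"
# infiniband_mac(guid) -> "f4:52:14:38:39:81"
```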
To do this an ML2 mechanism driver which supports the ``baremetal`` VNIC type
for the make and model of top of rack switch in the environment must be
installed and enabled.

This is a list of known top of rack ML2 mechanism drivers which work with the
``neutron`` network interface:

Cisco Nexus 9000 series
    To install and configure this ML2 mechanism driver see
    `Nexus Mechanism Driver Installation Guide `_.

FUJITSU CFX2000
    ``networking-fujitsu`` ML2 driver supports this switch. The documentation
    is available `here `_.

Networking Generic Switch
    This is an ML2 mechanism driver built for testing against virtual bare
    metal environments and some switches that are not covered by hardware
    specific ML2 mechanism drivers. More information is available in the
    project's `README `_.

ironic-15.0.0/doc/source/admin/building-windows-images.rst0000664000175000017500000000724113652514273023535 0ustar zuulzuul00000000000000.. _building_image_windows:

Building images for Windows
---------------------------

We can use ``New-WindowsOnlineImage`` from the
`windows-openstack-imaging-tools`_ tool to create Windows images (whole disk
images) for the corresponding boot modes, with support for Windows NIC
Teaming. This allows the utilization of link aggregation when the instance is
spawned on hardware servers (bare metals).

Requirements:
~~~~~~~~~~~~~

* A Microsoft Windows Server Operating System along with
  ``Hyper-V virtualization`` enabled, ``PowerShell`` version >=4 supported,
  ``Windows Assessment and Deployment Kit``, in short ``Windows ADK``.
* The Windows Server compatible drivers.
* Working git environment.

Preparation:
~~~~~~~~~~~~

* Download a Windows Server 2012R2/2016 installation ISO.
* Install Windows Server 2012R2/2016 OS on a workstation PC along with the
  following features:

  - Enable Hyper-V virtualization.
  - Install PowerShell 4.0.
  - Install Git environment & import git proxy (if needed).
  - Create a new ``Path`` in the Microsoft Windows Server Operating System
    which supports submodule update via the ``git submodule update --init``
    command::

        - Variable name: Path
        - Variable value: C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Git\bin

  - Rename the virtual switch in Windows Server 2012R2/2016 in
    ``Virtual Switch Manager`` into `external`.

Implementation:
~~~~~~~~~~~~~~~

* ``Step 1``: Create folders: ``C:\`` where output images will be located,
  ``C:\`` where you need to place the necessary hardware drivers.

* ``Step 2``: Copy and extract necessary hardware drivers in ``C:\``.

* ``Step 3``: Insert or burn Windows Server 2016 ISO to ``D:\``.

* ``Step 4``: Download ``windows-openstack-imaging-tools`` tools.

  .. code-block:: console

     git clone https://github.com/cloudbase/windows-openstack-imaging-tools.git

* ``Step 5``: Create and run the script `create-windows-cloud-image.ps1`:

  .. code-block:: console

     git submodule update --init
     Import-Module WinImageBuilder.psm1
     $windowsImagePath = "C:\\.qcow2"
     $VirtIOISOPath = "C:\\virtio.iso"
     $virtIODownloadLink = "https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.133-2/virtio-win.iso"
     (New-Object System.Net.WebClient).DownloadFile($virtIODownloadLink, $VirtIOISOPath)
     $wimFilePath = "D:\sources\install.wim"
     $extraDriversPath = "C:\\"
     $image = (Get-WimFileImagesInfo -WimFilePath $wimFilePath)[1]
     $switchName = 'external'
     New-WindowsOnlineImage -WimFilePath $wimFilePath -ImageName $image.ImageName `
         -WindowsImagePath $windowsImagePath -Type 'KVM' -ExtraFeatures @() `
         -SizeBytes 20GB -CpuCores 2 -Memory 2GB -SwitchName $switchName `
         -ProductKey $productKey -DiskLayout 'BIOS' `
         -ExtraDriversPath $extraDriversPath `
         -InstallUpdates:$false -AdministratorPassword 'Pa$$w0rd' `
         -PurgeUpdates:$true -DisableSwap:$true

After executing this command you will get two output files, the first one
being "C:\\.qcow2", which is the resulting Windows whole disk image, and
"C:\\virtio.iso", which
is the virtio ISO, containing all the synthetic drivers for the KVM
hypervisor.

See `example_windows_images`_ for more details and examples.

.. note::
   We can change ``SizeBytes``, ``CpuCores`` and ``Memory`` depending on
   requirements.

.. _`example_windows_images`: https://github.com/cloudbase/windows-openstack-imaging-tools/blob/master/Examples
.. _`windows-openstack-imaging-tools`: https://github.com/cloudbase/windows-openstack-imaging-tools

ironic-15.0.0/doc/source/admin/inspection.rst0000664000175000017500000000670013652514273021157 0ustar zuulzuul00000000000000.. _inspection:

===================
Hardware Inspection
===================

Overview
--------

Inspection allows Bare Metal service to discover required node properties
once required ``driver_info`` fields (for example, IPMI credentials) are set
by an operator. Inspection will also create the Bare Metal service ports for
the discovered ethernet MACs. Operators will have to manually delete the Bare
Metal service ports for which physical media is not connected. This is
required due to the `bug 1405131 `_.

There are two kinds of inspection supported by Bare Metal service:

#. Out-of-band inspection is currently implemented by several hardware types,
   including ``ilo``, ``idrac`` and ``irmc``.
#. `In-band inspection`_ by utilizing the ironic-inspector_ project.

The node should be in the ``manageable`` state before inspection is
initiated. If it is in the ``enroll`` or ``available`` state, move it to
``manageable`` first::

    openstack baremetal node manage

Then inspection can be initiated using the following command::

    openstack baremetal node inspect

.. _capabilities-discovery:

Capabilities discovery
----------------------

This is an incomplete list of capabilities we want to discover during
inspection. The exact support is hardware and hardware type specific though,
the most complete list is provided by the iLO :ref:`ilo-inspection`.
``secure_boot`` (``true`` or ``false``)
    whether secure boot is supported for the node

``boot_mode`` (``bios`` or ``uefi``)
    the boot mode the node is using

``cpu_vt`` (``true`` or ``false``)
    whether the CPU virtualization is enabled

``cpu_aes`` (``true`` or ``false``)
    whether the AES CPU extensions are enabled

``max_raid_level`` (integer, 0-10)
    maximum RAID level supported by the node

``pci_gpu_devices`` (non-negative integer)
    number of GPU devices on the node

The operator can specify these capabilities in the nova flavor for a node to
be selected for scheduling::

    nova flavor-key my-baremetal-flavor set capabilities:pci_gpu_devices="> 0"
    nova flavor-key my-baremetal-flavor set capabilities:secure_boot="true"

Please see a specific :doc:`hardware type page ` for the exact list of
capabilities this hardware type can discover.

In-band inspection
------------------

In-band inspection involves booting a ramdisk on the target node and fetching
information directly from it. This process is more fragile and time-consuming
than out-of-band inspection, but it is not vendor-specific and works across a
wide range of hardware.

In-band inspection uses the ironic-inspector_ project. It is supported by all
hardware types, and is used by default, if enabled, by the ``ipmi`` hardware
type. The ``inspector`` *inspect* interface has to be enabled to use it:

.. code-block:: ini

    [DEFAULT]
    enabled_inspect_interfaces = inspector,no-inspect

If the ironic-inspector service is not registered in the service catalog, set
the following option:

.. code-block:: ini

    [inspector]
    endpoint_override = http://inspector.example.com:5050

In order to ensure that ports in the Bare Metal service are synchronized with
NIC ports on the node, the following settings in the ironic-inspector
configuration file must be set::

    [processing]
    add_ports = all
    keep_ports = present

.. _ironic-inspector: https://pypi.org/project/ironic-inspector
.. _python-ironicclient: https://pypi.org/project/python-ironicclient

ironic-15.0.0/doc/source/images/0000775000175000017500000000000013652514443016423 5ustar zuulzuul00000000000000ironic-15.0.0/doc/source/images/ironic_standalone_with_ibmc_driver.svg0000664000175000017500000040336313652514273026251 0ustar zuulzuul00000000000000
[SVG sequence diagram of a standalone "do node deploy" flow between User,
API, Conductor, DHCP, TFTP and Node lanes: create ibmc driver node; set
driver_info (ibmc_address, ibmc_username, ibmc_password, etc.) and
instance_info (image_source, root_gb, etc.); validate power, management and
vendor interfaces; create bare metal node network port; set provision_state,
optionally passing a configdrive; set PXE boot device and REBOOT through
iBMC; prepare PXE environment for deployment; run agent ramdisk (PXE DHCP
request, offer IP to node, send PXE image and agent image); send IPA a
command to expose disks via iSCSI; iSCSI attach; copy user image and
configdrive, if present; iSCSI detach; install boot loader if requested; set
boot device either to PXE or to disk; collect ramdisk logs; POWER OFF; POWER
ON; mark node as ACTIVE. Uses the iBMC management and power interfaces.]

ironic-15.0.0/doc/source/images/logical_architecture.png0000664000175000017500000011230513652514273023310 0ustar zuulzuul00000000000000
[binary PNG image data omitted]

ironic-15.0.0/doc/source/images/conceptual_architecture.png
[binary PNG image data omitted]

ironic-15.0.0/doc/source/images/sample_trace_details.svg
[SVG image data omitted]

ironic-15.0.0/doc/source/images/deployment_architecture_2.png
[binary PNG image data omitted]
G86[3or1v[k 4<)0 ȥ;fn[ =>eNsB 2U@P:+ƅl~!sl)zO @>P$tK-XeBh쨈ϛ yk~o3K~siy_Q }˥l%ñM`snI pV |) 3ϡH2kW)ZH8]28𢴽ʒM[Q_a.wsJJ+ GQ[JB ^lt1Q`QF]UR敎FL@Xd$Xya-1v[(쫠׏[LE}0(iuvn AHD}B qi+s;YٕWx.yW$@h D}kS>@Qk%OY;oec=#븹3}w[wܳ?\v!$>ݚFYJ+Tx-;^%m91XٕWxFQMSsc鶮dmHox#mwSvU:nl(t"6@i%Ok'ݱRX3}w[w}w1-^z +ǯqݜwo" L Fߤf!ys>UWtW#Kh=. 1/<gKM~ce+ zqǫב+]cuxkp_120EeȥaҀ2ynAڮ*dLIO}TN}6#VuBu4[@SC\ \=xLlϔ{bH6}KyO9_Շ5暬|uyWaO.GwȲ=r6LȲۏȧ7U/-Tg;LWo͍y 옺UejSvW:% @S_?3rNFYtKִ90/ʍ%y6;+|<ϵˍ e0ۻ iwˍqLf eHK.-=澏tNG+3&5_7ӈ;3=Q|8~෇QZ:79+W;M2knK>  2uB S[5pX9c.WϬsiXvpghvyaw.06ͫg xޝV\G|^f:AqkʫtY*˺;k>{~/K}5#Ҧa߱^kod=?7<s\zn^G wٖ᭯|o_67nvr͟g@a & KpE!/W+LͷYdwVn3 /Pg"*"jRÔH9:vbˊk%\u̔~Jϝ̚w4ytu J 9!=_Usd2 dIפyP]:֘DI(~[\=JMζ6o@e(ɥo#GSj<$@`Rz9hXF߼MveZ\\_-ie17K0oU7˻_oWp-d_+LFQ%p\YzγߵO}w[ ~׈ҷx-}O3P?]z92pQ(V"r*|&^$Ǒ̐Bλf-kW+KYIN^ B^ 9duP8e}?H'Ccň!H,켿sfHYsNnzj##R*X, fO* @DB}DLJWre[ťV+ߍdxw@>vǙnRw.i׷աzW.y\[k!Zv|?r{dKߺs[zߍzۅr{{:F!OG% ?i,/$;$%؇i{cb(211!~Ȉ'% @w~WjMAbe#]A罖BW&ɅKp"u|ߒw~{[~Rk۷yګʽ;vSol[~iӪC8CMyKw֥ \1ZiyԐ:R#̫op/ -F%K]L?L-+ C'37ޕz >=N_s,]7ZΟ\Ο6˽~l[F?qujg)9+x}NEgi{PďM/jҖ  @]@ʔ.`R:Şi,u+m{˝>ꐍzGN!@ivRjv7ޏ?{x>8Si.Rj=]卯/xrO vzD'@-5wY]r}]}fV@omL۽ ;m}|ieXג @ti--YZi5ۺ=e7neg:moN;$Guԋ Y3٣G;駇'һ[s?s^=ugWڽ :mw/e?ަ\Ep ` ߹jX ɤ46v.7:Zols0@R ԧL`@6kX|z?LuhBEn[ɗb/@! 
@B}D^ʙW#v[i3]ib;%b ԤT#1rl*$/"gͳRv~/2B.@`p\)">2 }\-IN/  D GĤt ({ZX3RZ^|x=lY7A]rRcɍۨ|}ֳyQ 0羼J?5/,2K+3x!?77 da]8ڜ~UPv znHmE?0yzo7e:!`C̩"@_ [.kOJ"}}͇w?Nz23SO]fB]""2uUfj+n~Dr]ˈᩩoچCz@P)h7~O'`Vnrr4NŢn&P̜pt>9گRI ~t^^Gw!>{ tCP ULE 1J܈z&Mu]LLL gF-iDO^D?wZkk&^zO@_PV1[i rYT 0]K :bkkqQ3s]ϓCWkF>itrCAP5 E k:FG굴@5YN;;}4UP^*SFJr4Rvs'V@gSug}Y@NP?p]?̀l4K+f[pT> r{ee-Wvp./?l7 td7Ny dKPl@K_Pvm#Ҫ!i/EB9IL7F*R[G[ROɣd ;dwvF'_j> }ոyu;:^׵혈2BT]~nUk_Cy{Zhe2_F5J}oM[.Я5y2ʛ3{?}rn]k6J?~!g;Co=NJBU~әoycKj\]j{rR^¡|𧽺ƒ\Coߔk/Ю!\mTݶYa-kڲ4+@!;F /[ +.nlM9e~\d8/2?|E?i9= n\)YX(5]D9FGRZr#r#K4,~ˇm gNp-[o޺B{-۟@ gA`F*G.My!;2?Qъ xMq&05» h>UDÍj(c;Ҳ)5h<{[->!}I9,0o!^,|Z[ymw<@!@F?p+kh{8Q {r{*37\\>ߚAeG[Crkm 750궞~dQ;!k}wT]j\?x"17eyXhy ^<䈼i> ϖj ~p4r~!|$=\/>=5p7F5ZCpW=-exm ]Ig2~,C*L?gJ~bZrŒ}ٻn;!G2<yU6ޖhw5ćAF_/L@@&AswO>O?$#7kR/Fȹ=+-[aXk/Y&;:'꺞^᥯ޞ/IT yaLڑ %z Rwo )xW}ԏ]wЪmky~&y/pjޟ՘{ZZԂ>M] @'uڃdTWy$ˮ2+|Xf {[ O Z&|.]i{O/-:MNNJZ^G7dխ=5m>*K˴L^2T %+\N.lMRYwt^ vs R\IVPK[o䓣OIK@ zR[=G_@Mk=7!ܜa> 5$F ~4*z4IDATէ}5١ё}}䗟nc?,JC?PLy= x'`fנMYfYE@\BDUg# ?^GoMv,1Uky 0¼v!zW:Ziia^C|#7}nAF`ߪpk¹YXj ajYc@.&@Ge@t`5g:c" SytF䷾*gF-ki3,[ϳ؂u^Cl!J{M0kpo[KkO۲PoJۮعٱZէ<@!@H X0z4"?׏e~zJ&˲~uTJ<†4o{0{mT^}8kge˝KK w+= 򶏕ZҺ0rcezن @ ?bq6}L&WdyTVGeHŅY)OLɵqSW9诺nmu?"AƜn-]wTǻ+6[ҖDp{pel]lC@nRgt [:Ҫ_ޖ5ܔ]H+}pT!YJeBNrqX1)M]/Qa) 5nnn;ؕJEmɢDMnLhy}(y[0\yMf,tUSx=<󶅯 @h ;;;틑Z#u}nWDv*9Y;Q5''E9yYO2? 
ironic-15.0.0/doc/source/images/states.svg0000664000175000017500000011642413652514273020450 0ustar zuulzuul00000000000000[SVG figure: "Ironic states" — the provision-state diagram. Recovered edge labels: enroll -> verifying (manage via API); verifying -> manageable (done) or back to enroll (fail); manageable -> cleaning (provide/clean via API), inspecting (inspect via API), adopting (adopt via API); cleaning -> available (done), manageable (manage), clean wait (wait) or clean failed (fail); inspecting -> manageable (done), inspect wait (wait) or inspect failed (fail); adopting -> active (done) or adopt failed (fail); available -> deploying (active via API) or manageable (manage via API); deploying -> active (done), wait call-back (wait) or deploy failed (fail); active -> deploying (rebuild via API), deleting (deleted via API) or rescuing (rescue via API); deleting -> cleaning (clean) or error; rescuing -> rescue (done), rescue wait (wait) or rescue failed (fail); rescue -> unrescuing (unrescue via API), rescuing (rescue via API) or deleting (deleted via API); unrescuing -> active (done) or unrescue failed (fail); the various "wait" and "failed" states can resume, be aborted, or return to their originating or manageable state via the API.]
ironic-15.0.0/doc/source/images/sample_trace.svg0000664000175000017500000016123313652514273021602 0ustar zuulzuul00000000000000[SVG figure: sample trace; no text content recoverable beyond the "image/svg+xml" media type.]
ironic-15.0.0/doc/source/index.rst0000664000175000017500000000461313652514273017024 0ustar zuulzuul00000000000000
==================================
Welcome to Ironic's documentation!
==================================

Introduction
============

Ironic is an OpenStack project which provisions bare metal (as opposed to
virtual) machines. It may be used independently or as part of an OpenStack
Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova),
Network (neutron), Image (glance), and Object (swift) services.
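The provisioning lifecycle summarized in the states.svg diagram above can be sketched as a small transition table. This is illustrative only — Ironic's real state machine (in ironic/common/states.py) covers many more transitions — and the pairs below are taken from the edge labels recovered from the diagram:

```python
# Illustrative sketch only -- not Ironic's actual implementation.
# Each (state, event) pair below corresponds to an edge label in states.svg.
TRANSITIONS = {
    ("enroll", "manage"): "verifying",
    ("verifying", "done"): "manageable",
    ("verifying", "fail"): "enroll",
    ("manageable", "provide"): "cleaning",
    ("manageable", "inspect"): "inspecting",
    ("manageable", "adopt"): "adopting",
    ("cleaning", "done"): "available",
    ("cleaning", "fail"): "clean failed",
    ("available", "active"): "deploying",
    ("deploying", "done"): "active",
    ("deploying", "fail"): "deploy failed",
    ("active", "rebuild"): "deploying",
    ("active", "deleted"): "deleting",
    ("active", "rescue"): "rescuing",
    ("deleting", "clean"): "cleaning",
}


def advance(state, event):
    """Return the next provision state, or raise for an invalid event."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError("event %r is not valid in state %r" % (event, state))
```

For example, `advance("enroll", "manage")` yields `"verifying"`, while an event that has no edge in the diagram raises `ValueError`.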
The Bare Metal service manages hardware through both common (e.g. PXE and
IPMI) and vendor-specific remote management protocols. It provides the cloud
operator with a unified interface to a heterogeneous fleet of servers while
also providing the Compute service with an interface that allows physical
servers to be managed as though they were virtual machines.

This documentation is continually updated and may not represent the state of
the project at any specific prior release. To access documentation for a
previous release of ironic, append the OpenStack release name to the URL; for
example, the ``ocata`` release is available at
https://docs.openstack.org/ironic/ocata/.

Installation Guide
==================

.. toctree::
  :maxdepth: 2

  install/index

Upgrade Guide
=============

.. toctree::
  :maxdepth: 2

  admin/upgrade-guide
  admin/upgrade-to-hardware-types

User Guide
==========

.. toctree::
  :maxdepth: 2

  user/index

Administrator Guide
===================

.. toctree::
  :maxdepth: 2

  admin/index

Configuration Guide
===================

.. toctree::
  :maxdepth: 2

  configuration/index

Bare Metal API References
=========================

Ironic's REST API has changed since its first release, and continues to evolve
to meet the changing needs of the community. Here we provide a conceptual
guide as well as more detailed reference documentation.

.. toctree::
  :maxdepth: 1

  API Concept Guide
  API Reference (latest)
  API Version History

Command References
==================

Here are references for commands not elsewhere documented.

.. toctree::
  :maxdepth: 2

  cli/index

Contributor Guide
=================

.. toctree::
  :maxdepth: 2

  contributor/index

Release Notes
=============

`Release Notes `_

.. only:: html

Indices and tables
==================

* :ref:`genindex`
* :ref:`search`
ironic-15.0.0/doc/requirements.txt0000664000175000017500000000047513652514273017141 0ustar zuulzuul00000000000000mock>=3.0.0 # BSD
openstackdocstheme>=1.31.2 # Apache-2.0
os-api-ref>=1.4.0 # Apache-2.0
reno>=2.5.0 # Apache-2.0
sphinx!=1.6.6,!=1.6.7,!=2.1.0,>=1.6.2 # BSD
sphinxcontrib-apidoc>=0.2.0 # BSD
sphinxcontrib-pecanwsme>=0.10.0 # Apache-2.0
sphinxcontrib-seqdiag>=0.8.4 # BSD
sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD
ironic-15.0.0/zuul.d/0000775000175000017500000000000013652514443014332 5ustar zuulzuul00000000000000ironic-15.0.0/zuul.d/project.yaml0000664000175000017500000000547613652514273016675 0ustar zuulzuul00000000000000- project:
    templates:
      - check-requirements
      - openstack-cover-jobs
      - openstack-lower-constraints-jobs
      - openstack-python3-ussuri-jobs
      - periodic-stable-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - ironic-tox-unit-with-driver-libs
        - ironic-standalone
        - ironic-tempest-functional-python3
        - ironic-grenade-dsvm
        # Temporarily disable voting because of end-of-cycle CI instability.
        - ironic-grenade-dsvm-multinode-multitenant:
            voting: false
        - ironic-tempest-partition-bios-redfish-pxe
        - ironic-tempest-partition-uefi-redfish-vmedia
        - ironic-tempest-ipa-partition-pxe_ipmitool
        - ironic-tempest-ipa-partition-uefi-pxe_ipmitool
        - ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode
        - ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa
        - ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-indirect
        - ironic-tempest-ipa-partition-bios-agent_ipmitool-indirect
        - ironic-tempest-bfv
        - ironic-tempest-ipa-partition-uefi-pxe-grub2
        - metalsmith-integration-glance-localboot-centos7
        # Non-voting jobs
        - ironic-tox-bandit:
            voting: false
        - ironic-tempest-ipa-wholedisk-bios-pxe_snmp:
            voting: false
        - ironic-inspector-tempest:
            voting: false
        - ironic-inspector-tempest-managed:
            voting: false
        - ironic-inspector-tempest-partition-bios-redfish-vmedia:
            voting: false
        - ironic-tempest-ipa-wholedisk-bios-ipmi-direct-dib:
            voting: false
        - bifrost-integration-tinyipa-ubuntu-bionic:
            voting: false
        - ironic-tempest-pxe_ipmitool-postgres:
            voting: false
    gate:
      queue: ironic
      jobs:
        - ironic-tox-unit-with-driver-libs
        - ironic-standalone
        - ironic-tempest-functional-python3
        - ironic-grenade-dsvm
        # Removed from voting due to end-of-cycle gate instability.
        # - ironic-grenade-dsvm-multinode-multitenant
        - ironic-tempest-partition-bios-redfish-pxe
        - ironic-tempest-partition-uefi-redfish-vmedia
        - ironic-tempest-ipa-partition-pxe_ipmitool
        - ironic-tempest-ipa-partition-uefi-pxe_ipmitool
        - ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode
        - ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa
        - ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-indirect
        - ironic-tempest-ipa-partition-bios-agent_ipmitool-indirect
        - ironic-tempest-bfv
        - ironic-tempest-ipa-partition-uefi-pxe-grub2
        - metalsmith-integration-glance-localboot-centos7
    experimental:
      jobs:
        - ironic-inspector-tempest-discovery-fast-track:
            voting: false
ironic-15.0.0/zuul.d/legacy-ironic-jobs.yaml0000664000175000017500000000621213652514273020700 0ustar zuulzuul00000000000000- job:
    name: legacy-ironic-dsvm-base
    parent: legacy-dsvm-base
    irrelevant-files:
      - ^driver-requirements.txt$
      - ^.*\.rst$
      - ^api-ref/.*$
      - ^doc/.*$
      - ^install-guide/.*$
      - ^ironic/locale/.*$
      - ^ironic/tests/.*$
      - ^releasenotes/.*$
      - ^setup.cfg$
      - ^tools/.*$
      - ^tox.ini$
    # NOTE: When adding to 'required-projects' also need to add a corresponding
    # "export PROJECTS=..." line in all the playbooks/legacy/*/run.yaml files
    required-projects:
      - openstack/ironic
      - openstack/ironic-lib
      - openstack/ironic-python-agent
      - openstack/ironic-python-agent-builder
      - openstack/ironic-tempest-plugin
      - openstack/python-ironicclient
      - openstack/virtualbmc
    pre-run: playbooks/legacy/ironic-dsvm-base/pre.yaml
    post-run: playbooks/legacy/ironic-dsvm-base/post.yaml
    # TODO(TheJulia): When we migrate to a non-legacy job, we will need to set
    # the BUILD_TIMEOUT and the DEVSTACK_GATE_TEMPEST_BAREMETAL_BUILD_TIMEOUT
    # to 1200 seconds to prevent needless CI job timeouts, as the scale of the
    # job is greater than that of normal test jobs.
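The irrelevant-files patterns in the job definition above tell Zuul to skip the job when a change touches only documentation, release notes, unit tests, and similar paths. A simplified sketch of that filtering — a hypothetical helper, not Zuul's actual code, assuming anchored `re.match` semantics against changed file paths:

```python
import re

# Patterns copied from the legacy-ironic-dsvm-base job definition above
# (abbreviated to a representative subset).
IRRELEVANT_FILES = [
    r"^driver-requirements.txt$",
    r"^.*\.rst$",
    r"^api-ref/.*$",
    r"^doc/.*$",
    r"^ironic/tests/.*$",
    r"^releasenotes/.*$",
    r"^tox.ini$",
]


def job_should_run(changed_files):
    """Run the job unless every changed file matches an irrelevant pattern."""
    for path in changed_files:
        if not any(re.match(pattern, path) for pattern in IRRELEVANT_FILES):
            return True  # at least one relevant file changed
    return False
```

A change touching only `doc/source/index.rst` would skip the job, while one touching `ironic/conductor/manager.py` would trigger it.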
- job:
    name: legacy-ironic-dsvm-base-multinode
    parent: legacy-dsvm-base-multinode
    irrelevant-files:
      - ^driver-requirements.txt$
      - ^.*\.rst$
      - ^api-ref/.*$
      - ^doc/.*$
      - ^install-guide/.*$
      - ^ironic/locale/.*$
      - ^ironic/tests/.*$
      - ^releasenotes/.*$
      - ^setup.cfg$
      - ^tools/.*$
      - ^tox.ini$
    # NOTE: When adding to 'required-projects' also need to add a corresponding
    # "export PROJECTS=..." line in all the playbooks/legacy/*/run.yaml files
    required-projects:
      - openstack/ironic
      - openstack/ironic-lib
      - openstack/ironic-python-agent
      - openstack/ironic-tempest-plugin
      - openstack/networking-generic-switch
      - openstack/python-ironicclient
      - openstack/virtualbmc
    pre-run: playbooks/legacy/ironic-dsvm-base-multinode/pre.yaml
    post-run: playbooks/legacy/ironic-dsvm-base-multinode/post.yaml
    # TODO(TheJulia): When we migrate to a non-legacy job, we will need to set
    # the BUILD_TIMEOUT and the DEVSTACK_GATE_TEMPEST_BAREMETAL_BUILD_TIMEOUT
    # to 1200 seconds to prevent needless CI job timeouts, as the scale of the
    # job is greater than that of normal test jobs.
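Jobs such as ironic-grenade-dsvm below build on these base definitions through Zuul's parent mechanism, supplying only a run playbook, a timeout, and extra required-projects. A toy sketch of that parent/child resolution (greatly simplified — real Zuul inheritance also merges list-valued attributes and variables; the values are taken from the surrounding zuul.d files):

```python
# Toy model of Zuul job inheritance -- NOT Zuul's actual resolution logic.
def resolve_job(jobs, name):
    """Flatten a job by layering its own settings over its parent's."""
    job = jobs[name]
    parent = job.get("parent")
    resolved = dict(resolve_job(jobs, parent)) if parent in jobs else {}
    resolved.update((k, v) for k, v in job.items() if k != "parent")
    return resolved


JOBS = {
    "legacy-ironic-dsvm-base": {
        "pre-run": "playbooks/legacy/ironic-dsvm-base/pre.yaml",
        "post-run": "playbooks/legacy/ironic-dsvm-base/post.yaml",
    },
    "ironic-grenade-dsvm": {
        "parent": "legacy-ironic-dsvm-base",
        "run": "playbooks/legacy/grenade-dsvm-ironic/run.yaml",
        "timeout": 10800,
    },
}
```

Resolving `ironic-grenade-dsvm` then yields a job that keeps the parent's pre-run/post-run playbooks while adding its own run playbook and 10800-second timeout.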
- job: name: ironic-grenade-dsvm parent: legacy-ironic-dsvm-base run: playbooks/legacy/grenade-dsvm-ironic/run.yaml timeout: 10800 required-projects: - openstack/grenade - openstack/devstack-gate - openstack/ironic - openstack/ironic-lib - openstack/ironic-python-agent - openstack/python-ironicclient - openstack/virtualbmc - job: name: ironic-grenade-dsvm-multinode-multitenant parent: legacy-ironic-dsvm-base-multinode run: playbooks/legacy/grenade-dsvm-ironic-multinode-multitenant/run.yaml timeout: 10800 required-projects: - openstack/grenade - openstack/devstack-gate - openstack/ironic - openstack/ironic-lib - openstack/ironic-python-agent - openstack/networking-generic-switch - openstack/python-ironicclient - openstack/virtualbmc ironic-15.0.0/zuul.d/ironic-jobs.yaml0000664000175000017500000005134613652514273017446 0ustar zuulzuul00000000000000- job: name: ironic-base abstract: true description: Base job for devstack/tempest based ironic jobs. parent: devstack-tempest nodeset: openstack-single-node-bionic timeout: 10800 required-projects: - openstack/ironic - openstack/ironic-python-agent - openstack/ironic-python-agent-builder - openstack/ironic-tempest-plugin - openstack/virtualbmc irrelevant-files: - ^.*\.rst$ - ^api-ref/.*$ - ^doc/.*$ - ^driver-requirements.txt$ - ^install-guide/.*$ - ^ironic/locale/.*$ - ^ironic/tests/.*$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tools/.*$ - ^tox.ini$ vars: tox_envlist: all tempest_test_regex: ironic_tempest_plugin.tests.scenario tempest_concurrency: 1 devstack_localrc: DEFAULT_INSTANCE_TYPE: baremetal FORCE_CONFIG_DRIVE: True INSTALL_TEMPEST: False # Don't install a tempest package globaly TEMPEST_PLUGINS: "{{ ansible_user_dir }}/src/opendev.org/openstack/ironic-tempest-plugin" VIRT_DRIVER: ironic BUILD_TIMEOUT: 720 IRONIC_BAREMETAL_BASIC_OPS: True IRONIC_BUILD_DEPLOY_RAMDISK: False IRONIC_CALLBACK_TIMEOUT: 600 IRONIC_DEPLOY_DRIVER: ipmi IRONIC_INSPECTOR_BUILD_RAMDISK: False IRONIC_TEMPEST_BUILD_TIMEOUT: 720 
IRONIC_TEMPEST_WHOLE_DISK_IMAGE: False IRONIC_VM_COUNT: 1 IRONIC_VM_EPHEMERAL_DISK: 1 IRONIC_VM_SPECS_RAM: 2048 IRONIC_VM_LOG_DIR: '{{ devstack_base_dir }}/ironic-bm-logs' # NOTE(dtantsur): in some jobs we end up with 12 disks total, so reduce # each of them. For don't need all 10 GiB for CirrOS anyway. IRONIC_VM_SPECS_DISK: 4 IRONIC_DEFAULT_DEPLOY_INTERFACE: iscsi Q_AGENT: openvswitch Q_ML2_TENANT_NETWORK_TYPE: vxlan SERVICE_TIMEOUT: 90 devstack_plugins: ironic: https://opendev.org/openstack/ironic zuul_copy_output: '{{ devstack_base_dir }}/ironic-bm-logs': 'logs' '{{ devstack_base_dir }}/data/networking-generic-switch/netmiko_session.log': 'logs' devstack_services: q-agt: false q-dhcp: false q-l3: false q-meta: false q-metering: false q-svc: false neutron-api: true neutron-agent: true neutron-dhcp: true neutron-l3: true neutron-metadata-agent: true neutron-metering: true c-api: False c-bak: False c-sch: False c-vol: False cinder: False s-account: False s-container: False s-object: False s-proxy: False - job: name: ironic-standalone description: Test ironic standalone parent: ironic-base irrelevant-files: - ^.*\.rst$ - ^api-ref/.*$ - ^doc/.*$ - ^install-guide/.*$ - ^ironic/locale/.*$ - ^ironic/tests/.*$ - ^releasenotes/.*$ - ^setup.cfg$ - ^test-requirements.txt$ - ^tools/.*$ - ^tox.ini$ vars: tempest_test_regex: ironic_standalone tempest_concurrency: 2 devstack_localrc: FORCE_CONFIG_DRIVE: False IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_DEFAULT_DEPLOY_INTERFACE: direct IRONIC_DEFAULT_RESCUE_INTERFACE: agent IRONIC_ENABLED_DEPLOY_INTERFACES: "iscsi,direct,ansible" IRONIC_ENABLED_RESCUE_INTERFACES: "fake,agent,no-rescue" IRONIC_RAMDISK_TYPE: tinyipa IRONIC_RPC_TRANSPORT: json-rpc IRONIC_VM_SPECS_RAM: 384 IRONIC_VM_COUNT: 6 IRONIC_VM_VOLUME_COUNT: 2 # We're using a lot of disk space in this job. 
Some testing nodes have # a small root partition, so use /opt which is mounted from a bigger # ephemeral partition on such nodes LIBVIRT_STORAGE_POOL_PATH: /opt/libvirt/images SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey devstack_services: n-api: False n-api-meta: False n-cauth: False n-cond: False n-cpu: False n-novnc: False n-obj: False n-sch: False nova: False placement-api: False s-account: True s-container: True s-object: True s-proxy: True - job: name: ironic-tempest-partition-bios-redfish-pxe description: "Deploy ironic node over PXE using BIOS boot mode" parent: ironic-base timeout: 5400 required-projects: - openstack/sushy-tools vars: devstack_localrc: IRONIC_DEPLOY_DRIVER: redfish IRONIC_ENABLED_HARDWARE_TYPES: redfish IRONIC_ENABLED_POWER_INTERFACES: redfish IRONIC_ENABLED_MANAGEMENT_INTERFACES: redfish IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_DEFAULT_BOOT_OPTION: netboot - job: name: ironic-tempest-partition-uefi-redfish-vmedia description: "Deploy ironic node over Redfish virtual media using UEFI boot mode" parent: ironic-tempest-partition-bios-redfish-pxe vars: devstack_localrc: IRONIC_BOOT_MODE: uefi IRONIC_ENABLED_BOOT_INTERFACES: redfish-virtual-media SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey IRONIC_AUTOMATED_CLEAN_ENABLED: False devstack_services: s-account: True s-container: True s-object: True s-proxy: True - job: name: ironic-inspector-tempest-partition-bios-redfish-vmedia description: "Inspect and deploy ironic node over Redfish virtual media using legacy BIOS boot mode" parent: ironic-tempest-partition-uefi-redfish-vmedia required-projects: - openstack/ironic-inspector vars: # NOTE(dtantsur): the inspector job includes booting an instance too tempest_test_regex: Inspector devstack_localrc: IRONIC_BOOT_MODE: bios IRONIC_INSPECTOR_MANAGED_BOOT: True IRONIC_INSPECTOR_NODE_NOT_FOUND_HOOK: '' IRONIC_AUTOMATED_CLEAN_ENABLED: False devstack_plugins: ironic-inspector: https://opendev.org/openstack/ironic-inspector 
devstack_services: ironic-inspector: True ironic-inspector-dhcp: True - job: name: ironic-tempest-pxe_ipmitool-postgres description: ironic-tempest-pxe_ipmitool-postgres parent: ironic-base vars: devstack_localrc: IRONIC_ENABLED_BOOT_INTERFACES: "fake,pxe" IRONIC_IPXE_ENABLED: False IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_DEFAULT_BOOT_OPTION: netboot devstack_services: mysql: False postgresql: True # NOTE(rpittau): converted job but not running for now as there # could be an issue with the lookup in ironic-python-agent - job: name: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool description: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool parent: ironic-base timeout: 9600 vars: devstack_localrc: IRONIC_DEFAULT_DEPLOY_INTERFACE: direct IRONIC_DEFAULT_RESCUE_INTERFACE: agent IRONIC_ENABLED_RESCUE_INTERFACES: "fake,agent,no-rescue" IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_VM_SPECS_RAM: 3096 SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey devstack_services: s-account: True s-container: True s-object: True s-proxy: True - job: name: ironic-tempest-ipa-wholedisk-bios-pxe_snmp description: ironic-tempest-ipa-wholedisk-bios-pxe_snmp parent: ironic-base timeout: 5400 vars: devstack_localrc: IRONIC_ENABLED_HARDWARE_TYPES: snmp IRONIC_DEPLOY_DRIVER: snmp IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_AUTOMATED_CLEAN_ENABLED: False - job: name: ironic-tempest-ipa-partition-uefi-pxe_ipmitool description: ironic-tempest-ipa-partition-uefi-pxe_ipmitool parent: ironic-base timeout: 5400 vars: devstack_localrc: IRONIC_BOOT_MODE: uefi IRONIC_VM_SPECS_RAM: 3096 IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_DEFAULT_BOOT_OPTION: netboot - job: name: ironic-tempest-ipa-partition-pxe_ipmitool description: ironic-tempest-ipa-partition-pxe_ipmitool parent: ironic-base timeout: 5400 vars: devstack_localrc: # This test runs cleaning by default, and with a larger # IPA image means that it takes longer to boot for deploy 
# and cleaning. As such, CI performance variations can # cause this job to fail easily due to the extra steps # and boot cycle of the cleaning operation. IRONIC_TEMPEST_BUILD_TIMEOUT: 850 IRONIC_DEFAULT_BOOT_OPTION: netboot - job: name: ironic-tempest-bfv description: ironic-tempest-bfv parent: ironic-base timeout: 9600 vars: tempest_test_regex: baremetal_boot_from_volume devstack_localrc: IRONIC_ENABLED_STORAGE_INTERFACES: cinder,noop IRONIC_STORAGE_INTERFACE: cinder IRONIC_ENABLED_BOOT_INTERFACES: ipxe,pxe,fake IRONIC_DEFAULT_BOOT_INTERFACE: ipxe IRONIC_DEFAULT_DEPLOY_INTERFACE: direct IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_VM_COUNT: 3 IRONIC_AUTOMATED_CLEAN_ENABLED: False SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey devstack_services: c-api: True c-bak: True c-sch: True c-vol: True cinder: True - job: name: ironic-inspector-tempest description: ironic-inspector-tempest parent: ironic-base required-projects: - openstack/ironic-inspector vars: tempest_test_regex: InspectorBasicTest devstack_localrc: IRONIC_DEFAULT_DEPLOY_INTERFACE: direct IRONIC_INSPECTOR_MANAGE_FIREWALL: True IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_AUTOMATED_CLEAN_ENABLED: False SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey IRONIC_DEFAULT_BOOT_OPTION: netboot devstack_plugins: ironic-inspector: https://opendev.org/openstack/ironic-inspector devstack_services: s-account: True s-container: True s-object: True s-proxy: True - job: name: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-indirect description: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-indirect parent: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool timeout: 5400 vars: devstack_localrc: IRONIC_AGENT_IMAGE_DOWNLOAD_SOURCE: http IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_DEFAULT_RESCUE_INTERFACE: no-rescue IRONIC_ENABLED_RESCUE_INTERFACES: "fake,no-rescue" - job: name: ironic-tempest-ipa-partition-bios-agent_ipmitool-indirect description: 
ironic-tempest-ipa-partition-bios-agent_ipmitool-indirect parent: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool timeout: 5400 vars: devstack_localrc: IRONIC_AGENT_IMAGE_DOWNLOAD_SOURCE: http IRONIC_TEMPEST_WHOLE_DISK_IMAGE: False IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_DEFAULT_RESCUE_INTERFACE: no-rescue IRONIC_ENABLED_RESCUE_INTERFACES: "fake,no-rescue" IRONIC_DEFAULT_BOOT_OPTION: netboot - job: name: ironic-tempest-functional-python3 description: ironic-tempest-functional-python3 parent: ironic-base timeout: 5400 pre-run: playbooks/ci-workarounds/etc-neutron.yaml vars: tempest_test_regex: ironic_tempest_plugin.tests.api devstack_localrc: IRONIC_BAREMETAL_BASIC_OPS: False IRONIC_DEFAULT_DEPLOY_INTERFACE: "" IRONIC_DEFAULT_NETWORK_INTERFACE: noop IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_RPC_TRANSPORT: json-rpc devstack_services: rabbit: False g-api: False g-reg: False n-api: False n-api-meta: False n-cauth: False n-cond: False n-cpu: False n-novnc: False n-obj: False n-sch: False nova: False placement-api: False q-agt: False q-dhcp: False q-l3: False q-meta: False q-metering: False q-svc: False neutron-api: False neutron-agent: False neutron-dhcp: False neutron-l3: False neutron-metadata-agent: False neutron-metering: False - job: name: ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode description: ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode parent: tempest-multinode-full-py3 pre-run: playbooks/ci-workarounds/pre.yaml timeout: 10800 required-projects: - openstack/ironic - openstack/ironic-python-agent - openstack/ironic-python-agent-builder - openstack/ironic-tempest-plugin - openstack/virtualbmc - openstack/networking-generic-switch irrelevant-files: - ^.*\.rst$ - ^api-ref/.*$ - ^doc/.*$ - ^driver-requirements.txt$ - ^install-guide/.*$ - ^ironic/locale/.*$ - ^ironic/tests/.*$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tools/.*$ - ^tox.ini$ roles: - zuul: opendev.org/zuul/zuul-jobs vars: tox_envlist: all 
tempest_concurrency: 3 tempest_test_regex: "(ironic_tempest_plugin.tests.scenario|test_schedule_to_all_nodes)" tempest_test_timeout: 2400 devstack_localrc: BUILD_TIMEOUT: 2400 DEFAULT_INSTANCE_TYPE: baremetal ENABLE_TENANT_TUNNELS: False ENABLE_TENANT_VLANS: True FORCE_CONFIG_DRIVE: True GENERIC_SWITCH_KEY_FILE: /opt/stack/.ssh/id_rsa HOST_TOPOLOGY: multinode HOST_TOPOLOGY_ROLE: primary INSTALL_TEMPEST: False # Don't install a tempest package globaly IRONIC_AUTOMATED_CLEAN_ENABLED: False HOST_TOPOLOGY_SUBNODES: "{{ hostvars['compute1']['nodepool']['public_ipv4'] }}" IRONIC_BAREMETAL_BASIC_OPS: True IRONIC_BUILD_DEPLOY_RAMDISK: False IRONIC_CALLBACK_TIMEOUT: 600 IRONIC_DEFAULT_DEPLOY_INTERFACE: direct IRONIC_DEFAULT_BOOT_OPTION: local IRONIC_DEPLOY_DRIVER: ipmi IRONIC_ENABLED_NETWORK_INTERFACES: flat,neutron IRONIC_INSPECTOR_BUILD_RAMDISK: False IRONIC_NETWORK_INTERFACE: neutron IRONIC_PROVISION_NETWORK_NAME: ironic-provision IRONIC_PROVISION_SUBNET_GATEWAY: 10.0.5.1 IRONIC_PROVISION_SUBNET_PREFIX: 10.0.5.0/24 IRONIC_RAMDISK_TYPE: tinyipa IRONIC_TEMPEST_BUILD_TIMEOUT: 600 IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_USE_LINK_LOCAL: True IRONIC_VM_COUNT: 6 IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_VM_LOG_DIR: '{{ devstack_base_dir }}/ironic-bm-logs' IRONIC_VM_SPECS_RAM: 384 IRONIC_VM_SPECS_DISK: 4 OVS_BRIDGE_MAPPINGS: 'mynetwork:brbm,public:br-infra' OVS_PHYSICAL_BRIDGE: brbm PHYSICAL_NETWORK: mynetwork PUBLIC_BRIDGE: br-infra Q_AGENT: openvswitch Q_ML2_TENANT_NETWORK_TYPE: vlan Q_PLUGIN: ml2 SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey TEMPEST_PLUGINS: "{{ ansible_user_dir }}/src/opendev.org/openstack/ironic-tempest-plugin" TENANT_VLAN_RANGE: 100:150 VIRT_DRIVER: ironic # We're using a lot of disk space in this job. 
Some testing nodes have # a small root partition, so use /opt which is mounted from a bigger # ephemeral partition on such nodes LIBVIRT_STORAGE_POOL_PATH: /opt/libvirt/images devstack_plugins: ironic: https://opendev.org/openstack/ironic networking-generic-switch: https://opendev.org/openstack/networking-generic-switch zuul_copy_output: '{{ devstack_base_dir }}/ironic-bm-logs': 'logs' '{{ devstack_base_dir }}/data/networking-generic-switch/netmiko_session.log': 'logs' devstack_services: c-api: False c-bak: False c-sch: False c-vol: False cinder: False s-account: True s-container: True s-object: True s-proxy: True dstat: True g-api: True g-reg: True key: True mysql: True n-api: True n-api-meta: True n-cauth: True n-cond: True n-cpu: True n-novnc: True n-obj: True n-sch: True placement-api: True q-agt: True q-dhcp: True q-l3: True q-meta: True q-metering: True q-svc: True rabbit: True group-vars: subnode: devstack_localrc: ENABLE_TENANT_TUNNELS: False ENABLE_TENANT_VLANS: True HOST_TOPOLOGY: multinode HOST_TOPOLOGY_ROLE: subnode IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_BAREMETAL_BASIC_OPS: True IRONIC_DEPLOY_DRIVER: ipmi IRONIC_DEFAULT_BOOT_OPTION: local IRONIC_ENABLED_NETWORK_INTERFACES: flat,neutron IRONIC_NETWORK_INTERFACE: neutron IRONIC_PROVISION_NETWORK_NAME: ironic-provision IRONIC_RAMDISK_TYPE: tinyipa IRONIC_USE_LINK_LOCAL: True IRONIC_VM_COUNT: 6 IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_VM_LOG_DIR: '{{ devstack_base_dir }}/ironic-bm-logs' IRONIC_VM_NETWORK_BRIDGE: sub1brbm IRONIC_VM_SPECS_RAM: 384 OVS_BRIDGE_MAPPINGS: 'mynetwork:sub1brbm,public:br-infra' OVS_PHYSICAL_BRIDGE: sub1brbm PHYSICAL_NETWORK: mynetwork Q_ML2_TENANT_NETWORK_TYPE: vlan VIRT_DRIVER: ironic PUBLIC_BRIDGE: br-infra LIBVIRT_STORAGE_POOL_PATH: /opt/libvirt/images devstack_services: c-api: False c-bak: False c-sch: False c-vol: False cinder: False q-agt: True n-cpu: True - job: name: ironic-tox-unit-with-driver-libs parent: tox description: | Run python 3 unit tests with driver dependencies 
installed. vars: tox_envlist: unit-with-driver-libs - job: name: ironic-inspector-tempest-discovery-fast-track description: ironic-inspector-tempest-discovery-fast-track parent: ironic-inspector-tempest-discovery vars: tempest_test_regex: BareMetalFastTrackTest devstack_localrc: IRONIC_INSPECTOR_POWER_OFF: False IRONIC_DEPLOY_FAST_TRACK: True IRONIC_DEPLOY_FAST_TRACK_CLEANING: True - job: name: ironic-tempest-ipa-partition-uefi-pxe-grub2 description: Ironic tempest scenario test utilizing PXE, UEFI, and Grub2 parent: ironic-base vars: devstack_localrc: IRONIC_ENABLED_HARDWARE_TYPES: ipmi IRONIC_ENABLED_BOOT_INTERFACES: pxe IRONIC_IPXE_ENABLED: False IRONIC_BOOT_MODE: uefi IRONIC_AUTOMATED_CLEAN_ENABLED: False IRONIC_DEFAULT_BOOT_OPTION: netboot - job: # Security testing for known issues name: ironic-tox-bandit parent: openstack-tox timeout: 2400 vars: tox_envlist: bandit required-projects: - openstack/ironic irrelevant-files: - ^.*\.rst$ - ^api-ref/.*$ - ^doc/.*$ - ^driver-requirements.txt$ - ^install-guide/.*$ - ^ironic/locale/.*$ - ^ironic/tests/.*$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tools/(?!bandit\.yml).*$ - ^tox.ini$ - job: name: ironic-tempest-ipa-wholedisk-bios-ipmi-direct-dib parent: ironic-base timeout: 9600 vars: tempest_test_timeout: 2400 devstack_services: s-account: True s-container: True s-object: True s-proxy: True devstack_localrc: IRONIC_DEFAULT_DEPLOY_INTERFACE: direct IRONIC_DIB_RAMDISK_OS: centos8 IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_TEMPEST_BUILD_TIMEOUT: 900 IRONIC_VM_EPHEMERAL_DISK: 0 IRONIC_VM_INTERFACE_COUNT: 1 IRONIC_AUTOMATED_CLEAN_ENABLED: False SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey # NOTE(rpittau): OLD TINYIPA JOBS # Those jobs are used by other projects, we leave them here until # we can convert them to dib. 
# Used by devstack/ironic/nova/neutron - job: name: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa description: ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa parent: ironic-base timeout: 5400 vars: devstack_localrc: IRONIC_DEFAULT_DEPLOY_INTERFACE: direct IRONIC_DEFAULT_RESCUE_INTERFACE: agent IRONIC_ENABLED_RESCUE_INTERFACES: "fake,agent,no-rescue" IRONIC_RAMDISK_TYPE: tinyipa IRONIC_VM_SPECS_RAM: 384 IRONIC_TEMPEST_WHOLE_DISK_IMAGE: True IRONIC_VM_EPHEMERAL_DISK: 0 SWIFT_ENABLE_TEMPURLS: True SWIFT_TEMPURL_KEY: secretkey devstack_services: s-account: True s-container: True s-object: True s-proxy: True ironic-15.0.0/.mailmap0000664000175000017500000000032113652514273014527 0ustar zuulzuul00000000000000# Format is: # # Joe Gordon Aeva Black ironic-15.0.0/ironic/0000775000175000017500000000000013652514443014374 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/db/0000775000175000017500000000000013652514443014761 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/db/api.py0000664000175000017500000013424313652514273016114 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Base classes for storage engines """ import abc from oslo_config import cfg from oslo_db import api as db_api _BACKEND_MAPPING = {'sqlalchemy': 'ironic.db.sqlalchemy.api'} IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING, lazy=True) def get_instance(): """Return a DB API instance.""" return IMPL class Connection(object, metaclass=abc.ABCMeta): """Base class for storage system connections.""" @abc.abstractmethod def __init__(self): """Constructor.""" @abc.abstractmethod def get_nodeinfo_list(self, columns=None, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): """Get specific columns for matching nodes. Return a list of the specified columns for all nodes that match the specified filters. :param columns: List of column names to return. Defaults to 'id' column when columns == None. :param filters: Filters to apply. Defaults to None. :associated: True | False :reserved: True | False :reserved_by_any_of: [conductor1, conductor2] :maintenance: True | False :retired: True | False :chassis_uuid: uuid of chassis :driver: driver's name :provision_state: provision state of node :provisioned_before: nodes with provision_updated_at field before this interval in seconds :param limit: Maximum number of nodes to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) :returns: A list of tuples of the specified columns. """ @abc.abstractmethod def get_node_list(self, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of nodes. :param filters: Filters to apply. Defaults to None. 
:associated: True | False :reserved: True | False :maintenance: True | False :chassis_uuid: uuid of chassis :driver: driver's name :provision_state: provision state of node :provisioned_before: nodes with provision_updated_at field before this interval in seconds :param limit: Maximum number of nodes to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) """ @abc.abstractmethod def check_node_list(self, idents): """Check a list of node identities and map it to UUIDs. This call takes a list of node names and/or UUIDs and tries to convert them to UUIDs. It fails early if any identities cannot possibly be used as names or UUIDs. :param idents: List of identities. :returns: A mapping from requested identities to node UUIDs. :raises: NodeNotFound if some identities were not found or cannot be valid names or UUIDs. """ @abc.abstractmethod def reserve_node(self, tag, node_id): """Reserve a node. To prevent other ManagerServices from manipulating the given Node while a Task is performed, mark it reserved by this host. :param tag: A string uniquely identifying the reservation holder. :param node_id: A node id or uuid. :returns: A Node object. :raises: NodeNotFound if the node is not found. :raises: NodeLocked if the node is already reserved. """ @abc.abstractmethod def release_node(self, tag, node_id): """Release the reservation on a node. :param tag: A string uniquely identifying the reservation holder. :param node_id: A node id or uuid. :raises: NodeNotFound if the node is not found. :raises: NodeLocked if the node is reserved by another host. :raises: NodeNotLocked if the node was found to not have a reservation at all. """ @abc.abstractmethod def create_node(self, values): """Create a new node.
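The `reserve_node` / `release_node` docstrings above define a tag-based locking contract. A hedged sketch of just that contract, under the assumption of a simple dict-backed store (the real backend enforces this with an atomic SQL update, not a Python dict):

```python
# Exception names follow the docstrings above; the classes themselves
# are illustrative stand-ins, not ironic.common.exception.
class NodeLocked(Exception):
    pass

class NodeNotLocked(Exception):
    pass

class Reservations:
    def __init__(self):
        self._held = {}  # node_id -> reservation holder tag

    def reserve_node(self, tag, node_id):
        # Docstring contract: NodeLocked if the node is already reserved.
        if node_id in self._held:
            raise NodeLocked(node_id)
        self._held[node_id] = tag
        return node_id

    def release_node(self, tag, node_id):
        holder = self._held.get(node_id)
        if holder is None:
            # No reservation at all.
            raise NodeNotLocked(node_id)
        if holder != tag:
            # Reserved by another host.
            raise NodeLocked(node_id)
        del self._held[node_id]

locks = Reservations()
locks.reserve_node('conductor-1', 'node-a')
try:
    locks.reserve_node('conductor-2', 'node-a')
except NodeLocked:
    print('node-a is already reserved')
```

In the real service this is what prevents two conductors from running tasks against the same node concurrently.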
:param values: A dict containing several items used to identify and track the node, and several dicts which are passed into the Drivers when managing this node. For example: :: { 'uuid': uuidutils.generate_uuid(), 'instance_uuid': None, 'power_state': states.POWER_OFF, 'provision_state': states.AVAILABLE, 'driver': 'ipmi', 'driver_info': { ... }, 'properties': { ... }, 'extra': { ... }, } :raises: InvalidParameterValue if 'values' contains 'tags' or 'traits'. :returns: A node. """ @abc.abstractmethod def get_node_by_id(self, node_id): """Return a node. :param node_id: The id of a node. :returns: A node. """ @abc.abstractmethod def get_node_by_uuid(self, node_uuid): """Return a node. :param node_uuid: The uuid of a node. :returns: A node. """ @abc.abstractmethod def get_node_by_name(self, node_name): """Return a node. :param node_name: The logical name of a node. :returns: A node. """ @abc.abstractmethod def get_node_by_instance(self, instance): """Return a node. :param instance: The instance uuid to search for. :returns: A node. :raises: InstanceNotFound if the instance is not found. :raises: InvalidUUID if the instance uuid is invalid. """ @abc.abstractmethod def destroy_node(self, node_id): """Destroy a node and its associated resources. Destroy a node, including any associated ports, port groups, tags, traits, volume connectors, and volume targets. :param node_id: The ID or UUID of a node. """ @abc.abstractmethod def update_node(self, node_id, values): """Update properties of a node. :param node_id: The id or uuid of a node. :param values: Dict of values to update. May be a partial list, e.g. when setting the properties for a driver. For example: :: { 'driver_info': { 'my-field-1': val1, 'my-field-2': val2, } } :returns: A node. :raises: NodeAssociated :raises: NodeNotFound """ @abc.abstractmethod def get_port_by_id(self, port_id): """Return a network port representation. :param port_id: The id of a port. :returns: A port.
""" @abc.abstractmethod def get_port_by_uuid(self, port_uuid): """Return a network port representation. :param port_uuid: The uuid of a port. :returns: A port. """ @abc.abstractmethod def get_port_by_address(self, address): """Return a network port representation. :param address: The MAC address of a port. :returns: A port. """ @abc.abstractmethod def get_port_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of ports. :param limit: Maximum number of ports to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) """ @abc.abstractmethod def get_ports_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the ports for a given node. :param node_id: The integer node ID. :param limit: Maximum number of ports to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) :returns: A list of ports. """ @abc.abstractmethod def get_ports_by_portgroup_id(self, portgroup_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the ports for a given portgroup. :param portgroup_id: The integer portgroup ID. :param limit: Maximum number of ports to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: Direction in which results should be sorted (asc, desc) :returns: A list of ports. """ @abc.abstractmethod def create_port(self, values): """Create a new port. :param values: Dict of values. """ @abc.abstractmethod def update_port(self, port_id, values): """Update properties of an port. :param port_id: The id or MAC of a port. 
:param values: Dict of values to update. :returns: A port. """ @abc.abstractmethod def destroy_port(self, port_id): """Destroy a port. :param port_id: The id or MAC of a port. """ @abc.abstractmethod def get_portgroup_by_id(self, portgroup_id): """Return a network portgroup representation. :param portgroup_id: The id of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_by_uuid(self, portgroup_uuid): """Return a network portgroup representation. :param portgroup_uuid: The uuid of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_by_address(self, address): """Return a network portgroup representation. :param address: The MAC address of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_by_name(self, name): """Return a network portgroup representation. :param name: The logical name of a portgroup. :returns: A portgroup. :raises: PortgroupNotFound """ @abc.abstractmethod def get_portgroup_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of portgroups. :param limit: Maximum number of portgroups to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: Direction in which results should be sorted. (asc, desc) :returns: A list of portgroups. """ @abc.abstractmethod def get_portgroups_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the portgroups for a given node. :param node_id: The integer node ID. :param limit: Maximum number of portgroups to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: Direction in which results should be sorted (asc, desc) :returns: A list of portgroups.
""" @abc.abstractmethod def create_portgroup(self, values): """Create a new portgroup. :param values: Dict of values with the following keys: 'id' 'uuid' 'name' 'node_id' 'address' 'extra' 'created_at' 'updated_at' :returns: A portgroup :raises: PortgroupDuplicateName :raises: PortgroupMACAlreadyExists :raises: PortgroupAlreadyExists """ @abc.abstractmethod def update_portgroup(self, portgroup_id, values): """Update properties of a portgroup. :param portgroup_id: The UUID or MAC of a portgroup. :param values: Dict of values to update. May contain the following keys: 'uuid' 'name' 'node_id' 'address' 'extra' 'created_at' 'updated_at' :returns: A portgroup. :raises: InvalidParameterValue :raises: PortgroupNotFound :raises: PortgroupDuplicateName :raises: PortgroupMACAlreadyExists """ @abc.abstractmethod def destroy_portgroup(self, portgroup_id): """Destroy a portgroup. :param portgroup_id: The UUID or MAC of a portgroup. :raises: PortgroupNotEmpty :raises: PortgroupNotFound """ @abc.abstractmethod def create_chassis(self, values): """Create a new chassis. :param values: Dict of values. """ @abc.abstractmethod def get_chassis_by_id(self, chassis_id): """Return a chassis representation. :param chassis_id: The id of a chassis. :returns: A chassis. """ @abc.abstractmethod def get_chassis_by_uuid(self, chassis_uuid): """Return a chassis representation. :param chassis_uuid: The uuid of a chassis. :returns: A chassis. """ @abc.abstractmethod def get_chassis_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of chassis. :param limit: Maximum number of chassis to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) """ @abc.abstractmethod def update_chassis(self, chassis_id, values): """Update properties of an chassis. :param chassis_id: The id or the uuid of a chassis. 
:param values: Dict of values to update. :returns: A chassis. """ @abc.abstractmethod def destroy_chassis(self, chassis_id): """Destroy a chassis. :param chassis_id: The id or the uuid of a chassis. """ @abc.abstractmethod def register_conductor(self, values, update_existing=False): """Register an active conductor with the cluster. :param values: A dict of values which must contain the following: :: { 'hostname': the unique hostname which identifies this Conductor service. 'drivers': a list of supported drivers. 'version': the version of the object.Conductor } :param update_existing: When false, registration will raise an exception when a conflicting online record is found. When true, will overwrite the existing record. Default: False. :returns: A conductor. :raises: ConductorAlreadyRegistered """ @abc.abstractmethod def get_conductor_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of conductors. :param limit: Maximum number of conductors to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) """ @abc.abstractmethod def get_conductor(self, hostname, online=True): """Retrieve a conductor's service record from the database. :param hostname: The hostname of the conductor service. :param online: Specify the filter value on the `online` field when querying conductors. The ``online`` field is ignored if this value is set to None. :returns: A conductor. :raises: ConductorNotFound if the conductor with given hostname does not exist or doesn't meet the specified online expectation. """ @abc.abstractmethod def unregister_conductor(self, hostname): """Remove this conductor from the service registry immediately. :param hostname: The hostname of this conductor service. 
:raises: ConductorNotFound """ @abc.abstractmethod def touch_conductor(self, hostname): """Mark a conductor as active by updating its 'updated_at' property. :param hostname: The hostname of this conductor service. :raises: ConductorNotFound """ @abc.abstractmethod def get_active_hardware_type_dict(self, use_groups=False): """Retrieve hardware types for the registered and active conductors. :param use_groups: Whether to factor conductor_group into the keys. :returns: A dict which maps hardware type names to the set of hosts which support them. For example: :: {hardware-type-a: set([host1, host2]), hardware-type-b: set([host2, host3])} """ @abc.abstractmethod def get_offline_conductors(self, field='hostname'): """Get a list of conductors that are offline (dead). :param field: A field to return, hostname by default. :returns: A list of requested fields of offline conductors. """ @abc.abstractmethod def get_online_conductors(self): """Get a list of conductor hostnames that are online and active. :returns: A list of conductor hostnames. """ @abc.abstractmethod def list_conductor_hardware_interfaces(self, conductor_id): """List all registered hardware interfaces for a conductor. :param conductor_id: Database ID of conductor. :returns: List of ``ConductorHardwareInterfaces`` objects. """ @abc.abstractmethod def list_hardware_type_interfaces(self, hardware_types): """List registered hardware interfaces for given hardware types. This is restricted to only active conductors. :param hardware_types: list of hardware types to filter by. :returns: list of ``ConductorHardwareInterfaces`` objects. """ @abc.abstractmethod def register_conductor_hardware_interfaces(self, conductor_id, hardware_type, interface_type, interfaces, default_interface): """Registers hardware interfaces for a conductor. :param conductor_id: Database ID of conductor to register for. :param hardware_type: Name of hardware type for the interfaces. :param interface_type: Type of interfaces, e.g. 'deploy' or 'boot'.
:param interfaces: List of interface names to register. :param default_interface: String, the default interface for this hardware type and interface type. :raises: ConductorHardwareInterfacesAlreadyRegistered if at least one of the interfaces in the combination of all parameters is already registered. """ @abc.abstractmethod def unregister_conductor_hardware_interfaces(self, conductor_id): """Unregisters all hardware interfaces for a conductor. :param conductor_id: Database ID of conductor to unregister for. """ @abc.abstractmethod def touch_node_provisioning(self, node_id): """Mark the node's provisioning as running. Mark the node's provisioning as running by updating its 'provision_updated_at' property. :param node_id: The id of a node. :raises: NodeNotFound """ @abc.abstractmethod def set_node_tags(self, node_id, tags): """Replace all of the node tags with specified list of tags. This ignores duplicate tags in the specified list. :param node_id: The id of a node. :param tags: List of tags. :returns: A list of NodeTag objects. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def unset_node_tags(self, node_id): """Remove all tags of the node. :param node_id: The id of a node. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def get_node_tags_by_node_id(self, node_id): """Get node tags based on its id. :param node_id: The id of a node. :returns: A list of NodeTag objects. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def add_node_tag(self, node_id, tag): """Add tag to the node. If the node_id and tag pair already exists, this should still succeed. :param node_id: The id of a node. :param tag: A tag string. :returns: the NodeTag object. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def delete_node_tag(self, node_id, tag): """Delete specified tag from the node. :param node_id: The id of a node. :param tag: A tag string. :raises: NodeNotFound if the node is not found. 
:raises: NodeTagNotFound if the tag is not found. """ @abc.abstractmethod def node_tag_exists(self, node_id, tag): """Check if the specified tag exist on the node. :param node_id: The id of a node. :param tag: A tag string. :returns: True if the tag exists otherwise False. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def get_node_by_port_addresses(self, addresses): """Find a node by any matching port address. :param addresses: list of port addresses (e.g. MACs). :returns: Node object. :raises: NodeNotFound if none or several nodes are found. """ @abc.abstractmethod def get_volume_connector_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of volume connectors. :param limit: Maximum number of volume connectors to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: Direction in which results should be sorted. (asc, desc) :returns: A list of volume connectors. :raises: InvalidParameterValue If sort_key does not exist. """ @abc.abstractmethod def get_volume_connector_by_id(self, db_id): """Return a volume connector representation. :param db_id: The integer database ID of a volume connector. :returns: A volume connector with the specified ID. :raises: VolumeConnectorNotFound If a volume connector with the specified ID is not found. """ @abc.abstractmethod def get_volume_connector_by_uuid(self, connector_uuid): """Return a volume connector representation. :param connector_uuid: The UUID of a connector. :returns: A volume connector with the specified UUID. :raises: VolumeConnectorNotFound If a volume connector with the specified UUID is not found. """ @abc.abstractmethod def get_volume_connectors_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the volume connectors for a given node. :param node_id: The integer node ID. 
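The tag methods above specify three behaviors worth noting: `set_node_tags` replaces the whole list while ignoring duplicates, `add_node_tag` succeeds even when the node/tag pair already exists, and `node_tag_exists` is a plain membership check. A hedged in-memory sketch of those semantics (the `NodeTagStore` class is an illustrative assumption; the real backend stores rows in a node_tags table):

```python
# Illustrative stand-in for the node-tag portion of the Connection API.
class NodeTagStore:
    def __init__(self):
        self._tags = {}  # node_id -> ordered, de-duplicated tag list

    def set_node_tags(self, node_id, tags):
        # dict.fromkeys de-duplicates while preserving order, matching
        # "ignores duplicate tags in the specified list".
        self._tags[node_id] = list(dict.fromkeys(tags))
        return self._tags[node_id]

    def add_node_tag(self, node_id, tag):
        tags = self._tags.setdefault(node_id, [])
        if tag not in tags:  # still succeeds if the pair already exists
            tags.append(tag)
        return tag

    def node_tag_exists(self, node_id, tag):
        return tag in self._tags.get(node_id, [])

store = NodeTagStore()
store.set_node_tags(1, ['gpu', 'ssd', 'gpu'])
print(store.node_tag_exists(1, 'gpu'), store.node_tag_exists(1, 'hdd'))
# True False
```

The trait methods that follow have the same shape, with the extra wrinkle of a per-node traits limit enforced via InvalidParameterValue.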
:param limit: Maximum number of volume connectors to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: Direction in which results should be sorted (asc, desc) :returns: A list of volume connectors. :raises: InvalidParameterValue If sort_key does not exist. """ @abc.abstractmethod def create_volume_connector(self, connector_info): """Create a new volume connector. :param connector_info: Dictionary containing information about the connector. Example:: { 'uuid': '000000-..', 'type': 'wwnn', 'connector_id': '00:01:02:03:04:05:06', 'node_id': 2 } :returns: A volume connector. :raises: VolumeConnectorTypeAndIdAlreadyExists If a connector already exists with a matching type and connector_id. :raises: VolumeConnectorAlreadyExists If a volume connector with the same UUID already exists. """ @abc.abstractmethod def update_volume_connector(self, ident, connector_info): """Update properties of a volume connector. :param ident: The UUID or integer ID of a volume connector. :param connector_info: Dictionary containing the information about connector to update. :returns: A volume connector. :raises: VolumeConnectorTypeAndIdAlreadyExists If another connector already exists with a matching type and connector_id field. :raises: VolumeConnectorNotFound If a volume connector with the specified ident does not exist. :raises: InvalidParameterValue When a UUID is included in connector_info. """ @abc.abstractmethod def destroy_volume_connector(self, ident): """Destroy a volume connector. :param ident: The UUID or integer ID of a volume connector. :raises: VolumeConnectorNotFound If a volume connector with the specified ident does not exist. """ @abc.abstractmethod def get_volume_target_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of volume targets. :param limit: Maximum number of volume targets to return. 
:param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: direction in which results should be sorted. (asc, desc) :returns: A list of volume targets. :raises: InvalidParameterValue if sort_key does not exist. """ @abc.abstractmethod def get_volume_target_by_id(self, db_id): """Return a volume target representation. :param db_id: The database primary key (integer) ID of a volume target. :returns: A volume target. :raises: VolumeTargetNotFound if no volume target with this ID exists. """ @abc.abstractmethod def get_volume_target_by_uuid(self, uuid): """Return a volume target representation. :param uuid: The UUID of a volume target. :returns: A volume target. :raises: VolumeTargetNotFound if no volume target with this UUID exists. """ @abc.abstractmethod def get_volume_targets_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the volume targets for a given node. :param node_id: The integer node ID. :param limit: Maximum number of volume targets to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) :returns: A list of volume targets. :raises: InvalidParameterValue if sort_key does not exist. """ @abc.abstractmethod def get_volume_targets_by_volume_id(self, volume_id, limit=None, marker=None, sort_key=None, sort_dir=None): """List all the volume targets for a given volume id. :param volume_id: The UUID of the volume. :param limit: Maximum number of volume targets to return. :param marker: the last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) :returns: A list of volume targets. 
:raises: InvalidParameterValue if sort_key does not exist. """ @abc.abstractmethod def create_volume_target(self, target_info): """Create a new volume target. :param target_info: Dictionary containing the information about the volume target. Example:: { 'uuid': '000000-..', 'node_id': 2, 'boot_index': 0, 'volume_id': '12345678-...' 'volume_type': 'some type', } :returns: A volume target. :raises: VolumeTargetBootIndexAlreadyExists if a volume target already exists with the same boot index and node ID. :raises: VolumeTargetAlreadyExists if a volume target with the same UUID exists. """ @abc.abstractmethod def update_volume_target(self, ident, target_info): """Update information for a volume target. :param ident: The UUID or integer ID of a volume target. :param target_info: Dictionary containing the information about volume target to update. :returns: A volume target. :raises: InvalidParameterValue if a UUID is included in target_info. :raises: VolumeTargetBootIndexAlreadyExists if a volume target already exists with the same boot index and node ID. :raises: VolumeTargetNotFound if no volume target with this ident exists. """ @abc.abstractmethod def destroy_volume_target(self, ident): """Destroy a volume target. :param ident: The UUID or integer ID of a volume target. :raises: VolumeTargetNotFound if a volume target with the specified ident does not exist. """ @abc.abstractmethod def get_not_versions(self, model_name, versions): """Returns objects with versions that are not the specified versions. :param model_name: the name of the model (class) of desired objects :param versions: list of versions of objects not to be returned :returns: list of the DB objects :raises: IronicException if there is no class associated with the name """ @abc.abstractmethod def check_versions(self, ignore_models=()): """Checks the whole database for incompatible objects. 
This scans all the tables in search of objects that are not supported; i.e., those that are not specified in `ironic.common.release_mappings.RELEASE_MAPPING`. :param ignore_models: List of model names to skip. :returns: A Boolean. True if all the objects have supported versions; False otherwise. """ @abc.abstractmethod def update_to_latest_versions(self, context, max_count): """Updates objects to their latest known versions. This scans all the tables and for objects that are not in their latest version, updates them to that version. :param context: the admin context :param max_count: The maximum number of objects to migrate. Must be >= 0. If zero, all the objects will be migrated. :returns: A 2-tuple, 1. the total number of objects that need to be migrated (at the beginning of this call) and 2. the number of migrated objects. """ @abc.abstractmethod def set_node_traits(self, node_id, traits, version): """Replace all of the node traits with specified list of traits. This ignores duplicate traits in the specified list. :param node_id: The id of a node. :param traits: List of traits. :param version: the version of the object.Trait. :returns: A list of NodeTrait objects. :raises: InvalidParameterValue if setting the traits would exceed the per-node traits limit. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def unset_node_traits(self, node_id): """Remove all traits of the node. :param node_id: The id of a node. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def get_node_traits_by_node_id(self, node_id): """Get node traits based on its id. :param node_id: The id of a node. :returns: A list of NodeTrait objects. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def add_node_trait(self, node_id, trait, version): """Add trait to the node. If the node_id and trait pair already exists, this should still succeed. :param node_id: The id of a node. :param trait: A trait string. 
:param version: the version of the object.Trait. :returns: the NodeTrait object. :raises: InvalidParameterValue if adding the trait would exceed the per-node traits limit. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def delete_node_trait(self, node_id, trait): """Delete specified trait from the node. :param node_id: The id of a node. :param trait: A trait string. :raises: NodeNotFound if the node is not found. :raises: NodeTraitNotFound if the trait is not found. """ @abc.abstractmethod def node_trait_exists(self, node_id, trait): """Check if the specified trait exists on the node. :param node_id: The id of a node. :param trait: A trait string. :returns: True if the trait exists otherwise False. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def create_bios_setting_list(self, node_id, settings, version): """Create a list of BIOSSetting records for a given node. :param node_id: The node id. :param settings: A list of BIOS Settings to be created. :: [ { 'name': String, 'value': String, }, { 'name': String, 'value': String, }, ... ] :param version: the version of the object.BIOSSetting. :returns: A list of BIOSSetting object. :raises: NodeNotFound if the node is not found. :raises: BIOSSettingAlreadyExists if any of the setting records already exists. """ @abc.abstractmethod def update_bios_setting_list(self, node_id, settings, version): """Update a list of BIOSSetting records. :param node_id: The node id. :param settings: A list of BIOS Settings to be updated. :: [ { 'name': String, 'value': String, }, { 'name': String, 'value': String, }, ... ] :param version: the version of the object.BIOSSetting. :returns: A list of BIOSSetting objects. :raises: NodeNotFound if the node is not found. :raises: BIOSSettingNotFound if any of the settings is not found. """ @abc.abstractmethod def delete_bios_setting_list(self, node_id, names): """Delete a list of BIOS settings. :param node_id: The node id. 
:param names: List of BIOS setting names to be deleted. :raises: NodeNotFound if the node is not found. :raises: BIOSSettingNotFound if any of the BIOS setting names is not found. """ @abc.abstractmethod def get_bios_setting(self, node_id, name): """Retrieve a BIOS setting value. :param node_id: The node id. :param name: String containing the name of the BIOS setting to be retrieved. :returns: The BIOSSetting object. :raises: NodeNotFound if the node is not found. :raises: BIOSSettingNotFound if the BIOS setting is not found. """ @abc.abstractmethod def get_bios_setting_list(self, node_id): """Retrieve the BIOS settings of a given node. :param node_id: The node id. :returns: A list of BIOSSetting objects. :raises: NodeNotFound if the node is not found. """ @abc.abstractmethod def get_allocation_by_id(self, allocation_id): """Return an allocation representation. :param allocation_id: The id of an allocation. :returns: An allocation. :raises: AllocationNotFound """ @abc.abstractmethod def get_allocation_by_uuid(self, allocation_uuid): """Return an allocation representation. :param allocation_uuid: The uuid of an allocation. :returns: An allocation. :raises: AllocationNotFound """ @abc.abstractmethod def get_allocation_by_name(self, name): """Return an allocation representation. :param name: The logical name of an allocation. :returns: An allocation. :raises: AllocationNotFound """ @abc.abstractmethod def get_allocation_list(self, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): """Return a list of allocations. :param filters: Filters to apply. Defaults to None. :node_uuid: uuid of node :state: allocation state :resource_class: requested resource class :param limit: Maximum number of allocations to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: Direction in which results should be sorted. (asc, desc) :returns: A list of allocations.
""" @abc.abstractmethod def create_allocation(self, values): """Create a new allocation. :param values: Dict of values to create an allocation with :returns: An allocation :raises: AllocationDuplicateName :raises: AllocationAlreadyExists """ @abc.abstractmethod def update_allocation(self, allocation_id, values, update_node=True): """Update properties of an allocation. :param allocation_id: Allocation ID :param values: Dict of values to update. :param update_node: If True and node_id is updated, update the node with instance_uuid and traits from the allocation :returns: An allocation. :raises: AllocationNotFound :raises: AllocationDuplicateName :raises: InstanceAssociated :raises: NodeAssociated """ @abc.abstractmethod def take_over_allocation(self, allocation_id, old_conductor_id, new_conductor_id): """Do a take over for an allocation. The allocation is only updated if the old conductor matches the provided value, thus guarding against races. :param allocation_id: Allocation ID :param old_conductor_id: The conductor ID we expect to be the current ``conductor_affinity`` of the allocation. :param new_conductor_id: The conductor ID of the new ``conductor_affinity``. :returns: True if the take over was successful, False otherwise. :raises: AllocationNotFound """ @abc.abstractmethod def destroy_allocation(self, allocation_id): """Destroy an allocation. :param allocation_id: Allocation ID :raises: AllocationNotFound """ @abc.abstractmethod def create_deploy_template(self, values): """Create a deployment template. :param values: A dict describing the deployment template. For example: :: { 'uuid': uuidutils.generate_uuid(), 'name': 'CUSTOM_DT1', } :raises: DeployTemplateDuplicateName if a deploy template with the same name exists. :raises: DeployTemplateAlreadyExists if a deploy template with the same UUID exists. :returns: A deploy template. """ @abc.abstractmethod def update_deploy_template(self, template_id, values): """Update a deployment template. 
:param template_id: ID of the deployment template to update. :param values: A dict describing the deployment template. For example: :: { 'uuid': uuidutils.generate_uuid(), 'name': 'CUSTOM_DT1', } :raises: DeployTemplateDuplicateName if a deploy template with the same name exists. :raises: DeployTemplateNotFound if the deploy template does not exist. :returns: A deploy template. """ @abc.abstractmethod def destroy_deploy_template(self, template_id): """Destroy a deployment template. :param template_id: ID of the deployment template to destroy. :raises: DeployTemplateNotFound if the deploy template does not exist. """ @abc.abstractmethod def get_deploy_template_by_id(self, template_id): """Retrieve a deployment template by ID. :param template_id: ID of the deployment template to retrieve. :raises: DeployTemplateNotFound if the deploy template does not exist. :returns: A deploy template. """ @abc.abstractmethod def get_deploy_template_by_uuid(self, template_uuid): """Retrieve a deployment template by UUID. :param template_uuid: UUID of the deployment template to retrieve. :raises: DeployTemplateNotFound if the deploy template does not exist. :returns: A deploy template. """ @abc.abstractmethod def get_deploy_template_by_name(self, template_name): """Retrieve a deployment template by name. :param template_name: name of the deployment template to retrieve. :raises: DeployTemplateNotFound if the deploy template does not exist. :returns: A deploy template. """ @abc.abstractmethod def get_deploy_template_list(self, limit=None, marker=None, sort_key=None, sort_dir=None): """Retrieve a list of deployment templates. :param limit: Maximum number of deploy templates to return. :param marker: The last item of the previous page; we return the next result set. :param sort_key: Attribute by which results should be sorted. :param sort_dir: Direction in which results should be sorted. (asc, desc) :returns: A list of deploy templates. 
""" @abc.abstractmethod def get_deploy_template_list_by_names(self, names): """Return a list of deployment templates with one of a list of names. :param names: List of names to filter by. :returns: A list of deploy templates. """ ironic-15.0.0/ironic/db/migration.py0000664000175000017500000000303013652514273017321 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Database setup and migration commands.""" from oslo_config import cfg from stevedore import driver _IMPL = None def get_backend(): global _IMPL if not _IMPL: cfg.CONF.import_opt('backend', 'oslo_db.options', group='database') _IMPL = driver.DriverManager("ironic.database.migration_backend", cfg.CONF.database.backend).driver return _IMPL def upgrade(version=None): """Migrate the database to `version` or the most recent version.""" return get_backend().upgrade(version) def version(): return get_backend().version() def stamp(version): return get_backend().stamp(version) def revision(message, autogenerate): return get_backend().revision(message, autogenerate) def create_schema(): return get_backend().create_schema() ironic-15.0.0/ironic/db/sqlalchemy/0000775000175000017500000000000013652514443017123 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/db/sqlalchemy/api.py0000664000175000017500000023652713652514273020266 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""SQLAlchemy storage backend.""" import collections import datetime import json import threading from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import utils as db_utils from oslo_log import log from oslo_utils import netutils from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import uuidutils from osprofiler import sqlalchemy as osp_sqlalchemy import sqlalchemy as sa from sqlalchemy.orm.exc import NoResultFound, MultipleResultsFound from sqlalchemy.orm import joinedload from sqlalchemy import sql from ironic.common import exception from ironic.common.i18n import _ from ironic.common import profiler from ironic.common import release_mappings from ironic.common import states from ironic.common import utils from ironic.conf import CONF from ironic.db import api from ironic.db.sqlalchemy import models LOG = log.getLogger(__name__) _CONTEXT = threading.local() # NOTE(mgoddard): We limit the number of traits per node to 50 as this is the # maximum number of traits per resource provider allowed in placement. MAX_TRAITS_PER_NODE = 50 def get_backend(): """The backend is this module itself.""" return Connection() def _session_for_read(): return _wrap_session(enginefacade.reader.using(_CONTEXT)) # Please add @oslo_db_api.retry_on_deadlock decorator to all methods using # _session_for_write (as deadlocks happen on write), so that oslo_db is able # to retry in case of deadlocks. def _session_for_write(): return _wrap_session(enginefacade.writer.using(_CONTEXT)) def _wrap_session(session): if CONF.profiler.enabled and CONF.profiler.trace_sqlalchemy: session = osp_sqlalchemy.wrap_session(sa, session) return session def _get_node_query_with_all(): """Return a query object for the Node joined with all relevant fields. :returns: a query object. 
""" return (model_query(models.Node) .options(joinedload('tags')) .options(joinedload('traits'))) def _get_deploy_template_query_with_steps(): """Return a query object for the DeployTemplate joined with steps. :returns: a query object. """ return model_query(models.DeployTemplate).options(joinedload('steps')) def model_query(model, *args, **kwargs): """Query helper for simpler session usage. :param session: if present, the session to use """ with _session_for_read() as session: query = session.query(model, *args) return query def add_identity_filter(query, value): """Adds an identity filter to a query. Filters results by ID, if supplied value is a valid integer. Otherwise attempts to filter results by UUID. :param query: Initial query to add filter to. :param value: Value for filtering results by. :return: Modified query. """ if strutils.is_int_like(value): return query.filter_by(id=value) elif uuidutils.is_uuid_like(value): return query.filter_by(uuid=value) else: raise exception.InvalidIdentity(identity=value) def add_port_filter(query, value): """Adds a port-specific filter to a query. Filters results by address, if supplied value is a valid MAC address. Otherwise attempts to filter results by identity. :param query: Initial query to add filter to. :param value: Value for filtering results by. :return: Modified query. """ if netutils.is_valid_mac(value): return query.filter_by(address=value) else: return add_identity_filter(query, value) def add_port_filter_by_node(query, value): if strutils.is_int_like(value): return query.filter_by(node_id=value) else: query = query.join(models.Node, models.Port.node_id == models.Node.id) return query.filter(models.Node.uuid == value) def add_port_filter_by_node_owner(query, value): query = query.join(models.Node, models.Port.node_id == models.Node.id) return query.filter(models.Node.owner == value) def add_portgroup_filter(query, value): """Adds a portgroup-specific filter to a query. 
Filters results by address, if supplied value is a valid MAC address. Otherwise attempts to filter results by identity. :param query: Initial query to add filter to. :param value: Value for filtering results by. :return: Modified query. """ if netutils.is_valid_mac(value): return query.filter_by(address=value) else: return add_identity_filter(query, value) def add_portgroup_filter_by_node(query, value): if strutils.is_int_like(value): return query.filter_by(node_id=value) else: query = query.join(models.Node, models.Portgroup.node_id == models.Node.id) return query.filter(models.Node.uuid == value) def add_port_filter_by_portgroup(query, value): if strutils.is_int_like(value): return query.filter_by(portgroup_id=value) else: query = query.join(models.Portgroup, models.Port.portgroup_id == models.Portgroup.id) return query.filter(models.Portgroup.uuid == value) def add_node_filter_by_chassis(query, value): if strutils.is_int_like(value): return query.filter_by(chassis_id=value) else: query = query.join(models.Chassis, models.Node.chassis_id == models.Chassis.id) return query.filter(models.Chassis.uuid == value) def add_allocation_filter_by_node(query, value): if strutils.is_int_like(value): return query.filter_by(node_id=value) else: query = query.join(models.Node, models.Allocation.node_id == models.Node.id) return query.filter(models.Node.uuid == value) def add_allocation_filter_by_conductor(query, value): if strutils.is_int_like(value): return query.filter_by(conductor_affinity=value) else: # Assume hostname and join with the conductor table query = query.join( models.Conductor, models.Allocation.conductor_affinity == models.Conductor.id) return query.filter(models.Conductor.hostname == value) def _paginate_query(model, limit=None, marker=None, sort_key=None, sort_dir=None, query=None): if not query: query = model_query(model) sort_keys = ['id'] if sort_key and sort_key not in sort_keys: sort_keys.insert(0, sort_key) try: query = db_utils.paginate_query(query, 
model, limit, sort_keys, marker=marker, sort_dir=sort_dir) except db_exc.InvalidSortKey: raise exception.InvalidParameterValue( _('The sort_key value "%(key)s" is an invalid field for sorting') % {'key': sort_key}) return query.all() def _filter_active_conductors(query, interval=None): if interval is None: interval = CONF.conductor.heartbeat_timeout limit = timeutils.utcnow() - datetime.timedelta(seconds=interval) query = (query.filter(models.Conductor.online.is_(True)) .filter(models.Conductor.updated_at >= limit)) return query def _zip_matching(a, b, key): """Zip two unsorted lists, yielding matching items or None. Each zipped item is a tuple taking one of three forms: (a[i], b[j]) if a[i] and b[j] are equal. (a[i], None) if a[i] is less than b[j] or b is empty. (None, b[j]) if a[i] is greater than b[j] or a is empty. Note that the returned list may be longer than either of the two lists. Adapted from https://stackoverflow.com/a/11426702. :param a: the first list. :param b: the second list. :param key: a function that generates a key used to compare items. """ a = collections.deque(sorted(a, key=key)) b = collections.deque(sorted(b, key=key)) while a and b: k_a = key(a[0]) k_b = key(b[0]) if k_a == k_b: yield a.popleft(), b.popleft() elif k_a < k_b: yield a.popleft(), None else: yield None, b.popleft() # Consume any remaining items in each deque. 
for i in a: yield i, None for i in b: yield None, i @profiler.trace_cls("db_api") class Connection(api.Connection): """SqlAlchemy connection.""" _NODE_QUERY_FIELDS = {'console_enabled', 'maintenance', 'retired', 'driver', 'resource_class', 'provision_state', 'uuid', 'id', 'fault', 'conductor_group', 'owner', 'lessee'} _NODE_IN_QUERY_FIELDS = {'%s_in' % field: field for field in ('uuid', 'provision_state')} _NODE_NON_NULL_FILTERS = {'associated': 'instance_uuid', 'reserved': 'reservation', 'with_power_state': 'power_state'} _NODE_FILTERS = ({'chassis_uuid', 'reserved_by_any_of', 'provisioned_before', 'inspection_started_before', 'description_contains', 'project'} | _NODE_QUERY_FIELDS | set(_NODE_IN_QUERY_FIELDS) | set(_NODE_NON_NULL_FILTERS)) def __init__(self): pass def _validate_nodes_filters(self, filters): if filters is None: filters = dict() unsupported_filters = set(filters).difference(self._NODE_FILTERS) if unsupported_filters: msg = _("SqlAlchemy API does not support " "filtering by %s") % ', '.join(unsupported_filters) raise ValueError(msg) return filters def _add_nodes_filters(self, query, filters): filters = self._validate_nodes_filters(filters) for field in self._NODE_QUERY_FIELDS: if field in filters: query = query.filter_by(**{field: filters[field]}) for key, field in self._NODE_IN_QUERY_FIELDS.items(): if key in filters: query = query.filter( getattr(models.Node, field).in_(filters[key])) for key, field in self._NODE_NON_NULL_FILTERS.items(): if key in filters: column = getattr(models.Node, field) if filters[key]: query = query.filter(column != sql.null()) else: query = query.filter(column == sql.null()) if 'chassis_uuid' in filters: # get_chassis_by_uuid() to raise an exception if the chassis # is not found chassis_obj = self.get_chassis_by_uuid(filters['chassis_uuid']) query = query.filter_by(chassis_id=chassis_obj.id) if 'reserved_by_any_of' in filters: query = query.filter(models.Node.reservation.in_( filters['reserved_by_any_of'])) if 
'provisioned_before' in filters: limit = (timeutils.utcnow() - datetime.timedelta( seconds=filters['provisioned_before'])) query = query.filter(models.Node.provision_updated_at < limit) if 'inspection_started_before' in filters: limit = ((timeutils.utcnow()) - (datetime.timedelta( seconds=filters['inspection_started_before']))) query = query.filter(models.Node.inspection_started_at < limit) if 'description_contains' in filters: keyword = filters['description_contains'] if keyword is not None: query = query.filter( models.Node.description.like(r'%{}%'.format(keyword))) if 'project' in filters: project = filters['project'] query = query.filter((models.Node.owner == project) | (models.Node.lessee == project)) return query def _add_allocations_filters(self, query, filters): if filters is None: filters = dict() supported_filters = {'state', 'resource_class', 'node_uuid', 'conductor_affinity', 'owner'} unsupported_filters = set(filters).difference(supported_filters) if unsupported_filters: msg = _("SqlAlchemy API does not support " "filtering by %s") % ', '.join(unsupported_filters) raise ValueError(msg) try: node_uuid = filters.pop('node_uuid') except KeyError: pass else: query = add_allocation_filter_by_node(query, node_uuid) try: conductor = filters.pop('conductor_affinity') except KeyError: pass else: query = add_allocation_filter_by_conductor(query, conductor) if filters: query = query.filter_by(**filters) return query def get_nodeinfo_list(self, columns=None, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): # list-ify columns default values because it is bad form # to include a mutable list in function definitions. 
if columns is None: columns = [models.Node.id] else: columns = [getattr(models.Node, c) for c in columns] query = model_query(*columns, base_model=models.Node) query = self._add_nodes_filters(query, filters) return _paginate_query(models.Node, limit, marker, sort_key, sort_dir, query) def get_node_list(self, filters=None, limit=None, marker=None, sort_key=None, sort_dir=None): query = _get_node_query_with_all() query = self._add_nodes_filters(query, filters) return _paginate_query(models.Node, limit, marker, sort_key, sort_dir, query) def check_node_list(self, idents): mapping = {} if idents: idents = set(idents) else: return mapping uuids = {i for i in idents if uuidutils.is_uuid_like(i)} names = {i for i in idents if not uuidutils.is_uuid_like(i) and utils.is_valid_logical_name(i)} missing = idents - set(uuids) - set(names) if missing: # Such nodes cannot exist, bailing out early raise exception.NodeNotFound( _("Nodes cannot be found: %s") % ', '.join(missing)) query = model_query(models.Node.uuid, models.Node.name).filter( sql.or_(models.Node.uuid.in_(uuids), models.Node.name.in_(names)) ) for row in query: if row[0] in idents: mapping[row[0]] = row[0] if row[1] and row[1] in idents: mapping[row[1]] = row[0] missing = idents - set(mapping) if missing: raise exception.NodeNotFound( _("Nodes cannot be found: %s") % ', '.join(missing)) return mapping @oslo_db_api.retry_on_deadlock def reserve_node(self, tag, node_id): with _session_for_write(): query = _get_node_query_with_all() query = add_identity_filter(query, node_id) # be optimistic and assume we usually create a reservation count = query.filter_by(reservation=None).update( {'reservation': tag}, synchronize_session=False) try: node = query.one() if count != 1: # Nothing updated and node exists. Must already be # locked. 
raise exception.NodeLocked(node=node.uuid, host=node['reservation']) return node except NoResultFound: raise exception.NodeNotFound(node=node_id) @oslo_db_api.retry_on_deadlock def release_node(self, tag, node_id): with _session_for_write(): query = model_query(models.Node) query = add_identity_filter(query, node_id) # be optimistic and assume we usually release a reservation count = query.filter_by(reservation=tag).update( {'reservation': None}, synchronize_session=False) try: if count != 1: node = query.one() if node['reservation'] is None: raise exception.NodeNotLocked(node=node.uuid) else: raise exception.NodeLocked(node=node.uuid, host=node['reservation']) except NoResultFound: raise exception.NodeNotFound(node=node_id) @oslo_db_api.retry_on_deadlock def create_node(self, values): # ensure defaults are present for new nodes if 'uuid' not in values: values['uuid'] = uuidutils.generate_uuid() if 'power_state' not in values: values['power_state'] = states.NOSTATE if 'provision_state' not in values: values['provision_state'] = states.ENROLL # TODO(zhenguo): Support creating node with tags if 'tags' in values: msg = _("Cannot create node with tags.") raise exception.InvalidParameterValue(err=msg) # TODO(mgoddard): Support creating node with traits if 'traits' in values: msg = _("Cannot create node with traits.") raise exception.InvalidParameterValue(err=msg) node = models.Node() node.update(values) with _session_for_write() as session: try: session.add(node) session.flush() except db_exc.DBDuplicateEntry as exc: if 'name' in exc.columns: raise exception.DuplicateName(name=values['name']) elif 'instance_uuid' in exc.columns: raise exception.InstanceAssociated( instance_uuid=values['instance_uuid'], node=values['uuid']) raise exception.NodeAlreadyExists(uuid=values['uuid']) # Set tags & traits to [] for new created node # NOTE(mgoddard): We need to set the tags and traits fields in the # session context, otherwise SQLAlchemy will try and fail to lazy # load the 
attributes, resulting in an exception being raised. node['tags'] = [] node['traits'] = [] return node def get_node_by_id(self, node_id): query = _get_node_query_with_all() query = query.filter_by(id=node_id) try: return query.one() except NoResultFound: raise exception.NodeNotFound(node=node_id) def get_node_by_uuid(self, node_uuid): query = _get_node_query_with_all() query = query.filter_by(uuid=node_uuid) try: return query.one() except NoResultFound: raise exception.NodeNotFound(node=node_uuid) def get_node_by_name(self, node_name): query = _get_node_query_with_all() query = query.filter_by(name=node_name) try: return query.one() except NoResultFound: raise exception.NodeNotFound(node=node_name) def get_node_by_instance(self, instance): if not uuidutils.is_uuid_like(instance): raise exception.InvalidUUID(uuid=instance) query = _get_node_query_with_all() query = query.filter_by(instance_uuid=instance) try: result = query.one() except NoResultFound: raise exception.InstanceNotFound(instance=instance) return result @oslo_db_api.retry_on_deadlock def destroy_node(self, node_id): with _session_for_write(): query = model_query(models.Node) query = add_identity_filter(query, node_id) try: node_ref = query.one() except NoResultFound: raise exception.NodeNotFound(node=node_id) # Get node ID, if an UUID was supplied. The ID is # required for deleting all ports, attached to the node. 
if uuidutils.is_uuid_like(node_id): node_id = node_ref['id'] port_query = model_query(models.Port) port_query = add_port_filter_by_node(port_query, node_id) port_query.delete() portgroup_query = model_query(models.Portgroup) portgroup_query = add_portgroup_filter_by_node(portgroup_query, node_id) portgroup_query.delete() # Delete all tags attached to the node tag_query = model_query(models.NodeTag).filter_by(node_id=node_id) tag_query.delete() # Delete all traits attached to the node trait_query = model_query( models.NodeTrait).filter_by(node_id=node_id) trait_query.delete() volume_connector_query = model_query( models.VolumeConnector).filter_by(node_id=node_id) volume_connector_query.delete() volume_target_query = model_query( models.VolumeTarget).filter_by(node_id=node_id) volume_target_query.delete() # delete all bios attached to the node bios_settings_query = model_query( models.BIOSSetting).filter_by(node_id=node_id) bios_settings_query.delete() # delete all allocations for this node allocation_query = model_query( models.Allocation).filter_by(node_id=node_id) allocation_query.delete() query.delete() def update_node(self, node_id, values): # NOTE(dtantsur): this can lead to very strange errors if 'uuid' in values: msg = _("Cannot overwrite UUID for an existing Node.") raise exception.InvalidParameterValue(err=msg) try: return self._do_update_node(node_id, values) except db_exc.DBDuplicateEntry as e: if 'name' in e.columns: raise exception.DuplicateName(name=values['name']) elif 'uuid' in e.columns: raise exception.NodeAlreadyExists(uuid=values['uuid']) elif 'instance_uuid' in e.columns: raise exception.InstanceAssociated( instance_uuid=values['instance_uuid'], node=node_id) else: raise @oslo_db_api.retry_on_deadlock def _do_update_node(self, node_id, values): with _session_for_write(): # NOTE(mgoddard): Don't issue a joined query for the update as this # does not work with PostgreSQL. 
query = model_query(models.Node) query = add_identity_filter(query, node_id) try: ref = query.with_for_update().one() except NoResultFound: raise exception.NodeNotFound(node=node_id) if 'provision_state' in values: values['provision_updated_at'] = timeutils.utcnow() if values['provision_state'] == states.INSPECTING: values['inspection_started_at'] = timeutils.utcnow() values['inspection_finished_at'] = None elif (ref.provision_state == states.INSPECTING and values['provision_state'] == states.MANAGEABLE): values['inspection_finished_at'] = timeutils.utcnow() values['inspection_started_at'] = None elif (ref.provision_state == states.INSPECTING and values['provision_state'] == states.INSPECTFAIL): values['inspection_started_at'] = None ref.update(values) # Return the updated node model joined with all relevant fields. query = _get_node_query_with_all() query = add_identity_filter(query, node_id) return query.one() def get_port_by_id(self, port_id): query = model_query(models.Port).filter_by(id=port_id) try: return query.one() except NoResultFound: raise exception.PortNotFound(port=port_id) def get_port_by_uuid(self, port_uuid): query = model_query(models.Port).filter_by(uuid=port_uuid) try: return query.one() except NoResultFound: raise exception.PortNotFound(port=port_uuid) def get_port_by_address(self, address, owner=None): query = model_query(models.Port).filter_by(address=address) if owner: query = add_port_filter_by_node_owner(query, owner) try: return query.one() except NoResultFound: raise exception.PortNotFound(port=address) def get_port_list(self, limit=None, marker=None, sort_key=None, sort_dir=None, owner=None): query = model_query(models.Port) if owner: query = add_port_filter_by_node_owner(query, owner) return _paginate_query(models.Port, limit, marker, sort_key, sort_dir, query) def get_ports_by_node_id(self, node_id, limit=None, marker=None, sort_key=None, sort_dir=None, owner=None): query = model_query(models.Port) query = 
query.filter_by(node_id=node_id) if owner: query = add_port_filter_by_node_owner(query, owner) return _paginate_query(models.Port, limit, marker, sort_key, sort_dir, query) def get_ports_by_portgroup_id(self, portgroup_id, limit=None, marker=None, sort_key=None, sort_dir=None, owner=None): query = model_query(models.Port) query = query.filter_by(portgroup_id=portgroup_id) if owner: query = add_port_filter_by_node_owner(query, owner) return _paginate_query(models.Port, limit, marker, sort_key, sort_dir, query) @oslo_db_api.retry_on_deadlock def create_port(self, values): if not values.get('uuid'): values['uuid'] = uuidutils.generate_uuid() port = models.Port() port.update(values) with _session_for_write() as session: try: session.add(port) session.flush() except db_exc.DBDuplicateEntry as exc: if 'address' in exc.columns: raise exception.MACAlreadyExists(mac=values['address']) raise exception.PortAlreadyExists(uuid=values['uuid']) return port @oslo_db_api.retry_on_deadlock def update_port(self, port_id, values): # NOTE(dtantsur): this can lead to very strange errors if 'uuid' in values: msg = _("Cannot overwrite UUID for an existing Port.") raise exception.InvalidParameterValue(err=msg) try: with _session_for_write() as session: query = model_query(models.Port) query = add_port_filter(query, port_id) ref = query.one() ref.update(values) session.flush() except NoResultFound: raise exception.PortNotFound(port=port_id) except db_exc.DBDuplicateEntry: raise exception.MACAlreadyExists(mac=values['address']) return ref @oslo_db_api.retry_on_deadlock def destroy_port(self, port_id): with _session_for_write(): query = model_query(models.Port) query = add_port_filter(query, port_id) count = query.delete() if count == 0: raise exception.PortNotFound(port=port_id) def get_portgroup_by_id(self, portgroup_id): query = model_query(models.Portgroup).filter_by(id=portgroup_id) try: return query.one() except NoResultFound: raise exception.PortgroupNotFound(portgroup=portgroup_id) 
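The `get_*_by_*` accessors above all share one lookup idiom: run a filtered query, fetch the single row with `one()`, and translate SQLAlchemy's `NoResultFound` into a domain-specific not-found exception. A self-contained sketch of that idiom, where `FakeQuery` and the exception classes are stand-ins for the real SQLAlchemy query and `ironic.common.exception` types:

```python
class NoResultFound(Exception):
    """Stand-in for sqlalchemy.orm.exc.NoResultFound."""


class PortgroupNotFound(Exception):
    """Stand-in for ironic.common.exception.PortgroupNotFound."""

    def __init__(self, portgroup):
        super().__init__('Portgroup %s could not be found.' % portgroup)


class FakeQuery:
    """Minimal query double: one() returns the sole row or raises."""

    def __init__(self, rows):
        self._rows = rows

    def one(self):
        if len(self._rows) != 1:
            raise NoResultFound()
        return self._rows[0]


def get_portgroup_by_uuid(query, portgroup_uuid):
    # The idiom: let the ORM signal emptiness, re-raise in domain terms so
    # callers never depend on SQLAlchemy exception types.
    try:
        return query.one()
    except NoResultFound:
        raise PortgroupNotFound(portgroup=portgroup_uuid)
```

Keeping the translation at the DB-API boundary means conductor-level code catches only ironic exceptions, which is what lets the backend remain swappable behind `ironic.db.api.Connection`.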
    def get_portgroup_by_uuid(self, portgroup_uuid):
        query = model_query(models.Portgroup).filter_by(uuid=portgroup_uuid)
        try:
            return query.one()
        except NoResultFound:
            raise exception.PortgroupNotFound(portgroup=portgroup_uuid)

    def get_portgroup_by_address(self, address):
        query = model_query(models.Portgroup).filter_by(address=address)
        try:
            return query.one()
        except NoResultFound:
            raise exception.PortgroupNotFound(portgroup=address)

    def get_portgroup_by_name(self, name):
        query = model_query(models.Portgroup).filter_by(name=name)
        try:
            return query.one()
        except NoResultFound:
            raise exception.PortgroupNotFound(portgroup=name)

    def get_portgroup_list(self, limit=None, marker=None,
                           sort_key=None, sort_dir=None):
        return _paginate_query(models.Portgroup, limit, marker,
                               sort_key, sort_dir)

    def get_portgroups_by_node_id(self, node_id, limit=None, marker=None,
                                  sort_key=None, sort_dir=None):
        query = model_query(models.Portgroup)
        query = query.filter_by(node_id=node_id)
        return _paginate_query(models.Portgroup, limit, marker,
                               sort_key, sort_dir, query)

    @oslo_db_api.retry_on_deadlock
    def create_portgroup(self, values):
        if not values.get('uuid'):
            values['uuid'] = uuidutils.generate_uuid()
        if not values.get('mode'):
            values['mode'] = CONF.default_portgroup_mode

        portgroup = models.Portgroup()
        portgroup.update(values)
        with _session_for_write() as session:
            try:
                session.add(portgroup)
                session.flush()
            except db_exc.DBDuplicateEntry as exc:
                if 'name' in exc.columns:
                    raise exception.PortgroupDuplicateName(name=values['name'])
                elif 'address' in exc.columns:
                    raise exception.PortgroupMACAlreadyExists(
                        mac=values['address'])
                raise exception.PortgroupAlreadyExists(uuid=values['uuid'])
            return portgroup

    @oslo_db_api.retry_on_deadlock
    def update_portgroup(self, portgroup_id, values):
        if 'uuid' in values:
            msg = _("Cannot overwrite UUID for an existing portgroup.")
            raise exception.InvalidParameterValue(err=msg)

        with _session_for_write() as session:
            try:
                query = model_query(models.Portgroup)
                query = add_portgroup_filter(query, portgroup_id)
                ref = query.one()
                ref.update(values)
                session.flush()
            except NoResultFound:
                raise exception.PortgroupNotFound(portgroup=portgroup_id)
            except db_exc.DBDuplicateEntry as exc:
                if 'name' in exc.columns:
                    raise exception.PortgroupDuplicateName(name=values['name'])
                elif 'address' in exc.columns:
                    raise exception.PortgroupMACAlreadyExists(
                        mac=values['address'])
                else:
                    raise
            return ref

    @oslo_db_api.retry_on_deadlock
    def destroy_portgroup(self, portgroup_id):
        def portgroup_not_empty(session):
            """Checks whether the portgroup does not have ports."""
            query = model_query(models.Port)
            query = add_port_filter_by_portgroup(query, portgroup_id)
            return query.count() != 0

        with _session_for_write() as session:
            if portgroup_not_empty(session):
                raise exception.PortgroupNotEmpty(portgroup=portgroup_id)

            query = model_query(models.Portgroup, session=session)
            query = add_identity_filter(query, portgroup_id)

            count = query.delete()
            if count == 0:
                raise exception.PortgroupNotFound(portgroup=portgroup_id)

    def get_chassis_by_id(self, chassis_id):
        query = model_query(models.Chassis).filter_by(id=chassis_id)
        try:
            return query.one()
        except NoResultFound:
            raise exception.ChassisNotFound(chassis=chassis_id)

    def get_chassis_by_uuid(self, chassis_uuid):
        query = model_query(models.Chassis).filter_by(uuid=chassis_uuid)
        try:
            return query.one()
        except NoResultFound:
            raise exception.ChassisNotFound(chassis=chassis_uuid)

    def get_chassis_list(self, limit=None, marker=None,
                         sort_key=None, sort_dir=None):
        return _paginate_query(models.Chassis, limit, marker,
                               sort_key, sort_dir)

    @oslo_db_api.retry_on_deadlock
    def create_chassis(self, values):
        if not values.get('uuid'):
            values['uuid'] = uuidutils.generate_uuid()

        chassis = models.Chassis()
        chassis.update(values)
        with _session_for_write() as session:
            try:
                session.add(chassis)
                session.flush()
            except db_exc.DBDuplicateEntry:
                raise exception.ChassisAlreadyExists(uuid=values['uuid'])
            return chassis

    @oslo_db_api.retry_on_deadlock
    def update_chassis(self, chassis_id, values):
        # NOTE(dtantsur): this can lead to very strange errors
        if 'uuid' in values:
            msg = _("Cannot overwrite UUID for an existing Chassis.")
            raise exception.InvalidParameterValue(err=msg)

        with _session_for_write():
            query = model_query(models.Chassis)
            query = add_identity_filter(query, chassis_id)

            count = query.update(values)
            if count != 1:
                raise exception.ChassisNotFound(chassis=chassis_id)
            ref = query.one()
        return ref

    @oslo_db_api.retry_on_deadlock
    def destroy_chassis(self, chassis_id):
        def chassis_not_empty():
            """Checks whether the chassis does not have nodes."""
            query = model_query(models.Node)
            query = add_node_filter_by_chassis(query, chassis_id)
            return query.count() != 0

        with _session_for_write():
            if chassis_not_empty():
                raise exception.ChassisNotEmpty(chassis=chassis_id)

            query = model_query(models.Chassis)
            query = add_identity_filter(query, chassis_id)

            count = query.delete()
            if count != 1:
                raise exception.ChassisNotFound(chassis=chassis_id)

    @oslo_db_api.retry_on_deadlock
    def register_conductor(self, values, update_existing=False):
        with _session_for_write() as session:
            query = (model_query(models.Conductor)
                     .filter_by(hostname=values['hostname']))
            try:
                ref = query.one()
                if ref.online is True and not update_existing:
                    raise exception.ConductorAlreadyRegistered(
                        conductor=values['hostname'])
            except NoResultFound:
                ref = models.Conductor()
                session.add(ref)
            ref.update(values)
            # always set online and updated_at fields when registering
            # a conductor, especially when updating an existing one
            ref.update({'updated_at': timeutils.utcnow(),
                        'online': True})
            return ref

    def get_conductor_list(self, limit=None, marker=None,
                           sort_key=None, sort_dir=None):
        return _paginate_query(models.Conductor, limit, marker,
                               sort_key, sort_dir)

    def get_conductor(self, hostname, online=True):
        try:
            query = model_query(models.Conductor).filter_by(hostname=hostname)
            if online is not None:
                query = query.filter_by(online=online)
            return query.one()
        except NoResultFound:
            raise exception.ConductorNotFound(conductor=hostname)

    @oslo_db_api.retry_on_deadlock
    def unregister_conductor(self, hostname):
        with _session_for_write():
            query = (model_query(models.Conductor)
                     .filter_by(hostname=hostname, online=True))
            count = query.update({'online': False})
            if count == 0:
                raise exception.ConductorNotFound(conductor=hostname)

    @oslo_db_api.retry_on_deadlock
    def touch_conductor(self, hostname):
        with _session_for_write():
            query = (model_query(models.Conductor)
                     .filter_by(hostname=hostname))
            # since we're not changing any other field, manually set updated_at
            # and since we're heartbeating, make sure that online=True
            count = query.update({'updated_at': timeutils.utcnow(),
                                  'online': True})
            if count == 0:
                raise exception.ConductorNotFound(conductor=hostname)

    @oslo_db_api.retry_on_deadlock
    def clear_node_reservations_for_conductor(self, hostname):
        nodes = []
        with _session_for_write():
            query = (model_query(models.Node)
                     .filter(models.Node.reservation.ilike(hostname)))
            nodes = [node['uuid'] for node in query]
            query.update({'reservation': None}, synchronize_session=False)
        if nodes:
            nodes = ', '.join(nodes)
            LOG.warning('Cleared reservations held by %(hostname)s: '
                        '%(nodes)s', {'hostname': hostname, 'nodes': nodes})

    @oslo_db_api.retry_on_deadlock
    def clear_node_target_power_state(self, hostname):
        nodes = []
        with _session_for_write():
            query = (model_query(models.Node)
                     .filter(models.Node.reservation.ilike(hostname)))
            query = query.filter(models.Node.target_power_state != sql.null())
            nodes = [node['uuid'] for node in query]
            query.update({'target_power_state': None,
                          'last_error': _("Pending power operation was "
                                          "aborted due to conductor "
                                          "restart")},
                         synchronize_session=False)
        if nodes:
            nodes = ', '.join(nodes)
            LOG.warning('Cleared target_power_state of the locked nodes in '
                        'powering process, their power state can be '
                        'incorrect: %(nodes)s', {'nodes': nodes})

    def get_active_hardware_type_dict(self, use_groups=False):
        query = (model_query(models.ConductorHardwareInterfaces,
                             models.Conductor)
                 .join(models.Conductor))
        result = _filter_active_conductors(query)

        d2c = collections.defaultdict(set)
        for iface_row, cdr_row in result:
            hw_type = iface_row['hardware_type']
            if use_groups:
                key = '%s:%s' % (cdr_row['conductor_group'], hw_type)
            else:
                key = hw_type
            d2c[key].add(cdr_row['hostname'])
        return d2c

    def get_offline_conductors(self, field='hostname'):
        field = getattr(models.Conductor, field)
        interval = CONF.conductor.heartbeat_timeout
        limit = timeutils.utcnow() - datetime.timedelta(seconds=interval)
        result = (model_query(field)
                  .filter(models.Conductor.updated_at < limit))
        return [row[0] for row in result]

    def get_online_conductors(self):
        query = model_query(models.Conductor.hostname)
        query = _filter_active_conductors(query)
        return [row[0] for row in query]

    def list_conductor_hardware_interfaces(self, conductor_id):
        query = (model_query(models.ConductorHardwareInterfaces)
                 .filter_by(conductor_id=conductor_id))
        return query.all()

    def list_hardware_type_interfaces(self, hardware_types):
        query = (model_query(models.ConductorHardwareInterfaces)
                 .filter(models.ConductorHardwareInterfaces.hardware_type
                         .in_(hardware_types)))
        query = _filter_active_conductors(query)
        return query.all()

    @oslo_db_api.retry_on_deadlock
    def register_conductor_hardware_interfaces(self, conductor_id,
                                               hardware_type, interface_type,
                                               interfaces, default_interface):
        with _session_for_write() as session:
            try:
                for iface in interfaces:
                    conductor_hw_iface = models.ConductorHardwareInterfaces()
                    conductor_hw_iface['conductor_id'] = conductor_id
                    conductor_hw_iface['hardware_type'] = hardware_type
                    conductor_hw_iface['interface_type'] = interface_type
                    conductor_hw_iface['interface_name'] = iface
                    is_default = (iface == default_interface)
                    conductor_hw_iface['default'] = is_default
                    session.add(conductor_hw_iface)
                session.flush()
            except db_exc.DBDuplicateEntry:
                raise exception.ConductorHardwareInterfacesAlreadyRegistered(
                    hardware_type=hardware_type,
                    interface_type=interface_type,
                    interfaces=interfaces)

    @oslo_db_api.retry_on_deadlock
    def unregister_conductor_hardware_interfaces(self, conductor_id):
        with _session_for_write():
            query = (model_query(models.ConductorHardwareInterfaces)
                     .filter_by(conductor_id=conductor_id))
            query.delete()

    @oslo_db_api.retry_on_deadlock
    def touch_node_provisioning(self, node_id):
        with _session_for_write():
            query = model_query(models.Node)
            query = add_identity_filter(query, node_id)
            count = query.update({'provision_updated_at': timeutils.utcnow()})
            if count == 0:
                raise exception.NodeNotFound(node=node_id)

    def _check_node_exists(self, node_id):
        if not model_query(models.Node).filter_by(id=node_id).scalar():
            raise exception.NodeNotFound(node=node_id)

    @oslo_db_api.retry_on_deadlock
    def set_node_tags(self, node_id, tags):
        # remove duplicate tags
        tags = set(tags)
        with _session_for_write() as session:
            self.unset_node_tags(node_id)
            node_tags = []
            for tag in tags:
                node_tag = models.NodeTag(tag=tag, node_id=node_id)
                session.add(node_tag)
                node_tags.append(node_tag)

        return node_tags

    @oslo_db_api.retry_on_deadlock
    def unset_node_tags(self, node_id):
        self._check_node_exists(node_id)
        with _session_for_write():
            model_query(models.NodeTag).filter_by(node_id=node_id).delete()

    def get_node_tags_by_node_id(self, node_id):
        self._check_node_exists(node_id)
        result = (model_query(models.NodeTag)
                  .filter_by(node_id=node_id)
                  .all())
        return result

    @oslo_db_api.retry_on_deadlock
    def add_node_tag(self, node_id, tag):
        node_tag = models.NodeTag(tag=tag, node_id=node_id)

        self._check_node_exists(node_id)
        try:
            with _session_for_write() as session:
                session.add(node_tag)
                session.flush()
        except db_exc.DBDuplicateEntry:
            # NOTE(zhenguo): ignore tags duplicates
            pass

        return node_tag

    @oslo_db_api.retry_on_deadlock
    def delete_node_tag(self, node_id, tag):
        self._check_node_exists(node_id)
        with _session_for_write():
            result = model_query(models.NodeTag).filter_by(
                node_id=node_id, tag=tag).delete()
            if not result:
                raise exception.NodeTagNotFound(node_id=node_id, tag=tag)

    def node_tag_exists(self, node_id, tag):
        self._check_node_exists(node_id)
        q = model_query(models.NodeTag).filter_by(node_id=node_id, tag=tag)
        return model_query(q.exists()).scalar()

    def get_node_by_port_addresses(self, addresses):
        q = _get_node_query_with_all()
        q = q.distinct().join(models.Port)
        q = q.filter(models.Port.address.in_(addresses))

        try:
            return q.one()
        except NoResultFound:
            raise exception.NodeNotFound(
                _('Node with port addresses %s was not found')
                % addresses)
        except MultipleResultsFound:
            raise exception.NodeNotFound(
                _('Multiple nodes with port addresses %s were found')
                % addresses)

    def get_volume_connector_list(self, limit=None, marker=None,
                                  sort_key=None, sort_dir=None):
        return _paginate_query(models.VolumeConnector, limit, marker,
                               sort_key, sort_dir)

    def get_volume_connector_by_id(self, db_id):
        query = model_query(models.VolumeConnector).filter_by(id=db_id)
        try:
            return query.one()
        except NoResultFound:
            raise exception.VolumeConnectorNotFound(connector=db_id)

    def get_volume_connector_by_uuid(self, connector_uuid):
        query = model_query(models.VolumeConnector).filter_by(
            uuid=connector_uuid)
        try:
            return query.one()
        except NoResultFound:
            raise exception.VolumeConnectorNotFound(connector=connector_uuid)

    def get_volume_connectors_by_node_id(self, node_id, limit=None,
                                         marker=None, sort_key=None,
                                         sort_dir=None):
        query = model_query(models.VolumeConnector).filter_by(node_id=node_id)
        return _paginate_query(models.VolumeConnector, limit, marker,
                               sort_key, sort_dir, query)

    @oslo_db_api.retry_on_deadlock
    def create_volume_connector(self, connector_info):
        if 'uuid' not in connector_info:
            connector_info['uuid'] = uuidutils.generate_uuid()

        connector = models.VolumeConnector()
        connector.update(connector_info)
        with _session_for_write() as session:
            try:
                session.add(connector)
                session.flush()
            except db_exc.DBDuplicateEntry as exc:
                if 'type' in exc.columns:
                    raise exception.VolumeConnectorTypeAndIdAlreadyExists(
                        type=connector_info['type'],
                        connector_id=connector_info['connector_id'])
                raise exception.VolumeConnectorAlreadyExists(
                    uuid=connector_info['uuid'])
            return connector

    @oslo_db_api.retry_on_deadlock
    def update_volume_connector(self, ident, connector_info):
        if 'uuid' in connector_info:
            msg = _("Cannot overwrite UUID for an existing Volume Connector.")
            raise exception.InvalidParameterValue(err=msg)

        try:
            with _session_for_write() as session:
                query = model_query(models.VolumeConnector)
                query = add_identity_filter(query, ident)
                ref = query.one()
                orig_type = ref['type']
                orig_connector_id = ref['connector_id']
                ref.update(connector_info)
                session.flush()
        except db_exc.DBDuplicateEntry:
            raise exception.VolumeConnectorTypeAndIdAlreadyExists(
                type=connector_info.get('type', orig_type),
                connector_id=connector_info.get('connector_id',
                                                orig_connector_id))
        except NoResultFound:
            raise exception.VolumeConnectorNotFound(connector=ident)
        return ref

    @oslo_db_api.retry_on_deadlock
    def destroy_volume_connector(self, ident):
        with _session_for_write():
            query = model_query(models.VolumeConnector)
            query = add_identity_filter(query, ident)
            count = query.delete()
            if count == 0:
                raise exception.VolumeConnectorNotFound(connector=ident)

    def get_volume_target_list(self, limit=None, marker=None,
                               sort_key=None, sort_dir=None):
        return _paginate_query(models.VolumeTarget, limit, marker,
                               sort_key, sort_dir)

    def get_volume_target_by_id(self, db_id):
        query = model_query(models.VolumeTarget).filter_by(id=db_id)
        try:
            return query.one()
        except NoResultFound:
            raise exception.VolumeTargetNotFound(target=db_id)

    def get_volume_target_by_uuid(self, uuid):
        query = model_query(models.VolumeTarget).filter_by(uuid=uuid)
        try:
            return query.one()
        except NoResultFound:
            raise exception.VolumeTargetNotFound(target=uuid)

    def get_volume_targets_by_node_id(self, node_id, limit=None, marker=None,
                                      sort_key=None, sort_dir=None):
        query = model_query(models.VolumeTarget).filter_by(node_id=node_id)
        return _paginate_query(models.VolumeTarget, limit, marker, sort_key,
                               sort_dir, query)

    def get_volume_targets_by_volume_id(self, volume_id, limit=None,
                                        marker=None, sort_key=None,
                                        sort_dir=None):
        query = model_query(models.VolumeTarget).filter_by(volume_id=volume_id)
        return _paginate_query(models.VolumeTarget, limit, marker, sort_key,
                               sort_dir, query)

    @oslo_db_api.retry_on_deadlock
    def create_volume_target(self, target_info):
        if 'uuid' not in target_info:
            target_info['uuid'] = uuidutils.generate_uuid()

        target = models.VolumeTarget()
        target.update(target_info)
        with _session_for_write() as session:
            try:
                session.add(target)
                session.flush()
            except db_exc.DBDuplicateEntry as exc:
                if 'boot_index' in exc.columns:
                    raise exception.VolumeTargetBootIndexAlreadyExists(
                        boot_index=target_info['boot_index'])
                raise exception.VolumeTargetAlreadyExists(
                    uuid=target_info['uuid'])
            return target

    @oslo_db_api.retry_on_deadlock
    def update_volume_target(self, ident, target_info):
        if 'uuid' in target_info:
            msg = _("Cannot overwrite UUID for an existing Volume Target.")
            raise exception.InvalidParameterValue(err=msg)

        try:
            with _session_for_write() as session:
                query = model_query(models.VolumeTarget)
                query = add_identity_filter(query, ident)
                ref = query.one()
                orig_boot_index = ref['boot_index']
                ref.update(target_info)
                session.flush()
        except db_exc.DBDuplicateEntry:
            raise exception.VolumeTargetBootIndexAlreadyExists(
                boot_index=target_info.get('boot_index', orig_boot_index))
        except NoResultFound:
            raise exception.VolumeTargetNotFound(target=ident)
        return ref

    @oslo_db_api.retry_on_deadlock
    def destroy_volume_target(self, ident):
        with _session_for_write():
            query = model_query(models.VolumeTarget)
            query = add_identity_filter(query, ident)
            count = query.delete()
            if count == 0:
                raise exception.VolumeTargetNotFound(target=ident)

    def get_not_versions(self, model_name, versions):
        """Returns objects with versions that are not the specified versions.

        This returns objects with versions that are not the specified
        versions. Objects with null versions (there shouldn't be any) are
        also returned.

        :param model_name: the name of the model (class) of desired objects
        :param versions: list of versions of objects not to be returned
        :returns: list of the DB objects
        :raises: IronicException if there is no class associated with the name
        """
        if not versions:
            return []

        model = models.get_class(model_name)

        # NOTE(rloo): .notin_ does not handle null:
        # http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.operators.ColumnOperators.notin_
        query = model_query(model).filter(
            sql.or_(model.version == sql.null(),
                    model.version.notin_(versions)))
        return query.all()

    def check_versions(self, ignore_models=()):
        """Checks the whole database for incompatible objects.

        This scans all the tables in search of objects that are not supported;
        i.e., those that are not specified in
        `ironic.common.release_mappings.RELEASE_MAPPING`. This includes
        objects that have null 'version' values.

        :param ignore_models: List of model names to skip.
        :returns: A Boolean. True if all the objects have supported versions;
                  False otherwise.
        """
        object_versions = release_mappings.get_object_versions()
        for model in models.Base.__subclasses__():
            if model.__name__ not in object_versions:
                continue

            if model.__name__ in ignore_models:
                continue

            supported_versions = object_versions[model.__name__]
            if not supported_versions:
                continue

            # NOTE(mgagne): Additional safety check to detect old database
            # version which does not have the 'version' columns available.
            # This usually means a skip version upgrade is attempted
            # from a version earlier than Pike which added
            # those columns required for the next check.
            engine = enginefacade.reader.get_engine()
            if not db_utils.column_exists(engine, model.__tablename__,
                                          model.version.name):
                raise exception.DatabaseVersionTooOld()

            # NOTE(rloo): we use model.version, not model, because we
            #             know that the object has a 'version' column
            #             but we don't know whether the entire object is
            #             compatible with its (old) DB representation.
            # NOTE(rloo): .notin_ does not handle null:
            # http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.operators.ColumnOperators.notin_
            query = model_query(model.version).filter(
                sql.or_(model.version == sql.null(),
                        model.version.notin_(supported_versions)))
            if query.count():
                return False
        return True

    @oslo_db_api.retry_on_deadlock
    def update_to_latest_versions(self, context, max_count):
        """Updates objects to their latest known versions.

        This scans all the tables and for objects that are not in their latest
        version, updates them to that version.

        :param context: the admin context
        :param max_count: The maximum number of objects to migrate. Must be
                          >= 0. If zero, all the objects will be migrated.
        :returns: A 2-tuple, 1. the total number of objects that need to be
                  migrated (at the beginning of this call) and 2. the number
                  of migrated objects.
        """
        # NOTE(rloo): 'master' has the most recent (latest) versions.
        mapping = release_mappings.RELEASE_MAPPING['master']['objects']
        total_to_migrate = 0
        total_migrated = 0

        sql_models = [model for model in models.Base.__subclasses__()
                      if model.__name__ in mapping]
        for model in sql_models:
            version = mapping[model.__name__][0]
            query = model_query(model).filter(model.version != version)
            total_to_migrate += query.count()

        if not total_to_migrate:
            return total_to_migrate, 0

        # NOTE(xek): Each of these operations happen in different transactions.
        # This is to ensure a minimal load on the database, but at the same
        # time it can cause an inconsistency in the amount of total and
        # migrated objects returned (total could be > migrated). This is
        # because some objects may have already migrated or been deleted from
        # the database between the time the total was computed (above) to the
        # time we do the updating (below).
        #
        # By the time this script is run, only the new release version is
        # running, so the impact of this error will be minimal - e.g. the
        # operator will run this script more than once to ensure that all
        # data have been migrated.

        # If max_count is zero, we want to migrate all the objects.
        max_to_migrate = max_count or total_to_migrate

        for model in sql_models:
            version = mapping[model.__name__][0]
            num_migrated = 0
            with _session_for_write():
                query = model_query(model).filter(model.version != version)
                # NOTE(rloo) Caution here; after doing query.count(), it is
                #            possible that the value is different in the
                #            next invocation of the query.
                if max_to_migrate < query.count():
                    # Only want to update max_to_migrate objects; cannot use
                    # sql's limit(), so we generate a new query with
                    # max_to_migrate objects.
                    ids = []
                    for obj in query.slice(0, max_to_migrate):
                        ids.append(obj['id'])
                    num_migrated = (
                        model_query(model).
                        filter(sql.and_(model.id.in_(ids),
                                        model.version != version)).
                        update({model.version: version},
                               synchronize_session=False))
                else:
                    num_migrated = (
                        model_query(model).
                        filter(model.version != version).
                        update({model.version: version},
                               synchronize_session=False))
            total_migrated += num_migrated
            max_to_migrate -= num_migrated
            if max_to_migrate <= 0:
                break

        return total_to_migrate, total_migrated

    @staticmethod
    def _verify_max_traits_per_node(node_id, num_traits):
        """Verify that an operation would not exceed the per-node trait limit.

        :param node_id: The ID of a node.
        :param num_traits: The number of traits the node would have after
                           the operation.
        :raises: InvalidParameterValue if the operation would exceed the
                 per-node trait limit.
        """
        if num_traits > MAX_TRAITS_PER_NODE:
            msg = _("Could not modify traits for node %(node_id)s as it would "
                    "exceed the maximum number of traits per node "
                    "(%(num_traits)d vs. %(max_traits)d)")
            raise exception.InvalidParameterValue(
                msg, node_id=node_id, num_traits=num_traits,
                max_traits=MAX_TRAITS_PER_NODE)

    @oslo_db_api.retry_on_deadlock
    def set_node_traits(self, node_id, traits, version):
        # Remove duplicate traits
        traits = set(traits)

        self._verify_max_traits_per_node(node_id, len(traits))

        with _session_for_write() as session:
            # NOTE(mgoddard): Node existence is checked in unset_node_traits.
            self.unset_node_traits(node_id)
            node_traits = []
            for trait in traits:
                node_trait = models.NodeTrait(trait=trait, node_id=node_id,
                                              version=version)
                session.add(node_trait)
                node_traits.append(node_trait)

        return node_traits

    @oslo_db_api.retry_on_deadlock
    def unset_node_traits(self, node_id):
        self._check_node_exists(node_id)
        with _session_for_write():
            model_query(models.NodeTrait).filter_by(node_id=node_id).delete()

    def get_node_traits_by_node_id(self, node_id):
        self._check_node_exists(node_id)
        result = (model_query(models.NodeTrait)
                  .filter_by(node_id=node_id)
                  .all())
        return result

    @oslo_db_api.retry_on_deadlock
    def add_node_trait(self, node_id, trait, version):
        node_trait = models.NodeTrait(trait=trait, node_id=node_id,
                                      version=version)

        self._check_node_exists(node_id)
        try:
            with _session_for_write() as session:
                session.add(node_trait)
                session.flush()

                num_traits = (model_query(models.NodeTrait)
                              .filter_by(node_id=node_id).count())
                self._verify_max_traits_per_node(node_id, num_traits)
        except db_exc.DBDuplicateEntry:
            # NOTE(mgoddard): Ignore traits duplicates
            pass

        return node_trait

    @oslo_db_api.retry_on_deadlock
    def delete_node_trait(self, node_id, trait):
        self._check_node_exists(node_id)
        with _session_for_write():
            result = model_query(models.NodeTrait).filter_by(
                node_id=node_id, trait=trait).delete()

            if not result:
                raise exception.NodeTraitNotFound(node_id=node_id, trait=trait)
    def node_trait_exists(self, node_id, trait):
        self._check_node_exists(node_id)
        q = model_query(
            models.NodeTrait).filter_by(node_id=node_id, trait=trait)
        return model_query(q.exists()).scalar()

    @oslo_db_api.retry_on_deadlock
    def create_bios_setting_list(self, node_id, settings, version):
        self._check_node_exists(node_id)
        bios_settings = []
        with _session_for_write() as session:
            try:
                for setting in settings:
                    bios_setting = models.BIOSSetting(
                        node_id=node_id,
                        name=setting['name'],
                        value=setting['value'],
                        version=version)
                    bios_settings.append(bios_setting)
                    session.add(bios_setting)
                session.flush()
            except db_exc.DBDuplicateEntry:
                raise exception.BIOSSettingAlreadyExists(
                    node=node_id, name=setting['name'])
        return bios_settings

    @oslo_db_api.retry_on_deadlock
    def update_bios_setting_list(self, node_id, settings, version):
        self._check_node_exists(node_id)
        bios_settings = []
        with _session_for_write() as session:
            try:
                for setting in settings:
                    query = model_query(models.BIOSSetting).filter_by(
                        node_id=node_id, name=setting['name'])
                    ref = query.one()
                    ref.update({'value': setting['value'],
                                'version': version})
                    bios_settings.append(ref)
                session.flush()
            except NoResultFound:
                raise exception.BIOSSettingNotFound(
                    node=node_id, name=setting['name'])
        return bios_settings

    @oslo_db_api.retry_on_deadlock
    def delete_bios_setting_list(self, node_id, names):
        self._check_node_exists(node_id)
        missing_bios_settings = []
        with _session_for_write():
            for name in names:
                count = model_query(models.BIOSSetting).filter_by(
                    node_id=node_id, name=name).delete()
                if count == 0:
                    missing_bios_settings.append(name)
        if len(missing_bios_settings) > 0:
            raise exception.BIOSSettingListNotFound(
                node=node_id, names=','.join(missing_bios_settings))

    def get_bios_setting(self, node_id, name):
        self._check_node_exists(node_id)
        query = model_query(models.BIOSSetting).filter_by(
            node_id=node_id, name=name)
        try:
            ref = query.one()
        except NoResultFound:
            raise exception.BIOSSettingNotFound(node=node_id, name=name)
        return ref
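# delete_bios_setting_list above attempts every deletion first and only then
# raises, reporting all missing names in one exception rather than failing on
# the first. A toy sketch of that aggregate-errors pattern (the function name,
# the dict stand-in for the BIOSSetting table, and LookupError are assumptions
# for illustration; the real code raises BIOSSettingListNotFound):

```python
def delete_settings(store, node_id, names):
    """Delete each (node_id, name) entry from 'store'; collect the names
    that were absent and report them together afterwards. Note the real
    method runs the deletions inside a write session; this toy version
    has no transaction, so successful deletions are not rolled back.
    """
    missing = []
    for name in names:
        if store.pop((node_id, name), None) is None:
            missing.append(name)
    if missing:
        raise LookupError('settings not found: ' + ','.join(missing))
```

# Reporting every missing name at once saves the caller from a fix-one,
# retry, fix-the-next loop when several names are wrong.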
    def get_bios_setting_list(self, node_id):
        self._check_node_exists(node_id)
        result = (model_query(models.BIOSSetting)
                  .filter_by(node_id=node_id)
                  .all())
        return result

    def get_allocation_by_id(self, allocation_id):
        """Return an allocation representation.

        :param allocation_id: The id of an allocation.
        :returns: An allocation.
        :raises: AllocationNotFound
        """
        query = model_query(models.Allocation).filter_by(id=allocation_id)
        try:
            return query.one()
        except NoResultFound:
            raise exception.AllocationNotFound(allocation=allocation_id)

    def get_allocation_by_uuid(self, allocation_uuid):
        """Return an allocation representation.

        :param allocation_uuid: The uuid of an allocation.
        :returns: An allocation.
        :raises: AllocationNotFound
        """
        query = model_query(models.Allocation).filter_by(uuid=allocation_uuid)
        try:
            return query.one()
        except NoResultFound:
            raise exception.AllocationNotFound(allocation=allocation_uuid)

    def get_allocation_by_name(self, name):
        """Return an allocation representation.

        :param name: The logical name of an allocation.
        :returns: An allocation.
        :raises: AllocationNotFound
        """
        query = model_query(models.Allocation).filter_by(name=name)
        try:
            return query.one()
        except NoResultFound:
            raise exception.AllocationNotFound(allocation=name)

    def get_allocation_list(self, filters=None, limit=None, marker=None,
                            sort_key=None, sort_dir=None):
        """Return a list of allocations.

        :param filters: Filters to apply. Defaults to None.

                        :node_uuid: uuid of node
                        :state: allocation state
                        :resource_class: requested resource class
        :param limit: Maximum number of allocations to return.
        :param marker: The last item of the previous page; we return the next
                       result set.
        :param sort_key: Attribute by which results should be sorted.
        :param sort_dir: Direction in which results should be sorted.
                         (asc, desc)
        :returns: A list of allocations.
        """
        query = self._add_allocations_filters(model_query(models.Allocation),
                                              filters)
        return _paginate_query(models.Allocation, limit, marker,
                               sort_key, sort_dir, query)

    @oslo_db_api.retry_on_deadlock
    def create_allocation(self, values):
        """Create a new allocation.

        :param values: Dict of values to create an allocation with
        :returns: An allocation
        :raises: AllocationDuplicateName
        :raises: AllocationAlreadyExists
        """
        if not values.get('uuid'):
            values['uuid'] = uuidutils.generate_uuid()
        if not values.get('state'):
            values['state'] = states.ALLOCATING

        allocation = models.Allocation()
        allocation.update(values)
        with _session_for_write() as session:
            try:
                session.add(allocation)
                session.flush()
            except db_exc.DBDuplicateEntry as exc:
                if 'name' in exc.columns:
                    raise exception.AllocationDuplicateName(
                        name=values['name'])
                else:
                    raise exception.AllocationAlreadyExists(
                        uuid=values['uuid'])
            return allocation

    @oslo_db_api.retry_on_deadlock
    def update_allocation(self, allocation_id, values, update_node=True):
        """Update properties of an allocation.

        :param allocation_id: Allocation ID
        :param values: Dict of values to update.
        :param update_node: If True and node_id is updated, update the node
            with instance_uuid and traits from the allocation
        :returns: An allocation.
        :raises: AllocationNotFound
        :raises: AllocationDuplicateName
        :raises: InstanceAssociated
        :raises: NodeAssociated
        """
        if 'uuid' in values:
            msg = _("Cannot overwrite UUID for an existing allocation.")
            raise exception.InvalidParameterValue(err=msg)

        # These values are used in exception handling. They should always be
        # initialized, but set them to None just in case.
instance_uuid = node_uuid = None with _session_for_write() as session: try: query = model_query(models.Allocation, session=session) query = add_identity_filter(query, allocation_id) ref = query.one() ref.update(values) instance_uuid = ref.uuid if values.get('node_id') and update_node: node = model_query(models.Node, session=session).filter_by( id=ref.node_id).with_for_update().one() node_uuid = node.uuid if node.instance_uuid and node.instance_uuid != ref.uuid: raise exception.NodeAssociated( node=node.uuid, instance=node.instance_uuid) iinfo = node.instance_info.copy() iinfo['traits'] = ref.traits or [] node.update({'allocation_id': ref.id, 'instance_uuid': instance_uuid, 'instance_info': iinfo}) session.flush() except NoResultFound: raise exception.AllocationNotFound(allocation=allocation_id) except db_exc.DBDuplicateEntry as exc: if 'name' in exc.columns: raise exception.AllocationDuplicateName( name=values['name']) elif 'instance_uuid' in exc.columns: # Case when the allocation UUID is already used on some # node as instance_uuid. raise exception.InstanceAssociated( instance_uuid=instance_uuid, node=node_uuid) else: raise return ref @oslo_db_api.retry_on_deadlock def take_over_allocation(self, allocation_id, old_conductor_id, new_conductor_id): """Do a take over for an allocation. The allocation is only updated if the old conductor matches the provided value, thus guarding against races. :param allocation_id: Allocation ID :param old_conductor_id: The conductor ID we expect to be the current ``conductor_affinity`` of the allocation. :param new_conductor_id: The conductor ID of the new ``conductor_affinity``. :returns: True if the take over was successful, False otherwise. 
:raises: AllocationNotFound """ with _session_for_write() as session: try: query = model_query(models.Allocation, session=session) query = add_identity_filter(query, allocation_id) # NOTE(dtantsur): the FOR UPDATE clause locks the allocation ref = query.with_for_update().one() if ref.conductor_affinity != old_conductor_id: # Race detected, bailing out return False ref.update({'conductor_affinity': new_conductor_id}) session.flush() except NoResultFound: raise exception.AllocationNotFound(allocation=allocation_id) else: return True @oslo_db_api.retry_on_deadlock def destroy_allocation(self, allocation_id): """Destroy an allocation. :param allocation_id: Allocation ID or UUID :raises: AllocationNotFound """ with _session_for_write() as session: query = model_query(models.Allocation) query = add_identity_filter(query, allocation_id) try: ref = query.one() except NoResultFound: raise exception.AllocationNotFound(allocation=allocation_id) allocation_id = ref['id'] node_query = model_query(models.Node, session=session).filter_by( allocation_id=allocation_id) node_query.update({'allocation_id': None, 'instance_uuid': None}) query.delete() @staticmethod def _get_deploy_template_steps(steps, deploy_template_id=None): results = [] for values in steps: step = models.DeployTemplateStep() step.update(values) if deploy_template_id: step['deploy_template_id'] = deploy_template_id results.append(step) return results @oslo_db_api.retry_on_deadlock def create_deploy_template(self, values): steps = values.get('steps', []) values['steps'] = self._get_deploy_template_steps(steps) template = models.DeployTemplate() template.update(values) with _session_for_write() as session: try: session.add(template) session.flush() except db_exc.DBDuplicateEntry as e: if 'name' in e.columns: raise exception.DeployTemplateDuplicateName( name=values['name']) raise exception.DeployTemplateAlreadyExists( uuid=values['uuid']) return template def _update_deploy_template_steps(self, session, template_id, 
                                      steps):
        """Update the steps for a deploy template.

        :param session: DB session object.
        :param template_id: deploy template ID.
        :param steps: list of steps that should exist for the deploy template.
        """

        def _step_key(step):
            """Compare two deploy template steps."""
            # NOTE(mgoddard): In python 3, dicts are not orderable so cannot
            # be used as a sort key. Serialise the step arguments to a JSON
            # string for comparison. Taken from
            # https://stackoverflow.com/a/22003440.
            sortable_args = json.dumps(step.args, sort_keys=True)
            return step.interface, step.step, sortable_args, step.priority

        # List all existing steps for the template.
        current_steps = (model_query(models.DeployTemplateStep)
                         .filter_by(deploy_template_id=template_id))

        # List the new steps for the template.
        new_steps = self._get_deploy_template_steps(steps, template_id)

        # The following is an efficient way to ensure that the steps in the
        # database match those that have been requested. We compare the
        # current and requested steps in a single pass using the
        # _zip_matching function.
        steps_to_create = []
        step_ids_to_delete = []
        for current_step, new_step in _zip_matching(current_steps, new_steps,
                                                    _step_key):
            if current_step is None:
                # No matching current step found for this new step - create.
                steps_to_create.append(new_step)
            elif new_step is None:
                # No matching new step found for this current step - delete.
                step_ids_to_delete.append(current_step.id)
            # else: steps match, no work required.

        # Delete and create steps in bulk as necessary.
        if step_ids_to_delete:
            ((model_query(models.DeployTemplateStep)
              .filter(models.DeployTemplateStep.id.in_(step_ids_to_delete)))
             .delete(synchronize_session=False))
        if steps_to_create:
            session.bulk_save_objects(steps_to_create)

    @oslo_db_api.retry_on_deadlock
    def update_deploy_template(self, template_id, values):
        if 'uuid' in values:
            msg = _("Cannot overwrite UUID for an existing deploy template.")
            raise exception.InvalidParameterValue(err=msg)

        try:
            with _session_for_write() as session:
                # NOTE(mgoddard): Don't issue a joined query for the update as
                # this does not work with PostgreSQL.
                query = model_query(models.DeployTemplate)
                query = add_identity_filter(query, template_id)
                try:
                    ref = query.with_for_update().one()
                except NoResultFound:
                    raise exception.DeployTemplateNotFound(
                        template=template_id)

                # First, update non-step columns.
                steps = values.pop('steps', None)
                ref.update(values)

                # If necessary, update steps.
                if steps is not None:
                    self._update_deploy_template_steps(session, ref.id, steps)

                # Return the updated template joined with all relevant fields.
                query = _get_deploy_template_query_with_steps()
                query = add_identity_filter(query, template_id)
                return query.one()
        except db_exc.DBDuplicateEntry as e:
            if 'name' in e.columns:
                raise exception.DeployTemplateDuplicateName(
                    name=values['name'])
            raise

    @oslo_db_api.retry_on_deadlock
    def destroy_deploy_template(self, template_id):
        with _session_for_write():
            model_query(models.DeployTemplateStep).filter_by(
                deploy_template_id=template_id).delete()
            count = model_query(models.DeployTemplate).filter_by(
                id=template_id).delete()
            if count == 0:
                raise exception.DeployTemplateNotFound(template=template_id)

    def _get_deploy_template(self, field, value):
        """Helper method for retrieving a deploy template."""
        query = (_get_deploy_template_query_with_steps()
                 .filter_by(**{field: value}))
        try:
            return query.one()
        except NoResultFound:
            raise exception.DeployTemplateNotFound(template=value)

    def get_deploy_template_by_id(self, template_id):
        return self._get_deploy_template('id', template_id)

    def get_deploy_template_by_uuid(self, template_uuid):
        return self._get_deploy_template('uuid', template_uuid)

    def get_deploy_template_by_name(self, template_name):
        return self._get_deploy_template('name', template_name)

    def get_deploy_template_list(self, limit=None, marker=None,
                                 sort_key=None, sort_dir=None):
        query = _get_deploy_template_query_with_steps()
        return _paginate_query(models.DeployTemplate, limit, marker,
                               sort_key, sort_dir, query)

    def get_deploy_template_list_by_names(self, names):
        query = (_get_deploy_template_query_with_steps()
                 .filter(models.DeployTemplate.name.in_(names)))
        return query.all()
ironic-15.0.0/ironic/db/sqlalchemy/alembic/script.py.mako
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}

"""

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}


def upgrade():
    ${upgrades if upgrades else "pass"}
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/28c44432c9c3_add_node_description.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""add node description

Revision ID: 28c44432c9c3
Revises: dd67b91a1981
Create Date: 2019-01-23 13:54:08.850421

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '28c44432c9c3'
down_revision = '9cbeefa3763f'


def upgrade():
    op.add_column('nodes', sa.Column('description', sa.Text(),
                                     nullable=True))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/dd67b91a1981_add_allocations_table.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

"""Add Allocations table

Revision ID: dd67b91a1981
Revises: f190f9d00a11
Create Date: 2018-12-10 15:24:30.555995

"""

from alembic import op
from oslo_db.sqlalchemy import types
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'dd67b91a1981'
down_revision = 'f190f9d00a11'


def upgrade():
    op.create_table(
        'allocations',
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.Column('version', sa.String(length=15), nullable=True),
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('uuid', sa.String(length=36), nullable=False),
        sa.Column('name', sa.String(length=255), nullable=True),
        sa.Column('node_id', sa.Integer(), nullable=True),
        sa.Column('state', sa.String(length=15), nullable=False),
        sa.Column('last_error', sa.Text(), nullable=True),
        sa.Column('resource_class', sa.String(length=80), nullable=True),
        sa.Column('traits', types.JsonEncodedList(), nullable=True),
        sa.Column('candidate_nodes', types.JsonEncodedList(), nullable=True),
        sa.Column('extra', types.JsonEncodedDict(), nullable=True),
        sa.Column('conductor_affinity', sa.Integer(), nullable=True),
        sa.ForeignKeyConstraint(['conductor_affinity'], ['conductors.id'], ),
        sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name', name='uniq_allocations0name'),
        sa.UniqueConstraint('uuid', name='uniq_allocations0uuid')
    )
    op.add_column('nodes', sa.Column('allocation_id', sa.Integer(),
                                     nullable=True))
    op.create_foreign_key(None, 'nodes', 'allocations',
                          ['allocation_id'], ['id'])
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/b4130a7fc904_create_nodetraits_table.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Create node_traits table

Revision ID: b4130a7fc904
Revises: 405cfe08f18d
Create Date: 2017-12-20 10:20:07.911788

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'b4130a7fc904'
down_revision = '405cfe08f18d'


def upgrade():
    op.create_table(
        'node_traits',
        sa.Column('version', sa.String(length=15), nullable=True),
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.Column('node_id', sa.Integer(), nullable=False,
                  autoincrement=False),
        sa.Column('trait', sa.String(length=255), nullable=False),
        sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ),
        sa.PrimaryKeyConstraint('node_id', 'trait'),
        mysql_ENGINE='InnoDB',
        mysql_DEFAULT_CHARSET='UTF8'
    )
    op.create_index('node_traits_idx', 'node_traits', ['trait'],
                    unique=False)
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/ce6c4b3cf5a2_add_allocation_owner.py
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""add allocation owner Revision ID: ce6c4b3cf5a2 Revises: 1e15e7122cc9 Create Date: 2019-11-21 20:46:09.106592 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'ce6c4b3cf5a2' down_revision = '1e15e7122cc9' def upgrade(): op.add_column('allocations', sa.Column('owner', sa.String(255), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/c14cef6dfedf_populate_node_network_interface.py0000664000175000017500000000255413652514273033603 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Populate node.network_interface Revision ID: c14cef6dfedf Revises: dd34e1f1303b Create Date: 2016-08-01 14:05:24.197314 """ from alembic import op from sqlalchemy import String from sqlalchemy.sql import table, column, null from ironic.conf import CONF # revision identifiers, used by Alembic. 
revision = 'c14cef6dfedf'
down_revision = 'dd34e1f1303b'


node = table('nodes',
             column('uuid', String(36)),
             column('network_interface', String(255)))


def upgrade():
    network_iface = (CONF.default_network_interface
                     or ('flat' if CONF.dhcp.dhcp_provider == 'neutron'
                         else 'noop'))
    op.execute(
        node.update().where(
            node.c.network_interface == null()).values(
                {'network_interface': network_iface}))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/1d6951876d68_add_storage_interface_db_field_and_.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Add storage_interface DB field and object

Revision ID: 1d6951876d68
Revises: 493d8f27f235
Create Date: 2016-07-26 10:33:22.830739

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '1d6951876d68'
down_revision = '493d8f27f235'


def upgrade():
    op.add_column('nodes', sa.Column('storage_interface', sa.String(255),
                                     nullable=True))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/493d8f27f235_add_portgroup_configuration_fields.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""add portgroup configuration fields

Revision ID: 493d8f27f235
Revises: 60cf717201bc
Create Date: 2016-11-15 18:09:31.362613

"""

from alembic import op
import sqlalchemy as sa
from sqlalchemy import sql

from ironic.conf import CONF

# revision identifiers, used by Alembic.
revision = '493d8f27f235'
down_revision = '1a59178ebdf6'


def upgrade():
    op.add_column('portgroups', sa.Column('properties', sa.Text(),
                                          nullable=True))
    op.add_column('portgroups', sa.Column('mode', sa.String(255)))
    portgroups = sql.table('portgroups',
                           sql.column('mode', sa.String(255)))
    op.execute(
        portgroups.update().values({'mode': CONF.default_portgroup_mode}))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/5674c57409b9_replace_nostate_with_available.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""replace NOSTATE with AVAILABLE

Revision ID: 5674c57409b9
Revises: 242cc6a923b3
Create Date: 2015-01-14 16:55:44.718196

"""

from alembic import op
from sqlalchemy import String
from sqlalchemy.sql import table, column, null

# revision identifiers, used by Alembic.
revision = '5674c57409b9'
down_revision = '242cc6a923b3'


node = table('nodes',
             column('uuid', String(36)),
             column('provision_state', String(15)))


# NOTE(tenbrae): We must represent the states as static strings in this
# migration file, rather than import ironic.common.states, because that file
# may change in the future. This migration script must still be able to be
# run with future versions of the code and still produce the same results.
AVAILABLE = 'available'


def upgrade():
    op.execute(
        node.update().where(
            node.c.provision_state == null()).values(
                {'provision_state': op.inline_literal(AVAILABLE)}))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/f190f9d00a11_add_node_owner.py
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""add_node_owner

Revision ID: f190f9d00a11
Revises: 93706939026c
Create Date: 2018-11-12 00:33:58.575100

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'f190f9d00a11'
down_revision = '93706939026c'


def upgrade():
    op.add_column('nodes', sa.Column('owner', sa.String(255),
                                     nullable=True))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/bb59b63f55a_add_node_driver_internal_info.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""add_node_driver_internal_info Revision ID: bb59b63f55a Revises: 5674c57409b9 Create Date: 2015-01-28 14:28:22.212790 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'bb59b63f55a' down_revision = '5674c57409b9' def upgrade(): op.add_column('nodes', sa.Column('driver_internal_info', sa.Text(), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/21b331f883ef_add_provision_updated_at.py0000664000175000017500000000171113652514273031613 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add provision_updated_at Revision ID: 21b331f883ef Revises: 2581ebaf0cb2 Create Date: 2014-02-19 13:45:30.150632 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '21b331f883ef' down_revision = '2581ebaf0cb2' def upgrade(): op.add_column('nodes', sa.Column('provision_updated_at', sa.DateTime(), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/4f399b21ae71_add_node_clean_step.py0000664000175000017500000000166613652514273030526 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Add node.clean_step

Revision ID: 4f399b21ae71
Revises: 1e1d5ace7dc6
Create Date: 2015-02-18 01:21:46.062311

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '4f399b21ae71'
down_revision = '1e1d5ace7dc6'


def upgrade():
    op.add_column('nodes', sa.Column('clean_step', sa.Text(),
                                     nullable=True))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/10b163d4481e_add_port_portgroup_internal_info.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""add port portgroup internal info

Revision ID: 10b163d4481e
Revises: e294876e8028
Create Date: 2016-07-06 17:43:55.846837

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '10b163d4481e'
down_revision = 'e294876e8028'


def upgrade():
    op.add_column('ports', sa.Column('internal_info',
                                     sa.Text(),
                                     nullable=True))
    op.add_column('portgroups', sa.Column('internal_info',
                                          sa.Text(),
                                          nullable=True))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/242cc6a923b3_add_node_maintenance_reason.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Add Node.maintenance_reason

Revision ID: 242cc6a923b3
Revises: 487deb87cc9d
Create Date: 2014-10-15 23:00:43.164061

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '242cc6a923b3'
down_revision = '487deb87cc9d'


def upgrade():
    op.add_column('nodes', sa.Column('maintenance_reason',
                                     sa.Text(),
                                     nullable=True))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/1e15e7122cc9_add_extra_column_to_deploy_templates.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""add extra column to deploy_templates

Revision ID: 1e15e7122cc9
Revises: 2aac7e0872f6
Create Date: 2019-02-26 15:08:18.419157

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '1e15e7122cc9'
down_revision = '2aac7e0872f6'


def upgrade():
    op.add_column('deploy_templates',
                  sa.Column('extra', sa.Text(), nullable=True))
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/5ea1b0d310e_added_port_group_table_and_altered_ports.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Added portgroups table and altered ports

Revision ID: 5ea1b0d310e
Revises: 48d6c242bb9b
Create Date: 2015-06-30 14:14:26.972368

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '5ea1b0d310e'
down_revision = '48d6c242bb9b'


def upgrade():
    op.create_table('portgroups',
                    sa.Column('created_at', sa.DateTime(), nullable=True),
                    sa.Column('updated_at', sa.DateTime(), nullable=True),
                    sa.Column('id', sa.Integer(), nullable=False),
                    sa.Column('uuid', sa.String(length=36), nullable=True),
                    sa.Column('name', sa.String(length=255), nullable=True),
                    sa.Column('node_id', sa.Integer(), nullable=True),
                    sa.Column('address', sa.String(length=18), nullable=True),
                    sa.Column('extra', sa.Text(), nullable=True),
                    sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ),
                    sa.PrimaryKeyConstraint('id'),
                    sa.UniqueConstraint('uuid', name='uniq_portgroups0uuid'),
                    sa.UniqueConstraint('address',
                                        name='uniq_portgroups0address'),
                    sa.UniqueConstraint('name', name='uniq_portgroups0name'),
                    mysql_ENGINE='InnoDB',
                    mysql_DEFAULT_CHARSET='UTF8')
    op.add_column(u'ports', sa.Column('local_link_connection', sa.Text(),
                                      nullable=True))
    op.add_column(u'ports', sa.Column('portgroup_id', sa.Integer(),
                                      nullable=True))
    op.add_column(u'ports', sa.Column('pxe_enabled', sa.Boolean(),
                                      default=True))
    op.create_foreign_key('fk_portgroups_ports', 'ports', 'portgroups',
                          ['portgroup_id'], ['id'])
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/dd34e1f1303b_add_resource_class_to_node.py
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""add resource_class to node Revision ID: dd34e1f1303b Revises: 10b163d4481e Create Date: 2016-07-20 21:48:12.475320 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'dd34e1f1303b' down_revision = '10b163d4481e' def upgrade(): op.add_column('nodes', sa.Column('resource_class', sa.String(80), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/3cb628139ea4_nodes_add_console_enabled.py0000664000175000017500000000164113652514273031676 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Nodes add console enabled Revision ID: 3cb628139ea4 Revises: 21b331f883ef Create Date: 2014-02-26 11:24:11.318023 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '3cb628139ea4' down_revision = '21b331f883ef' def upgrade(): op.add_column('nodes', sa.Column('console_enabled', sa.Boolean)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/48d6c242bb9b_add_node_tags.py0000664000175000017500000000264013652514273027416 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""add node tags

Revision ID: 48d6c242bb9b
Revises: 516faf1bb9b1
Create Date: 2015-10-08 10:07:33.779516

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '48d6c242bb9b'
down_revision = '516faf1bb9b1'


def upgrade():
    op.create_table(
        'node_tags',
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.Column('node_id', sa.Integer(), nullable=False,
                  autoincrement=False),
        sa.Column('tag', sa.String(length=255), nullable=False),
        sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ),
        sa.PrimaryKeyConstraint('node_id', 'tag'),
        mysql_ENGINE='InnoDB',
        mysql_DEFAULT_CHARSET='UTF8'
    )
    op.create_index('node_tags_idx', 'node_tags', ['tag'], unique=False)
ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/d2b036ae9378_add_automated_clean_field.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Add automated_clean field Revision ID: d2b036ae9378 Revises: 664f85c2f622 Create Date: 2018-07-25 15:30:20.860792 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'd2b036ae9378' down_revision = '664f85c2f622' def upgrade(): op.add_column('nodes', sa.Column('automated_clean', sa.Boolean(), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/e294876e8028_add_node_network_interface.py0000664000175000017500000000173413652514273031774 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add-node-network-interface Revision ID: e294876e8028 Revises: f6fdb920c182 Create Date: 2016-03-02 14:30:54.402864 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'e294876e8028' down_revision = 'f6fdb920c182' def upgrade(): op.add_column('nodes', sa.Column('network_interface', sa.String(255), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/1a59178ebdf6_add_volume_targets_table.py0000664000175000017500000000426113652514273031672 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add volume_targets table Revision ID: 1a59178ebdf6 Revises: daa1ba02d98 Create Date: 2016-02-25 11:25:29.836535 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '1a59178ebdf6' down_revision = 'daa1ba02d98' def upgrade(): op.create_table('volume_targets', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('node_id', sa.Integer(), nullable=True), sa.Column('volume_type', sa.String(length=64), nullable=True), sa.Column('properties', sa.Text(), nullable=True), sa.Column('boot_index', sa.Integer(), nullable=True), sa.Column('volume_id', sa.String(length=36), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('node_id', 'boot_index', name='uniq_volumetargets0node_id0' 'boot_index'), sa.UniqueConstraint('uuid', name='uniq_volumetargets0uuid'), mysql_charset='utf8', mysql_engine='InnoDB') ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/3ae36a5f5131_add_logical_name.py0000664000175000017500000000177413652514273030004 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_logical_name Revision ID: 3ae36a5f5131 Revises: bb59b63f55a Create Date: 2014-12-10 14:27:26.323540 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '3ae36a5f5131' down_revision = 'bb59b63f55a' def upgrade(): op.add_column('nodes', sa.Column('name', sa.String(length=63), nullable=True)) op.create_unique_constraint('uniq_nodes0name', 'nodes', ['name']) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/dbefd6bdaa2c_add_default_column_to_.py0000664000175000017500000000227013652514273031664 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add default column to ConductorHardwareInterfaces Revision ID: dbefd6bdaa2c Revises: 2353895ecfae Create Date: 2017-01-17 15:28:04.653738 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = 'dbefd6bdaa2c' down_revision = '2353895ecfae' def upgrade(): op.add_column('conductor_hardware_interfaces', sa.Column('default', sa.Boolean, nullable=False, default=False)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/868cb606a74a_add_version_field_in_base_class.py0000664000175000017500000000377413652514273033105 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add version field in base class Revision ID: 868cb606a74a Revises: 3d86a077a3f2 Create Date: 2016-12-15 12:31:31.629237 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '868cb606a74a' down_revision = '3d86a077a3f2' def upgrade(): # NOTE(rloo): In db.sqlalchemy.models, we added the 'version' column # to IronicBase class. All inherited classes/tables have # this new column. 
op.add_column('chassis', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('conductors', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('node_tags', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('nodes', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('portgroups', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('ports', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('volume_connectors', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('volume_targets', sa.Column('version', sa.String(length=15), nullable=True)) op.add_column('conductor_hardware_interfaces', sa.Column('version', sa.String(length=15), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/bcdd431ba0bf_add_fields_for_all_interfaces.py0000664000175000017500000000361113652514273033021 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add fields for all interfaces Revision ID: bcdd431ba0bf Revises: 60cf717201bc Create Date: 2016-11-11 16:44:52.823881 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = 'bcdd431ba0bf' down_revision = '60cf717201bc' def upgrade(): op.add_column('nodes', sa.Column('boot_interface', sa.String(length=255), nullable=True)) op.add_column('nodes', sa.Column('console_interface', sa.String(length=255), nullable=True)) op.add_column('nodes', sa.Column('deploy_interface', sa.String(length=255), nullable=True)) op.add_column('nodes', sa.Column('inspect_interface', sa.String(length=255), nullable=True)) op.add_column('nodes', sa.Column('management_interface', sa.String(length=255), nullable=True)) op.add_column('nodes', sa.Column('power_interface', sa.String(length=255), nullable=True)) op.add_column('nodes', sa.Column('raid_interface', sa.String(length=255), nullable=True)) op.add_column('nodes', sa.Column('vendor_interface', sa.String(length=255), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/60cf717201bc_add_standalone_ports_supported.py0000664000175000017500000000175513652514273033053 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_standalone_ports_supported_to_portgroup Revision ID: 60cf717201bc Revises: c14cef6dfedf Create Date: 2016-08-25 07:00:56.662645 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = '60cf717201bc' down_revision = 'c14cef6dfedf' def upgrade(): op.add_column('portgroups', sa.Column('standalone_ports_supported', sa.Boolean)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/f6fdb920c182_set_pxe_enabled_true.py0000664000175000017500000000223113652514273031020 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Set Port.pxe_enabled to True if NULL Revision ID: f6fdb920c182 Revises: 5ea1b0d310e Create Date: 2016-02-12 16:53:21.008580 """ from alembic import op from sqlalchemy import Boolean, String from sqlalchemy.sql import table, column, null # revision identifiers, used by Alembic. revision = 'f6fdb920c182' down_revision = '5ea1b0d310e' port = table('ports', column('uuid', String(36)), column('pxe_enabled', Boolean())) def upgrade(): op.execute( port.update().where( port.c.pxe_enabled == null()).values( {'pxe_enabled': True})) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/daa1ba02d98_add_volume_connectors_table.py0000664000175000017500000000401113652514273032341 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add volume_connectors table Revision ID: daa1ba02d98 Revises: bcdd431ba0bf Create Date: 2015-11-26 17:19:22.074989 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'daa1ba02d98' down_revision = 'bcdd431ba0bf' def upgrade(): op.create_table('volume_connectors', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('node_id', sa.Integer(), nullable=True), sa.Column('type', sa.String(length=32), nullable=True), sa.Column('connector_id', sa.String(length=255), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('type', 'connector_id', name='uniq_volumeconnectors0type0' 'connector_id'), sa.UniqueConstraint('uuid', name='uniq_volumeconnectors0uuid'), mysql_charset='utf8', mysql_engine='InnoDB') ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/2aac7e0872f6_add_deploy_templates.py0000664000175000017500000000523313652514273031026 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Create deploy_templates and deploy_template_steps tables. Revision ID: 2aac7e0872f6 Revises: 28c44432c9c3 Create Date: 2018-12-27 11:49:15.029650 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '2aac7e0872f6' down_revision = '28c44432c9c3' def upgrade(): op.create_table( 'deploy_templates', sa.Column('version', sa.String(length=15), nullable=True), sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False, autoincrement=True), sa.Column('uuid', sa.String(length=36)), sa.Column('name', sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_deploytemplates0uuid'), sa.UniqueConstraint('name', name='uniq_deploytemplates0name'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_table( 'deploy_template_steps', sa.Column('version', sa.String(length=15), nullable=True), sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False, autoincrement=True), sa.Column('deploy_template_id', sa.Integer(), nullable=False, autoincrement=False), sa.Column('interface', sa.String(length=255), nullable=False), sa.Column('step', sa.String(length=255), nullable=False), sa.Column('args', sa.Text, nullable=False), sa.Column('priority', sa.Integer, nullable=False), sa.PrimaryKeyConstraint('id'), sa.ForeignKeyConstraint(['deploy_template_id'], ['deploy_templates.id']), 
sa.Index('deploy_template_id', 'deploy_template_id'), sa.Index('deploy_template_steps_interface_idx', 'interface'), sa.Index('deploy_template_steps_step_idx', 'step'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/fb3f10dd262e_add_fault_to_node_table.py0000664000175000017500000000172413652514273031517 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add fault to node table Revision ID: fb3f10dd262e Revises: 2d13bc3d6bba Create Date: 2018-03-23 14:10:52.142016 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'fb3f10dd262e' down_revision = '2d13bc3d6bba' def upgrade(): op.add_column('nodes', sa.Column('fault', sa.String(length=255), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/405cfe08f18d_add_rescue_interface_to_node.py0000664000175000017500000000200313652514273032463 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """add rescue interface to nodes Revision ID: 405cfe08f18d Revises: 868cb606a74a Create Date: 2017-02-01 16:32:32.098742 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '405cfe08f18d' down_revision = '868cb606a74a' def upgrade(): op.add_column('nodes', sa.Column('rescue_interface', sa.String(255), nullable=True)) ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/664f85c2f622_add_conductor_group_to_nodes_conductors.pyironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/664f85c2f622_add_conductor_group_to_nodes_conduc0000664000175000017500000000222213652514273033331 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add conductor_group to nodes/conductors Revision ID: 664f85c2f622 Revises: b9117ac17882 Create Date: 2018-07-02 13:21:54.847245 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic.
revision = '664f85c2f622' down_revision = 'b9117ac17882' def upgrade(): op.add_column('conductors', sa.Column('conductor_group', sa.String(length=255), server_default='', nullable=False)) op.add_column('nodes', sa.Column('conductor_group', sa.String(length=255), server_default='', nullable=False)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/2fb93ffd2af1_increase_node_name_length.py0000664000175000017500000000210013652514273032131 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """increase-node-name-length Revision ID: 2fb93ffd2af1 Revises: 4f399b21ae71 Create Date: 2015-03-18 17:08:11.470791 """ from alembic import op import sqlalchemy as sa from sqlalchemy.dialects import mysql # revision identifiers, used by Alembic. revision = '2fb93ffd2af1' down_revision = '4f399b21ae71' def upgrade(): op.alter_column('nodes', 'name', existing_type=mysql.VARCHAR(length=63), type_=sa.String(length=255), existing_nullable=True) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/31baaf680d2b_add_node_instance_info.py0000664000175000017500000000211013652514273031333 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add Node instance info Revision ID: 31baaf680d2b Revises: 3cb628139ea4 Create Date: 2014-03-05 21:09:32.372463 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '31baaf680d2b' down_revision = '3cb628139ea4' def upgrade(): # commands auto generated by Alembic - please adjust op.add_column('nodes', sa.Column('instance_info', sa.Text(), nullable=True)) # end Alembic commands ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/3bea56f25597_add_unique_constraint_to_instance_uuid.pyironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/3bea56f25597_add_unique_constraint_to_instance_u0000664000175000017500000000206213652514273033436 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""add unique constraint to instance_uuid Revision ID: 3bea56f25597 Revises: 31baaf680d2b Create Date: 2014-06-05 11:45:07.046670 """ from alembic import op # revision identifiers, used by Alembic. revision = '3bea56f25597' down_revision = '31baaf680d2b' def upgrade(): op.create_unique_constraint("uniq_nodes0instance_uuid", "nodes", ["instance_uuid"]) op.drop_index('node_instance_uuid', 'nodes') ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/93706939026c_add_node_protected_field.py0000664000175000017500000000214513652514273031325 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add Node.protected field Revision ID: 93706939026c Revises: d2b036ae9378 Create Date: 2018-10-18 14:55:12.489170 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '93706939026c' down_revision = 'd2b036ae9378' def upgrade(): op.add_column('nodes', sa.Column('protected', sa.Boolean(), nullable=False, server_default=sa.false())) op.add_column('nodes', sa.Column('protected_reason', sa.Text(), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/1e1d5ace7dc6_add_inspection_started_at_and_.py0000664000175000017500000000230613652514273033155 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add inspection_started_at and inspection_finished_at Revision ID: 1e1d5ace7dc6 Revises: 3ae36a5f5131 Create Date: 2015-02-26 10:46:46.861927 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '1e1d5ace7dc6' down_revision = '3ae36a5f5131' def upgrade(): op.add_column('nodes', sa.Column('inspection_started_at', sa.DateTime(), nullable=True)) op.add_column('nodes', sa.Column('inspection_finished_at', sa.DateTime(), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/cd2c80feb331_add_node_retired_field.py0000664000175000017500000000213713652514273031332 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add nodes.retired field Revision ID: cd2c80feb331 Revises: ce6c4b3cf5a2 Create Date: 2020-01-16 12:51:13.866882 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = 'cd2c80feb331' down_revision = 'ce6c4b3cf5a2' def upgrade(): op.add_column('nodes', sa.Column('retired', sa.Boolean(), nullable=True, server_default=sa.false())) op.add_column('nodes', sa.Column('retired_reason', sa.Text(), nullable=True)) ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/487deb87cc9d_add_conductor_affinity_and_online.pyironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/487deb87cc9d_add_conductor_affinity_and_online.p0000664000175000017500000000227013652514273033440 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add conductor_affinity and online Revision ID: 487deb87cc9d Revises: 3bea56f25597 Create Date: 2014-09-26 16:16:30.988900 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = '487deb87cc9d' down_revision = '3bea56f25597' def upgrade(): op.add_column( 'conductors', sa.Column('online', sa.Boolean(), default=True)) op.add_column( 'nodes', sa.Column('conductor_affinity', sa.Integer(), sa.ForeignKey('conductors.id', name='nodes_conductor_affinity_fk'), nullable=True)) ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/e918ff30eb42_resize_column_nodes_instance_info.pyironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/e918ff30eb42_resize_column_nodes_instance_info.p0000664000175000017500000000207113652514273033420 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """resize column nodes instance_info Revision ID: e918ff30eb42 Revises: b4130a7fc904 Create Date: 2016-06-28 13:30:19.396203 """ from alembic import op from oslo_db.sqlalchemy import types as db_types # revision identifiers, used by Alembic. revision = 'e918ff30eb42' down_revision = 'b4130a7fc904' def upgrade(): op.alter_column('nodes', 'instance_info', existing_type=db_types.JsonEncodedDict.impl, type_=db_types.JsonEncodedDict(mysql_as_long=True).impl) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/2581ebaf0cb2_initial_migration.py0000664000175000017500000001035313652514273030412 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """initial migration Revision ID: 2581ebaf0cb2 Revises: None Create Date: 2014-01-17 12:14:07.754448 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '2581ebaf0cb2' down_revision = None def upgrade(): # commands auto generated by Alembic - please adjust! op.create_table( 'conductors', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('hostname', sa.String(length=255), nullable=False), sa.Column('drivers', sa.Text(), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('hostname', name='uniq_conductors0hostname'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_table( 'chassis', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.Column('description', sa.String(length=255), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_chassis0uuid'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_table( 'nodes', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('instance_uuid', sa.String(length=36), nullable=True), sa.Column('chassis_id', sa.Integer(), 
nullable=True), sa.Column('power_state', sa.String(length=15), nullable=True), sa.Column('target_power_state', sa.String(length=15), nullable=True), sa.Column('provision_state', sa.String(length=15), nullable=True), sa.Column('target_provision_state', sa.String(length=15), nullable=True), sa.Column('last_error', sa.Text(), nullable=True), sa.Column('properties', sa.Text(), nullable=True), sa.Column('driver', sa.String(length=15), nullable=True), sa.Column('driver_info', sa.Text(), nullable=True), sa.Column('reservation', sa.String(length=255), nullable=True), sa.Column('maintenance', sa.Boolean(), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['chassis_id'], ['chassis.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_nodes0uuid'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) op.create_index('node_instance_uuid', 'nodes', ['instance_uuid'], unique=False) op.create_table( 'ports', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.Column('address', sa.String(length=18), nullable=True), sa.Column('node_id', sa.Integer(), nullable=True), sa.Column('extra', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('address', name='uniq_ports0address'), sa.UniqueConstraint('uuid', name='uniq_ports0uuid'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) # end Alembic commands ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/789acc877671_add_raid_config.py0000664000175000017500000000207413652514273027603 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add node.raid_config and node.target_raid_config Revision ID: 789acc877671 Revises: 2fb93ffd2af1 Create Date: 2015-06-26 01:21:46.062311 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '789acc877671' down_revision = '2fb93ffd2af1' def upgrade(): op.add_column('nodes', sa.Column('raid_config', sa.Text(), nullable=True)) op.add_column('nodes', sa.Column('target_raid_config', sa.Text(), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/9cbeefa3763f_add_port_is_smartnic.py0000664000175000017500000000173113652514273031202 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add is_smartnic port attribute Revision ID: 9cbeefa3763f Revises: dd67b91a1981 Create Date: 2019-01-13 09:31:13.336479 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = '9cbeefa3763f' down_revision = 'dd67b91a1981' def upgrade(): op.add_column('ports', sa.Column('is_smartnic', sa.Boolean(), default=False)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/b9117ac17882_add_node_deploy_step.py0000664000175000017500000000174713652514273030660 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add deploy_step to node Revision ID: b9117ac17882 Revises: fb3f10dd262e Create Date: 2018-06-19 22:31:45.668156 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'b9117ac17882' down_revision = 'fb3f10dd262e' def upgrade(): op.add_column('nodes', sa.Column('deploy_step', sa.Text(), nullable=True)) ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/2353895ecfae_add_conductor_hardware_interfaces_table.pyironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/2353895ecfae_add_conductor_hardware_interfaces_t0000664000175000017500000000401413652514273033433 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add conductor_hardware_interfaces table Revision ID: 2353895ecfae Revises: 1d6951876d68 Create Date: 2016-12-12 15:17:22.065056 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '2353895ecfae' down_revision = '1d6951876d68' def upgrade(): op.create_table('conductor_hardware_interfaces', sa.Column('id', sa.Integer(), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('conductor_id', sa.Integer(), nullable=False), sa.Column('hardware_type', sa.String(length=255), nullable=False), sa.Column('interface_type', sa.String(length=16), nullable=False), sa.Column('interface_name', sa.String(length=255), nullable=False), sa.ForeignKeyConstraint(['conductor_id'], ['conductors.id']), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint( 'conductor_id', 'hardware_type', 'interface_type', 'interface_name', name='uniq_conductorhardwareinterfaces0'), mysql_charset='utf8', mysql_engine='InnoDB') ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/3d86a077a3f2_add_port_physical_network.py0000664000175000017500000000177613652514273032035 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add port physical network Revision ID: 3d86a077a3f2 Revises: dbefd6bdaa2c Create Date: 2017-04-30 17:11:49.384851 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '3d86a077a3f2' down_revision = 'dbefd6bdaa2c' def upgrade(): op.add_column('ports', sa.Column('physical_network', sa.String(64), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/2d13bc3d6bba_add_bios_config_and_interface.py0000664000175000017500000000173013652514273032716 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add bios interface Revision ID: 2d13bc3d6bba Revises: 82c315d60161 Create Date: 2017-09-27 14:42:42.107321 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = '2d13bc3d6bba' down_revision = '82c315d60161' def upgrade(): op.add_column('nodes', sa.Column('bios_interface', sa.String(length=255), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/516faf1bb9b1_resizing_column_nodes_driver.py0000664000175000017500000000175713652514273032677 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Resizing column nodes.driver Revision ID: 516faf1bb9b1 Revises: 789acc877671 Create Date: 2015-08-05 13:27:31.808919 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = '516faf1bb9b1' down_revision = '789acc877671' def upgrade(): op.alter_column('nodes', 'driver', existing_type=sa.String(length=15), type_=sa.String(length=255)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/b2ad35726bb0_add_node_lessee.py0000664000175000017500000000173613652514273027731 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""add node lessee Revision ID: b2ad35726bb0 Revises: ce6c4b3cf5a2 Create Date: 2020-01-07 20:49:50.851441 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'b2ad35726bb0' down_revision = 'cd2c80feb331' def upgrade(): op.add_column('nodes', sa.Column('lessee', sa.String(255), nullable=True)) ironic-15.0.0/ironic/db/sqlalchemy/alembic/versions/82c315d60161_add_bios_settings.py0000664000175000017500000000266313652514273030104 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add bios settings Revision ID: 82c315d60161 Revises: e918ff30eb42 Create Date: 2017-10-11 14:56:47.813290 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = '82c315d60161' down_revision = 'e918ff30eb42' def upgrade(): op.create_table( 'bios_settings', sa.Column('node_id', sa.Integer(), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('name', sa.String(length=255), nullable=False), sa.Column('value', sa.Text(), nullable=True), sa.Column('version', sa.String(length=15), nullable=True), sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ), sa.PrimaryKeyConstraint('node_id', 'name'), mysql_ENGINE='InnoDB', mysql_DEFAULT_CHARSET='UTF8' ) ironic-15.0.0/ironic/db/sqlalchemy/alembic/README0000664000175000017500000000066213652514273021404 0ustar zuulzuul00000000000000Please see https://alembic.readthedocs.org/en/latest/index.html for general documentation To create alembic migrations use: $ ironic-dbsync revision --message "<description>" --autogenerate Stamp db with the most recent migration version, without actually running migrations: $ ironic-dbsync stamp --revision head Upgrade can be performed by: $ ironic-dbsync - for backward compatibility $ ironic-dbsync upgrade $ ironic-dbsync upgrade --revision head ironic-15.0.0/ironic/db/sqlalchemy/alembic/env.py0000664000175000017500000000366313652514273021672 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
from logging import config as log_config from alembic import context from oslo_db.sqlalchemy import enginefacade try: # NOTE(whaom): This is to register the DB2 alembic code which # is an optional runtime dependency. from ibm_db_alembic.ibm_db import IbmDbImpl # noqa except ImportError: pass from ironic.db.sqlalchemy import models # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context.config # Interpret the config file for Python logging. # This line sets up loggers basically. log_config.fileConfig(config.config_file_name) # add your model's MetaData object here # for 'autogenerate' support # from myapp import mymodel target_metadata = models.Base.metadata # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option("my_important_option") # ... etc. def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ engine = enginefacade.writer.get_engine() with engine.connect() as connection: context.configure(connection=connection, target_metadata=target_metadata) with context.begin_transaction(): context.run_migrations() run_migrations_online() ironic-15.0.0/ironic/db/sqlalchemy/models.py0000664000175000017500000003623013652514273020765 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ SQLAlchemy models for baremetal data. """ from os import path from urllib import parse as urlparse from oslo_db import options as db_options from oslo_db.sqlalchemy import models from oslo_db.sqlalchemy import types as db_types from sqlalchemy import Boolean, Column, DateTime, false, Index from sqlalchemy import ForeignKey, Integer from sqlalchemy import schema, String, Text from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import orm from ironic.common import exception from ironic.common.i18n import _ from ironic.conf import CONF _DEFAULT_SQL_CONNECTION = 'sqlite:///' + path.join('$state_path', 'ironic.sqlite') db_options.set_defaults(CONF, connection=_DEFAULT_SQL_CONNECTION) def table_args(): engine_name = urlparse.urlparse(CONF.database.connection).scheme if engine_name == 'mysql': return {'mysql_engine': CONF.database.mysql_engine, 'mysql_charset': "utf8"} return None class IronicBase(models.TimestampMixin, models.ModelBase): metadata = None version = Column(String(15), nullable=True) def as_dict(self): d = {} for c in self.__table__.columns: d[c.name] = self[c.name] return d Base = declarative_base(cls=IronicBase) class Chassis(Base): """Represents a hardware chassis.""" __tablename__ = 'chassis' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_chassis0uuid'), table_args() ) id = Column(Integer, primary_key=True) uuid = Column(String(36)) extra = Column(db_types.JsonEncodedDict) description = Column(String(255), nullable=True) class Conductor(Base): """Represents a conductor service entry.""" __tablename__ = 'conductors' __table_args__ = ( schema.UniqueConstraint('hostname', name='uniq_conductors0hostname'), table_args() ) id = Column(Integer, primary_key=True) hostname = Column(String(255), nullable=False) drivers = Column(db_types.JsonEncodedList) online = Column(Boolean, default=True) conductor_group = 
Column(String(255), nullable=False, default='', server_default='') class ConductorHardwareInterfaces(Base): """Internal table used to track what is loaded on each conductor.""" __tablename__ = 'conductor_hardware_interfaces' __table_args__ = ( schema.UniqueConstraint( 'conductor_id', 'hardware_type', 'interface_type', 'interface_name', name='uniq_conductorhardwareinterfaces0'), table_args()) id = Column(Integer, primary_key=True) conductor_id = Column(Integer, ForeignKey('conductors.id'), nullable=False) hardware_type = Column(String(255), nullable=False) interface_type = Column(String(16), nullable=False) interface_name = Column(String(255), nullable=False) default = Column(Boolean, default=False, nullable=False) class Node(Base): """Represents a bare metal node.""" __tablename__ = 'nodes' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_nodes0uuid'), schema.UniqueConstraint('instance_uuid', name='uniq_nodes0instance_uuid'), schema.UniqueConstraint('name', name='uniq_nodes0name'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36)) # NOTE(tenbrae): we store instance_uuid directly on the node so that we can # filter on it more efficiently, even though it is # user-settable, and would otherwise be in node.properties. 
instance_uuid = Column(String(36), nullable=True) name = Column(String(255), nullable=True) chassis_id = Column(Integer, ForeignKey('chassis.id'), nullable=True) power_state = Column(String(15), nullable=True) target_power_state = Column(String(15), nullable=True) provision_state = Column(String(15), nullable=True) target_provision_state = Column(String(15), nullable=True) provision_updated_at = Column(DateTime, nullable=True) last_error = Column(Text, nullable=True) instance_info = Column(db_types.JsonEncodedDict(mysql_as_long=True)) properties = Column(db_types.JsonEncodedDict) driver = Column(String(255)) driver_info = Column(db_types.JsonEncodedDict) driver_internal_info = Column(db_types.JsonEncodedDict) clean_step = Column(db_types.JsonEncodedDict) deploy_step = Column(db_types.JsonEncodedDict) resource_class = Column(String(80), nullable=True) raid_config = Column(db_types.JsonEncodedDict) target_raid_config = Column(db_types.JsonEncodedDict) # NOTE(tenbrae): this is the host name of the conductor which has # acquired a TaskManager lock on the node. # We should use an INT FK (conductors.id) in the future. reservation = Column(String(255), nullable=True) # NOTE(tenbrae): this is the id of the last conductor which prepared local # state for the node (eg, a PXE config file). # When affinity and the hash ring's mapping do not match, # this indicates that a conductor should rebuild local state. 
conductor_affinity = Column(Integer, ForeignKey('conductors.id', name='nodes_conductor_affinity_fk'), nullable=True) conductor_group = Column(String(255), nullable=False, default='', server_default='') maintenance = Column(Boolean, default=False) maintenance_reason = Column(Text, nullable=True) fault = Column(String(255), nullable=True) console_enabled = Column(Boolean, default=False) inspection_finished_at = Column(DateTime, nullable=True) inspection_started_at = Column(DateTime, nullable=True) extra = Column(db_types.JsonEncodedDict) automated_clean = Column(Boolean, nullable=True) protected = Column(Boolean, nullable=False, default=False, server_default=false()) protected_reason = Column(Text, nullable=True) owner = Column(String(255), nullable=True) lessee = Column(String(255), nullable=True) allocation_id = Column(Integer, ForeignKey('allocations.id'), nullable=True) description = Column(Text, nullable=True) bios_interface = Column(String(255), nullable=True) boot_interface = Column(String(255), nullable=True) console_interface = Column(String(255), nullable=True) deploy_interface = Column(String(255), nullable=True) inspect_interface = Column(String(255), nullable=True) management_interface = Column(String(255), nullable=True) network_interface = Column(String(255), nullable=True) raid_interface = Column(String(255), nullable=True) rescue_interface = Column(String(255), nullable=True) retired = Column(Boolean, nullable=True, default=False, server_default=false()) retired_reason = Column(Text, nullable=True) storage_interface = Column(String(255), nullable=True) power_interface = Column(String(255), nullable=True) vendor_interface = Column(String(255), nullable=True) class Port(Base): """Represents a network port of a bare metal node.""" __tablename__ = 'ports' __table_args__ = ( schema.UniqueConstraint('address', name='uniq_ports0address'), schema.UniqueConstraint('uuid', name='uniq_ports0uuid'), table_args()) id = Column(Integer, primary_key=True) uuid = 
Column(String(36)) address = Column(String(18)) node_id = Column(Integer, ForeignKey('nodes.id'), nullable=True) extra = Column(db_types.JsonEncodedDict) local_link_connection = Column(db_types.JsonEncodedDict) portgroup_id = Column(Integer, ForeignKey('portgroups.id'), nullable=True) pxe_enabled = Column(Boolean, default=True) internal_info = Column(db_types.JsonEncodedDict) physical_network = Column(String(64), nullable=True) is_smartnic = Column(Boolean, nullable=True, default=False) class Portgroup(Base): """Represents a group of network ports of a bare metal node.""" __tablename__ = 'portgroups' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_portgroups0uuid'), schema.UniqueConstraint('address', name='uniq_portgroups0address'), schema.UniqueConstraint('name', name='uniq_portgroups0name'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36)) name = Column(String(255), nullable=True) node_id = Column(Integer, ForeignKey('nodes.id'), nullable=True) address = Column(String(18)) extra = Column(db_types.JsonEncodedDict) internal_info = Column(db_types.JsonEncodedDict) standalone_ports_supported = Column(Boolean, default=True) mode = Column(String(255)) properties = Column(db_types.JsonEncodedDict) class NodeTag(Base): """Represents a tag of a bare metal node.""" __tablename__ = 'node_tags' __table_args__ = ( Index('node_tags_idx', 'tag'), table_args()) node_id = Column(Integer, ForeignKey('nodes.id'), primary_key=True, nullable=False) tag = Column(String(255), primary_key=True, nullable=False) node = orm.relationship( "Node", backref='tags', primaryjoin='and_(NodeTag.node_id == Node.id)', foreign_keys=node_id ) class VolumeConnector(Base): """Represents a volume connector of a bare metal node.""" __tablename__ = 'volume_connectors' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_volumeconnectors0uuid'), schema.UniqueConstraint( 'type', 'connector_id', name='uniq_volumeconnectors0type0connector_id'), table_args()) 
id = Column(Integer, primary_key=True) uuid = Column(String(36)) node_id = Column(Integer, ForeignKey('nodes.id'), nullable=True) type = Column(String(32)) connector_id = Column(String(255)) extra = Column(db_types.JsonEncodedDict) class VolumeTarget(Base): """Represents a volume target of a bare metal node.""" __tablename__ = 'volume_targets' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_volumetargets0uuid'), schema.UniqueConstraint('node_id', 'boot_index', name='uniq_volumetargets0node_id0boot_index'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36)) node_id = Column(Integer, ForeignKey('nodes.id'), nullable=True) volume_type = Column(String(64)) properties = Column(db_types.JsonEncodedDict) boot_index = Column(Integer) volume_id = Column(String(36)) extra = Column(db_types.JsonEncodedDict) class NodeTrait(Base): """Represents a trait of a bare metal node.""" __tablename__ = 'node_traits' __table_args__ = ( Index('node_traits_idx', 'trait'), table_args()) node_id = Column(Integer, ForeignKey('nodes.id'), primary_key=True, nullable=False) trait = Column(String(255), primary_key=True, nullable=False) node = orm.relationship( "Node", backref='traits', primaryjoin='and_(NodeTrait.node_id == Node.id)', foreign_keys=node_id ) class BIOSSetting(Base): """Represents a bios setting of a bare metal node.""" __tablename__ = 'bios_settings' __table_args__ = (table_args()) node_id = Column(Integer, ForeignKey('nodes.id'), primary_key=True, nullable=False) name = Column(String(255), primary_key=True, nullable=False) value = Column(Text, nullable=True) class Allocation(Base): """Represents an allocation of a node for deployment.""" __tablename__ = 'allocations' __table_args__ = ( schema.UniqueConstraint('name', name='uniq_allocations0name'), schema.UniqueConstraint('uuid', name='uniq_allocations0uuid'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36), nullable=False) name = Column(String(255), 
nullable=True) node_id = Column(Integer, ForeignKey('nodes.id'), nullable=True) state = Column(String(15), nullable=False) owner = Column(String(255), nullable=True) last_error = Column(Text, nullable=True) resource_class = Column(String(80), nullable=True) traits = Column(db_types.JsonEncodedList) candidate_nodes = Column(db_types.JsonEncodedList) extra = Column(db_types.JsonEncodedDict) # The last conductor to handle this allocation (internal field). conductor_affinity = Column(Integer, ForeignKey('conductors.id'), nullable=True) class DeployTemplate(Base): """Represents a deployment template.""" __tablename__ = 'deploy_templates' __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_deploytemplates0uuid'), schema.UniqueConstraint('name', name='uniq_deploytemplates0name'), table_args()) id = Column(Integer, primary_key=True) uuid = Column(String(36)) name = Column(String(255), nullable=False) extra = Column(db_types.JsonEncodedDict) class DeployTemplateStep(Base): """Represents a deployment step in a deployment template.""" __tablename__ = 'deploy_template_steps' __table_args__ = ( Index('deploy_template_id', 'deploy_template_id'), Index('deploy_template_steps_interface_idx', 'interface'), Index('deploy_template_steps_step_idx', 'step'), table_args()) id = Column(Integer, primary_key=True) deploy_template_id = Column(Integer, ForeignKey('deploy_templates.id'), nullable=False) interface = Column(String(255), nullable=False) step = Column(String(255), nullable=False) args = Column(db_types.JsonEncodedDict, nullable=False) priority = Column(Integer, nullable=False) deploy_template = orm.relationship( "DeployTemplate", backref='steps', primaryjoin=( 'and_(DeployTemplateStep.deploy_template_id == ' 'DeployTemplate.id)'), foreign_keys=deploy_template_id ) def get_class(model_name): """Returns the model class with the specified name. 
:param model_name: the name of the class :returns: the class with the specified name :raises: Exception if there is no class associated with the name """ for model in Base.__subclasses__(): if model.__name__ == model_name: return model raise exception.IronicException( _("Cannot find model with name: %s") % model_name) ironic-15.0.0/ironic/db/sqlalchemy/migration.py0000664000175000017500000000714013652514273021471 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import alembic from alembic import config as alembic_config import alembic.migration as alembic_migration from oslo_db import exception as db_exc from oslo_db.sqlalchemy import enginefacade from ironic.db.sqlalchemy import models def _alembic_config(): path = os.path.join(os.path.dirname(__file__), 'alembic.ini') config = alembic_config.Config(path) return config def version(config=None, engine=None): """Current database version. :returns: Database version :rtype: string """ if engine is None: engine = enginefacade.writer.get_engine() with engine.connect() as conn: context = alembic_migration.MigrationContext.configure(conn) return context.get_current_revision() def upgrade(revision, config=None): """Used for upgrading database. 
:param revision: Desired database revision :type revision: string """ revision = revision or 'head' config = config or _alembic_config() alembic.command.upgrade(config, revision) def create_schema(config=None, engine=None): """Create database schema from models description. Can be used for initial installation instead of upgrade('head'). """ if engine is None: engine = enginefacade.writer.get_engine() # NOTE(viktors): If we use metadata.create_all() on a non-empty db # schema, it will only add the new tables and leave existing ones # as-is. So we should avoid this situation. if version(engine=engine) is not None: raise db_exc.DBMigrationError("DB schema is already under version" " control. Use upgrade() instead") models.Base.metadata.create_all(engine) stamp('head', config=config) def downgrade(revision, config=None): """Used for downgrading database. :param revision: Desired database revision :type revision: string """ revision = revision or 'base' config = config or _alembic_config() return alembic.command.downgrade(config, revision) def stamp(revision, config=None): """Stamps database with provided revision. Don't run any migrations. :param revision: Should match one from repository or head - to stamp database with most recent revision :type revision: string """ config = config or _alembic_config() return alembic.command.stamp(config, revision=revision) def revision(message=None, autogenerate=False, config=None): """Creates template for migration. :param message: Text that will be used for migration title :type message: string :param autogenerate: If True - generates diff based on current database state :type autogenerate: bool """ config = config or _alembic_config() return alembic.command.revision(config, message=message, autogenerate=autogenerate) ironic-15.0.0/ironic/db/sqlalchemy/alembic.ini0000664000175000017500000000171713652514273021227 0ustar zuulzuul00000000000000# A generic, single database configuration.
[alembic] # path to migration scripts script_location = %(here)s/alembic # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # max length of characters to apply to the # "slug" field #truncate_slug_length = 40 # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false #sqlalchemy.url = driver://user:pass@localhost/dbname # Logging configuration [loggers] keys = root,sqlalchemy,alembic [handlers] keys = console [formatters] keys = generic [logger_root] level = WARN handlers = console qualname = [logger_sqlalchemy] level = WARN handlers = qualname = sqlalchemy.engine [logger_alembic] level = INFO handlers = qualname = alembic [handler_console] class = StreamHandler args = (sys.stderr,) level = NOTSET formatter = generic [formatter_generic] format = %(levelname)-5.5s [%(name)s] %(message)s datefmt = %H:%M:%S ironic-15.0.0/ironic/db/sqlalchemy/__init__.py0000664000175000017500000000000013652514273021223 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/db/__init__.py0000664000175000017500000000000013652514273017061 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/0000775000175000017500000000000013652514443015536 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/functional/0000775000175000017500000000000013652514443017700 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/functional/__init__.py0000664000175000017500000000000013652514273022000 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/0000775000175000017500000000000013652514443016515 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/db/0000775000175000017500000000000013652514443017102 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/db/test_node_traits.py0000664000175000017500000001663313652514273023040 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating NodeTraits via the DB API""" from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils class DbNodeTraitTestCase(base.DbTestCase): def setUp(self): super(DbNodeTraitTestCase, self).setUp() self.node = db_utils.create_test_node() def test_set_node_traits(self): result = self.dbapi.set_node_traits(self.node.id, ['trait1', 'trait2'], '1.0') self.assertEqual(self.node.id, result[0].node_id) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in result]) result = self.dbapi.set_node_traits(self.node.id, [], '1.0') self.assertEqual([], result) def test_set_node_traits_duplicate(self): result = self.dbapi.set_node_traits(self.node.id, ['trait1', 'trait2', 'trait2'], '1.0') self.assertEqual(self.node.id, result[0].node_id) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in result]) def test_set_node_traits_at_limit(self): traits = ['trait%d' % n for n in range(50)] result = self.dbapi.set_node_traits(self.node.id, traits, '1.0') self.assertEqual(self.node.id, result[0].node_id) self.assertItemsEqual(traits, [trait.trait for trait in result]) def test_set_node_traits_over_limit(self): traits = ['trait%d' % n for n in range(51)] self.assertRaises(exception.InvalidParameterValue, self.dbapi.set_node_traits, self.node.id, traits, '1.0') # Ensure the traits were not set. 
result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertEqual([], result) def test_set_node_traits_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.set_node_traits, '1234', ['trait1', 'trait2'], '1.0') def test_get_node_traits_by_node_id(self): db_utils.create_test_node_traits(node_id=self.node.id, traits=['trait1', 'trait2']) result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertEqual(self.node.id, result[0].node_id) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in result]) def test_get_node_traits_empty(self): result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertEqual([], result) def test_get_node_traits_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_traits_by_node_id, '123') def test_unset_node_traits(self): db_utils.create_test_node_traits(node_id=self.node.id, traits=['trait1', 'trait2']) self.dbapi.unset_node_traits(self.node.id) result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertEqual([], result) def test_unset_empty_node_traits(self): self.dbapi.unset_node_traits(self.node.id) result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertEqual([], result) def test_unset_node_traits_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.unset_node_traits, '123') def test_add_node_trait(self): result = self.dbapi.add_node_trait(self.node.id, 'trait1', '1.0') self.assertEqual(self.node.id, result.node_id) self.assertEqual('trait1', result.trait) def test_add_node_trait_duplicate(self): self.dbapi.add_node_trait(self.node.id, 'trait1', '1.0') result = self.dbapi.add_node_trait(self.node.id, 'trait1', '1.0') self.assertEqual(self.node.id, result.node_id) self.assertEqual('trait1', result.trait) result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertEqual(['trait1'], [trait.trait for trait in result]) def test_add_node_trait_at_limit(self): traits = ['trait%d' % 
n for n in range(49)] db_utils.create_test_node_traits(node_id=self.node.id, traits=traits) result = self.dbapi.add_node_trait(self.node.id, 'trait49', '1.0') self.assertEqual(self.node.id, result.node_id) self.assertEqual('trait49', result.trait) def test_add_node_trait_duplicate_at_limit(self): traits = ['trait%d' % n for n in range(50)] db_utils.create_test_node_traits(node_id=self.node.id, traits=traits) result = self.dbapi.add_node_trait(self.node.id, 'trait49', '1.0') self.assertEqual(self.node.id, result.node_id) self.assertEqual('trait49', result.trait) def test_add_node_trait_over_limit(self): traits = ['trait%d' % n for n in range(50)] db_utils.create_test_node_traits(node_id=self.node.id, traits=traits) self.assertRaises(exception.InvalidParameterValue, self.dbapi.add_node_trait, self.node.id, 'trait50', '1.0') # Ensure the trait was not added. result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertNotIn('trait50', [trait.trait for trait in result]) def test_add_node_trait_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.add_node_trait, '123', 'trait1', '1.0') def test_delete_node_trait(self): db_utils.create_test_node_traits(node_id=self.node.id, traits=['trait1', 'trait2']) self.dbapi.delete_node_trait(self.node.id, 'trait1') result = self.dbapi.get_node_traits_by_node_id(self.node.id) self.assertEqual(1, len(result)) self.assertEqual('trait2', result[0].trait) def test_delete_node_trait_not_found(self): self.assertRaises(exception.NodeTraitNotFound, self.dbapi.delete_node_trait, self.node.id, 'trait1') def test_delete_node_trait_node_not_found(self): self.assertRaises(exception.NodeNotFound, self.dbapi.delete_node_trait, '123', 'trait1') def test_node_trait_exists(self): db_utils.create_test_node_traits(node_id=self.node.id, traits=['trait1', 'trait2']) result = self.dbapi.node_trait_exists(self.node.id, 'trait1') self.assertTrue(result) def test_node_trait_not_exists(self): result = 
self.dbapi.node_trait_exists(self.node.id, 'trait1') self.assertFalse(result) def test_node_trait_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.node_trait_exists, '123', 'trait1') ironic-15.0.0/ironic/tests/unit/db/test_nodes.py0000664000175000017500000011326113652514273021630 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating Nodes via the DB API""" import datetime import mock from oslo_utils import timeutils from oslo_utils import uuidutils from ironic.common import exception from ironic.common import states from ironic.tests.unit.db import base from ironic.tests.unit.db import utils class DbNodeTestCase(base.DbTestCase): def test_create_node(self): node = utils.create_test_node() self.assertEqual([], node.tags) self.assertEqual([], node.traits) def test_create_node_with_tags(self): self.assertRaises(exception.InvalidParameterValue, utils.create_test_node, tags=['tag1', 'tag2']) def test_create_node_with_traits(self): self.assertRaises(exception.InvalidParameterValue, utils.create_test_node, traits=['trait1', 'trait2']) def test_create_node_already_exists(self): utils.create_test_node() self.assertRaises(exception.NodeAlreadyExists, utils.create_test_node) def test_create_node_instance_already_associated(self): instance = uuidutils.generate_uuid() utils.create_test_node(uuid=uuidutils.generate_uuid(), 
instance_uuid=instance) self.assertRaises(exception.InstanceAssociated, utils.create_test_node, uuid=uuidutils.generate_uuid(), instance_uuid=instance) def test_create_node_name_duplicate(self): node = utils.create_test_node(name='spam') self.assertRaises(exception.DuplicateName, utils.create_test_node, name=node.name) def test_get_node_by_id(self): node = utils.create_test_node() self.dbapi.set_node_tags(node.id, ['tag1', 'tag2']) utils.create_test_node_traits(node_id=node.id, traits=['trait1', 'trait2']) res = self.dbapi.get_node_by_id(node.id) self.assertEqual(node.id, res.id) self.assertEqual(node.uuid, res.uuid) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in res.tags]) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in res.traits]) def test_get_node_by_uuid(self): node = utils.create_test_node() self.dbapi.set_node_tags(node.id, ['tag1', 'tag2']) utils.create_test_node_traits(node_id=node.id, traits=['trait1', 'trait2']) res = self.dbapi.get_node_by_uuid(node.uuid) self.assertEqual(node.id, res.id) self.assertEqual(node.uuid, res.uuid) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in res.tags]) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in res.traits]) def test_get_node_by_name(self): node = utils.create_test_node() self.dbapi.set_node_tags(node.id, ['tag1', 'tag2']) utils.create_test_node_traits(node_id=node.id, traits=['trait1', 'trait2']) res = self.dbapi.get_node_by_name(node.name) self.assertEqual(node.id, res.id) self.assertEqual(node.uuid, res.uuid) self.assertEqual(node.name, res.name) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in res.tags]) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in res.traits]) def test_get_node_that_does_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_id, 99) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_uuid, '12345678-9999-0000-aaaa-123456789012') 
self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_name, 'spam-eggs-bacon-spam') def test_get_nodeinfo_list_defaults(self): node_id_list = [] for i in range(1, 6): node = utils.create_test_node(uuid=uuidutils.generate_uuid()) node_id_list.append(node.id) res = [i[0] for i in self.dbapi.get_nodeinfo_list()] self.assertEqual(sorted(res), sorted(node_id_list)) def test_get_nodeinfo_list_with_cols(self): uuids = {} extras = {} for i in range(1, 6): uuid = uuidutils.generate_uuid() extra = {'foo': i} node = utils.create_test_node(extra=extra, uuid=uuid) uuids[node.id] = uuid extras[node.id] = extra res = self.dbapi.get_nodeinfo_list(columns=['id', 'extra', 'uuid']) self.assertEqual(extras, dict((r[0], r[1]) for r in res)) self.assertEqual(uuids, dict((r[0], r[2]) for r in res)) def test_get_nodeinfo_list_with_filters(self): node1 = utils.create_test_node( driver='driver-one', instance_uuid=uuidutils.generate_uuid(), reservation='fake-host', uuid=uuidutils.generate_uuid()) node2 = utils.create_test_node( driver='driver-two', uuid=uuidutils.generate_uuid(), maintenance=True, fault='boom', resource_class='foo', conductor_group='group1') node3 = utils.create_test_node( driver='driver-one', uuid=uuidutils.generate_uuid(), reservation='another-fake-host') res = self.dbapi.get_nodeinfo_list(filters={'driver': 'driver-one'}) self.assertEqual(sorted([node1.id, node3.id]), sorted([r[0] for r in res])) res = self.dbapi.get_nodeinfo_list(filters={'driver': 'bad-driver'}) self.assertEqual([], [r[0] for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'associated': True}) self.assertEqual([node1.id], [r[0] for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'associated': False}) self.assertEqual(sorted([node2.id, node3.id]), sorted([r[0] for r in res])) res = self.dbapi.get_nodeinfo_list(filters={'reserved': True}) self.assertEqual(sorted([node1.id, node3.id]), sorted([r[0] for r in res])) res = self.dbapi.get_nodeinfo_list(filters={'reserved': False}) 
self.assertEqual([node2.id], [r[0] for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'maintenance': True}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'maintenance': False}) self.assertEqual(sorted([node1.id, node3.id]), sorted([r.id for r in res])) res = self.dbapi.get_nodeinfo_list(filters={'fault': 'boom'}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'fault': 'moob'}) self.assertEqual([], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'resource_class': 'foo'}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list( filters={'conductor_group': 'group1'}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list( filters={'conductor_group': 'group2'}) self.assertEqual([], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list( filters={'reserved_by_any_of': ['fake-host', 'another-fake-host']}) self.assertEqual(sorted([node1.id, node3.id]), sorted([r.id for r in res])) res = self.dbapi.get_nodeinfo_list(filters={'id': node1.id}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'uuid': node1.uuid}) self.assertEqual([node1.id], [r.id for r in res]) # ensure unknown filters explode filters = {'bad_filter': 'foo'} self.assertRaisesRegex(ValueError, 'bad_filter', self.dbapi.get_nodeinfo_list, filters=filters) # even with good filters present filters = {'bad_filter': 'foo', 'id': node1.id} self.assertRaisesRegex(ValueError, 'bad_filter', self.dbapi.get_nodeinfo_list, filters=filters) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_nodeinfo_list_provision(self, mock_utcnow): past = datetime.datetime(2000, 1, 1, 0, 0) next = past + datetime.timedelta(minutes=8) present = past + datetime.timedelta(minutes=10) mock_utcnow.return_value = past # node with provision_updated timeout node1 = 
utils.create_test_node(uuid=uuidutils.generate_uuid(), provision_updated_at=past, provision_state=states.DEPLOYING) # node with None in provision_updated_at node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(), provision_state=states.DEPLOYWAIT) # node without timeout utils.create_test_node(uuid=uuidutils.generate_uuid(), provision_updated_at=next) mock_utcnow.return_value = present res = self.dbapi.get_nodeinfo_list(filters={'provisioned_before': 300}) self.assertEqual([node1.id], [r[0] for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'provision_state': states.DEPLOYWAIT}) self.assertEqual([node2.id], [r[0] for r in res]) res = self.dbapi.get_nodeinfo_list( filters={'provision_state_in': [states.ACTIVE, states.DEPLOYING]}) self.assertEqual([node1.id], [r[0] for r in res]) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_nodeinfo_list_inspection(self, mock_utcnow): past = datetime.datetime(2000, 1, 1, 0, 0) next = past + datetime.timedelta(minutes=8) present = past + datetime.timedelta(minutes=10) mock_utcnow.return_value = past # node with provision_updated timeout node1 = utils.create_test_node(uuid=uuidutils.generate_uuid(), inspection_started_at=past) # node with None in provision_updated_at node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(), provision_state=states.INSPECTING) # node without timeout utils.create_test_node(uuid=uuidutils.generate_uuid(), inspection_started_at=next) mock_utcnow.return_value = present res = self.dbapi.get_nodeinfo_list( filters={'inspection_started_before': 300}) self.assertEqual([node1.id], [r[0] for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'provision_state': states.INSPECTING}) self.assertEqual([node2.id], [r[0] for r in res]) def test_get_nodeinfo_list_description(self): node1 = utils.create_test_node(uuid=uuidutils.generate_uuid(), description='Hello') node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(), description='World!') res = 
self.dbapi.get_nodeinfo_list( filters={'description_contains': 'Hello'}) self.assertEqual([node1.id], [r[0] for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'description_contains': 'World!'}) self.assertEqual([node2.id], [r[0] for r in res]) def test_get_node_list(self): uuids = [] for i in range(1, 6): node = utils.create_test_node(uuid=uuidutils.generate_uuid()) uuids.append(str(node['uuid'])) res = self.dbapi.get_node_list() res_uuids = [r.uuid for r in res] self.assertCountEqual(uuids, res_uuids) for r in res: self.assertEqual([], r.tags) self.assertEqual([], r.traits) def test_get_node_list_with_filters(self): ch1 = utils.create_test_chassis(uuid=uuidutils.generate_uuid()) ch2 = utils.create_test_chassis(uuid=uuidutils.generate_uuid()) node1 = utils.create_test_node( driver='driver-one', instance_uuid=uuidutils.generate_uuid(), reservation='fake-host', uuid=uuidutils.generate_uuid(), chassis_id=ch1['id']) node2 = utils.create_test_node( driver='driver-two', uuid=uuidutils.generate_uuid(), chassis_id=ch2['id'], maintenance=True, fault='boom', resource_class='foo', conductor_group='group1', power_state='power on') res = self.dbapi.get_node_list(filters={'chassis_uuid': ch1['uuid']}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'chassis_uuid': ch2['uuid']}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'driver': 'driver-one'}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'driver': 'bad-driver'}) self.assertEqual([], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'associated': True}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'associated': False}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'reserved': True}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'reserved': False}) 
self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'maintenance': True}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'maintenance': False}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'fault': 'boom'}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_nodeinfo_list(filters={'fault': 'moob'}) self.assertEqual([], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'resource_class': 'foo'}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'conductor_group': 'group1'}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'conductor_group': 'group2'}) self.assertEqual([], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'id': node1.id}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'uuid': node1.uuid}) self.assertEqual([node1.id], [r.id for r in res]) uuids = [uuidutils.generate_uuid(), node1.uuid, uuidutils.generate_uuid()] res = self.dbapi.get_node_list(filters={'uuid_in': uuids}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'with_power_state': True}) self.assertEqual([node2.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'with_power_state': False}) self.assertEqual([node1.id], [r.id for r in res]) # ensure unknown filters explode filters = {'bad_filter': 'foo'} self.assertRaisesRegex(ValueError, 'bad_filter', self.dbapi.get_node_list, filters=filters) # even with good filters present filters = {'bad_filter': 'foo', 'id': node1.id} self.assertRaisesRegex(ValueError, 'bad_filter', self.dbapi.get_node_list, filters=filters) def test_get_node_list_filter_by_project(self): utils.create_test_node(uuid=uuidutils.generate_uuid()) node2 = utils.create_test_node( uuid=uuidutils.generate_uuid(), owner='project1', 
lessee='project2', ) node3 = utils.create_test_node( uuid=uuidutils.generate_uuid(), owner='project2', ) node4 = utils.create_test_node( uuid=uuidutils.generate_uuid(), owner='project1', lessee='project3', ) res = self.dbapi.get_node_list(filters={'project': 'project1'}) self.assertEqual([node2.id, node4.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'project': 'project2'}) self.assertEqual([node2.id, node3.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'project': 'project3'}) self.assertEqual([node4.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={'project': 'flargle'}) self.assertEqual([], [r.id for r in res]) def test_get_node_list_description(self): node1 = utils.create_test_node(uuid=uuidutils.generate_uuid(), description='Hello') node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(), description='World!') res = self.dbapi.get_node_list(filters={ 'description_contains': 'Hello'}) self.assertEqual([node1.id], [r.id for r in res]) res = self.dbapi.get_node_list(filters={ 'description_contains': 'World!'}) self.assertEqual([node2.id], [r.id for r in res]) def test_get_node_list_chassis_not_found(self): self.assertRaises(exception.ChassisNotFound, self.dbapi.get_node_list, {'chassis_uuid': uuidutils.generate_uuid()}) def test_get_node_by_instance(self): node = utils.create_test_node( instance_uuid='12345678-9999-0000-aaaa-123456789012') self.dbapi.set_node_tags(node.id, ['tag1', 'tag2']) utils.create_test_node_traits(node_id=node.id, traits=['trait1', 'trait2']) res = self.dbapi.get_node_by_instance(node.instance_uuid) self.assertEqual(node.uuid, res.uuid) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in res.tags]) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in res.traits]) def test_get_node_by_instance_wrong_uuid(self): utils.create_test_node( instance_uuid='12345678-9999-0000-aaaa-123456789012') self.assertRaises(exception.InstanceNotFound, 
self.dbapi.get_node_by_instance, '12345678-9999-0000-bbbb-123456789012') def test_get_node_by_instance_invalid_uuid(self): self.assertRaises(exception.InvalidUUID, self.dbapi.get_node_by_instance, 'fake_uuid') def test_destroy_node(self): node = utils.create_test_node() self.dbapi.destroy_node(node.id) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_id, node.id) def test_destroy_node_by_uuid(self): node = utils.create_test_node() self.dbapi.destroy_node(node.uuid) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_uuid, node.uuid) def test_destroy_node_that_does_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.destroy_node, '12345678-9999-0000-aaaa-123456789012') def test_ports_get_destroyed_after_destroying_a_node(self): node = utils.create_test_node() port = utils.create_test_port(node_id=node.id) self.dbapi.destroy_node(node.id) self.assertRaises(exception.PortNotFound, self.dbapi.get_port_by_id, port.id) def test_ports_get_destroyed_after_destroying_a_node_by_uuid(self): node = utils.create_test_node() port = utils.create_test_port(node_id=node.id) self.dbapi.destroy_node(node.uuid) self.assertRaises(exception.PortNotFound, self.dbapi.get_port_by_id, port.id) def test_tags_get_destroyed_after_destroying_a_node(self): node = utils.create_test_node() tag = utils.create_test_node_tag(node_id=node.id) self.assertTrue(self.dbapi.node_tag_exists(node.id, tag.tag)) self.dbapi.destroy_node(node.id) self.assertRaises(exception.NodeNotFound, self.dbapi.node_tag_exists, node.id, tag.tag) def test_tags_get_destroyed_after_destroying_a_node_by_uuid(self): node = utils.create_test_node() tag = utils.create_test_node_tag(node_id=node.id) self.assertTrue(self.dbapi.node_tag_exists(node.id, tag.tag)) self.dbapi.destroy_node(node.uuid) self.assertRaises(exception.NodeNotFound, self.dbapi.node_tag_exists, node.id, tag.tag) def test_volume_connector_get_destroyed_after_destroying_a_node(self): node = utils.create_test_node() 
connector = utils.create_test_volume_connector(node_id=node.id) self.dbapi.destroy_node(node.id) self.assertRaises(exception.VolumeConnectorNotFound, self.dbapi.get_volume_connector_by_id, connector.id) def test_volume_connector_get_destroyed_after_destroying_a_node_uuid(self): node = utils.create_test_node() connector = utils.create_test_volume_connector(node_id=node.id) self.dbapi.destroy_node(node.uuid) self.assertRaises(exception.VolumeConnectorNotFound, self.dbapi.get_volume_connector_by_id, connector.id) def test_volume_target_gets_destroyed_after_destroying_a_node(self): node = utils.create_test_node() target = utils.create_test_volume_target(node_id=node.id) self.dbapi.destroy_node(node.id) self.assertRaises(exception.VolumeTargetNotFound, self.dbapi.get_volume_target_by_id, target.id) def test_volume_target_gets_destroyed_after_destroying_a_node_uuid(self): node = utils.create_test_node() target = utils.create_test_volume_target(node_id=node.id) self.dbapi.destroy_node(node.uuid) self.assertRaises(exception.VolumeTargetNotFound, self.dbapi.get_volume_target_by_id, target.id) def test_traits_get_destroyed_after_destroying_a_node(self): node = utils.create_test_node() trait = utils.create_test_node_trait(node_id=node.id) self.assertTrue(self.dbapi.node_trait_exists(node.id, trait.trait)) self.dbapi.destroy_node(node.id) self.assertRaises(exception.NodeNotFound, self.dbapi.node_trait_exists, node.id, trait.trait) def test_traits_get_destroyed_after_destroying_a_node_by_uuid(self): node = utils.create_test_node() trait = utils.create_test_node_trait(node_id=node.id) self.assertTrue(self.dbapi.node_trait_exists(node.id, trait.trait)) self.dbapi.destroy_node(node.uuid) self.assertRaises(exception.NodeNotFound, self.dbapi.node_trait_exists, node.id, trait.trait) def test_allocations_get_destroyed_after_destroying_a_node_by_uuid(self): node = utils.create_test_node() allocation = utils.create_test_allocation(node_id=node.id) self.dbapi.destroy_node(node.uuid) 
self.assertRaises(exception.AllocationNotFound, self.dbapi.get_allocation_by_id, allocation.id) def test_update_node(self): node = utils.create_test_node() old_extra = node.extra new_extra = {'foo': 'bar'} self.assertNotEqual(old_extra, new_extra) res = self.dbapi.update_node(node.id, {'extra': new_extra}) self.assertEqual(new_extra, res.extra) self.assertEqual([], res.tags) self.assertEqual([], res.traits) def test_update_node_with_tags(self): node = utils.create_test_node() tag = utils.create_test_node_tag(node_id=node.id) old_extra = node.extra new_extra = {'foo': 'bar'} self.assertNotEqual(old_extra, new_extra) res = self.dbapi.update_node(node.id, {'extra': new_extra}) self.assertEqual([tag.tag], [t.tag for t in res.tags]) def test_update_node_with_traits(self): node = utils.create_test_node() trait = utils.create_test_node_trait(node_id=node.id) old_extra = node.extra new_extra = {'foo': 'bar'} self.assertNotEqual(old_extra, new_extra) res = self.dbapi.update_node(node.id, {'extra': new_extra}) self.assertEqual([trait.trait], [t.trait for t in res.traits]) def test_update_node_not_found(self): node_uuid = uuidutils.generate_uuid() new_extra = {'foo': 'bar'} self.assertRaises(exception.NodeNotFound, self.dbapi.update_node, node_uuid, {'extra': new_extra}) def test_update_node_uuid(self): node = utils.create_test_node() self.assertRaises(exception.InvalidParameterValue, self.dbapi.update_node, node.id, {'uuid': ''}) def test_update_node_associate_and_disassociate(self): node = utils.create_test_node() new_i_uuid = uuidutils.generate_uuid() res = self.dbapi.update_node(node.id, {'instance_uuid': new_i_uuid}) self.assertEqual(new_i_uuid, res.instance_uuid) res = self.dbapi.update_node(node.id, {'instance_uuid': None}) self.assertIsNone(res.instance_uuid) def test_update_node_instance_already_associated(self): node1 = utils.create_test_node(uuid=uuidutils.generate_uuid()) new_i_uuid = uuidutils.generate_uuid() self.dbapi.update_node(node1.id, {'instance_uuid': 
new_i_uuid}) node2 = utils.create_test_node(uuid=uuidutils.generate_uuid()) self.assertRaises(exception.InstanceAssociated, self.dbapi.update_node, node2.id, {'instance_uuid': new_i_uuid}) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_update_node_provision(self, mock_utcnow): mocked_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = mocked_time node = utils.create_test_node() res = self.dbapi.update_node(node.id, {'provision_state': 'fake'}) self.assertEqual(mocked_time, timeutils.normalize_time(res['provision_updated_at'])) def test_update_node_name_duplicate(self): node1 = utils.create_test_node(uuid=uuidutils.generate_uuid(), name='spam') node2 = utils.create_test_node(uuid=uuidutils.generate_uuid()) self.assertRaises(exception.DuplicateName, self.dbapi.update_node, node2.id, {'name': node1.name}) def test_update_node_no_provision(self): node = utils.create_test_node() res = self.dbapi.update_node(node.id, {'extra': {'foo': 'bar'}}) self.assertIsNone(res['provision_updated_at']) self.assertIsNone(res['inspection_started_at']) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_update_node_inspection_started_at(self, mock_utcnow): mocked_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = mocked_time node = utils.create_test_node(uuid=uuidutils.generate_uuid(), inspection_started_at=mocked_time) res = self.dbapi.update_node(node.id, {'provision_state': 'fake'}) result = res['inspection_started_at'] self.assertEqual(mocked_time, timeutils.normalize_time(result)) self.assertIsNone(res['inspection_finished_at']) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_update_node_inspection_finished_at(self, mock_utcnow): mocked_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = mocked_time node = utils.create_test_node(uuid=uuidutils.generate_uuid(), inspection_finished_at=mocked_time) res = self.dbapi.update_node(node.id, {'provision_state': 'fake'}) result = 
res['inspection_finished_at'] self.assertEqual(mocked_time, timeutils.normalize_time(result)) self.assertIsNone(res['inspection_started_at']) def test_reserve_node(self): node = utils.create_test_node() self.dbapi.set_node_tags(node.id, ['tag1', 'tag2']) utils.create_test_node_traits(node_id=node.id, traits=['trait1', 'trait2']) uuid = node.uuid r1 = 'fake-reservation' # reserve the node res = self.dbapi.reserve_node(r1, uuid) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in res.tags]) self.assertItemsEqual(['trait1', 'trait2'], [trait.trait for trait in res.traits]) # check reservation res = self.dbapi.get_node_by_uuid(uuid) self.assertEqual(r1, res.reservation) def test_release_reservation(self): node = utils.create_test_node() uuid = node.uuid r1 = 'fake-reservation' self.dbapi.reserve_node(r1, uuid) # release reservation self.dbapi.release_node(r1, uuid) res = self.dbapi.get_node_by_uuid(uuid) self.assertIsNone(res.reservation) def test_reservation_of_reserved_node_fails(self): node = utils.create_test_node() uuid = node.uuid r1 = 'fake-reservation' r2 = 'another-reservation' # reserve the node self.dbapi.reserve_node(r1, uuid) # another host fails to reserve or release self.assertRaises(exception.NodeLocked, self.dbapi.reserve_node, r2, uuid) self.assertRaises(exception.NodeLocked, self.dbapi.release_node, r2, uuid) def test_reservation_after_release(self): node = utils.create_test_node() uuid = node.uuid r1 = 'fake-reservation' r2 = 'another-reservation' self.dbapi.reserve_node(r1, uuid) self.dbapi.release_node(r1, uuid) # another host succeeds self.dbapi.reserve_node(r2, uuid) res = self.dbapi.get_node_by_uuid(uuid) self.assertEqual(r2, res.reservation) def test_reservation_in_exception_message(self): node = utils.create_test_node() uuid = node.uuid r = 'fake-reservation' self.dbapi.reserve_node(r, uuid) exc = self.assertRaises(exception.NodeLocked, self.dbapi.reserve_node, 'another', uuid) self.assertIn(r, str(exc)) def 
test_reservation_non_existent_node(self): node = utils.create_test_node() self.dbapi.destroy_node(node.id) self.assertRaises(exception.NodeNotFound, self.dbapi.reserve_node, 'fake', node.id) self.assertRaises(exception.NodeNotFound, self.dbapi.reserve_node, 'fake', node.uuid) def test_release_non_existent_node(self): node = utils.create_test_node() self.dbapi.destroy_node(node.id) self.assertRaises(exception.NodeNotFound, self.dbapi.release_node, 'fake', node.id) self.assertRaises(exception.NodeNotFound, self.dbapi.release_node, 'fake', node.uuid) def test_release_non_locked_node(self): node = utils.create_test_node() self.assertIsNone(node.reservation) self.assertRaises(exception.NodeNotLocked, self.dbapi.release_node, 'fake', node.id) self.assertRaises(exception.NodeNotLocked, self.dbapi.release_node, 'fake', node.uuid) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_touch_node_provisioning(self, mock_utcnow): test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time node = utils.create_test_node() # assert provision_updated_at is None self.assertIsNone(node.provision_updated_at) self.dbapi.touch_node_provisioning(node.uuid) node = self.dbapi.get_node_by_uuid(node.uuid) # assert provision_updated_at has been updated self.assertEqual(test_time, timeutils.normalize_time(node.provision_updated_at)) def test_touch_node_provisioning_not_found(self): self.assertRaises( exception.NodeNotFound, self.dbapi.touch_node_provisioning, uuidutils.generate_uuid()) def test_get_node_by_port_addresses(self): wrong_node = utils.create_test_node( driver='driver-one', uuid=uuidutils.generate_uuid()) node = utils.create_test_node( driver='driver-two', uuid=uuidutils.generate_uuid()) addresses = [] for i in (1, 2, 3): address = '52:54:00:cf:2d:4%s' % i utils.create_test_port(uuid=uuidutils.generate_uuid(), node_id=node.id, address=address) if i > 1: addresses.append(address) utils.create_test_port(uuid=uuidutils.generate_uuid(), 
node_id=wrong_node.id, address='aa:bb:cc:dd:ee:ff') res = self.dbapi.get_node_by_port_addresses(addresses) self.assertEqual(node.uuid, res.uuid) self.assertEqual([], res.traits) def test_get_node_by_port_addresses_not_found(self): node = utils.create_test_node( driver='driver', uuid=uuidutils.generate_uuid()) utils.create_test_port(uuid=uuidutils.generate_uuid(), node_id=node.id, address='aa:bb:cc:dd:ee:ff') self.assertRaisesRegex(exception.NodeNotFound, 'was not found', self.dbapi.get_node_by_port_addresses, ['11:22:33:44:55:66']) def test_get_node_by_port_addresses_multiple_found(self): node1 = utils.create_test_node( driver='driver', uuid=uuidutils.generate_uuid()) node2 = utils.create_test_node( driver='driver', uuid=uuidutils.generate_uuid()) addresses = ['52:54:00:cf:2d:4%s' % i for i in (1, 2)] utils.create_test_port(uuid=uuidutils.generate_uuid(), node_id=node1.id, address=addresses[0]) utils.create_test_port(uuid=uuidutils.generate_uuid(), node_id=node2.id, address=addresses[1]) self.assertRaisesRegex(exception.NodeNotFound, 'Multiple nodes', self.dbapi.get_node_by_port_addresses, addresses) def test_check_node_list(self): node1 = utils.create_test_node(uuid=uuidutils.generate_uuid()) node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(), name='node_2') node3 = utils.create_test_node(uuid=uuidutils.generate_uuid(), name='node_3') mapping = self.dbapi.check_node_list([node1.uuid, node2.name, node3.uuid]) self.assertEqual({node1.uuid: node1.uuid, node2.name: node2.uuid, node3.uuid: node3.uuid}, mapping) def test_check_node_list_non_existing(self): node1 = utils.create_test_node(uuid=uuidutils.generate_uuid()) node2 = utils.create_test_node(uuid=uuidutils.generate_uuid(), name='node_2') uuid = uuidutils.generate_uuid() exc = self.assertRaises(exception.NodeNotFound, self.dbapi.check_node_list, [node1.uuid, uuid, 'could-be-a-name', node2.name]) self.assertIn(uuid, str(exc)) self.assertIn('could-be-a-name', str(exc)) def 
test_check_node_list_impossible(self): node1 = utils.create_test_node(uuid=uuidutils.generate_uuid()) exc = self.assertRaises(exception.NodeNotFound, self.dbapi.check_node_list, [node1.uuid, 'this/cannot/be/a/name']) self.assertIn('this/cannot/be/a/name', str(exc)) ironic-15.0.0/ironic/tests/unit/db/test_ports.py0000664000175000017500000001702613652514273021671 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating Ports via the DB API""" from oslo_utils import uuidutils from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils class DbPortTestCase(base.DbTestCase): def setUp(self): # This method creates a port for every test, so there is no # need for a separate test that creates a port. 
super(DbPortTestCase, self).setUp() self.node = db_utils.create_test_node(owner='12345') self.portgroup = db_utils.create_test_portgroup(node_id=self.node.id) self.port = db_utils.create_test_port(node_id=self.node.id, portgroup_id=self.portgroup.id) def test_get_port_by_id(self): res = self.dbapi.get_port_by_id(self.port.id) self.assertEqual(self.port.address, res.address) def test_get_port_by_uuid(self): res = self.dbapi.get_port_by_uuid(self.port.uuid) self.assertEqual(self.port.id, res.id) def test_get_port_by_address(self): res = self.dbapi.get_port_by_address(self.port.address) self.assertEqual(self.port.id, res.id) def test_get_port_by_address_filter_by_owner(self): res = self.dbapi.get_port_by_address(self.port.address, owner=self.node.owner) self.assertEqual(self.port.id, res.id) def test_get_port_by_address_filter_by_owner_no_match(self): self.assertRaises(exception.PortNotFound, self.dbapi.get_port_by_address, self.port.address, owner='54321') def test_get_port_list(self): uuids = [] for i in range(1, 6): port = db_utils.create_test_port(uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:4%s' % i) uuids.append(str(port.uuid)) # Also add the uuid for the port created in setUp() uuids.append(str(self.port.uuid)) res = self.dbapi.get_port_list() res_uuids = [r.uuid for r in res] self.assertCountEqual(uuids, res_uuids) def test_get_port_list_sorted(self): uuids = [] for i in range(1, 6): port = db_utils.create_test_port(uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:4%s' % i) uuids.append(str(port.uuid)) # Also add the uuid for the port created in setUp() uuids.append(str(self.port.uuid)) res = self.dbapi.get_port_list(sort_key='uuid') res_uuids = [r.uuid for r in res] self.assertEqual(sorted(uuids), res_uuids) self.assertRaises(exception.InvalidParameterValue, self.dbapi.get_port_list, sort_key='foo') def test_get_port_list_filter_by_node_owner(self): uuids = [] for i in range(1, 3): port = 
db_utils.create_test_port(uuid=uuidutils.generate_uuid(), address='52:54:00:cf:2d:4%s' % i) for i in range(4, 6): port = db_utils.create_test_port(uuid=uuidutils.generate_uuid(), node_id=self.node.id, address='52:54:00:cf:2d:4%s' % i) uuids.append(str(port.uuid)) # Also add the uuid for the port created in setUp() uuids.append(str(self.port.uuid)) res = self.dbapi.get_port_list(owner=self.node.owner) res_uuids = [r.uuid for r in res] self.assertCountEqual(uuids, res_uuids) def test_get_ports_by_node_id(self): res = self.dbapi.get_ports_by_node_id(self.node.id) self.assertEqual(self.port.address, res[0].address) def test_get_ports_by_node_id_filter_by_node_owner(self): res = self.dbapi.get_ports_by_node_id(self.node.id, owner=self.node.owner) self.assertEqual(self.port.address, res[0].address) def test_get_ports_by_node_id_filter_by_node_owner_no_match(self): res = self.dbapi.get_ports_by_node_id(self.node.id, owner='54321') self.assertEqual([], res) def test_get_ports_by_node_id_that_does_not_exist(self): self.assertEqual([], self.dbapi.get_ports_by_node_id(99)) def test_get_ports_by_portgroup_id(self): res = self.dbapi.get_ports_by_portgroup_id(self.portgroup.id) self.assertEqual(self.port.address, res[0].address) def test_get_ports_by_portgroup_id_filter_by_node_owner(self): res = self.dbapi.get_ports_by_portgroup_id(self.portgroup.id, owner=self.node.owner) self.assertEqual(self.port.address, res[0].address) def test_get_ports_by_portgroup_id_filter_by_node_owner_no_match(self): res = self.dbapi.get_ports_by_portgroup_id(self.portgroup.id, owner='54321') self.assertEqual([], res) def test_get_ports_by_portgroup_id_that_does_not_exist(self): self.assertEqual([], self.dbapi.get_ports_by_portgroup_id(99)) def test_destroy_port(self): self.dbapi.destroy_port(self.port.id) self.assertRaises(exception.PortNotFound, self.dbapi.destroy_port, self.port.id) def test_update_port(self): old_address = self.port.address new_address = 'ff.ee.dd.cc.bb.aa' 
self.assertNotEqual(old_address, new_address) res = self.dbapi.update_port(self.port.id, {'address': new_address}) self.assertEqual(new_address, res.address) def test_update_port_uuid(self): self.assertRaises(exception.InvalidParameterValue, self.dbapi.update_port, self.port.id, {'uuid': ''}) def test_update_port_duplicated_address(self): address1 = self.port.address address2 = 'aa-bb-cc-11-22-33' port2 = db_utils.create_test_port(uuid=uuidutils.generate_uuid(), node_id=self.node.id, address=address2) self.assertRaises(exception.MACAlreadyExists, self.dbapi.update_port, port2.id, {'address': address1}) def test_create_port_duplicated_address(self): self.assertRaises(exception.MACAlreadyExists, db_utils.create_test_port, uuid=uuidutils.generate_uuid(), node_id=self.node.id, address=self.port.address) def test_create_port_duplicated_uuid(self): self.assertRaises(exception.PortAlreadyExists, db_utils.create_test_port, uuid=self.port.uuid, node_id=self.node.id, address='aa-bb-cc-33-11-22') ironic-15.0.0/ironic/tests/unit/db/test_bios_settings.py0000664000175000017500000001420513652514273023372 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
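The port tests above rely on the DB layer rejecting a second port with an already-registered MAC address (`MACAlreadyExists`). Below is a minimal, self-contained sketch of that bookkeeping using a plain dict; `PortRegistry`, its `_normalize` helper, and the case/separator-insensitive comparison are illustrative assumptions, not ironic's actual implementation (ironic enforces this with a DB uniqueness constraint on the ports table).

```python
# Hypothetical sketch of duplicate-MAC detection -- NOT ironic's code.
# PortRegistry stands in for the DB-level unique constraint on addresses.

class MACAlreadyExists(Exception):
    """Raised when a MAC address is already registered."""


class PortRegistry:
    def __init__(self):
        self._by_address = {}  # normalized MAC -> port uuid

    @staticmethod
    def _normalize(address):
        # Compare MACs case-insensitively and ignore separator style,
        # so 'AA-BB-...' and 'aa:bb:...' collide (an assumption made
        # here for illustration).
        return address.lower().replace('-', ':').replace('.', ':')

    def add(self, uuid, address):
        key = self._normalize(address)
        if key in self._by_address:
            raise MACAlreadyExists(address)
        self._by_address[key] = uuid


reg = PortRegistry()
reg.add('port-1', 'aa:bb:cc:dd:ee:ff')
try:
    # Same MAC, different case and separator style.
    reg.add('port-2', 'AA-BB-CC-DD-EE-FF')
    duplicate_detected = False
except MACAlreadyExists:
    duplicate_detected = True
```

With this sketch, `duplicate_detected` ends up `True` and the registry still holds a single entry, mirroring what `test_create_port_duplicated_address` asserts against the real DB API.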
"""Tests for manipulating BIOSSetting via the DB API""" from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils class DbBIOSSettingTestCase(base.DbTestCase): def setUp(self): super(DbBIOSSettingTestCase, self).setUp() self.node = db_utils.create_test_node() def test_get_bios_setting(self): db_utils.create_test_bios_setting(node_id=self.node.id) result = self.dbapi.get_bios_setting(self.node.id, 'virtualization') self.assertEqual(result['node_id'], self.node.id) self.assertEqual(result['name'], 'virtualization') self.assertEqual(result['value'], 'on') self.assertEqual(result['version'], '1.0') def test_get_bios_setting_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.get_bios_setting, '456', 'virtualization') def test_get_bios_setting_setting_not_exist(self): db_utils.create_test_bios_setting(node_id=self.node.id) self.assertRaises(exception.BIOSSettingNotFound, self.dbapi.get_bios_setting, self.node.id, 'bios_name') def test_get_bios_setting_list(self): db_utils.create_test_bios_setting(node_id=self.node.id) result = self.dbapi.get_bios_setting_list( node_id=self.node.id) self.assertEqual(result[0]['node_id'], self.node.id) self.assertEqual(result[0]['name'], 'virtualization') self.assertEqual(result[0]['value'], 'on') self.assertEqual(result[0]['version'], '1.0') self.assertEqual(len(result), 1) def test_get_bios_setting_list_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.get_bios_setting_list, '456') def test_create_bios_setting_list(self): settings = db_utils.get_test_bios_setting_setting_list() result = self.dbapi.create_bios_setting_list( self.node.id, settings, '1.0') self.assertItemsEqual(['virtualization', 'hyperthread', 'numlock'], [setting.name for setting in result]) self.assertItemsEqual(['on', 'enabled', 'off'], [setting.value for setting in result]) def test_create_bios_setting_list_duplicate(self): settings = 
db_utils.get_test_bios_setting_setting_list() self.dbapi.create_bios_setting_list(self.node.id, settings, '1.0') self.assertRaises(exception.BIOSSettingAlreadyExists, self.dbapi.create_bios_setting_list, self.node.id, settings, '1.0') def test_create_bios_setting_list_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.create_bios_setting_list, '456', [], '1.0') def test_update_bios_setting_list(self): settings = db_utils.get_test_bios_setting_setting_list() self.dbapi.create_bios_setting_list(self.node.id, settings, '1.0') settings = [{'name': 'virtualization', 'value': 'off'}, {'name': 'hyperthread', 'value': 'disabled'}, {'name': 'numlock', 'value': 'on'}] result = self.dbapi.update_bios_setting_list( self.node.id, settings, '1.0') self.assertCountEqual(['off', 'disabled', 'on'], [setting.value for setting in result]) def test_update_bios_setting_list_setting_not_exist(self): settings = db_utils.get_test_bios_setting_setting_list() self.dbapi.create_bios_setting_list(self.node.id, settings, '1.0') for setting in settings: setting['name'] = 'bios_name' self.assertRaises(exception.BIOSSettingNotFound, self.dbapi.update_bios_setting_list, self.node.id, settings, '1.0') def test_update_bios_setting_list_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.update_bios_setting_list, '456', [], '1.0') def test_delete_bios_setting_list(self): settings = db_utils.get_test_bios_setting_setting_list() self.dbapi.create_bios_setting_list(self.node.id, settings, '1.0') name_list = [setting['name'] for setting in settings] self.dbapi.delete_bios_setting_list(self.node.id, name_list) self.assertRaises(exception.BIOSSettingNotFound, self.dbapi.get_bios_setting, self.node.id, 'virtualization') self.assertRaises(exception.BIOSSettingNotFound, self.dbapi.get_bios_setting, self.node.id, 'hyperthread') self.assertRaises(exception.BIOSSettingNotFound, self.dbapi.get_bios_setting, self.node.id, 'numlock') def 
test_delete_bios_setting_list_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.delete_bios_setting_list, '456', ['virtualization']) def test_delete_bios_setting_list_setting_not_exist(self): settings = db_utils.get_test_bios_setting_setting_list() self.dbapi.create_bios_setting_list(self.node.id, settings, '1.0') self.assertRaises(exception.BIOSSettingListNotFound, self.dbapi.delete_bios_setting_list, self.node.id, ['fake-bios-option']) ironic-15.0.0/ironic/tests/unit/db/test_allocations.py0000664000175000017500000003056613652514273023036 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating allocations via the DB API""" from oslo_utils import uuidutils from ironic.common import exception from ironic.db import api as db_api from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils class AllocationsTestCase(base.DbTestCase): def setUp(self): super(AllocationsTestCase, self).setUp() self.node = db_utils.create_test_node() self.allocation = db_utils.create_test_allocation(name='host1') def test_create(self): dbapi = db_api.get_instance() allocation = dbapi.create_allocation({'resource_class': 'bm'}) self.assertIsNotNone(allocation.uuid) self.assertEqual('allocating', allocation.state) def _create_test_allocation_range(self, count, start_idx=0, **kw): """Create the specified number of test allocation entries in the DB. It uses the create_test_allocation method. 
It returns the UUIDs of the created allocations. :param count: Specifies the number of allocations to be created :returns: List of UUIDs of the created allocations """ return [db_utils.create_test_allocation(uuid=uuidutils.generate_uuid(), name='allocation' + str(i), **kw).uuid for i in range(start_idx, count + start_idx)] def test_get_allocation_by_id(self): res = self.dbapi.get_allocation_by_id(self.allocation.id) self.assertEqual(self.allocation.uuid, res.uuid) def test_get_allocation_by_id_that_does_not_exist(self): self.assertRaises(exception.AllocationNotFound, self.dbapi.get_allocation_by_id, 99) def test_get_allocation_by_uuid(self): res = self.dbapi.get_allocation_by_uuid(self.allocation.uuid) self.assertEqual(self.allocation.id, res.id) def test_get_allocation_by_uuid_that_does_not_exist(self): self.assertRaises(exception.AllocationNotFound, self.dbapi.get_allocation_by_uuid, 'EEEEEEEE-EEEE-EEEE-EEEE-EEEEEEEEEEEE') def test_get_allocation_by_name(self): res = self.dbapi.get_allocation_by_name(self.allocation.name) self.assertEqual(self.allocation.id, res.id) def test_get_allocation_by_name_that_does_not_exist(self): self.assertRaises(exception.AllocationNotFound, self.dbapi.get_allocation_by_name, 'testfail') def test_get_allocation_list(self): uuids = self._create_test_allocation_range(6) # Also add the uuid for the allocation created in setUp() uuids.append(self.allocation.uuid) res = self.dbapi.get_allocation_list() self.assertEqual(set(uuids), {r.uuid for r in res}) def test_get_allocation_list_sorted(self): uuids = self._create_test_allocation_range(6) # Also add the uuid for the allocation created in setUp() uuids.append(self.allocation.uuid) res = self.dbapi.get_allocation_list(sort_key='uuid') res_uuids = [r.uuid for r in res] self.assertEqual(sorted(uuids), res_uuids) def test_get_allocation_list_filter_by_state(self): self._create_test_allocation_range(6, state='error') res = self.dbapi.get_allocation_list(filters={'state': 'allocating'}) 
self.assertEqual([self.allocation.uuid], [r.uuid for r in res]) res = self.dbapi.get_allocation_list(filters={'state': 'error'}) self.assertEqual(6, len(res)) def test_get_allocation_list_filter_by_node(self): self._create_test_allocation_range(6) self.dbapi.update_allocation(self.allocation.id, {'node_id': self.node.id}) res = self.dbapi.get_allocation_list( filters={'node_uuid': self.node.uuid}) self.assertEqual([self.allocation.uuid], [r.uuid for r in res]) def test_get_allocation_list_filter_by_rsc(self): self._create_test_allocation_range(6) self.dbapi.update_allocation(self.allocation.id, {'resource_class': 'very-large'}) res = self.dbapi.get_allocation_list( filters={'resource_class': 'very-large'}) self.assertEqual([self.allocation.uuid], [r.uuid for r in res]) def test_get_allocation_list_filter_by_conductor_affinity(self): db_utils.create_test_conductor(id=1, hostname='host1') db_utils.create_test_conductor(id=2, hostname='host2') in_host1 = self._create_test_allocation_range(2, conductor_affinity=1) in_host2 = self._create_test_allocation_range(2, conductor_affinity=2, start_idx=2) res = self.dbapi.get_allocation_list( filters={'conductor_affinity': 1}) self.assertEqual(set(in_host1), {r.uuid for r in res}) res = self.dbapi.get_allocation_list( filters={'conductor_affinity': 'host2'}) self.assertEqual(set(in_host2), {r.uuid for r in res}) def test_get_allocation_list_invalid_fields(self): self.assertRaises(exception.InvalidParameterValue, self.dbapi.get_allocation_list, sort_key='foo') self.assertRaises(ValueError, self.dbapi.get_allocation_list, filters={'foo': 42}) def test_destroy_allocation(self): self.dbapi.destroy_allocation(self.allocation.id) self.assertRaises(exception.AllocationNotFound, self.dbapi.get_allocation_by_id, self.allocation.id) def test_destroy_allocation_with_node(self): self.dbapi.update_node(self.node.id, {'allocation_id': self.allocation.id, 'instance_uuid': uuidutils.generate_uuid(), 'instance_info': {'traits': ['foo']}}) 
self.dbapi.destroy_allocation(self.allocation.id) self.assertRaises(exception.AllocationNotFound, self.dbapi.get_allocation_by_id, self.allocation.id) node = self.dbapi.get_node_by_id(self.node.id) self.assertIsNone(node.allocation_id) self.assertIsNone(node.instance_uuid) # NOTE(dtantsur): currently we do not clean up instance_info contents # on deallocation. It may be changed in the future. self.assertEqual(node.instance_info, {'traits': ['foo']}) def test_destroy_allocation_that_does_not_exist(self): self.assertRaises(exception.AllocationNotFound, self.dbapi.destroy_allocation, 99) def test_destroy_allocation_uuid(self): self.dbapi.destroy_allocation(self.allocation.uuid) def test_update_allocation(self): old_name = self.allocation.name new_name = 'newname' self.assertNotEqual(old_name, new_name) res = self.dbapi.update_allocation(self.allocation.id, {'name': new_name}) self.assertEqual(new_name, res.name) def test_update_allocation_uuid(self): self.assertRaises(exception.InvalidParameterValue, self.dbapi.update_allocation, self.allocation.id, {'uuid': ''}) def test_update_allocation_not_found(self): id_2 = 99 self.assertNotEqual(self.allocation.id, id_2) self.assertRaises(exception.AllocationNotFound, self.dbapi.update_allocation, id_2, {'name': 'newname'}) def test_update_allocation_duplicated_name(self): name1 = self.allocation.name allocation2 = db_utils.create_test_allocation( uuid=uuidutils.generate_uuid(), name='name2') self.assertRaises(exception.AllocationDuplicateName, self.dbapi.update_allocation, allocation2.id, {'name': name1}) def test_update_allocation_with_node_id(self): res = self.dbapi.update_allocation(self.allocation.id, {'name': 'newname', 'traits': ['foo'], 'node_id': self.node.id}) self.assertEqual('newname', res.name) self.assertEqual(['foo'], res.traits) self.assertEqual(self.node.id, res.node_id) node = self.dbapi.get_node_by_id(self.node.id) self.assertEqual(res.id, node.allocation_id) self.assertEqual(res.uuid, node.instance_uuid) 
self.assertEqual(['foo'], node.instance_info['traits']) def test_update_allocation_node_already_associated(self): existing_uuid = uuidutils.generate_uuid() self.dbapi.update_node(self.node.id, {'instance_uuid': existing_uuid}) self.assertRaises(exception.NodeAssociated, self.dbapi.update_allocation, self.allocation.id, {'node_id': self.node.id, 'traits': ['foo']}) # Make sure we do not see partial updates allocation = self.dbapi.get_allocation_by_id(self.allocation.id) self.assertEqual([], allocation.traits) self.assertIsNone(allocation.node_id) node = self.dbapi.get_node_by_id(self.node.id) self.assertIsNone(node.allocation_id) self.assertEqual(existing_uuid, node.instance_uuid) self.assertNotIn('traits', node.instance_info) def test_update_allocation_associated_with_another_node(self): db_utils.create_test_node(uuid=uuidutils.generate_uuid(), allocation_id=self.allocation.id, instance_uuid=self.allocation.uuid) self.assertRaises(exception.InstanceAssociated, self.dbapi.update_allocation, self.allocation.id, {'node_id': self.node.id, 'traits': ['foo']}) # Make sure we do not see partial updates allocation = self.dbapi.get_allocation_by_id(self.allocation.id) self.assertEqual([], allocation.traits) self.assertIsNone(allocation.node_id) node = self.dbapi.get_node_by_id(self.node.id) self.assertIsNone(node.allocation_id) self.assertIsNone(node.instance_uuid) self.assertNotIn('traits', node.instance_info) def test_take_over_success(self): for i in range(2): db_utils.create_test_conductor(id=i, hostname='host-%d' % i) allocation = db_utils.create_test_allocation(conductor_affinity=0) self.assertTrue(self.dbapi.take_over_allocation( allocation.id, old_conductor_id=0, new_conductor_id=1)) allocation = self.dbapi.get_allocation_by_id(allocation.id) self.assertEqual(1, allocation.conductor_affinity) def test_take_over_conflict(self): for i in range(3): db_utils.create_test_conductor(id=i, hostname='host-%d' % i) allocation = 
db_utils.create_test_allocation(conductor_affinity=2) self.assertFalse(self.dbapi.take_over_allocation( allocation.id, old_conductor_id=0, new_conductor_id=1)) allocation = self.dbapi.get_allocation_by_id(allocation.id) # The affinity was not changed self.assertEqual(2, allocation.conductor_affinity) def test_take_over_allocation_not_found(self): self.assertRaises(exception.AllocationNotFound, self.dbapi.take_over_allocation, 999, 0, 1) def test_create_allocation_duplicated_name(self): self.assertRaises(exception.AllocationDuplicateName, db_utils.create_test_allocation, uuid=uuidutils.generate_uuid(), name=self.allocation.name) def test_create_allocation_duplicated_uuid(self): self.assertRaises(exception.AllocationAlreadyExists, db_utils.create_test_allocation, uuid=self.allocation.uuid) ironic-15.0.0/ironic/tests/unit/db/test_conductor.py0000664000175000017500000004252213652514273022521 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
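The conductor tests that follow (for example `test_get_offline_conductors` and `test_get_online_conductors`) verify a simple liveness rule: a conductor counts as online while the time since its last heartbeat is within `heartbeat_timeout`, and offline once it exceeds it. A self-contained sketch of that rule is shown below; `offline_conductors` is a hypothetical helper written for illustration, not ironic's actual DB query.

```python
# Sketch of the heartbeat-timeout liveness rule the conductor tests
# exercise. offline_conductors() is a hypothetical stand-in for
# dbapi.get_offline_conductors(); it is not ironic's implementation.
import datetime


def offline_conductors(heartbeats, now, timeout_seconds):
    """Return hostnames whose last heartbeat is older than the timeout.

    :param heartbeats: mapping of hostname -> last heartbeat datetime
    :param now: the current time (injected, like mocked utcnow in tests)
    :param timeout_seconds: the heartbeat_timeout config value
    """
    limit = now - datetime.timedelta(seconds=timeout_seconds)
    return sorted(host for host, beat in heartbeats.items() if beat < limit)


now = datetime.datetime(2000, 1, 1, 0, 1, 1)
heartbeats = {
    # 30 seconds since last heartbeat: still within a 60-second timeout.
    'alive-host': now - datetime.timedelta(seconds=30),
    # 61 seconds since last heartbeat: past a 60-second timeout.
    'dead-host': now - datetime.timedelta(seconds=61),
}
```

With a 60-second timeout only `dead-host` is reported offline, and raising the timeout to 120 seconds brings it back online, which mirrors how the tests flip expectations by changing `heartbeat_timeout` in the `conductor` config group.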
"""Tests for manipulating Conductors via the DB API""" import datetime import mock import oslo_db from oslo_db import exception as db_exc from oslo_db import sqlalchemy from oslo_utils import timeutils from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils class DbConductorTestCase(base.DbTestCase): def test_register_conductor_existing_fails(self): c = utils.get_test_conductor() self.dbapi.register_conductor(c) self.assertRaises( exception.ConductorAlreadyRegistered, self.dbapi.register_conductor, c) def test_register_conductor_override(self): c = utils.get_test_conductor() self.dbapi.register_conductor(c) self.dbapi.register_conductor(c, update_existing=True) def _create_test_cdr(self, hardware_types=None, **kwargs): hardware_types = hardware_types or [] c = utils.get_test_conductor(**kwargs) cdr = self.dbapi.register_conductor(c) for ht in hardware_types: self.dbapi.register_conductor_hardware_interfaces(cdr.id, ht, 'power', ['ipmi', 'fake'], 'ipmi') return cdr def test_register_conductor_hardware_interfaces(self): c = self._create_test_cdr() interfaces = ['direct', 'iscsi'] self.dbapi.register_conductor_hardware_interfaces(c.id, 'generic', 'deploy', interfaces, 'iscsi') ifaces = self.dbapi.list_conductor_hardware_interfaces(c.id) ci1, ci2 = ifaces self.assertEqual(2, len(ifaces)) self.assertEqual('generic', ci1.hardware_type) self.assertEqual('generic', ci2.hardware_type) self.assertEqual('deploy', ci1.interface_type) self.assertEqual('deploy', ci2.interface_type) self.assertEqual('direct', ci1.interface_name) self.assertEqual('iscsi', ci2.interface_name) self.assertFalse(ci1.default) self.assertTrue(ci2.default) def test_register_conductor_hardware_interfaces_duplicate(self): c = self._create_test_cdr() interfaces = ['direct', 'iscsi'] self.dbapi.register_conductor_hardware_interfaces(c.id, 'generic', 'deploy', interfaces, 'iscsi') ifaces = self.dbapi.list_conductor_hardware_interfaces(c.id) ci1, ci2 = 
ifaces self.assertEqual(2, len(ifaces)) # do it again for the duplicates self.assertRaises( exception.ConductorHardwareInterfacesAlreadyRegistered, self.dbapi.register_conductor_hardware_interfaces, c.id, 'generic', 'deploy', interfaces, 'iscsi') def test_unregister_conductor_hardware_interfaces(self): c = self._create_test_cdr() interfaces = ['direct', 'iscsi'] self.dbapi.register_conductor_hardware_interfaces(c.id, 'generic', 'deploy', interfaces, 'iscsi') self.dbapi.unregister_conductor_hardware_interfaces(c.id) ifaces = self.dbapi.list_conductor_hardware_interfaces(c.id) self.assertEqual([], ifaces) def test_get_conductor(self): c1 = self._create_test_cdr() c2 = self.dbapi.get_conductor(c1.hostname) self.assertEqual(c1.id, c2.id) def test_get_inactive_conductor_ignore_online(self): c1 = self._create_test_cdr() self.dbapi.unregister_conductor(c1.hostname) c2 = self.dbapi.get_conductor(c1.hostname, online=None) self.assertEqual(c1.id, c2.id) def test_get_inactive_conductor_with_online_true(self): c1 = self._create_test_cdr() self.dbapi.unregister_conductor(c1.hostname) self.assertRaises(exception.ConductorNotFound, self.dbapi.get_conductor, c1.hostname) def test_get_conductor_not_found(self): self._create_test_cdr() self.assertRaises( exception.ConductorNotFound, self.dbapi.get_conductor, 'bad-hostname') def test_unregister_conductor(self): c = self._create_test_cdr() self.dbapi.unregister_conductor(c.hostname) self.assertRaises( exception.ConductorNotFound, self.dbapi.unregister_conductor, c.hostname) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_touch_conductor(self, mock_utcnow): test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time c = self._create_test_cdr() self.assertEqual(test_time, timeutils.normalize_time(c.updated_at)) test_time = datetime.datetime(2000, 1, 1, 0, 1) mock_utcnow.return_value = test_time self.dbapi.touch_conductor(c.hostname) c = self.dbapi.get_conductor(c.hostname) 
self.assertEqual(test_time, timeutils.normalize_time(c.updated_at)) @mock.patch.object(oslo_db.api.time, 'sleep', autospec=True) @mock.patch.object(sqlalchemy.orm.Query, 'update', autospec=True) def test_touch_conductor_deadlock(self, mock_update, mock_sleep): mock_sleep.return_value = None mock_update.side_effect = [db_exc.DBDeadlock(), None] c = self._create_test_cdr() self.dbapi.touch_conductor(c.hostname) self.assertEqual(2, mock_update.call_count) self.assertEqual(2, mock_sleep.call_count) def test_touch_conductor_not_found(self): # A conductor's heartbeat will not create a new record, # it will only update existing ones self._create_test_cdr() self.assertRaises( exception.ConductorNotFound, self.dbapi.touch_conductor, 'bad-hostname') def test_touch_offline_conductor(self): # Ensure that a conductor's periodic heartbeat task can make the # conductor visible again, even if it was spuriously marked offline c = self._create_test_cdr() self.dbapi.unregister_conductor(c.hostname) self.assertRaises( exception.ConductorNotFound, self.dbapi.get_conductor, c.hostname) self.dbapi.touch_conductor(c.hostname) self.dbapi.get_conductor(c.hostname) def test_clear_node_reservations_for_conductor(self): node1 = self.dbapi.create_node({'reservation': 'hostname1'}) node2 = self.dbapi.create_node({'reservation': 'hostname2'}) node3 = self.dbapi.create_node({'reservation': None}) node4 = self.dbapi.create_node({'reservation': 'hostName1'}) self.dbapi.clear_node_reservations_for_conductor('hostname1') node1 = self.dbapi.get_node_by_id(node1.id) node2 = self.dbapi.get_node_by_id(node2.id) node3 = self.dbapi.get_node_by_id(node3.id) node4 = self.dbapi.get_node_by_id(node4.id) self.assertIsNone(node1.reservation) self.assertEqual('hostname2', node2.reservation) self.assertIsNone(node3.reservation) self.assertIsNone(node4.reservation) def test_clear_node_target_power_state(self): node1 = self.dbapi.create_node({'reservation': 'hostname1', 'target_power_state': 'power on'}) node2 = 
self.dbapi.create_node({'reservation': 'hostname2', 'target_power_state': 'power on'}) node3 = self.dbapi.create_node({'reservation': None, 'target_power_state': 'power on'}) node4 = self.dbapi.create_node({'reservation': 'hostName1', 'target_power_state': 'power on'}) self.dbapi.clear_node_target_power_state('hostname1') node1 = self.dbapi.get_node_by_id(node1.id) node2 = self.dbapi.get_node_by_id(node2.id) node3 = self.dbapi.get_node_by_id(node3.id) node4 = self.dbapi.get_node_by_id(node4.id) self.assertIsNone(node1.target_power_state) self.assertIn('power operation was aborted', node1.last_error) self.assertEqual('power on', node2.target_power_state) self.assertIsNone(node2.last_error) self.assertEqual('power on', node3.target_power_state) self.assertIsNone(node3.last_error) self.assertIsNone(node4.target_power_state) self.assertIn('power operation was aborted', node4.last_error) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_hardware_type_dict_one_host_no_ht(self, mock_utcnow): h = 'fake-host' expected = {} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(hostname=h, drivers=[], hardware_types=[]) result = self.dbapi.get_active_hardware_type_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_hardware_type_dict_one_host_one_ht(self, mock_utcnow): h = 'fake-host' ht = 'hardware-type' expected = {ht: {h}} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(hostname=h, drivers=[], hardware_types=[ht]) result = self.dbapi.get_active_hardware_type_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_hardware_type_dict_one_host_one_ht_groups( self, mock_utcnow): h = 'fake-host' ht = 'hardware-type' group = 'foogroup' key = '%s:%s' % (group, ht) expected = {key: {h}} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(hostname=h, 
drivers=[], hardware_types=[ht], conductor_group=group) result = self.dbapi.get_active_hardware_type_dict(use_groups=True) self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_hardware_type_dict_one_host_many_ht(self, mock_utcnow): h = 'fake-host' ht1 = 'hardware-type' ht2 = 'another-hardware-type' expected = {ht1: {h}, ht2: {h}} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(hostname=h, drivers=[], hardware_types=[ht1, ht2]) result = self.dbapi.get_active_hardware_type_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_hardware_type_dict_many_host_one_ht(self, mock_utcnow): h1 = 'host-one' h2 = 'host-two' ht = 'hardware-type' expected = {ht: {h1, h2}} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(id=1, hostname=h1, drivers=[], hardware_types=[ht]) self._create_test_cdr(id=2, hostname=h2, drivers=[], hardware_types=[ht]) result = self.dbapi.get_active_hardware_type_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_hardware_type_dict_many_host_many_ht(self, mock_utcnow): h1 = 'host-one' h2 = 'host-two' ht1 = 'hardware-type' ht2 = 'another-hardware-type' expected = {ht1: {h1, h2}, ht2: {h1, h2}} mock_utcnow.return_value = datetime.datetime.utcnow() self._create_test_cdr(id=1, hostname=h1, drivers=[], hardware_types=[ht1, ht2]) self._create_test_cdr(id=2, hostname=h2, drivers=[], hardware_types=[ht1, ht2]) result = self.dbapi.get_active_hardware_type_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_active_hardware_type_dict_with_old_conductor(self, mock_utcnow): past = datetime.datetime(2000, 1, 1, 0, 0) present = past + datetime.timedelta(minutes=2) ht = 'hardware-type' h1 = 'old-host' ht1 = 'old-hardware-type' mock_utcnow.return_value = past 
self._create_test_cdr(id=1, hostname=h1, drivers=[], hardware_types=[ht, ht1]) h2 = 'new-host' ht2 = 'new-hardware-type' mock_utcnow.return_value = present self._create_test_cdr(id=2, hostname=h2, drivers=[], hardware_types=[ht, ht2]) # verify that old-host does not show up in current list self.config(heartbeat_timeout=60, group='conductor') expected = {ht: {h2}, ht2: {h2}} result = self.dbapi.get_active_hardware_type_dict() self.assertEqual(expected, result) # change the heartbeat timeout, and verify that old-host appears self.config(heartbeat_timeout=120, group='conductor') expected = {ht: {h1, h2}, ht1: {h1}, ht2: {h2}} result = self.dbapi.get_active_hardware_type_dict() self.assertEqual(expected, result) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_offline_conductors(self, mock_utcnow): self.config(heartbeat_timeout=60, group='conductor') time_ = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = time_ c = self._create_test_cdr() # Only 30 seconds passed since last heartbeat, it's still # considered alive mock_utcnow.return_value = time_ + datetime.timedelta(seconds=30) self.assertEqual([], self.dbapi.get_offline_conductors()) # 61 seconds passed since last heartbeat, it's dead mock_utcnow.return_value = time_ + datetime.timedelta(seconds=61) self.assertEqual([c.hostname], self.dbapi.get_offline_conductors()) self.assertEqual([c.id], self.dbapi.get_offline_conductors(field='id')) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_online_conductors(self, mock_utcnow): self.config(heartbeat_timeout=60, group='conductor') time_ = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = time_ c = self._create_test_cdr() # Only 30 seconds passed since last heartbeat, it's still # considered alive mock_utcnow.return_value = time_ + datetime.timedelta(seconds=30) self.assertEqual([c.hostname], self.dbapi.get_online_conductors()) # 61 seconds passed since last heartbeat, it's dead mock_utcnow.return_value = 
time_ + datetime.timedelta(seconds=61) self.assertEqual([], self.dbapi.get_online_conductors()) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_list_hardware_type_interfaces(self, mock_utcnow): self.config(heartbeat_timeout=60, group='conductor') time_ = datetime.datetime(2000, 1, 1, 0, 0) h = 'fake-host' ht1 = 'hw-type-1' ht2 = 'hw-type-2' mock_utcnow.return_value = time_ self._create_test_cdr(hostname=h, hardware_types=[ht1, ht2]) expected = [ { 'hardware_type': ht1, 'interface_type': 'power', 'interface_name': 'ipmi', 'default': True, }, { 'hardware_type': ht1, 'interface_type': 'power', 'interface_name': 'fake', 'default': False, }, { 'hardware_type': ht2, 'interface_type': 'power', 'interface_name': 'ipmi', 'default': True, }, { 'hardware_type': ht2, 'interface_type': 'power', 'interface_name': 'fake', 'default': False, }, ] def _verify(expected, result): for expected_row, row in zip(expected, result): for k, v in expected_row.items(): self.assertEqual(v, getattr(row, k)) # with both hw types result = self.dbapi.list_hardware_type_interfaces([ht1, ht2]) _verify(expected, result) # with one hw type result = self.dbapi.list_hardware_type_interfaces([ht1]) _verify(expected[:2], result) # 61 seconds passed since last heartbeat, it's dead mock_utcnow.return_value = time_ + datetime.timedelta(seconds=61) result = self.dbapi.list_hardware_type_interfaces([ht1, ht2]) self.assertEqual([], result)

ironic-15.0.0/ironic/tests/unit/db/sqlalchemy/test_migrations.py

# Copyright 2010-2011 OpenStack Foundation # Copyright 2012-2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests for database migrations. There are "opportunistic" tests for both mysql
and postgresql in here, which allows testing against these databases in a
properly configured unit test environment.

For the opportunistic testing you need to set up a db named 'openstack_citest'
with user 'openstack_citest' and password 'openstack_citest' on localhost. The
test will then use that db and u/p combo to run the tests.

For postgres on Ubuntu this can be done with the following commands:

::

 sudo -u postgres psql
 postgres=# create user openstack_citest with createdb login password
            'openstack_citest';
 postgres=# create database openstack_citest with owner openstack_citest;

"""

import collections
import contextlib

from alembic import script
import fixtures
import mock
from oslo_db import exception as db_exc
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import test_fixtures
from oslo_db.sqlalchemy import test_migrations
from oslo_db.sqlalchemy import utils as db_utils
from oslo_log import log as logging
from oslo_utils import uuidutils
from oslotest import base as test_base
import sqlalchemy
import sqlalchemy.exc

from ironic.conf import CONF
from ironic.db.sqlalchemy import migration
from ironic.db.sqlalchemy import models
from ironic.tests import base

LOG = logging.getLogger(__name__)

# NOTE(vdrok): This was introduced after migration tests started taking more
# time in gate. Timeout value in seconds for tests performing migrations.
MIGRATIONS_TIMEOUT = 300


@contextlib.contextmanager
def patch_with_engine(engine):
    with mock.patch.object(enginefacade.writer,
                           'get_engine') as patch_engine:
        patch_engine.return_value = engine
        yield


class WalkVersionsMixin(object):
    def _walk_versions(self, engine=None, alembic_cfg=None):
        # Determine latest version script from the repo, then
        # upgrade from 1 through to the latest, with no data
        # in the databases. This just checks that the schema itself
        # upgrades successfully.

        # Place the database under version control
        with patch_with_engine(engine):
            script_directory = script.ScriptDirectory.from_config(alembic_cfg)

            self.assertIsNone(self.migration_api.version(alembic_cfg))

            versions = [ver for ver in script_directory.walk_revisions()]

            for version in reversed(versions):
                self._migrate_up(engine, alembic_cfg,
                                 version.revision, with_data=True)

    def _migrate_up(self, engine, config, version, with_data=False):
        """migrate up to a new version of the db.

        We allow for data insertion and post checks at every
        migration version with special _pre_upgrade_### and
        _check_### functions in the main test.
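        The hook convention described above can be sketched in isolation as
        follows; the revision id ``abc123`` and the ``FakeMigrationTest``
        class are hypothetical illustrations, not part of this module:

        ```python
        # Minimal sketch of the per-revision hook dispatch performed by
        # _migrate_up: hooks are discovered by name via getattr.
        class FakeMigrationTest(object):
            def _pre_upgrade_abc123(self, engine):
                # seed rows before upgrading to revision abc123
                return {'uuid': 'seed-row'}

            def _check_abc123(self, engine, data):
                # verify the seeded rows after the upgrade
                assert data == {'uuid': 'seed-row'}

            def migrate_up(self, engine, version):
                data = None
                pre_upgrade = getattr(
                    self, '_pre_upgrade_%s' % version, None)
                if pre_upgrade:
                    data = pre_upgrade(engine)
                # (the real code runs the alembic upgrade here via
                #  self.migration_api.upgrade(version, config=config))
                check = getattr(self, '_check_%s' % version, None)
                if check:
                    check(engine, data)
                return data

        print(FakeMigrationTest().migrate_up(engine=None, version='abc123'))
        # prints {'uuid': 'seed-row'}
        ```

        A revision with no matching hooks simply upgrades with no data
        seeding or post-checks.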
""" # NOTE(sdague): try block is here because it's impossible to debug # where a failed data migration happens otherwise try: if with_data: data = None pre_upgrade = getattr( self, "_pre_upgrade_%s" % version, None) if pre_upgrade: data = pre_upgrade(engine) self.migration_api.upgrade(version, config=config) self.assertEqual(version, self.migration_api.version(config)) if with_data: check = getattr(self, "_check_%s" % version, None) if check: check(engine, data) except Exception: LOG.error("Failed to migrate to version %(version)s on engine " "%(engine)s", {'version': version, 'engine': engine}) raise class TestWalkVersions(base.TestCase, WalkVersionsMixin): def setUp(self): super(TestWalkVersions, self).setUp() self.migration_api = mock.MagicMock() self.engine = mock.MagicMock() self.config = mock.MagicMock() self.versions = [mock.Mock(revision='2b2'), mock.Mock(revision='1a1')] def test_migrate_up(self): self.migration_api.version.return_value = 'dsa123' self._migrate_up(self.engine, self.config, 'dsa123') self.migration_api.upgrade.assert_called_with('dsa123', config=self.config) self.migration_api.version.assert_called_with(self.config) def test_migrate_up_with_data(self): test_value = {"a": 1, "b": 2} self.migration_api.version.return_value = '141' self._pre_upgrade_141 = mock.MagicMock() self._pre_upgrade_141.return_value = test_value self._check_141 = mock.MagicMock() self._migrate_up(self.engine, self.config, '141', True) self._pre_upgrade_141.assert_called_with(self.engine) self._check_141.assert_called_with(self.engine, test_value) @mock.patch.object(script, 'ScriptDirectory') @mock.patch.object(WalkVersionsMixin, '_migrate_up') def test_walk_versions_all_default(self, _migrate_up, script_directory): fc = script_directory.from_config() fc.walk_revisions.return_value = self.versions self.migration_api.version.return_value = None self._walk_versions(self.engine, self.config) self.migration_api.version.assert_called_with(self.config) upgraded = 
[mock.call(self.engine, self.config, v.revision, with_data=True) for v in reversed(self.versions)] self.assertEqual(self._migrate_up.call_args_list, upgraded) @mock.patch.object(script, 'ScriptDirectory') @mock.patch.object(WalkVersionsMixin, '_migrate_up') def test_walk_versions_all_false(self, _migrate_up, script_directory): fc = script_directory.from_config() fc.walk_revisions.return_value = self.versions self.migration_api.version.return_value = None self._walk_versions(self.engine, self.config) upgraded = [mock.call(self.engine, self.config, v.revision, with_data=True) for v in reversed(self.versions)] self.assertEqual(upgraded, self._migrate_up.call_args_list) class MigrationCheckersMixin(object): def setUp(self): super(MigrationCheckersMixin, self).setUp() self.engine = enginefacade.writer.get_engine() self.config = migration._alembic_config() self.migration_api = migration self.useFixture(fixtures.Timeout(MIGRATIONS_TIMEOUT, gentle=True)) def test_walk_versions(self): self._walk_versions(self.engine, self.config) def _check_21b331f883ef(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('provision_updated_at', col_names) self.assertIsInstance(nodes.c.provision_updated_at.type, sqlalchemy.types.DateTime) def _check_3cb628139ea4(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('console_enabled', col_names) # in some backends bool type is integer self.assertIsInstance(nodes.c.console_enabled.type, (sqlalchemy.types.Boolean, sqlalchemy.types.Integer)) def _check_31baaf680d2b(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('instance_info', col_names) self.assertIsInstance(nodes.c.instance_info.type, sqlalchemy.types.TEXT) def _check_3bea56f25597(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') instance_uuid = 
'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' data = {'driver': 'fake', 'uuid': uuidutils.generate_uuid(), 'instance_uuid': instance_uuid} nodes.insert().values(data).execute() data['uuid'] = uuidutils.generate_uuid() self.assertRaises(db_exc.DBDuplicateEntry, nodes.insert().execute, data) def _check_487deb87cc9d(self, engine, data): conductors = db_utils.get_table(engine, 'conductors') column_names = [column.name for column in conductors.c] self.assertIn('online', column_names) self.assertIsInstance(conductors.c.online.type, (sqlalchemy.types.Boolean, sqlalchemy.types.Integer)) nodes = db_utils.get_table(engine, 'nodes') column_names = [column.name for column in nodes.c] self.assertIn('conductor_affinity', column_names) self.assertIsInstance(nodes.c.conductor_affinity.type, sqlalchemy.types.Integer) data_conductor = {'hostname': 'test_host'} conductors.insert().execute(data_conductor) conductor = conductors.select( conductors.c.hostname == data_conductor['hostname']).execute().first() data_node = {'uuid': uuidutils.generate_uuid(), 'conductor_affinity': conductor['id']} nodes.insert().execute(data_node) node = nodes.select( nodes.c.uuid == data_node['uuid']).execute().first() self.assertEqual(conductor['id'], node['conductor_affinity']) def _check_242cc6a923b3(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('maintenance_reason', col_names) self.assertIsInstance(nodes.c.maintenance_reason.type, sqlalchemy.types.String) def _pre_upgrade_5674c57409b9(self, engine): # add some nodes in various states so we can assert that "None" # was replaced by "available", and nothing else changed. 
nodes = db_utils.get_table(engine, 'nodes') data = [{'uuid': uuidutils.generate_uuid(), 'provision_state': 'fake state'}, {'uuid': uuidutils.generate_uuid(), 'provision_state': 'active'}, {'uuid': uuidutils.generate_uuid(), 'provision_state': 'deleting'}, {'uuid': uuidutils.generate_uuid(), 'provision_state': None}] nodes.insert().values(data).execute() return data def _check_5674c57409b9(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') result = engine.execute(nodes.select()) def _get_state(uuid): for row in data: if row['uuid'] == uuid: return row['provision_state'] for row in result: old = _get_state(row['uuid']) new = row['provision_state'] if old is None: self.assertEqual('available', new) else: self.assertEqual(old, new) def _check_bb59b63f55a(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('driver_internal_info', col_names) self.assertIsInstance(nodes.c.driver_internal_info.type, sqlalchemy.types.TEXT) def _check_3ae36a5f5131(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') column_names = [column.name for column in nodes.c] self.assertIn('name', column_names) self.assertIsInstance(nodes.c.name.type, sqlalchemy.types.String) data = {'driver': 'fake', 'uuid': uuidutils.generate_uuid(), 'name': 'node' } nodes.insert().values(data).execute() data['uuid'] = uuidutils.generate_uuid() self.assertRaises(db_exc.DBDuplicateEntry, nodes.insert().execute, data) def _check_1e1d5ace7dc6(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') column_names = [column.name for column in nodes.c] self.assertIn('inspection_started_at', column_names) self.assertIn('inspection_finished_at', column_names) self.assertIsInstance(nodes.c.inspection_started_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(nodes.c.inspection_finished_at.type, sqlalchemy.types.DateTime) def _check_4f399b21ae71(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') 
col_names = [column.name for column in nodes.c] self.assertIn('clean_step', col_names) self.assertIsInstance(nodes.c.clean_step.type, sqlalchemy.types.String) def _check_789acc877671(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('raid_config', col_names) self.assertIn('target_raid_config', col_names) self.assertIsInstance(nodes.c.raid_config.type, sqlalchemy.types.String) self.assertIsInstance(nodes.c.target_raid_config.type, sqlalchemy.types.String) def _check_2fb93ffd2af1(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') bigstring = 'a' * 255 uuid = uuidutils.generate_uuid() data = {'uuid': uuid, 'name': bigstring} nodes.insert().execute(data) node = nodes.select(nodes.c.uuid == uuid).execute().first() self.assertEqual(bigstring, node['name']) def _check_516faf1bb9b1(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') bigstring = 'a' * 255 uuid = uuidutils.generate_uuid() data = {'uuid': uuid, 'driver': bigstring} nodes.insert().execute(data) node = nodes.select(nodes.c.uuid == uuid).execute().first() self.assertEqual(bigstring, node['driver']) def _check_48d6c242bb9b(self, engine, data): node_tags = db_utils.get_table(engine, 'node_tags') col_names = [column.name for column in node_tags.c] self.assertIn('tag', col_names) self.assertIsInstance(node_tags.c.tag.type, sqlalchemy.types.String) nodes = db_utils.get_table(engine, 'nodes') data = {'id': '123', 'name': 'node1'} nodes.insert().execute(data) data = {'node_id': '123', 'tag': 'tag1'} node_tags.insert().execute(data) tag = node_tags.select(node_tags.c.node_id == '123').execute().first() self.assertEqual('tag1', tag['tag']) def _check_5ea1b0d310e(self, engine, data): portgroup = db_utils.get_table(engine, 'portgroups') col_names = [column.name for column in portgroup.c] expected_names = ['created_at', 'updated_at', 'id', 'uuid', 'name', 'node_id', 'address', 'extra'] 
self.assertEqual(sorted(expected_names), sorted(col_names)) self.assertIsInstance(portgroup.c.created_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(portgroup.c.updated_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(portgroup.c.id.type, sqlalchemy.types.Integer) self.assertIsInstance(portgroup.c.uuid.type, sqlalchemy.types.String) self.assertIsInstance(portgroup.c.name.type, sqlalchemy.types.String) self.assertIsInstance(portgroup.c.node_id.type, sqlalchemy.types.Integer) self.assertIsInstance(portgroup.c.address.type, sqlalchemy.types.String) self.assertIsInstance(portgroup.c.extra.type, sqlalchemy.types.TEXT) ports = db_utils.get_table(engine, 'ports') col_names = [column.name for column in ports.c] self.assertIn('pxe_enabled', col_names) self.assertIn('portgroup_id', col_names) self.assertIn('local_link_connection', col_names) self.assertIsInstance(ports.c.portgroup_id.type, sqlalchemy.types.Integer) # in some backends bool type is integer self.assertIsInstance(ports.c.pxe_enabled.type, (sqlalchemy.types.Boolean, sqlalchemy.types.Integer)) def _pre_upgrade_f6fdb920c182(self, engine): # add some ports. 
ports = db_utils.get_table(engine, 'ports') data = [{'uuid': uuidutils.generate_uuid(), 'pxe_enabled': None}, {'uuid': uuidutils.generate_uuid(), 'pxe_enabled': None}] ports.insert().values(data).execute() return data def _check_f6fdb920c182(self, engine, data): ports = db_utils.get_table(engine, 'ports') result = engine.execute(ports.select()) def _was_inserted(uuid): for row in data: if row['uuid'] == uuid: return True for row in result: if _was_inserted(row['uuid']): self.assertTrue(row['pxe_enabled']) def _check_e294876e8028(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('network_interface', col_names) self.assertIsInstance(nodes.c.network_interface.type, sqlalchemy.types.String) def _check_10b163d4481e(self, engine, data): ports = db_utils.get_table(engine, 'ports') portgroups = db_utils.get_table(engine, 'portgroups') port_col_names = [column.name for column in ports.c] portgroup_col_names = [column.name for column in portgroups.c] self.assertIn('internal_info', port_col_names) self.assertIn('internal_info', portgroup_col_names) self.assertIsInstance(ports.c.internal_info.type, sqlalchemy.types.TEXT) self.assertIsInstance(portgroups.c.internal_info.type, sqlalchemy.types.TEXT) def _check_dd34e1f1303b(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('resource_class', col_names) self.assertIsInstance(nodes.c.resource_class.type, sqlalchemy.types.String) def _pre_upgrade_c14cef6dfedf(self, engine): # add some nodes. 
nodes = db_utils.get_table(engine, 'nodes') data = [{'uuid': uuidutils.generate_uuid(), 'network_interface': None}, {'uuid': uuidutils.generate_uuid(), 'network_interface': None}, {'uuid': uuidutils.generate_uuid(), 'network_interface': 'neutron'}] nodes.insert().values(data).execute() return data def _check_c14cef6dfedf(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') result = engine.execute(nodes.select()) counts = collections.defaultdict(int) def _was_inserted(uuid): for row in data: if row['uuid'] == uuid: return True for row in result: if _was_inserted(row['uuid']): counts[row['network_interface']] += 1 # using default config values, we should have 2 flat and one neutron self.assertEqual(2, counts['flat']) self.assertEqual(1, counts['neutron']) self.assertEqual(0, counts[None]) def _check_60cf717201bc(self, engine, data): portgroups = db_utils.get_table(engine, 'portgroups') col_names = [column.name for column in portgroups.c] self.assertIn('standalone_ports_supported', col_names) self.assertIsInstance(portgroups.c.standalone_ports_supported.type, (sqlalchemy.types.Boolean, sqlalchemy.types.Integer)) def _check_bcdd431ba0bf(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] added_ifaces = ['boot', 'console', 'deploy', 'inspect', 'management', 'power', 'raid', 'vendor'] for iface in added_ifaces: name = '%s_interface' % iface self.assertIn(name, col_names) self.assertIsInstance(getattr(nodes.c, name).type, sqlalchemy.types.String) def _check_daa1ba02d98(self, engine, data): connectors = db_utils.get_table(engine, 'volume_connectors') col_names = [column.name for column in connectors.c] expected_names = ['created_at', 'updated_at', 'id', 'uuid', 'node_id', 'type', 'connector_id', 'extra'] self.assertEqual(sorted(expected_names), sorted(col_names)) self.assertIsInstance(connectors.c.created_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(connectors.c.updated_at.type, 
sqlalchemy.types.DateTime) self.assertIsInstance(connectors.c.id.type, sqlalchemy.types.Integer) self.assertIsInstance(connectors.c.uuid.type, sqlalchemy.types.String) self.assertIsInstance(connectors.c.node_id.type, sqlalchemy.types.Integer) self.assertIsInstance(connectors.c.type.type, sqlalchemy.types.String) self.assertIsInstance(connectors.c.connector_id.type, sqlalchemy.types.String) self.assertIsInstance(connectors.c.extra.type, sqlalchemy.types.TEXT) def _check_1a59178ebdf6(self, engine, data): targets = db_utils.get_table(engine, 'volume_targets') col_names = [column.name for column in targets.c] expected_names = ['created_at', 'updated_at', 'id', 'uuid', 'node_id', 'boot_index', 'extra', 'properties', 'volume_type', 'volume_id'] self.assertEqual(sorted(expected_names), sorted(col_names)) self.assertIsInstance(targets.c.created_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(targets.c.updated_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(targets.c.id.type, sqlalchemy.types.Integer) self.assertIsInstance(targets.c.uuid.type, sqlalchemy.types.String) self.assertIsInstance(targets.c.node_id.type, sqlalchemy.types.Integer) self.assertIsInstance(targets.c.boot_index.type, sqlalchemy.types.Integer) self.assertIsInstance(targets.c.extra.type, sqlalchemy.types.TEXT) self.assertIsInstance(targets.c.properties.type, sqlalchemy.types.TEXT) self.assertIsInstance(targets.c.volume_type.type, sqlalchemy.types.String) self.assertIsInstance(targets.c.volume_id.type, sqlalchemy.types.String) def _pre_upgrade_493d8f27f235(self, engine): portgroups = db_utils.get_table(engine, 'portgroups') data = [{'uuid': uuidutils.generate_uuid()}, {'uuid': uuidutils.generate_uuid()}] portgroups.insert().values(data).execute() return data def _check_493d8f27f235(self, engine, data): portgroups = db_utils.get_table(engine, 'portgroups') col_names = [column.name for column in portgroups.c] self.assertIn('properties', col_names) 
self.assertIsInstance(portgroups.c.properties.type, sqlalchemy.types.TEXT) self.assertIn('mode', col_names) self.assertIsInstance(portgroups.c.mode.type, sqlalchemy.types.String) result = engine.execute(portgroups.select()) for row in result: self.assertEqual(CONF.default_portgroup_mode, row['mode']) def _check_1d6951876d68(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('storage_interface', col_names) self.assertIsInstance(nodes.c.storage_interface.type, sqlalchemy.types.String) def _check_2353895ecfae(self, engine, data): ifaces = db_utils.get_table(engine, 'conductor_hardware_interfaces') col_names = [column.name for column in ifaces.c] expected_names = ['created_at', 'updated_at', 'id', 'conductor_id', 'hardware_type', 'interface_type', 'interface_name'] self.assertEqual(sorted(expected_names), sorted(col_names)) self.assertIsInstance(ifaces.c.created_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(ifaces.c.updated_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(ifaces.c.id.type, sqlalchemy.types.Integer) self.assertIsInstance(ifaces.c.conductor_id.type, sqlalchemy.types.Integer) self.assertIsInstance(ifaces.c.hardware_type.type, sqlalchemy.types.String) self.assertIsInstance(ifaces.c.interface_type.type, sqlalchemy.types.String) self.assertIsInstance(ifaces.c.interface_name.type, sqlalchemy.types.String) def _check_dbefd6bdaa2c(self, engine, data): ifaces = db_utils.get_table(engine, 'conductor_hardware_interfaces') col_names = [column.name for column in ifaces.c] self.assertIn('default', col_names) self.assertIsInstance(ifaces.c.default.type, (sqlalchemy.types.Boolean, sqlalchemy.types.Integer)) def _check_3d86a077a3f2(self, engine, data): ports = db_utils.get_table(engine, 'ports') col_names = [column.name for column in ports.c] self.assertIn('physical_network', col_names) self.assertIsInstance(ports.c.physical_network.type, sqlalchemy.types.String) def 
_check_868cb606a74a(self, engine, data): for table in ['chassis', 'conductors', 'node_tags', 'nodes', 'portgroups', 'ports', 'volume_connectors', 'volume_targets', 'conductor_hardware_interfaces']: table = db_utils.get_table(engine, table) col_names = [column.name for column in table.c] self.assertIn('version', col_names) self.assertIsInstance(table.c.version.type, sqlalchemy.types.String) def _check_405cfe08f18d(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('rescue_interface', col_names) self.assertIsInstance(nodes.c.rescue_interface.type, sqlalchemy.types.String) def _pre_upgrade_b4130a7fc904(self, engine): # Create a node to which traits can be added. data = {'uuid': uuidutils.generate_uuid()} nodes = db_utils.get_table(engine, 'nodes') nodes.insert().execute(data) node = nodes.select(nodes.c.uuid == data['uuid']).execute().first() data['id'] = node['id'] return data def _check_b4130a7fc904(self, engine, data): node_traits = db_utils.get_table(engine, 'node_traits') col_names = [column.name for column in node_traits.c] self.assertIn('node_id', col_names) self.assertIsInstance(node_traits.c.node_id.type, sqlalchemy.types.Integer) self.assertIn('trait', col_names) self.assertIsInstance(node_traits.c.trait.type, sqlalchemy.types.String) trait = {'node_id': data['id'], 'trait': 'trait1'} node_traits.insert().execute(trait) trait = node_traits.select( node_traits.c.node_id == data['id']).execute().first() self.assertEqual('trait1', trait['trait']) def _pre_upgrade_82c315d60161(self, engine): # Create a node to which bios setting can be added. 
data = {'uuid': uuidutils.generate_uuid()} nodes = db_utils.get_table(engine, 'nodes') nodes.insert().execute(data) node = nodes.select(nodes.c.uuid == data['uuid']).execute().first() data['id'] = node['id'] return data def _check_82c315d60161(self, engine, data): bios_settings = db_utils.get_table(engine, 'bios_settings') col_names = [column.name for column in bios_settings.c] expected_names = ['node_id', 'created_at', 'updated_at', 'name', 'value', 'version'] self.assertEqual(sorted(expected_names), sorted(col_names)) self.assertIsInstance(bios_settings.c.node_id.type, sqlalchemy.types.Integer) self.assertIsInstance(bios_settings.c.created_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(bios_settings.c.updated_at.type, sqlalchemy.types.DateTime) self.assertIsInstance(bios_settings.c.name.type, sqlalchemy.types.String) self.assertIsInstance(bios_settings.c.version.type, sqlalchemy.types.String) self.assertIsInstance(bios_settings.c.value.type, sqlalchemy.types.Text) setting = {'node_id': data['id'], 'name': 'virtualization', 'value': 'on'} bios_settings.insert().execute(setting) setting = bios_settings.select( sqlalchemy.sql.and_( bios_settings.c.node_id == data['id'], bios_settings.c.name == setting['name'])).execute().first() self.assertEqual('on', setting['value']) def _check_2d13bc3d6bba(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('bios_interface', col_names) self.assertIsInstance(nodes.c.bios_interface.type, sqlalchemy.types.String) def _check_fb3f10dd262e(self, engine, data): nodes_tbl = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes_tbl.c] self.assertIn('fault', col_names) self.assertIsInstance(nodes_tbl.c.fault.type, sqlalchemy.types.String) def _check_b9117ac17882(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('deploy_step', col_names) 
self.assertIsInstance(nodes.c.deploy_step.type, sqlalchemy.types.String) def _pre_upgrade_664f85c2f622(self, engine): # Create a node and a conductor to verify existing records # get a conductor_group of "" data = { 'conductor_id': 98765432, 'node_uuid': uuidutils.generate_uuid(), } nodes = db_utils.get_table(engine, 'nodes') nodes.insert().execute({'uuid': data['node_uuid']}) conductors = db_utils.get_table(engine, 'conductors') conductors.insert().execute({'id': data['conductor_id'], 'hostname': uuidutils.generate_uuid()}) return data def _check_664f85c2f622(self, engine, data): nodes_tbl = db_utils.get_table(engine, 'nodes') conductors_tbl = db_utils.get_table(engine, 'conductors') for tbl in (nodes_tbl, conductors_tbl): col_names = [column.name for column in tbl.c] self.assertIn('conductor_group', col_names) self.assertIsInstance(tbl.c.conductor_group.type, sqlalchemy.types.String) node = nodes_tbl.select( nodes_tbl.c.uuid == data['node_uuid']).execute().first() self.assertEqual(node['conductor_group'], "") conductor = conductors_tbl.select( conductors_tbl.c.id == data['conductor_id']).execute().first() self.assertEqual(conductor['conductor_group'], "") def _check_d2b036ae9378(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('automated_clean', col_names) def _pre_upgrade_93706939026c(self, engine): data = { 'node_uuid': uuidutils.generate_uuid(), } nodes = db_utils.get_table(engine, 'nodes') nodes.insert().execute({'uuid': data['node_uuid']}) return data def _check_93706939026c(self, engine, data): nodes = db_utils.get_table(engine, 'nodes') col_names = [column.name for column in nodes.c] self.assertIn('protected', col_names) self.assertIn('protected_reason', col_names) node = nodes.select( nodes.c.uuid == data['node_uuid']).execute().first() self.assertFalse(node['protected']) self.assertIsNone(node['protected_reason']) def _check_f190f9d00a11(self, engine, data): nodes = 
db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('owner', col_names)

    def _pre_upgrade_dd67b91a1981(self, engine):
        data = {
            'node_uuid': uuidutils.generate_uuid(),
        }
        nodes = db_utils.get_table(engine, 'nodes')
        nodes.insert().execute({'uuid': data['node_uuid']})
        return data

    def _check_dd67b91a1981(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('allocation_id', col_names)
        node = nodes.select(
            nodes.c.uuid == data['node_uuid']).execute().first()
        self.assertIsNone(node['allocation_id'])

        allocations = db_utils.get_table(engine, 'allocations')
        col_names = [column.name for column in allocations.c]
        expected_names = ['id', 'uuid', 'node_id', 'created_at', 'updated_at',
                          'name', 'version', 'state', 'last_error',
                          'resource_class', 'traits', 'candidate_nodes',
                          'extra', 'conductor_affinity']
        self.assertEqual(sorted(expected_names), sorted(col_names))

        self.assertIsInstance(allocations.c.created_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(allocations.c.updated_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(allocations.c.id.type,
                              sqlalchemy.types.Integer)
        self.assertIsInstance(allocations.c.uuid.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(allocations.c.node_id.type,
                              sqlalchemy.types.Integer)
        self.assertIsInstance(allocations.c.state.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(allocations.c.last_error.type,
                              sqlalchemy.types.TEXT)
        self.assertIsInstance(allocations.c.resource_class.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(allocations.c.traits.type,
                              sqlalchemy.types.TEXT)
        self.assertIsInstance(allocations.c.candidate_nodes.type,
                              sqlalchemy.types.TEXT)
        self.assertIsInstance(allocations.c.extra.type,
                              sqlalchemy.types.TEXT)
        self.assertIsInstance(allocations.c.conductor_affinity.type,
                              sqlalchemy.types.Integer)

    def _check_9cbeefa3763f(self, engine, data):
        ports = db_utils.get_table(engine, 'ports')
        col_names = [column.name for column in ports.c]
        self.assertIn('is_smartnic', col_names)
        # in some backends bool type is integer
        self.assertIsInstance(ports.c.is_smartnic.type,
                              (sqlalchemy.types.Boolean,
                               sqlalchemy.types.Integer))

    def _check_28c44432c9c3(self, engine, data):
        nodes_tbl = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes_tbl.c]
        self.assertIn('description', col_names)
        self.assertIsInstance(nodes_tbl.c.description.type,
                              sqlalchemy.types.TEXT)

    def _check_2aac7e0872f6(self, engine, data):
        # Deploy templates.
        deploy_templates = db_utils.get_table(engine, 'deploy_templates')
        col_names = [column.name for column in deploy_templates.c]
        expected = ['created_at', 'updated_at', 'version', 'id', 'uuid',
                    'name']
        self.assertEqual(sorted(expected), sorted(col_names))
        self.assertIsInstance(deploy_templates.c.created_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(deploy_templates.c.updated_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(deploy_templates.c.version.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(deploy_templates.c.id.type,
                              sqlalchemy.types.Integer)
        self.assertIsInstance(deploy_templates.c.uuid.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(deploy_templates.c.name.type,
                              sqlalchemy.types.String)

        # Insert a deploy template.
        uuid = uuidutils.generate_uuid()
        name = 'CUSTOM_DT1'
        template = {'name': name, 'uuid': uuid}
        deploy_templates.insert().execute(template)

        # Query by UUID.
        result = deploy_templates.select(
            deploy_templates.c.uuid == uuid).execute().first()
        template_id = result['id']
        self.assertEqual(name, result['name'])

        # Query by name.
        result = deploy_templates.select(
            deploy_templates.c.name == name).execute().first()
        self.assertEqual(template_id, result['id'])

        # Query by ID.
        result = deploy_templates.select(
            deploy_templates.c.id == template_id).execute().first()
        self.assertEqual(uuid, result['uuid'])
        self.assertEqual(name, result['name'])

        # UUID is unique.
        template = {'name': 'CUSTOM_DT2', 'uuid': uuid}
        self.assertRaises(db_exc.DBDuplicateEntry,
                          deploy_templates.insert().execute, template)
        # Name is unique.
        template = {'name': name, 'uuid': uuidutils.generate_uuid()}
        self.assertRaises(db_exc.DBDuplicateEntry,
                          deploy_templates.insert().execute, template)

        # Deploy template steps.
        deploy_template_steps = db_utils.get_table(engine,
                                                   'deploy_template_steps')
        col_names = [column.name for column in deploy_template_steps.c]
        expected = ['created_at', 'updated_at', 'version', 'id',
                    'deploy_template_id', 'interface', 'step', 'args',
                    'priority']
        self.assertEqual(sorted(expected), sorted(col_names))
        self.assertIsInstance(deploy_template_steps.c.created_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(deploy_template_steps.c.updated_at.type,
                              sqlalchemy.types.DateTime)
        self.assertIsInstance(deploy_template_steps.c.version.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(deploy_template_steps.c.id.type,
                              sqlalchemy.types.Integer)
        self.assertIsInstance(deploy_template_steps.c.deploy_template_id.type,
                              sqlalchemy.types.Integer)
        self.assertIsInstance(deploy_template_steps.c.interface.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(deploy_template_steps.c.step.type,
                              sqlalchemy.types.String)
        self.assertIsInstance(deploy_template_steps.c.args.type,
                              sqlalchemy.types.Text)
        self.assertIsInstance(deploy_template_steps.c.priority.type,
                              sqlalchemy.types.Integer)

        # Insert a deploy template step.
        interface = 'raid'
        step_name = 'create_configuration'
        args = '{"logical_disks": []}'
        priority = 10
        step = {'deploy_template_id': template_id, 'interface': interface,
                'step': step_name, 'args': args, 'priority': priority}
        deploy_template_steps.insert().execute(step)

        # Query by deploy template ID.
        result = deploy_template_steps.select(
            deploy_template_steps.c.deploy_template_id ==
            template_id).execute().first()
        self.assertEqual(template_id, result['deploy_template_id'])
        self.assertEqual(interface, result['interface'])
        self.assertEqual(step_name, result['step'])
        self.assertEqual(args, result['args'])
        self.assertEqual(priority, result['priority'])

        # Insert another step for the same template.
        deploy_template_steps.insert().execute(step)

    def _check_1e15e7122cc9(self, engine, data):
        # Deploy template 'extra' field.
        deploy_templates = db_utils.get_table(engine, 'deploy_templates')
        col_names = [column.name for column in deploy_templates.c]
        expected = ['created_at', 'updated_at', 'version', 'id', 'uuid',
                    'name', 'extra']
        self.assertEqual(sorted(expected), sorted(col_names))
        self.assertIsInstance(deploy_templates.c.extra.type,
                              sqlalchemy.types.TEXT)

    def _check_ce6c4b3cf5a2(self, engine, data):
        allocations = db_utils.get_table(engine, 'allocations')
        col_names = [column.name for column in allocations.c]
        self.assertIn('owner', col_names)

    def _pre_upgrade_cd2c80feb331(self, engine):
        data = {
            'node_uuid': uuidutils.generate_uuid(),
        }
        nodes = db_utils.get_table(engine, 'nodes')
        nodes.insert().execute({'uuid': data['node_uuid']})
        return data

    def _check_cd2c80feb331(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('retired', col_names)
        self.assertIn('retired_reason', col_names)
        node = nodes.select(
            nodes.c.uuid == data['node_uuid']).execute().first()
        self.assertFalse(node['retired'])
        self.assertIsNone(node['retired_reason'])

    def _check_b2ad35726bb0(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')
        col_names = [column.name for column in nodes.c]
        self.assertIn('lessee', col_names)

    def test_upgrade_and_version(self):
        with patch_with_engine(self.engine):
            self.migration_api.upgrade('head')
            self.assertIsNotNone(self.migration_api.version())

    def test_create_schema_and_version(self):
        with patch_with_engine(self.engine):
            self.migration_api.create_schema()
            self.assertIsNotNone(self.migration_api.version())

    def test_upgrade_and_create_schema(self):
        with patch_with_engine(self.engine):
            self.migration_api.upgrade('31baaf680d2b')
            self.assertRaises(db_exc.DBMigrationError,
                              self.migration_api.create_schema)

    def test_upgrade_twice(self):
        with patch_with_engine(self.engine):
            self.migration_api.upgrade('31baaf680d2b')
            v1 = self.migration_api.version()
            self.migration_api.upgrade('head')
            v2 = self.migration_api.version()
            self.assertNotEqual(v1, v2)


class TestMigrationsMySQL(MigrationCheckersMixin,
                          WalkVersionsMixin,
                          test_fixtures.OpportunisticDBTestMixin,
                          test_base.BaseTestCase):
    FIXTURE = test_fixtures.MySQLOpportunisticFixture

    def _pre_upgrade_e918ff30eb42(self, engine):
        nodes = db_utils.get_table(engine, 'nodes')

        # this should always fail pre-upgrade
        mediumtext = 'a' * (pow(2, 16) + 1)
        uuid = uuidutils.generate_uuid()
        expected_to_fail_data = {'uuid': uuid, 'instance_info': mediumtext}
        self.assertRaises(db_exc.DBError, nodes.insert().execute,
                          expected_to_fail_data)

        # this should always work pre-upgrade
        text = 'a' * (pow(2, 16) - 1)
        uuid = uuidutils.generate_uuid()
        valid_pre_upgrade_data = {'uuid': uuid, 'instance_info': text}
        nodes.insert().execute(valid_pre_upgrade_data)

        return valid_pre_upgrade_data

    def _check_e918ff30eb42(self, engine, data):
        nodes = db_utils.get_table(engine, 'nodes')

        # check that the data for the successful pre-upgrade
        # entry didn't change
        node = nodes.select(nodes.c.uuid == data['uuid']).execute().first()
        self.assertIsNotNone(node)
        self.assertEqual(data['instance_info'], node['instance_info'])

        # now this should pass post-upgrade
        test = 'b' * (pow(2, 16) + 1)
        uuid = uuidutils.generate_uuid()
        data = {'uuid': uuid, 'instance_info': test}
        nodes.insert().execute(data)
        node = nodes.select(nodes.c.uuid == uuid).execute().first()
        self.assertEqual(test, node['instance_info'])


class TestMigrationsPostgreSQL(MigrationCheckersMixin,
                               WalkVersionsMixin,
                               test_fixtures.OpportunisticDBTestMixin,
                               test_base.BaseTestCase):
    FIXTURE = test_fixtures.PostgresqlOpportunisticFixture


class ModelsMigrationSyncMixin(object):

    def setUp(self):
        super(ModelsMigrationSyncMixin, self).setUp()
        self.engine = enginefacade.writer.get_engine()
        self.useFixture(fixtures.Timeout(MIGRATIONS_TIMEOUT, gentle=True))

    def get_metadata(self):
        return models.Base.metadata

    def get_engine(self):
        return self.engine

    def db_sync(self, engine):
        with patch_with_engine(engine):
            migration.upgrade('head')


class ModelsMigrationsSyncMysql(ModelsMigrationSyncMixin,
                                test_migrations.ModelsMigrationsSync,
                                test_fixtures.OpportunisticDBTestMixin,
                                test_base.BaseTestCase):
    FIXTURE = test_fixtures.MySQLOpportunisticFixture


class ModelsMigrationsSyncPostgres(ModelsMigrationSyncMixin,
                                   test_migrations.ModelsMigrationsSync,
                                   test_fixtures.OpportunisticDBTestMixin,
                                   test_base.BaseTestCase):
    FIXTURE = test_fixtures.PostgresqlOpportunisticFixture


ironic-15.0.0/ironic/tests/unit/db/sqlalchemy/test_models.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ironic.common import exception
from ironic.db.sqlalchemy import models
from ironic.tests import base as test_base


class TestGetClass(test_base.TestCase):

    def test_get_class(self):
        ret = models.get_class('Chassis')
        self.assertEqual(models.Chassis, ret)

        for model in models.Base.__subclasses__():
            ret = models.get_class(model.__name__)
            self.assertEqual(model, ret)

    def test_get_class_bad(self):
        self.assertRaises(exception.IronicException,
                          models.get_class, "DoNotExist")


ironic-15.0.0/ironic/tests/unit/db/sqlalchemy/test_api.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import inspect

from ironic.db.sqlalchemy import api as sqlalchemy_api
from ironic.tests import base as test_base


class TestDBWriteMethodsRetryOnDeadlock(test_base.TestCase):

    def test_retry_on_deadlock(self):
        # This test ensures that every dbapi method doing database write is
        # wrapped with retry_on_deadlock decorator
        for name, method in inspect.getmembers(sqlalchemy_api.Connection,
                                               predicate=inspect.ismethod):
            src = inspect.getsource(method)
            if 'with _session_for_write()' in src:
                self.assertIn(
                    '@oslo_db_api.retry_on_deadlock', src,
                    'oslo_db\'s retry_on_deadlock decorator not '
                    'applied to method '
                    'ironic.db.sqlalchemy.api.Connection.%s '
                    'doing database write' % name)


ironic-15.0.0/ironic/tests/unit/db/sqlalchemy/test_types.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for custom SQLAlchemy types via Ironic DB."""

from oslo_db import exception as db_exc
from oslo_utils import uuidutils

import ironic.db.sqlalchemy.api as sa_api
from ironic.db.sqlalchemy import models
from ironic.tests.unit.db import base


class SqlAlchemyCustomTypesTestCase(base.DbTestCase):

    # NOTE(max_lobur): Since it's not straightforward to check this in
    # isolation these tests use existing db models.

    def test_JSONEncodedDict_default_value(self):
        # Create chassis w/o extra specified.
        ch1_id = uuidutils.generate_uuid()
        self.dbapi.create_chassis({'uuid': ch1_id})
        # Get chassis manually to test SA types in isolation from UOM.
        ch1 = sa_api.model_query(models.Chassis).filter_by(uuid=ch1_id).one()
        self.assertEqual({}, ch1.extra)

        # Create chassis with extra specified.
        ch2_id = uuidutils.generate_uuid()
        extra = {'foo1': 'test', 'foo2': 'other extra'}
        self.dbapi.create_chassis({'uuid': ch2_id, 'extra': extra})
        # Get chassis manually to test SA types in isolation from UOM.
        ch2 = sa_api.model_query(models.Chassis).filter_by(uuid=ch2_id).one()
        self.assertEqual(extra, ch2.extra)

    def test_JSONEncodedDict_type_check(self):
        self.assertRaises(db_exc.DBError,
                          self.dbapi.create_chassis,
                          {'extra': ['this is not a dict']})

    def test_JSONEncodedList_default_value(self):
        # Create conductor w/o extra specified.
        cdr1_id = 321321
        self.dbapi.register_conductor({'hostname': 'test_host1',
                                       'drivers': None,
                                       'id': cdr1_id})
        # Get conductor manually to test SA types in isolation from UOM.
        cdr1 = (sa_api
                .model_query(models.Conductor)
                .filter_by(id=cdr1_id)
                .one())
        self.assertEqual([], cdr1.drivers)

        # Create conductor with drivers specified.
        cdr2_id = 623623
        drivers = ['foo1', 'other driver']
        self.dbapi.register_conductor({'hostname': 'test_host2',
                                       'drivers': drivers,
                                       'id': cdr2_id})
        # Get conductor manually to test SA types in isolation from UOM.
        cdr2 = (sa_api
                .model_query(models.Conductor)
                .filter_by(id=cdr2_id)
                .one())
        self.assertEqual(drivers, cdr2.drivers)

    def test_JSONEncodedList_type_check(self):
        self.assertRaises(db_exc.DBError,
                          self.dbapi.register_conductor,
                          {'hostname': 'test_host3',
                           'drivers': {'this is not a list': 'test'}})


ironic-15.0.0/ironic/tests/unit/db/sqlalchemy/__init__.py

ironic-15.0.0/ironic/tests/unit/db/base.py

# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Ironic DB test base class."""

import fixtures
from oslo_config import cfg
from oslo_db.sqlalchemy import enginefacade

from ironic.db import api as dbapi
from ironic.db.sqlalchemy import migration
from ironic.db.sqlalchemy import models
from ironic.tests import base

CONF = cfg.CONF

_DB_CACHE = None


class Database(fixtures.Fixture):

    def __init__(self, engine, db_migrate, sql_connection):
        self.sql_connection = sql_connection
        self.engine = engine
        self.engine.dispose()
        conn = self.engine.connect()
        self.setup_sqlite(db_migrate)
        self.post_migrations()

        self._DB = "".join(line for line in conn.connection.iterdump())
        self.engine.dispose()

    def setup_sqlite(self, db_migrate):
        if db_migrate.version():
            return
        models.Base.metadata.create_all(self.engine)
        db_migrate.stamp('head')

    def setUp(self):
        super(Database, self).setUp()
        conn = self.engine.connect()
        conn.connection.executescript(self._DB)
        self.addCleanup(self.engine.dispose)

    def post_migrations(self):
        """Any addition steps that are needed outside of the migrations."""


class DbTestCase(base.TestCase):

    def setUp(self):
        super(DbTestCase, self).setUp()

        self.dbapi = dbapi.get_instance()

        global _DB_CACHE
        if not _DB_CACHE:
            engine = enginefacade.writer.get_engine()
            _DB_CACHE = Database(engine, migration,
                                 sql_connection=CONF.database.connection)
        self.useFixture(_DB_CACHE)


ironic-15.0.0/ironic/tests/unit/db/test_volume_connectors.py

# Copyright 2015 Hitachi Data Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for manipulating VolumeConnectors via the DB API"""

from oslo_utils import uuidutils

from ironic.common import exception
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils as db_utils


class DbVolumeConnectorTestCase(base.DbTestCase):

    def setUp(self):
        # This method creates a volume_connector for every test and
        # replaces a test for creating a volume_connector.
        super(DbVolumeConnectorTestCase, self).setUp()
        self.node = db_utils.create_test_node()
        self.connector = db_utils.create_test_volume_connector(
            node_id=self.node.id, type='test',
            connector_id='test-connector_id')

    def test_create_volume_connector_duplicated_type_connector_id(self):
        self.assertRaises(exception.VolumeConnectorTypeAndIdAlreadyExists,
                          db_utils.create_test_volume_connector,
                          uuid=uuidutils.generate_uuid(),
                          node_id=self.node.id,
                          type=self.connector.type,
                          connector_id=self.connector.connector_id)

    def test_create_volume_connector_duplicated_uuid(self):
        self.assertRaises(exception.VolumeConnectorAlreadyExists,
                          db_utils.create_test_volume_connector,
                          uuid=self.connector.uuid,
                          node_id=self.node.id,
                          type='test',
                          connector_id='test-connector_id-2')

    def test_get_volume_connector_by_id(self):
        res = self.dbapi.get_volume_connector_by_id(self.connector.id)
        self.assertEqual(self.connector.type, res.type)
        self.assertEqual(self.connector.connector_id, res.connector_id)
        self.assertRaises(exception.VolumeConnectorNotFound,
                          self.dbapi.get_volume_connector_by_id, -1)

    def test_get_volume_connector_by_uuid(self):
        res = self.dbapi.get_volume_connector_by_uuid(self.connector.uuid)
        self.assertEqual(self.connector.id, res.id)
        self.assertRaises(exception.VolumeConnectorNotFound,
                          self.dbapi.get_volume_connector_by_uuid, -1)

    def _connector_list_preparation(self):
        uuids = [str(self.connector.uuid)]
        for i in range(1, 6):
            volume_connector = db_utils.create_test_volume_connector(
                uuid=uuidutils.generate_uuid(),
                type='iqn',
                connector_id='iqn.test-%s' % i)
            uuids.append(str(volume_connector.uuid))
        return uuids

    def test_get_volume_connector_list(self):
        uuids = self._connector_list_preparation()
        res = self.dbapi.get_volume_connector_list()
        res_uuids = [r.uuid for r in res]
        self.assertCountEqual(uuids, res_uuids)

    def test_get_volume_connector_list_sorted(self):
        uuids = self._connector_list_preparation()
        res = self.dbapi.get_volume_connector_list(sort_key='uuid')
        res_uuids = [r.uuid for r in res]
        self.assertEqual(sorted(uuids), res_uuids)

        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.get_volume_connector_list,
                          sort_key='foo')

    def test_get_volume_connectors_by_node_id(self):
        res = self.dbapi.get_volume_connectors_by_node_id(self.node.id)
        self.assertEqual(self.connector.type, res[0].type)
        self.assertEqual(self.connector.connector_id, res[0].connector_id)

    def test_get_volume_connectors_by_node_id_that_does_not_exist(self):
        self.assertEqual([],
                         self.dbapi.get_volume_connectors_by_node_id(99))

    def test_update_volume_connector(self):
        old_connector_id = self.connector.connector_id
        new_connector_id = 'test-connector_id-2'
        self.assertNotEqual(old_connector_id, new_connector_id)

        res = self.dbapi.update_volume_connector(
            self.connector.id, {'connector_id': new_connector_id})
        self.assertEqual(new_connector_id, res.connector_id)
        res = self.dbapi.update_volume_connector(
            self.connector.uuid, {'connector_id': old_connector_id})
        self.assertEqual(old_connector_id, res.connector_id)

    def test_update_volume_connector_uuid(self):
        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.update_volume_connector,
                          self.connector.id, {'uuid': ''})

    def test_update_volume_connector_fails_invalid_id(self):
        self.assertRaises(exception.VolumeConnectorNotFound,
                          self.dbapi.update_volume_connector,
                          -1, {'node_id': ''})

    def test_update_volume_connector_duplicated_type_connector_id(self):
        type = self.connector.type
        connector_id1 = self.connector.connector_id
        connector_id2 = 'test-connector_id-2'
        volume_connector2 = db_utils.create_test_volume_connector(
            uuid=uuidutils.generate_uuid(),
            node_id=self.node.id,
            type=type,
            connector_id=connector_id2)
        self.assertRaises(exception.VolumeConnectorTypeAndIdAlreadyExists,
                          self.dbapi.update_volume_connector,
                          volume_connector2.id,
                          {'connector_id': connector_id1})

    def test_destroy_volume_connector(self):
        self.dbapi.destroy_volume_connector(self.connector.id)
        # Attempt to retrieve the volume to verify it is gone.
        self.assertRaises(exception.VolumeConnectorNotFound,
                          self.dbapi.get_volume_connector_by_id,
                          self.connector.id)
        # Ensure that the destroy_volume_connector returns the
        # expected exception.
        self.assertRaises(exception.VolumeConnectorNotFound,
                          self.dbapi.destroy_volume_connector,
                          self.connector.id)


ironic-15.0.0/ironic/tests/unit/db/test_deploy_templates.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for manipulating DeployTemplates via the DB API"""

from oslo_db import exception as db_exc
from oslo_utils import uuidutils

from ironic.common import exception
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils as db_utils


class DbDeployTemplateTestCase(base.DbTestCase):

    def setUp(self):
        super(DbDeployTemplateTestCase, self).setUp()
        self.template = db_utils.create_test_deploy_template()

    def test_create(self):
        self.assertEqual('CUSTOM_DT1', self.template.name)
        self.assertEqual(1, len(self.template.steps))
        step = self.template.steps[0]
        self.assertEqual(self.template.id, step.deploy_template_id)
        self.assertEqual('raid', step.interface)
        self.assertEqual('create_configuration', step.step)
        self.assertEqual({'logical_disks': []}, step.args)
        self.assertEqual(10, step.priority)
        self.assertEqual({}, self.template.extra)

    def test_create_no_steps(self):
        uuid = uuidutils.generate_uuid()
        template = db_utils.create_test_deploy_template(
            uuid=uuid, name='CUSTOM_DT2', steps=[])
        self.assertEqual([], template.steps)

    def test_create_duplicate_uuid(self):
        self.assertRaises(exception.DeployTemplateAlreadyExists,
                          db_utils.create_test_deploy_template,
                          uuid=self.template.uuid, name='CUSTOM_DT2')

    def test_create_duplicate_name(self):
        uuid = uuidutils.generate_uuid()
        self.assertRaises(exception.DeployTemplateDuplicateName,
                          db_utils.create_test_deploy_template,
                          uuid=uuid, name=self.template.name)

    def test_create_invalid_step_no_interface(self):
        uuid = uuidutils.generate_uuid()
        template = db_utils.get_test_deploy_template(uuid=uuid,
                                                     name='CUSTOM_DT2')
        del template['steps'][0]['interface']
        self.assertRaises(db_exc.DBError,
                          self.dbapi.create_deploy_template,
                          template)

    def test_update_name(self):
        values = {'name': 'CUSTOM_DT2'}
        template = self.dbapi.update_deploy_template(self.template.id, values)
        self.assertEqual('CUSTOM_DT2', template.name)

    def test_update_steps_replace(self):
        step = {'interface': 'bios', 'step': 'apply_configuration',
                'args': {}, 'priority': 50}
        values = {'steps': [step]}
        template = self.dbapi.update_deploy_template(self.template.id, values)
        self.assertEqual(1, len(template.steps))
        step = template.steps[0]
        self.assertEqual('bios', step.interface)
        self.assertEqual('apply_configuration', step.step)
        self.assertEqual({}, step.args)
        self.assertEqual(50, step.priority)

    def test_update_steps_add(self):
        step = {'interface': 'bios', 'step': 'apply_configuration',
                'args': {}, 'priority': 50}
        values = {'steps': [self.template.steps[0], step]}
        template = self.dbapi.update_deploy_template(self.template.id, values)
        self.assertEqual(2, len(template.steps))
        step0 = template.steps[0]
        self.assertEqual(self.template.steps[0].id, step0.id)
        self.assertEqual('raid', step0.interface)
        self.assertEqual('create_configuration', step0.step)
        self.assertEqual({'logical_disks': []}, step0.args)
        self.assertEqual(10, step0.priority)
        step1 = template.steps[1]
        self.assertNotEqual(self.template.steps[0].id, step1.id)
        self.assertEqual('bios', step1.interface)
        self.assertEqual('apply_configuration', step1.step)
        self.assertEqual({}, step1.args)
        self.assertEqual(50, step1.priority)

    def test_update_steps_replace_args(self):
        step = self.template.steps[0]
        step['args'] = {'foo': 'bar'}
        values = {'steps': [step]}
        template = self.dbapi.update_deploy_template(self.template.id, values)
        self.assertEqual(1, len(template.steps))
        step = template.steps[0]
        self.assertEqual({'foo': 'bar'}, step.args)

    def test_update_steps_remove_all(self):
        values = {'steps': []}
        template = self.dbapi.update_deploy_template(self.template.id, values)
        self.assertEqual([], template.steps)

    def test_update_extra(self):
        values = {'extra': {'foo': 'bar'}}
        template = self.dbapi.update_deploy_template(self.template.id, values)
        self.assertEqual({'foo': 'bar'}, template.extra)

    def test_update_duplicate_name(self):
        uuid = uuidutils.generate_uuid()
        template2 = db_utils.create_test_deploy_template(uuid=uuid,
                                                         name='CUSTOM_DT2')
        values = {'name': self.template.name}
        self.assertRaises(exception.DeployTemplateDuplicateName,
                          self.dbapi.update_deploy_template,
                          template2.id, values)

    def test_update_not_found(self):
        self.assertRaises(exception.DeployTemplateNotFound,
                          self.dbapi.update_deploy_template, 123, {})

    def test_update_uuid_not_allowed(self):
        uuid = uuidutils.generate_uuid()
        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.update_deploy_template,
                          self.template.id, {'uuid': uuid})

    def test_destroy(self):
        self.dbapi.destroy_deploy_template(self.template.id)
        # Attempt to retrieve the template to verify it is gone.
        self.assertRaises(exception.DeployTemplateNotFound,
                          self.dbapi.get_deploy_template_by_id,
                          self.template.id)
        # Ensure that the destroy_deploy_template returns the
        # expected exception.
        self.assertRaises(exception.DeployTemplateNotFound,
                          self.dbapi.destroy_deploy_template,
                          self.template.id)

    def test_get_deploy_template_by_id(self):
        res = self.dbapi.get_deploy_template_by_id(self.template.id)
        self.assertEqual(self.template.id, res.id)
        self.assertEqual(self.template.name, res.name)
        self.assertEqual(1, len(res.steps))
        self.assertEqual(self.template.id, res.steps[0].deploy_template_id)

        self.assertRaises(exception.DeployTemplateNotFound,
                          self.dbapi.get_deploy_template_by_id, -1)

    def test_get_deploy_template_by_uuid(self):
        res = self.dbapi.get_deploy_template_by_uuid(self.template.uuid)
        self.assertEqual(self.template.id, res.id)

        invalid_uuid = uuidutils.generate_uuid()
        self.assertRaises(exception.DeployTemplateNotFound,
                          self.dbapi.get_deploy_template_by_uuid,
                          invalid_uuid)

    def test_get_deploy_template_by_name(self):
        res = self.dbapi.get_deploy_template_by_name(self.template.name)
        self.assertEqual(self.template.id, res.id)

        self.assertRaises(exception.DeployTemplateNotFound,
                          self.dbapi.get_deploy_template_by_name, 'bogus')

    def _template_list_preparation(self):
        uuids = [str(self.template.uuid)]
        for i in range(1, 3):
            template = db_utils.create_test_deploy_template(
                uuid=uuidutils.generate_uuid(),
                name='CUSTOM_DT%d' % (i + 1))
            uuids.append(str(template.uuid))
        return uuids

    def test_get_deploy_template_list(self):
        uuids = self._template_list_preparation()
        res = self.dbapi.get_deploy_template_list()
        res_uuids = [r.uuid for r in res]
        self.assertCountEqual(uuids, res_uuids)

    def test_get_deploy_template_list_sorted(self):
        uuids = self._template_list_preparation()
        res = self.dbapi.get_deploy_template_list(sort_key='uuid')
        res_uuids = [r.uuid for r in res]
        self.assertEqual(sorted(uuids), res_uuids)

        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.get_deploy_template_list,
                          sort_key='foo')

    def test_get_deploy_template_list_by_names(self):
        self._template_list_preparation()
        names = ['CUSTOM_DT2', 'CUSTOM_DT3']
        res = self.dbapi.get_deploy_template_list_by_names(names=names)
        res_names = [r.name for r in res]
        self.assertCountEqual(names, res_names)

    def test_get_deploy_template_list_by_names_no_match(self):
        self._template_list_preparation()
        names = ['CUSTOM_FOO']
        res = self.dbapi.get_deploy_template_list_by_names(names=names)
        self.assertEqual([], res)


ironic-15.0.0/ironic/tests/unit/db/test_api.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random

import mock
from oslo_db.sqlalchemy import utils as db_utils
from oslo_utils import uuidutils
from testtools import matchers

from ironic.common import context
from ironic.common import exception
from ironic.common import release_mappings
from ironic.db import api as db_api
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils


class UpgradingTestCase(base.DbTestCase):

    def setUp(self):
        super(UpgradingTestCase, self).setUp()
        self.dbapi = db_api.get_instance()
        self.object_versions = release_mappings.get_object_versions()

    def test_check_versions_emptyDB(self):
        # nothing in the DB
        self.assertTrue(self.dbapi.check_versions())

    @mock.patch.object(db_utils, 'column_exists', autospec=True)
    def test_check_versions_missing_version_columns(self, column_exists):
        column_exists.return_value = False
        self.assertRaises(exception.DatabaseVersionTooOld,
                          self.dbapi.check_versions)

    def test_check_versions(self):
        for v in self.object_versions['Node']:
            node = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                          version=v)
            node = self.dbapi.get_node_by_id(node.id)
            self.assertEqual(v, node.version)
        self.assertTrue(self.dbapi.check_versions())

    def test_check_versions_node_no_version(self):
        node = utils.create_test_node(version=None)
        node = self.dbapi.get_node_by_id(node.id)
        self.assertIsNone(node.version)
        self.assertFalse(self.dbapi.check_versions())

    def test_check_versions_ignore_node(self):
        node = utils.create_test_node(version=None)
        node = self.dbapi.get_node_by_id(node.id)
        self.assertIsNone(node.version)
        self.assertTrue(self.dbapi.check_versions(ignore_models=['Node']))

    def test_check_versions_node_old(self):
        node = utils.create_test_node(version='1.0')
        node = self.dbapi.get_node_by_id(node.id)
        self.assertEqual('1.0', node.version)
        self.assertFalse(self.dbapi.check_versions())

    def test_check_versions_conductor(self):
        for v in self.object_versions['Conductor']:
            # NOTE(jroll) conductor model doesn't have a uuid :(
            conductor = utils.create_test_conductor(
                hostname=uuidutils.generate_uuid(), version=v,
                id=random.randint(1, 1000000))
            conductor = self.dbapi.get_conductor(conductor.hostname)
            self.assertEqual(v, conductor.version)
        self.assertTrue(self.dbapi.check_versions())

    def test_check_versions_conductor_old(self):
        conductor = utils.create_test_conductor(version='1.0')
        conductor = self.dbapi.get_conductor(conductor.hostname)
        self.assertEqual('1.0', conductor.version)
        self.assertFalse(self.dbapi.check_versions())


class GetNotVersionsTestCase(base.DbTestCase):

    def setUp(self):
        super(GetNotVersionsTestCase, self).setUp()
        self.dbapi = db_api.get_instance()

    def test_get_not_versions(self):
        versions = ['1.1', '1.2', '1.3']
        node_uuids = []
        for v in versions:
            node = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                          version=v)
            node_uuids.append(node.uuid)
        self.assertEqual([], self.dbapi.get_not_versions('Node', versions))

        res = self.dbapi.get_not_versions('Node', ['2.0'])
        self.assertThat(res, matchers.HasLength(len(node_uuids)))
        res_uuids = [n.uuid for n in res]
        self.assertEqual(node_uuids, res_uuids)

        res = self.dbapi.get_not_versions('Node', versions[1:])
        self.assertThat(res, matchers.HasLength(1))
        self.assertEqual(node_uuids[0], res[0].uuid)

    def test_get_not_versions_null(self):
        node = utils.create_test_node(uuid=uuidutils.generate_uuid(),
                                      version=None)
        node = self.dbapi.get_node_by_id(node.id)
        self.assertIsNone(node.version)
        res = self.dbapi.get_not_versions('Node', ['1.6'])
        self.assertThat(res, matchers.HasLength(1))
        self.assertEqual(node.uuid, res[0].uuid)

    def test_get_not_versions_no_model(self):
        utils.create_test_node(uuid=uuidutils.generate_uuid(), version='1.4')
        self.assertRaises(exception.IronicException,
                          self.dbapi.get_not_versions, 'NotExist', ['1.6'])


class UpdateToLatestVersionsTestCase(base.DbTestCase):

    def setUp(self):
        super(UpdateToLatestVersionsTestCase, self).setUp()
        self.context = context.get_admin_context()
        self.dbapi = db_api.get_instance()

        obj_versions = release_mappings.get_object_versions(
            objects=['Node', 'Chassis'])
        master_objs = release_mappings.RELEASE_MAPPING['master']['objects']
        self.node_ver = master_objs['Node'][0]
        self.chassis_ver = master_objs['Chassis'][0]
        self.node_old_ver = self._get_old_object_version(
            self.node_ver, obj_versions['Node'])
        self.chassis_old_ver = self._get_old_object_version(
            self.chassis_ver, obj_versions['Chassis'])
        self.node_version_same = self.node_old_ver == self.node_ver
        self.chassis_version_same = self.chassis_old_ver == self.chassis_ver
        # number of objects with different versions
        self.num_diff_objs = 2
        if self.node_version_same:
            self.num_diff_objs -= 1
        if self.chassis_version_same:
            self.num_diff_objs -= 1

    def _get_old_object_version(self, latest_version, versions):
        """Return a version that is older (not same) as latest version.

        If there aren't any older versions, return the latest version.
        """
        for v in versions:
            if v != latest_version:
                return v
        return latest_version

    def test_empty_db(self):
        self.assertEqual(
            (0, 0), self.dbapi.update_to_latest_versions(self.context, 10))

    def test_version_exists(self):
        # Node will be in latest version
        utils.create_test_node()
        self.assertEqual(
            (0, 0), self.dbapi.update_to_latest_versions(self.context, 10))

    def test_one_node(self):
        node = utils.create_test_node(version=self.node_old_ver)
        expected = (0, 0) if self.node_version_same else (1, 1)
        self.assertEqual(
            expected, self.dbapi.update_to_latest_versions(self.context, 10))
        res = self.dbapi.get_node_by_uuid(node.uuid)
        self.assertEqual(self.node_ver, res.version)

    def test_max_count_zero(self):
        orig_node = utils.create_test_node(version=self.node_old_ver)
        orig_chassis = utils.create_test_chassis(version=self.chassis_old_ver)
        self.assertEqual((self.num_diff_objs, self.num_diff_objs),
                         self.dbapi.update_to_latest_versions(self.context,
                                                              0))
        node = self.dbapi.get_node_by_uuid(orig_node.uuid)
        self.assertEqual(self.node_ver, node.version)
        chassis = self.dbapi.get_chassis_by_uuid(orig_chassis.uuid)
        self.assertEqual(self.chassis_ver, chassis.version)

    def test_old_version_max_count_1(self):
        orig_node = utils.create_test_node(version=self.node_old_ver)
        orig_chassis = utils.create_test_chassis(version=self.chassis_old_ver)
        num_modified = 1 if self.num_diff_objs else 0
        self.assertEqual((self.num_diff_objs, num_modified),
                         self.dbapi.update_to_latest_versions(self.context, 1))
        node = self.dbapi.get_node_by_uuid(orig_node.uuid)
        chassis = self.dbapi.get_chassis_by_uuid(orig_chassis.uuid)
        self.assertTrue(node.version == self.node_old_ver
                        or chassis.version == self.chassis_old_ver)
        self.assertTrue(node.version == self.node_ver
                        or chassis.version == self.chassis_ver)

    def _create_nodes(self, num_nodes):
        version = self.node_old_ver
        nodes = []
        for i in range(0, num_nodes):
            node = utils.create_test_node(version=version,
                                          uuid=uuidutils.generate_uuid())
            nodes.append(node.uuid)
        for uuid in nodes:
            node = self.dbapi.get_node_by_uuid(uuid)
            self.assertEqual(version, node.version)
        return nodes

    def test_old_version_max_count_2_some_nodes(self):
        if self.node_version_same:
            # can't test if we don't have diff versions of the node
            return

        nodes = self._create_nodes(5)
        self.assertEqual(
            (5, 2), self.dbapi.update_to_latest_versions(self.context, 2))
        self.assertEqual(
            (3, 3), self.dbapi.update_to_latest_versions(self.context, 10))
        for uuid in nodes:
            node = self.dbapi.get_node_by_uuid(uuid)
            self.assertEqual(self.node_ver, node.version)

    def test_old_version_max_count_same_nodes(self):
        if self.node_version_same:
            # can't test if we don't have diff versions of the node
            return

        nodes = self._create_nodes(5)
        self.assertEqual(
            (5, 5), self.dbapi.update_to_latest_versions(self.context, 5))
        for uuid in nodes:
            node = self.dbapi.get_node_by_uuid(uuid)
            self.assertEqual(self.node_ver, node.version)

ironic-15.0.0/ironic/tests/unit/db/test_portgroups.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not
# use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for manipulating portgroups via the DB API"""

from oslo_utils import uuidutils

from ironic.common import exception
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils as db_utils


class DbportgroupTestCase(base.DbTestCase):

    def setUp(self):
        # This method creates a portgroup for every test and
        # replaces a test for creating a portgroup.
        super(DbportgroupTestCase, self).setUp()
        self.node = db_utils.create_test_node()
        self.portgroup = db_utils.create_test_portgroup(node_id=self.node.id)

    def _create_test_portgroup_range(self, count):
        """Create the specified number of test portgroup entries in the DB.

        It uses the create_test_portgroup method and returns a list of
        Portgroup DB objects.
        :param count: Specifies the number of portgroups to be created
        :returns: List of Portgroup DB objects
        """
        uuids = []
        for i in range(1, count):
            portgroup = db_utils.create_test_portgroup(
                uuid=uuidutils.generate_uuid(),
                name='portgroup' + str(i),
                address='52:54:00:cf:2d:4%s' % i)
            uuids.append(str(portgroup.uuid))
        return uuids

    def test_get_portgroup_by_id(self):
        res = self.dbapi.get_portgroup_by_id(self.portgroup.id)
        self.assertEqual(self.portgroup.address, res.address)

    def test_get_portgroup_by_id_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_id, 99)

    def test_get_portgroup_by_uuid(self):
        res = self.dbapi.get_portgroup_by_uuid(self.portgroup.uuid)
        self.assertEqual(self.portgroup.id, res.id)

    def test_get_portgroup_by_uuid_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_uuid,
                          'EEEEEEEE-EEEE-EEEE-EEEE-EEEEEEEEEEEE')

    def test_get_portgroup_by_address(self):
        res = self.dbapi.get_portgroup_by_address(self.portgroup.address)
        self.assertEqual(self.portgroup.id, res.id)

    def test_get_portgroup_by_address_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_address,
                          '31:31:31:31:31:31')

    def test_get_portgroup_by_name(self):
        res = self.dbapi.get_portgroup_by_name(self.portgroup.name)
        self.assertEqual(self.portgroup.id, res.id)

    def test_get_portgroup_by_name_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_name, 'testfail')

    def test_get_portgroup_list(self):
        uuids = self._create_test_portgroup_range(6)
        # Also add the uuid for the portgroup created in setUp()
        uuids.append(str(self.portgroup.uuid))
        res = self.dbapi.get_portgroup_list()
        res_uuids = [r.uuid for r in res]
        self.assertCountEqual(uuids, res_uuids)

    def test_get_portgroup_list_sorted(self):
        uuids = self._create_test_portgroup_range(6)
        # Also add the uuid for the portgroup created in setUp()
        uuids.append(str(self.portgroup.uuid))
        res = self.dbapi.get_portgroup_list(sort_key='uuid')
        res_uuids = [r.uuid for r in res]
        self.assertEqual(sorted(uuids), res_uuids)

        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.get_portgroup_list, sort_key='foo')

    def test_get_portgroups_by_node_id(self):
        res = self.dbapi.get_portgroups_by_node_id(self.node.id)
        self.assertEqual(self.portgroup.address, res[0].address)

    def test_get_portgroups_by_node_id_that_does_not_exist(self):
        self.assertEqual([], self.dbapi.get_portgroups_by_node_id(99))

    def test_destroy_portgroup(self):
        self.dbapi.destroy_portgroup(self.portgroup.id)
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.get_portgroup_by_id, self.portgroup.id)

    def test_destroy_portgroup_that_does_not_exist(self):
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.destroy_portgroup, 99)

    def test_destroy_portgroup_uuid(self):
        self.dbapi.destroy_portgroup(self.portgroup.uuid)

    def test_destroy_portgroup_not_empty(self):
        self.port = db_utils.create_test_port(node_id=self.node.id,
                                              portgroup_id=self.portgroup.id)
        self.assertRaises(exception.PortgroupNotEmpty,
                          self.dbapi.destroy_portgroup, self.portgroup.id)

    def test_update_portgroup(self):
        old_address = self.portgroup.address
        new_address = 'ff:ee:dd:cc:bb:aa'
        self.assertNotEqual(old_address, new_address)
        old_name = self.portgroup.name
        new_name = 'newname'
        self.assertNotEqual(old_name, new_name)
        res = self.dbapi.update_portgroup(self.portgroup.id,
                                          {'address': new_address,
                                           'name': new_name})
        self.assertEqual(new_address, res.address)
        self.assertEqual(new_name, res.name)

    def test_update_portgroup_uuid(self):
        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.update_portgroup, self.portgroup.id,
                          {'uuid': ''})

    def test_update_portgroup_not_found(self):
        id_2 = 99
        self.assertNotEqual(self.portgroup.id, id_2)
        address2 = 'aa:bb:cc:11:22:33'
        self.assertRaises(exception.PortgroupNotFound,
                          self.dbapi.update_portgroup, id_2,
                          {'address': address2})

    def test_update_portgroup_duplicated_address(self):
        address1 = self.portgroup.address
        address2 = 'aa:bb:cc:11:22:33'
        portgroup2 = db_utils.create_test_portgroup(
            uuid=uuidutils.generate_uuid(),
            node_id=self.node.id,
            name=str(uuidutils.generate_uuid()),
            address=address2)
        self.assertRaises(exception.PortgroupMACAlreadyExists,
                          self.dbapi.update_portgroup, portgroup2.id,
                          {'address': address1})

    def test_update_portgroup_duplicated_name(self):
        name1 = self.portgroup.name
        portgroup2 = db_utils.create_test_portgroup(
            uuid=uuidutils.generate_uuid(), node_id=self.node.id,
            name='name2', address='aa:bb:cc:11:22:55')
        self.assertRaises(exception.PortgroupDuplicateName,
                          self.dbapi.update_portgroup, portgroup2.id,
                          {'name': name1})

    def test_create_portgroup_duplicated_name(self):
        self.assertRaises(exception.PortgroupDuplicateName,
                          db_utils.create_test_portgroup,
                          uuid=uuidutils.generate_uuid(),
                          node_id=self.node.id,
                          name=self.portgroup.name,
                          address='aa:bb:cc:11:22:55')

    def test_create_portgroup_duplicated_address(self):
        self.assertRaises(exception.PortgroupMACAlreadyExists,
                          db_utils.create_test_portgroup,
                          uuid=uuidutils.generate_uuid(),
                          node_id=self.node.id,
                          name=str(uuidutils.generate_uuid()),
                          address=self.portgroup.address)

    def test_create_portgroup_duplicated_uuid(self):
        self.assertRaises(exception.PortgroupAlreadyExists,
                          db_utils.create_test_portgroup,
                          uuid=self.portgroup.uuid,
                          node_id=self.node.id,
                          name=str(uuidutils.generate_uuid()),
                          address='aa:bb:cc:33:11:22')

    def test_create_portgroup_no_mode(self):
        self.config(default_portgroup_mode='802.3ad')
        name = uuidutils.generate_uuid()
        db_utils.create_test_portgroup(uuid=uuidutils.generate_uuid(),
                                       node_id=self.node.id,
                                       name=name,
                                       address='aa:bb:cc:dd:ee:ff')
        res = self.dbapi.get_portgroup_by_id(self.portgroup.id)
        self.assertEqual('active-backup', res.mode)
        res = self.dbapi.get_portgroup_by_name(name)
        self.assertEqual('802.3ad', res.mode)
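The portgroup tests above lean on `db_utils.create_test_portgroup`, which (like the other `get_test_*`/`create_test_*` helpers in `ironic/tests/unit/db/utils.py`) builds a dict of default field values that any keyword argument can override. A minimal standalone sketch of that defaulting pattern — `get_test_record` and its fields are illustrative stand-ins, not part of ironic:

```python
def get_test_record(**kw):
    """Build a fixture dict; any keyword argument overrides its default."""
    return {
        'uuid': kw.get('uuid', '6eb02b44-18a3-4659-8c0b-8d2802581ae4'),
        'name': kw.get('name', 'fooname'),
        'address': kw.get('address', '52:54:00:cf:2d:31'),
    }


# Defaults apply unless explicitly overridden.
default = get_test_record()
custom = get_test_record(name='portgroup1')
```

Tests that care about one field override just that field, while every other column still gets a valid value, which keeps each test case short and focused.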
ironic-15.0.0/ironic/tests/unit/db/test_volume_targets.py

# Copyright 2016 Hitachi, Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for manipulating VolumeTargets via the DB API"""

from oslo_utils import uuidutils

from ironic.common import exception
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils as db_utils


class DbVolumeTargetTestCase(base.DbTestCase):

    def setUp(self):
        # This method creates a volume_target for every test.
super(DbVolumeTargetTestCase, self).setUp() self.node = db_utils.create_test_node() self.target = db_utils.create_test_volume_target(node_id=self.node.id) def test_create_volume_target(self): info = {'uuid': uuidutils.generate_uuid(), 'node_id': self.node.id, 'boot_index': 1, 'volume_type': 'iscsi', 'volume_id': '12345678'} target = self.dbapi.create_volume_target(info) self.assertEqual(info['uuid'], target.uuid) self.assertEqual(info['node_id'], target.node_id) self.assertEqual(info['boot_index'], target.boot_index) self.assertEqual(info['volume_type'], target.volume_type) self.assertEqual(info['volume_id'], target.volume_id) self.assertIsNone(target.properties) self.assertIsNone(target.extra) def test_create_volume_target_duplicated_nodeid_and_bootindex(self): self.assertRaises(exception.VolumeTargetBootIndexAlreadyExists, db_utils.create_test_volume_target, uuid=uuidutils.generate_uuid(), node_id=self.target.node_id, boot_index=self.target.boot_index) def test_create_volume_target_duplicated_uuid(self): self.assertRaises(exception.VolumeTargetAlreadyExists, db_utils.create_test_volume_target, uuid=self.target.uuid, node_id=self.node.id, boot_index=100) def test_get_volume_target_by_id(self): res = self.dbapi.get_volume_target_by_id(self.target.id) self.assertEqual(self.target.volume_type, res.volume_type) self.assertEqual(self.target.properties, res.properties) self.assertEqual(self.target.boot_index, res.boot_index) self.assertRaises(exception.VolumeTargetNotFound, self.dbapi.get_volume_target_by_id, 100) def test_get_volume_target_by_uuid(self): res = self.dbapi.get_volume_target_by_uuid(self.target.uuid) self.assertEqual(self.target.id, res.id) self.assertRaises(exception.VolumeTargetNotFound, self.dbapi.get_volume_target_by_uuid, '11111111-2222-3333-4444-555555555555') def _create_list_of_volume_targets(self, num): uuids = [str(self.target.uuid)] for i in range(1, num): volume_target = db_utils.create_test_volume_target( uuid=uuidutils.generate_uuid(), 
properties={"target_iqn": "iqn.test-%s" % i}, boot_index=i) uuids.append(str(volume_target.uuid)) return uuids def test_get_volume_target_list(self): uuids = self._create_list_of_volume_targets(6) res = self.dbapi.get_volume_target_list() res_uuids = [r.uuid for r in res] self.assertCountEqual(uuids, res_uuids) def test_get_volume_target_list_sorted(self): uuids = self._create_list_of_volume_targets(5) res = self.dbapi.get_volume_target_list(sort_key='uuid') res_uuids = [r.uuid for r in res] self.assertEqual(sorted(uuids), res_uuids) self.assertRaises(exception.InvalidParameterValue, self.dbapi.get_volume_target_list, sort_key='foo') def test_get_volume_targets_by_node_id(self): node2 = db_utils.create_test_node(uuid=uuidutils.generate_uuid()) target2 = db_utils.create_test_volume_target( uuid=uuidutils.generate_uuid(), node_id=node2.id) self._create_list_of_volume_targets(2) res = self.dbapi.get_volume_targets_by_node_id(node2.id) self.assertEqual(1, len(res)) self.assertEqual(target2.uuid, res[0].uuid) def test_get_volume_targets_by_node_id_that_does_not_exist(self): self.assertEqual([], self.dbapi.get_volume_targets_by_node_id(99)) def test_get_volume_targets_by_volume_id(self): # Create two volume_targets. They'll have the same volume_id. 
        uuids = self._create_list_of_volume_targets(2)
        res = self.dbapi.get_volume_targets_by_volume_id('12345678')
        res_uuids = [r.uuid for r in res]
        self.assertEqual(uuids, res_uuids)

    def test_get_volume_targets_by_volume_id_that_does_not_exist(self):
        self.assertEqual([],
                         self.dbapi.get_volume_targets_by_volume_id('dne'))

    def test_update_volume_target(self):
        old_boot_index = self.target.boot_index
        new_boot_index = old_boot_index + 1
        res = self.dbapi.update_volume_target(self.target.id,
                                              {'boot_index': new_boot_index})
        self.assertEqual(new_boot_index, res.boot_index)
        res = self.dbapi.update_volume_target(self.target.id,
                                              {'boot_index': old_boot_index})
        self.assertEqual(old_boot_index, res.boot_index)

    def test_update_volume_target_uuid(self):
        self.assertRaises(exception.InvalidParameterValue,
                          self.dbapi.update_volume_target,
                          self.target.id,
                          {'uuid': uuidutils.generate_uuid()})

    def test_update_volume_target_fails_invalid_id(self):
        self.assertRaises(exception.VolumeTargetNotFound,
                          self.dbapi.update_volume_target,
                          99, {'boot_index': 6})

    def test_update_volume_target_duplicated_nodeid_and_bootindex(self):
        t = db_utils.create_test_volume_target(uuid=uuidutils.generate_uuid(),
                                               boot_index=1)
        self.assertRaises(exception.VolumeTargetBootIndexAlreadyExists,
                          self.dbapi.update_volume_target,
                          t.uuid,
                          {'boot_index': self.target.boot_index,
                           'node_id': self.target.node_id})

    def test_destroy_volume_target(self):
        self.dbapi.destroy_volume_target(self.target.id)
        self.assertRaises(exception.VolumeTargetNotFound,
                          self.dbapi.get_volume_target_by_id, self.target.id)
        # Ensure that destroy_volume_target returns the expected exception.
        self.assertRaises(exception.VolumeTargetNotFound,
                          self.dbapi.destroy_volume_target, self.target.id)

ironic-15.0.0/ironic/tests/unit/db/utils.py

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Ironic test utilities."""

from oslo_utils import timeutils
from oslo_utils import uuidutils

from ironic.common import states
from ironic.db import api as db_api
from ironic.drivers import base as drivers_base
from ironic.objects import allocation
from ironic.objects import bios
from ironic.objects import chassis
from ironic.objects import conductor
from ironic.objects import deploy_template
from ironic.objects import node
from ironic.objects import port
from ironic.objects import portgroup
from ironic.objects import trait
from ironic.objects import volume_connector
from ironic.objects import volume_target


def get_test_ipmi_info():
    return {
        "ipmi_address": "1.2.3.4",
        "ipmi_username": "admin",
        "ipmi_password": "fake"
    }


def get_test_ipmi_bridging_parameters():
    return {
        "ipmi_bridging": "dual",
        "ipmi_local_address": "0x20",
        "ipmi_transit_channel": "0",
        "ipmi_transit_address": "0x82",
        "ipmi_target_channel": "7",
        "ipmi_target_address": "0x72"
    }


def get_test_pxe_driver_info():
    return {
        "deploy_kernel": "glance://deploy_kernel_uuid",
        "deploy_ramdisk": "glance://deploy_ramdisk_uuid",
        "rescue_kernel": "glance://rescue_kernel_uuid",
        "rescue_ramdisk": "glance://rescue_ramdisk_uuid"
    }


def get_test_pxe_driver_internal_info():
    return {
        "is_whole_disk_image": False,
    }


def get_test_pxe_instance_info():
    return {
        "image_source": "glance://image_uuid",
        "root_gb": 100,
        "rescue_password": "password"
    }


def get_test_ilo_info():
    return {
        "ilo_address":
"1.2.3.4", "ilo_username": "admin", "ilo_password": "fake", } def get_test_drac_info(): return { "drac_address": "1.2.3.4", "drac_port": 443, "drac_path": "/wsman", "drac_protocol": "https", "drac_username": "admin", "drac_password": "fake", } def get_test_irmc_info(): return { "irmc_address": "1.2.3.4", "irmc_username": "admin0", "irmc_password": "fake0", "irmc_port": 80, "irmc_auth_method": "digest", } def get_test_agent_instance_info(): return { 'image_source': 'fake-image', 'image_url': 'http://image', 'image_checksum': 'checksum', 'image_disk_format': 'qcow2', 'image_container_format': 'bare', } def get_test_agent_driver_info(): return { 'deploy_kernel': 'glance://deploy_kernel_uuid', 'deploy_ramdisk': 'glance://deploy_ramdisk_uuid', 'ipmi_password': 'foo', } def get_test_agent_driver_internal_info(): return { 'agent_url': 'http://127.0.0.1/foo', 'is_whole_disk_image': True, } def get_test_snmp_info(**kw): result = { "snmp_driver": kw.get("snmp_driver", "teltronix"), "snmp_address": kw.get("snmp_address", "1.2.3.4"), "snmp_port": kw.get("snmp_port", "161"), "snmp_outlet": kw.get("snmp_outlet", "1"), "snmp_version": kw.get("snmp_version", "1") } if result["snmp_version"] in ("1", "2c"): result["snmp_community"] = kw.get("snmp_community", "public") if "snmp_community_read" in kw: result["snmp_community_read"] = kw["snmp_community_read"] if "snmp_community_write" in kw: result["snmp_community_write"] = kw["snmp_community_write"] elif result["snmp_version"] == "3": result["snmp_user"] = kw.get( "snmp_user", kw.get("snmp_security", "snmpuser") ) for option in ('snmp_auth_protocol', 'snmp_auth_key', 'snmp_priv_protocol', 'snmp_priv_key', 'snmp_context_engine_id', 'snmp_context_name'): if option in kw: result[option] = kw[option] return result def get_test_node(**kw): properties = { "cpu_arch": "x86_64", "cpus": "8", "local_gb": "10", "memory_mb": "4096", } # NOTE(tenbrae): API unit tests confirm that sensitive fields in # instance_info and driver_info will get 
scrubbed # from the API response but other fields # (eg, 'foo') do not. fake_instance_info = { "configdrive": "TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQ=", "image_url": "http://example.com/test_image_url", "foo": "bar", } fake_driver_info = { "foo": "bar", "fake_password": "fakepass", } fake_internal_info = { "private_state": "secret value" } result = { 'version': kw.get('version', node.Node.VERSION), 'id': kw.get('id', 123), 'name': kw.get('name', None), 'uuid': kw.get('uuid', '1be26c0b-03f2-4d2e-ae87-c02d7f33c123'), 'chassis_id': kw.get('chassis_id', None), 'conductor_affinity': kw.get('conductor_affinity', None), 'conductor_group': kw.get('conductor_group', ''), 'power_state': kw.get('power_state', states.NOSTATE), 'target_power_state': kw.get('target_power_state', states.NOSTATE), 'provision_state': kw.get('provision_state', states.AVAILABLE), 'target_provision_state': kw.get('target_provision_state', states.NOSTATE), 'provision_updated_at': kw.get('provision_updated_at'), 'last_error': kw.get('last_error'), 'instance_uuid': kw.get('instance_uuid'), 'instance_info': kw.get('instance_info', fake_instance_info), 'driver': kw.get('driver', 'fake-hardware'), 'driver_info': kw.get('driver_info', fake_driver_info), 'driver_internal_info': kw.get('driver_internal_info', fake_internal_info), 'clean_step': kw.get('clean_step'), 'deploy_step': kw.get('deploy_step'), 'properties': kw.get('properties', properties), 'reservation': kw.get('reservation'), 'maintenance': kw.get('maintenance', False), 'maintenance_reason': kw.get('maintenance_reason'), 'fault': kw.get('fault'), 'console_enabled': kw.get('console_enabled', False), 'extra': kw.get('extra', {}), 'updated_at': kw.get('updated_at'), 'created_at': kw.get('created_at'), 'inspection_finished_at': kw.get('inspection_finished_at'), 'inspection_started_at': kw.get('inspection_started_at'), 'raid_config': kw.get('raid_config'), 'target_raid_config': kw.get('target_raid_config'), 'tags': kw.get('tags', []), 'resource_class': 
kw.get('resource_class'), 'traits': kw.get('traits', []), 'automated_clean': kw.get('automated_clean', None), 'protected': kw.get('protected', False), 'protected_reason': kw.get('protected_reason', None), 'conductor': kw.get('conductor'), 'owner': kw.get('owner', None), 'allocation_id': kw.get('allocation_id'), 'description': kw.get('description'), 'retired': kw.get('retired', False), 'retired_reason': kw.get('retired_reason', None), 'lessee': kw.get('lessee', None), } for iface in drivers_base.ALL_INTERFACES: name = '%s_interface' % iface result[name] = kw.get(name) return result def create_test_node(**kw): """Create test node entry in DB and return Node DB object. Function to be used to create test Node objects in the database. :param kw: kwargs with overriding values for node's attributes. :returns: Test Node DB object. """ node = get_test_node(**kw) # Let DB generate an ID if one isn't specified explicitly. # Creating a node with tags or traits will raise an exception. If tags or # traits are not specified explicitly just delete them. 
for field in {'id', 'tags', 'traits'}: if field not in kw: del node[field] dbapi = db_api.get_instance() return dbapi.create_node(node) def get_test_port(**kw): return { 'id': kw.get('id', 987), 'version': kw.get('version', port.Port.VERSION), 'uuid': kw.get('uuid', '1be26c0b-03f2-4d2e-ae87-c02d7f33c781'), 'node_id': kw.get('node_id', 123), 'address': kw.get('address', '52:54:00:cf:2d:31'), 'extra': kw.get('extra', {}), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), 'local_link_connection': kw.get('local_link_connection', {'switch_id': '0a:1b:2c:3d:4e:5f', 'port_id': 'Ethernet3/1', 'switch_info': 'switch1'}), 'portgroup_id': kw.get('portgroup_id'), 'pxe_enabled': kw.get('pxe_enabled', True), 'internal_info': kw.get('internal_info', {"bar": "buzz"}), 'physical_network': kw.get('physical_network'), 'is_smartnic': kw.get('is_smartnic', False), } def create_test_port(**kw): """Create test port entry in DB and return Port DB object. Function to be used to create test Port objects in the database. :param kw: kwargs with overriding values for port's attributes. :returns: Test Port DB object. """ port = get_test_port(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del port['id'] dbapi = db_api.get_instance() return dbapi.create_port(port) def get_test_volume_connector(**kw): return { 'id': kw.get('id', 789), 'version': kw.get('version', volume_connector.VolumeConnector.VERSION), 'uuid': kw.get('uuid', '1be26c0b-03f2-4d2e-ae87-c02d7f33c781'), 'node_id': kw.get('node_id', 123), 'type': kw.get('type', 'iqn'), 'connector_id': kw.get('connector_id', 'iqn.2012-06.com.example:initiator'), 'extra': kw.get('extra', {}), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def create_test_volume_connector(**kw): """Create test connector entry in DB and return VolumeConnector DB object. Function to be used to create test VolumeConnector objects in the database. 
:param kw: kwargs with overriding values for connector's attributes. :returns: Test VolumeConnector DB object. """ connector = get_test_volume_connector(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del connector['id'] dbapi = db_api.get_instance() return dbapi.create_volume_connector(connector) def get_test_volume_target(**kw): fake_properties = {"target_iqn": "iqn.foo"} return { 'id': kw.get('id', 789), 'version': kw.get('version', volume_target.VolumeTarget.VERSION), 'uuid': kw.get('uuid', '1be26c0b-03f2-4d2e-ae87-c02d7f33c781'), 'node_id': kw.get('node_id', 123), 'volume_type': kw.get('volume_type', 'iscsi'), 'properties': kw.get('properties', fake_properties), 'boot_index': kw.get('boot_index', 0), 'volume_id': kw.get('volume_id', '12345678'), 'extra': kw.get('extra', {}), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def create_test_volume_target(**kw): """Create test target entry in DB and return VolumeTarget DB object. Function to be used to create test VolumeTarget objects in the database. :param kw: kwargs with overriding values for target's attributes. :returns: Test VolumeTarget DB object. """ target = get_test_volume_target(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del target['id'] dbapi = db_api.get_instance() return dbapi.create_volume_target(target) def get_test_chassis(**kw): return { 'id': kw.get('id', 42), 'version': kw.get('version', chassis.Chassis.VERSION), 'uuid': kw.get('uuid', 'e74c40e0-d825-11e2-a28f-0800200c9a66'), 'extra': kw.get('extra', {}), 'description': kw.get('description', 'data-center-1-chassis'), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def create_test_chassis(**kw): """Create test chassis entry in DB and return Chassis DB object. Function to be used to create test Chassis objects in the database. :param kw: kwargs with overriding values for chassis's attributes. :returns: Test Chassis DB object. 
""" chassis = get_test_chassis(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del chassis['id'] dbapi = db_api.get_instance() return dbapi.create_chassis(chassis) def get_test_conductor(**kw): return { 'id': kw.get('id', 6), 'version': kw.get('version', conductor.Conductor.VERSION), 'hostname': kw.get('hostname', 'test-conductor-node'), 'drivers': kw.get('drivers', ['fake-driver', 'null-driver']), 'conductor_group': kw.get('conductor_group', ''), 'created_at': kw.get('created_at', timeutils.utcnow()), 'updated_at': kw.get('updated_at', timeutils.utcnow()), } def create_test_conductor(**kw): """Create test conductor entry in DB and return Conductor DB object. Function to be used to create test Conductor objects in the database. :param kw: kwargs with overriding values for conductor's attributes. :returns: Test Conductor DB object. """ conductor = get_test_conductor(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del conductor['id'] dbapi = db_api.get_instance() return dbapi.register_conductor(conductor) def get_test_redfish_info(): return { "redfish_address": "https://example.com", "redfish_system_id": "/redfish/v1/Systems/FAKESYSTEM", "redfish_username": "username", "redfish_password": "password" } def get_test_portgroup(**kw): return { 'id': kw.get('id', 654), 'version': kw.get('version', portgroup.Portgroup.VERSION), 'uuid': kw.get('uuid', '6eb02b44-18a3-4659-8c0b-8d2802581ae4'), 'name': kw.get('name', 'fooname'), 'node_id': kw.get('node_id', 123), 'address': kw.get('address', '52:54:00:cf:2d:31'), 'extra': kw.get('extra', {}), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), 'internal_info': kw.get('internal_info', {"bar": "buzz"}), 'standalone_ports_supported': kw.get('standalone_ports_supported', True), 'mode': kw.get('mode'), 'properties': kw.get('properties', {}), } def create_test_portgroup(**kw): """Create test portgroup entry in DB and return Portgroup DB object. 
Function to be used to create test Portgroup objects in the database. :param kw: kwargs with overriding values for port's attributes. :returns: Test Portgroup DB object. """ portgroup = get_test_portgroup(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del portgroup['id'] dbapi = db_api.get_instance() return dbapi.create_portgroup(portgroup) def get_test_node_tag(**kw): return { # TODO(rloo): Replace None below with the object NodeTag VERSION, # after this lands: https://review.opendev.org/#/c/233357 'version': kw.get('version', None), "tag": kw.get("tag", "tag1"), "node_id": kw.get("node_id", "123"), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def create_test_node_tag(**kw): """Create test node tag entry in DB and return NodeTag DB object. Function to be used to create test NodeTag objects in the database. :param kw: kwargs with overriding values for tag's attributes. :returns: Test NodeTag DB object. """ tag = get_test_node_tag(**kw) dbapi = db_api.get_instance() return dbapi.add_node_tag(tag['node_id'], tag['tag']) def get_test_xclarity_properties(): return { "cpu_arch": "x86_64", "cpus": "8", "local_gb": "10", "memory_mb": "4096", } def get_test_xclarity_driver_info(): return { 'xclarity_manager_ip': "1.2.3.4", 'xclarity_username': "USERID", 'xclarity_password': "fake", 'xclarity_port': 443, 'xclarity_hardware_id': 'fake_sh_id', } def get_test_node_trait(**kw): return { 'version': kw.get('version', trait.Trait.VERSION), "trait": kw.get("trait", "trait1"), "node_id": kw.get("node_id", "123"), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def create_test_node_trait(**kw): """Create test node trait entry in DB and return NodeTrait DB object. Function to be used to create test NodeTrait objects in the database. :param kw: kwargs with overriding values for trait's attributes. :returns: Test NodeTrait DB object. 
""" trait = get_test_node_trait(**kw) dbapi = db_api.get_instance() return dbapi.add_node_trait(trait['node_id'], trait['trait'], trait['version']) def create_test_node_traits(traits, **kw): """Create test node trait entries in DB and return NodeTrait DB objects. Function to be used to create test NodeTrait objects in the database. :param traits: a list of Strings; traits to create. :param kw: kwargs with overriding values for trait's attributes. :returns: a list of test NodeTrait DB objects. """ return [create_test_node_trait(trait=trait, **kw) for trait in traits] def create_test_bios_setting(**kw): """Create test bios entry in DB and return BIOSSetting DB object. Function to be used to create test BIOSSetting object in the database. :param kw: kwargs with overriding values for node bios settings. :returns: Test BIOSSetting DB object. """ bios_setting = get_test_bios_setting(**kw) dbapi = db_api.get_instance() node_id = bios_setting['node_id'] version = bios_setting['version'] settings = [{'name': bios_setting['name'], 'value': bios_setting['value']}] return dbapi.create_bios_setting_list(node_id, settings, version)[0] def get_test_bios_setting(**kw): return { 'node_id': kw.get('node_id', '123'), 'name': kw.get('name', 'virtualization'), 'value': kw.get('value', 'on'), 'version': kw.get('version', bios.BIOSSetting.VERSION), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), } def get_test_bios_setting_setting_list(): return [ {'name': 'virtualization', 'value': 'on'}, {'name': 'hyperthread', 'value': 'enabled'}, {'name': 'numlock', 'value': 'off'} ] def get_test_allocation(**kw): return { 'candidate_nodes': kw.get('candidate_nodes', []), 'conductor_affinity': kw.get('conductor_affinity'), 'created_at': kw.get('created_at'), 'extra': kw.get('extra', {}), 'id': kw.get('id', 42), 'last_error': kw.get('last_error'), 'name': kw.get('name'), 'node_id': kw.get('node_id'), 'resource_class': kw.get('resource_class', 'baremetal'), 'state': 
kw.get('state', 'allocating'), 'traits': kw.get('traits', []), 'updated_at': kw.get('updated_at'), 'uuid': kw.get('uuid', uuidutils.generate_uuid()), 'version': kw.get('version', allocation.Allocation.VERSION), 'owner': kw.get('owner', None), } def create_test_allocation(**kw): allocation = get_test_allocation(**kw) if 'id' not in kw: del allocation['id'] dbapi = db_api.get_instance() return dbapi.create_allocation(allocation) def get_test_deploy_template(**kw): default_uuid = uuidutils.generate_uuid() return { 'version': kw.get('version', deploy_template.DeployTemplate.VERSION), 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), 'id': kw.get('id', 234), 'name': kw.get('name', u'CUSTOM_DT1'), 'uuid': kw.get('uuid', default_uuid), 'steps': kw.get('steps', [get_test_deploy_template_step( deploy_template_id=kw.get('id', 234))]), 'extra': kw.get('extra', {}), } def get_test_deploy_template_step(**kw): return { 'created_at': kw.get('created_at'), 'updated_at': kw.get('updated_at'), 'id': kw.get('id', 345), 'deploy_template_id': kw.get('deploy_template_id', 234), 'interface': kw.get('interface', 'raid'), 'step': kw.get('step', 'create_configuration'), 'args': kw.get('args', {'logical_disks': []}), 'priority': kw.get('priority', 10), } def create_test_deploy_template(**kw): """Create a deployment template in the DB and return DeployTemplate model. :param kw: kwargs with overriding values for the deploy template. :returns: Test DeployTemplate DB object. """ template = get_test_deploy_template(**kw) dbapi = db_api.get_instance() # Let DB generate an ID if one isn't specified explicitly. 
if 'id' not in kw: del template['id'] if 'steps' not in kw: for step in template['steps']: del step['id'] del step['deploy_template_id'] else: for kw_step, template_step in zip(kw['steps'], template['steps']): if 'id' not in kw_step: del template_step['id'] return dbapi.create_deploy_template(template) def get_test_ibmc_info(): return { "ibmc_address": "https://example.com", "ibmc_username": "username", "ibmc_password": "password", "verify_ca": False, } ironic-15.0.0/ironic/tests/unit/db/__init__.py0000664000175000017500000000000013652514273021202 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/db/test_node_tags.py0000664000175000017500000001113613652514273022461 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
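The `get_test_*` / `create_test_*` helpers above all follow one factory pattern: every attribute defaults through `kw.get()`, and the `id` key is deleted whenever the caller did not pin one, so the database backend is free to generate it. A minimal, dependency-free sketch of that pattern — the `widget` record and helper names here are hypothetical, not actual Ironic objects:

```python
def get_test_widget(**kw):
    """Return a dict of record attributes, each overridable via kwargs."""
    return {
        'id': kw.get('id', 123),
        'name': kw.get('name', 'default-widget'),
        'extra': kw.get('extra', {}),
    }


def build_widget_row(**kw):
    """Mimic the create_test_* helpers: drop 'id' unless the caller set it.

    Deleting the default id lets the DB layer assign one, while an
    explicitly passed id is preserved for tests that need a known value.
    """
    widget = get_test_widget(**kw)
    if 'id' not in kw:
        del widget['id']
    return widget
```

The same two-function split (pure attribute factory plus a thin "persist it" wrapper) keeps test data reusable both for object-layer tests, which only need the dict, and DB-layer tests, which need the stored row.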
"""Tests for manipulating NodeTags via the DB API""" from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils as db_utils class DbNodeTagTestCase(base.DbTestCase): def setUp(self): super(DbNodeTagTestCase, self).setUp() self.node = db_utils.create_test_node() def test_set_node_tags(self): tags = self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) self.assertEqual(self.node.id, tags[0].node_id) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in tags]) tags = self.dbapi.set_node_tags(self.node.id, []) self.assertEqual([], tags) def test_set_node_tags_duplicate(self): tags = self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2', 'tag2']) self.assertEqual(self.node.id, tags[0].node_id) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in tags]) def test_set_node_tags_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.set_node_tags, '1234', ['tag1', 'tag2']) def test_get_node_tags_by_node_id(self): self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual(self.node.id, tags[0].node_id) self.assertItemsEqual(['tag1', 'tag2'], [tag.tag for tag in tags]) def test_get_node_tags_empty(self): tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual([], tags) def test_get_node_tags_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_tags_by_node_id, '123') def test_unset_node_tags(self): self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) self.dbapi.unset_node_tags(self.node.id) tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual([], tags) def test_unset_empty_node_tags(self): self.dbapi.unset_node_tags(self.node.id) tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual([], tags) def test_unset_node_tags_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.unset_node_tags, '123') def 
test_add_node_tag(self): tag = self.dbapi.add_node_tag(self.node.id, 'tag1') self.assertEqual(self.node.id, tag.node_id) self.assertEqual('tag1', tag.tag) def test_add_node_tag_duplicate(self): tag = self.dbapi.add_node_tag(self.node.id, 'tag1') tag = self.dbapi.add_node_tag(self.node.id, 'tag1') self.assertEqual(self.node.id, tag.node_id) self.assertEqual('tag1', tag.tag) def test_add_node_tag_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.add_node_tag, '123', 'tag1') def test_delete_node_tag(self): self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) self.dbapi.delete_node_tag(self.node.id, 'tag1') tags = self.dbapi.get_node_tags_by_node_id(self.node.id) self.assertEqual(1, len(tags)) self.assertEqual('tag2', tags[0].tag) def test_delete_node_tag_not_found(self): self.assertRaises(exception.NodeTagNotFound, self.dbapi.delete_node_tag, self.node.id, 'tag1') def test_delete_node_tag_node_not_found(self): self.assertRaises(exception.NodeNotFound, self.dbapi.delete_node_tag, '123', 'tag1') def test_node_tag_exists(self): self.dbapi.set_node_tags(self.node.id, ['tag1', 'tag2']) ret = self.dbapi.node_tag_exists(self.node.id, 'tag1') self.assertTrue(ret) def test_node_tag_not_exists(self): ret = self.dbapi.node_tag_exists(self.node.id, 'tag1') self.assertFalse(ret) def test_node_tag_node_not_exist(self): self.assertRaises(exception.NodeNotFound, self.dbapi.node_tag_exists, '123', 'tag1') ironic-15.0.0/ironic/tests/unit/db/test_chassis.py0000664000175000017500000000635513652514273022162 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for manipulating Chassis via the DB API""" from oslo_utils import uuidutils from ironic.common import exception from ironic.tests.unit.db import base from ironic.tests.unit.db import utils class DbChassisTestCase(base.DbTestCase): def setUp(self): super(DbChassisTestCase, self).setUp() self.chassis = utils.create_test_chassis() def test_get_chassis_list(self): uuids = [self.chassis.uuid] for i in range(1, 6): ch = utils.create_test_chassis(uuid=uuidutils.generate_uuid()) uuids.append(str(ch.uuid)) res = self.dbapi.get_chassis_list() res_uuids = [r.uuid for r in res] self.assertCountEqual(uuids, res_uuids) def test_get_chassis_by_id(self): chassis = self.dbapi.get_chassis_by_id(self.chassis.id) self.assertEqual(self.chassis.uuid, chassis.uuid) def test_get_chassis_by_uuid(self): chassis = self.dbapi.get_chassis_by_uuid(self.chassis.uuid) self.assertEqual(self.chassis.id, chassis.id) def test_get_chassis_that_does_not_exist(self): self.assertRaises(exception.ChassisNotFound, self.dbapi.get_chassis_by_id, 666) def test_update_chassis(self): res = self.dbapi.update_chassis(self.chassis.id, {'description': 'hello'}) self.assertEqual('hello', res.description) def test_update_chassis_that_does_not_exist(self): self.assertRaises(exception.ChassisNotFound, self.dbapi.update_chassis, 666, {'description': ''}) def test_update_chassis_uuid(self): self.assertRaises(exception.InvalidParameterValue, self.dbapi.update_chassis, self.chassis.id, {'uuid': 'hello'}) def test_destroy_chassis(self): self.dbapi.destroy_chassis(self.chassis.id) 
self.assertRaises(exception.ChassisNotFound, self.dbapi.get_chassis_by_id, self.chassis.id) def test_destroy_chassis_that_does_not_exist(self): self.assertRaises(exception.ChassisNotFound, self.dbapi.destroy_chassis, 666) def test_destroy_chassis_with_nodes(self): utils.create_test_node(chassis_id=self.chassis.id) self.assertRaises(exception.ChassisNotEmpty, self.dbapi.destroy_chassis, self.chassis.id) def test_create_chassis_already_exists(self): self.assertRaises(exception.ChassisAlreadyExists, utils.create_test_chassis, uuid=self.chassis.uuid) ironic-15.0.0/ironic/tests/unit/raid_constants.py0000664000175000017500000001456613652514273022117 0ustar zuulzuul00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
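The chassis DB tests above exercise a recurring pattern: create several records with generated UUIDs, assert the listing returns exactly that set order-independently (`assertCountEqual`), and assert that creating a duplicate UUID raises. A self-contained sketch of a store with those semantics — `ChassisStore` is a hypothetical stand-in, not Ironic's real DB API:

```python
import uuid


class ChassisStore:
    """Toy in-memory store mirroring create/list/duplicate-UUID behavior."""

    def __init__(self):
        self._by_uuid = {}

    def create(self, chassis_uuid=None):
        # Generate a UUID when the caller does not supply one, like
        # utils.create_test_chassis(uuid=uuidutils.generate_uuid()).
        chassis_uuid = chassis_uuid or str(uuid.uuid4())
        if chassis_uuid in self._by_uuid:
            # Analogous to exception.ChassisAlreadyExists in the tests above.
            raise ValueError('chassis already exists: %s' % chassis_uuid)
        self._by_uuid[chassis_uuid] = {'uuid': chassis_uuid}
        return self._by_uuid[chassis_uuid]

    def list_uuids(self):
        return list(self._by_uuid)
```

Comparing sorted UUID lists (rather than raw lists) is the same order-independent check `assertCountEqual` performs in `test_get_chassis_list`.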
# Different RAID configurations for unit tests in test_raid.py RAID_CONFIG_OKAY = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "volume_name": "my-volume", "is_root_volume": true, "share_physical_disks": false, "disk_type": "ssd", "interface_type": "sas", "number_of_physical_disks": 2, "controller": "Smart Array P822 in Slot 2", "physical_disks": [ "5I:1:1", "5I:1:2" ] } ] } ''' RAID_SW_CONFIG_OKAY = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "software", "physical_disks": [{"size": ">= 50"}, {"name": "/dev/sdc"}] } ] } ''' RAID_CONFIG_NO_LOGICAL_DISKS = ''' { "logical_disks": [] } ''' RAID_CONFIG_NO_RAID_LEVEL = ''' { "logical_disks": [ { "size_gb": 100 } ] } ''' RAID_CONFIG_INVALID_RAID_LEVEL = ''' { "logical_disks": [ { "size_gb": 100, "raid_level": "foo" } ] } ''' RAID_CONFIG_NO_SIZE_GB = ''' { "logical_disks": [ { "raid_level": "1" } ] } ''' RAID_CONFIG_INVALID_SIZE_GB = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": "abcd" } ] } ''' RAID_CONFIG_ZERO_SIZE_GB = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 0 } ] } ''' RAID_CONFIG_MAX_SIZE_GB = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": "MAX" } ] } ''' RAID_CONFIG_INVALID_IS_ROOT_VOL = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "is_root_volume": "True" } ] } ''' RAID_CONFIG_MULTIPLE_IS_ROOT_VOL = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "is_root_volume": true }, { "raid_level": "1", "size_gb": 100, "is_root_volume": true } ] } ''' RAID_CONFIG_INVALID_SHARE_PHY_DISKS = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "share_physical_disks": "True" } ] } ''' RAID_CONFIG_INVALID_DISK_TYPE = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "disk_type": "foo" } ] } ''' RAID_CONFIG_INVALID_INT_TYPE = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "interface_type": "foo" } ] } ''' RAID_CONFIG_INVALID_NUM_PHY_DISKS = ''' { "logical_disks": [ { "raid_level": 
"1", "size_gb": 100, "number_of_physical_disks": "a" } ] } ''' RAID_CONFIG_INVALID_PHY_DISKS = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "physical_disks": "5I:1:1" } ] } ''' RAID_CONFIG_TOO_FEW_PHY_DISKS = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "physical_disks": [{"size": ">= 50"}] } ] } ''' RAID_CONFIG_ADDITIONAL_PROP = ''' { "logical_disks": [ { "raid_levelllllll": "1", "size_gb": 100 } ] } ''' RAID_CONFIG_JBOD_VOLUME = ''' { "logical_disks": [ { "raid_level": "JBOD", "size_gb": 100 } ] } ''' CUSTOM_SCHEMA_RAID_CONFIG = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "foo": "bar" } ] } ''' CUSTOM_RAID_SCHEMA = ''' { "$schema": "http://json-schema.org/draft-04/schema#", "type": "object", "properties": { "logical_disks": { "type": "array", "items": { "type": "object", "properties": { "raid_level": { "type": "string", "enum": [ "0", "1", "2", "5", "6", "1+0" ], "description": "RAID level for the logical disk." }, "size_gb": { "type": "integer", "minimum": 0, "exclusiveMinimum": true, "description": "Size (Integer) for the logical disk." 
}, "foo": { "type": "string", "description": "property foo" } }, "required": ["raid_level", "size_gb"], "additionalProperties": false }, "minItems": 1 } }, "required": ["logical_disks"], "additionalProperties": false } ''' CURRENT_RAID_CONFIG = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "is_root_volume": true, "physical_disks": [ "5I:1:1", "5I:1:2" ], "root_device_hint": { "wwn": "600508B100" } } ] } ''' RAID_CONFIG_MULTIPLE_ROOT = ''' { "logical_disks": [ { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "is_root_volume": true, "physical_disks": [ "5I:1:1", "5I:1:2" ], "root_device_hint": { "wwn": "600508B100" } }, { "raid_level": "1", "size_gb": 100, "controller": "Smart Array P822 in Slot 2", "is_root_volume": true, "physical_disks": [ "5I:1:1", "5I:1:2" ], "root_device_hint": { "wwn": "600508B100" } } ] } ''' ironic-15.0.0/ironic/tests/unit/conf/0000775000175000017500000000000013652514443017442 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/conf/test_auth.py0000664000175000017500000000440513652514273022020 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from keystoneauth1 import loading as kaloading from oslo_config import cfg from ironic.conf import auth as ironic_auth from ironic.tests import base class AuthConfTestCase(base.TestCase): def setUp(self): super(AuthConfTestCase, self).setUp() self.test_group = 'test_group' self.cfg_fixture.conf.register_group(cfg.OptGroup(self.test_group)) ironic_auth.register_auth_opts(self.cfg_fixture.conf, self.test_group) self.config(auth_type='password', group=self.test_group) # NOTE(pas-ha) this is due to auth_plugin options # being dynamically registered on first load, # but we need to set the config before plugin = kaloading.get_plugin_loader('password') opts = kaloading.get_auth_plugin_conf_options(plugin) self.cfg_fixture.register_opts(opts, group=self.test_group) self.config(auth_url='http://127.0.0.1:9898', username='fake_user', password='fake_pass', project_name='fake_tenant', group=self.test_group) def test_add_auth_opts(self): opts = ironic_auth.add_auth_opts([]) # check that there is no duplicates names = {o.dest for o in opts} self.assertEqual(len(names), len(opts)) # NOTE(pas-ha) checking for most standard auth and session ones only expected = {'timeout', 'insecure', 'cafile', 'certfile', 'keyfile', 'auth_type', 'auth_url', 'username', 'password', 'tenant_name', 'project_name', 'trust_id', 'domain_id', 'user_domain_id', 'project_domain_id'} self.assertTrue(expected.issubset(names)) ironic-15.0.0/ironic/tests/unit/conf/__init__.py0000664000175000017500000000000013652514273021542 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/test_base.py0000664000175000017500000000676513652514273021057 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
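`test_add_auth_opts` above asserts a subtle invariant: after merging auth and session options, no two entries share a `dest` name (it compares `len({o.dest for o in opts})` with `len(opts)`). A small sketch of a merge that preserves that invariant — `Opt` and `add_opts` here are hypothetical stand-ins, not the real oslo.config/keystoneauth API:

```python
import collections

# Minimal stand-in for an option object that carries a destination name.
Opt = collections.namedtuple('Opt', 'dest')


def add_opts(base, extra):
    """Merge two option lists, skipping options whose dest already exists."""
    merged = list(base)
    seen = {o.dest for o in base}
    for opt in extra:
        if opt.dest not in seen:
            merged.append(opt)
            seen.add(opt.dest)
    return merged
```

Deduplicating by `dest` matters because auth plugins register options dynamically (as the `NOTE(pas-ha)` comment above explains), so the same logical option can arrive from more than one source.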
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import subprocess from ironic_lib import utils import mock from oslo_concurrency import processutils from ironic.tests import base class BlockExecuteTestCase(base.TestCase): """Test to ensure we block access to the 'execute' type functions""" def test_exception_raised_for_execute(self): execute_functions = (processutils.execute, subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output, utils.execute) for function_name in execute_functions: exc = self.assertRaises( Exception, function_name, ["echo", "%s" % function_name]) # noqa # Have to use 'noqa' as we are raising plain Exception and we will # get H202 error in 'pep8' check. self.assertEqual( "Don't call ironic_lib.utils.execute() / " "processutils.execute() or similar functions in tests!", "%s" % exc) @mock.patch.object(utils, "execute", autospec=True) def test_can_mock_execute(self, mock_exec): # NOTE(jlvillal): We had discovered an issue where mocking wasn't # working because we had used a mock to block access to the execute # functions. This caused us to "mock a mock" and didn't work correctly. # We want to make sure that we can mock our execute functions even with # our "block execute" code. utils.execute("ls") utils.execute("echo") self.assertEqual(2, mock_exec.call_count) @mock.patch.object(processutils, "execute", autospec=True) def test_exception_raised_for_execute_parent_mocked(self, mock_exec): # Make sure that even if we mock the parent execute function, that we # still get an exception for a child. So in this case # ironic_lib.utils.execute() calls processutils.execute(). 
Make sure an # exception is raised even though we mocked processutils.execute() exc = self.assertRaises( Exception, utils.execute, "ls") # noqa # Have to use 'noqa' as we are raising plain Exception and we will get # H202 error in 'pep8' check. self.assertEqual( "Don't call ironic_lib.utils.execute() / " "processutils.execute() or similar functions in tests!", "%s" % exc) class DontBlockExecuteTestCase(base.TestCase): """Ensure we can turn off blocking access to 'execute' type functions""" # Don't block the execute function block_execute = False @mock.patch.object(processutils, "execute", autospec=True) def test_no_exception_raised_for_execute(self, mock_exec): # Make sure we can call ironic_lib.utils.execute() even though we # didn't mock it. We do mock processutils.execute() so we don't # actually execute anything. utils.execute("ls") utils.execute("echo") self.assertEqual(2, mock_exec.call_count) ironic-15.0.0/ironic/tests/unit/policy_fixture.py0000664000175000017500000000320513652514273022135 0ustar zuulzuul00000000000000# Copyright 2012 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import fixtures from oslo_config import cfg from oslo_policy import opts as policy_opts from ironic.common import policy as ironic_policy CONF = cfg.CONF # NOTE(tenbrae): We ship a default that always masks passwords, but for testing # we need to override that default to ensure passwords can be # made visible by operators that choose to do so. 
policy_data = """ { "show_password": "tenant:admin" } """ class PolicyFixture(fixtures.Fixture): def setUp(self): super(PolicyFixture, self).setUp() self.policy_dir = self.useFixture(fixtures.TempDir()) self.policy_file_name = os.path.join(self.policy_dir.path, 'policy.json') with open(self.policy_file_name, 'w') as policy_file: policy_file.write(policy_data) policy_opts.set_defaults(CONF) CONF.set_override('policy_file', self.policy_file_name, 'oslo_policy') ironic_policy._ENFORCER = None self.addCleanup(ironic_policy.get_enforcer().clear) ironic-15.0.0/ironic/tests/unit/conductor/0000775000175000017500000000000013652514443020515 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/conductor/test_utils.py0000664000175000017500000030040313652514273023267 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import time import mock from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils from ironic.common import boot_devices from ironic.common import boot_modes from ironic.common import exception from ironic.common import network from ironic.common import neutron from ironic.common import nova from ironic.common import states from ironic.conductor import rpcapi from ironic.conductor import task_manager from ironic.conductor import utils as conductor_utils from ironic.drivers import base as drivers_base from ironic.drivers.modules import fake from ironic import objects from ironic.objects import fields as obj_fields from ironic.tests import base as tests_base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF class TestPowerNoTimeout(drivers_base.PowerInterface): """Missing 'timeout' parameter for get_power_state & reboot""" def get_properties(self): return {} def validate(self, task): pass def get_power_state(self, task): return task.node.power_state def set_power_state(self, task, power_state, timeout=None): task.node.power_state = power_state def reboot(self, task): pass class NodeSetBootDeviceTestCase(db_base.DbTestCase): def setUp(self): super(NodeSetBootDeviceTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid()) self.task = task_manager.TaskManager(self.context, self.node.uuid) def test_node_set_boot_device_non_existent_device(self): self.assertRaises(exception.InvalidParameterValue, conductor_utils.node_set_boot_device, self.task, device='fake') @mock.patch.object(fake.FakeManagement, 'set_boot_device', autospec=True) def test_node_set_boot_device_valid(self, mock_sbd): conductor_utils.node_set_boot_device(self.task, device='pxe') mock_sbd.assert_called_once_with(mock.ANY, self.task, device='pxe', persistent=False) 
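The `NodeSetBootDeviceTestCase` tests above pin down two behaviors of `conductor_utils.node_set_boot_device`: an unknown device raises `InvalidParameterValue`, and the management call is skipped entirely while the node is in the `ADOPTING` provision state. A dependency-free sketch of that guard — the dict-based node and return convention are assumptions for illustration, not Ironic's real task/driver interface:

```python
ADOPTING = 'adopting'
VALID_DEVICES = {'pxe', 'disk', 'cdrom', 'bios'}


def node_set_boot_device(node, device, persistent=False):
    """Validate the device, then set it unless the node is being adopted.

    Returns True when the (simulated) driver call happened, False when it
    was skipped because the node is mid-adoption and its BMC configuration
    must be left untouched.
    """
    if device not in VALID_DEVICES:
        raise ValueError('invalid boot device: %r' % device)
    if node.get('provision_state') == ADOPTING:
        return False
    node['boot_device'] = (device, persistent)
    return True
```

The adopting skip exists because adoption takes over machines that are already deployed, so changing their boot order could disrupt a running workload.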
@mock.patch.object(fake.FakeManagement, 'set_boot_device', autospec=True) def test_node_set_boot_device_adopting(self, mock_sbd): self.task.node.provision_state = states.ADOPTING conductor_utils.node_set_boot_device(self.task, device='pxe') self.assertFalse(mock_sbd.called) class NodeGetBootModeTestCase(db_base.DbTestCase): def setUp(self): super(NodeGetBootModeTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid()) self.task = task_manager.TaskManager(self.context, self.node.uuid) @mock.patch.object(fake.FakeManagement, 'get_boot_mode', autospec=True) def test_node_get_boot_mode_valid(self, mock_gbm): mock_gbm.return_value = 'bios' boot_mode = conductor_utils.node_get_boot_mode(self.task) self.assertEqual(boot_mode, 'bios') mock_gbm.assert_called_once_with(mock.ANY, self.task) @mock.patch.object(fake.FakeManagement, 'get_boot_mode', autospec=True) def test_node_get_boot_mode_unsupported(self, mock_gbm): mock_gbm.side_effect = exception.UnsupportedDriverExtension( driver=self.task.node.driver, extension='get_boot_mode') self.assertRaises(exception.UnsupportedDriverExtension, conductor_utils.node_get_boot_mode, self.task) class NodeSetBootModeTestCase(db_base.DbTestCase): def setUp(self): super(NodeSetBootModeTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid()) self.task = task_manager.TaskManager(self.context, self.node.uuid) @mock.patch.object(fake.FakeManagement, 'get_supported_boot_modes', autospec=True) def test_node_set_boot_mode_non_existent_mode(self, mock_gsbm): mock_gsbm.return_value = [boot_modes.LEGACY_BIOS] self.assertRaises(exception.InvalidParameterValue, conductor_utils.node_set_boot_mode, self.task, mode='non-existing') @mock.patch.object(fake.FakeManagement, 'set_boot_mode', autospec=True) @mock.patch.object(fake.FakeManagement, 'get_supported_boot_modes', autospec=True) def test_node_set_boot_mode_valid(self, mock_gsbm, mock_sbm): 
mock_gsbm.return_value = [boot_modes.LEGACY_BIOS] conductor_utils.node_set_boot_mode(self.task, mode=boot_modes.LEGACY_BIOS) mock_sbm.assert_called_once_with(mock.ANY, self.task, mode=boot_modes.LEGACY_BIOS) @mock.patch.object(fake.FakeManagement, 'set_boot_mode', autospec=True) @mock.patch.object(fake.FakeManagement, 'get_supported_boot_modes', autospec=True) def test_node_set_boot_mode_adopting(self, mock_gsbm, mock_sbm): mock_gsbm.return_value = [boot_modes.LEGACY_BIOS] old_provision_state = self.task.node.provision_state self.task.node.provision_state = states.ADOPTING try: conductor_utils.node_set_boot_mode(self.task, mode=boot_modes.LEGACY_BIOS) finally: self.task.node.provision_state = old_provision_state self.assertFalse(mock_sbm.called) class NodePowerActionTestCase(db_base.DbTestCase): @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_node_power_action_power_on(self, get_power_mock): """Test node_power_action to turn node power on.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_OFF) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_OFF conductor_utils.node_power_action(task, states.POWER_ON) node.refresh() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch('ironic.objects.node.NodeSetPowerStateNotification') @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) @mock.patch.object(nova, 'power_update', autospec=True) def test_node_power_action_power_on_notify(self, mock_power_update, get_power_mock, mock_notif): """Test node_power_action to power on node and send notification.""" self.config(notification_level='info') self.config(host='my-host') # Required for exception handling mock_notif.__name__ = 
'NodeSetPowerStateNotification'
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          instance_uuid=uuidutils.generate_uuid(),
                                          power_state=states.POWER_OFF)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_OFF

        conductor_utils.node_power_action(task, states.POWER_ON)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        self.assertEqual(states.POWER_ON, node.power_state)
        self.assertIsNone(node.target_power_state)
        self.assertIsNone(node.last_error)

        # 2 notifications should be sent: 1 .start and 1 .end
        self.assertEqual(2, mock_notif.call_count)
        self.assertEqual(2, mock_notif.return_value.emit.call_count)

        first_notif_args = mock_notif.call_args_list[0][1]
        second_notif_args = mock_notif.call_args_list[1][1]

        self.assertNotificationEqual(first_notif_args,
                                     'ironic-conductor', CONF.host,
                                     'baremetal.node.power_set.start',
                                     obj_fields.NotificationLevel.INFO)
        self.assertNotificationEqual(second_notif_args,
                                     'ironic-conductor', CONF.host,
                                     'baremetal.node.power_set.end',
                                     obj_fields.NotificationLevel.INFO)
        mock_power_update.assert_called_once_with(
            task.context, node.instance_uuid, states.POWER_ON)

    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_power_off(self, get_power_mock):
        """Test node_power_action to turn node power off."""
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_ON

        conductor_utils.node_power_action(task, states.POWER_OFF)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        self.assertEqual(states.POWER_OFF, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNone(node['last_error'])

    @mock.patch.object(fake.FakePower, 'reboot', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_power_reboot(self, get_power_mock,
                                            reboot_mock):
        """Test for rebooting a node."""
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        conductor_utils.node_power_action(task, states.REBOOT)
        self.assertFalse(get_power_mock.called)

        node.refresh()
        reboot_mock.assert_called_once_with(mock.ANY, mock.ANY, timeout=None)
        self.assertEqual(states.POWER_ON, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNone(node['last_error'])

    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_invalid_state(self, get_power_mock):
        """Test for exception when changing to an invalid power state."""
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_ON

        self.assertRaises(exception.InvalidParameterValue,
                          conductor_utils.node_power_action,
                          task,
                          "INVALID_POWER_STATE")

        node.refresh()
        self.assertFalse(get_power_mock.called)
        self.assertEqual(states.POWER_ON, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNotNone(node['last_error'])

        # last_error is cleared when a new transaction happens
        conductor_utils.node_power_action(task, states.POWER_OFF)
        node.refresh()
        self.assertEqual(states.POWER_OFF, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNone(node['last_error'])

    @mock.patch('ironic.objects.node.NodeSetPowerStateNotification')
    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_invalid_state_notify(self, get_power_mock,
                                                    mock_notif):
        """Test for notification when changing to an invalid power state."""
        self.config(notification_level='info')
        self.config(host='my-host')
        # Required for exception handling
        mock_notif.__name__ = 'NodeSetPowerStateNotification'
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_ON

        self.assertRaises(exception.InvalidParameterValue,
                          conductor_utils.node_power_action,
                          task,
                          "INVALID_POWER_STATE")

        node.refresh()
        self.assertFalse(get_power_mock.called)
        self.assertEqual(states.POWER_ON, node.power_state)
        self.assertIsNone(node.target_power_state)
        self.assertIsNotNone(node.last_error)

        # 2 notifications should be sent: 1 .start and 1 .error
        self.assertEqual(2, mock_notif.call_count)
        self.assertEqual(2, mock_notif.return_value.emit.call_count)

        first_notif_args = mock_notif.call_args_list[0][1]
        second_notif_args = mock_notif.call_args_list[1][1]

        self.assertNotificationEqual(first_notif_args,
                                     'ironic-conductor', CONF.host,
                                     'baremetal.node.power_set.start',
                                     obj_fields.NotificationLevel.INFO)
        self.assertNotificationEqual(second_notif_args,
                                     'ironic-conductor', CONF.host,
                                     'baremetal.node.power_set.error',
                                     obj_fields.NotificationLevel.ERROR)

    def test_node_power_action_already_being_processed(self):
        """Test node power action after aborted power action.

        The target_power_state is expected to be None so it isn't
        checked in the code. This is what happens if it is not None.
        (Eg, if a conductor had died during a previous power-off
        attempt and left the target_power_state set to states.POWER_OFF,
        and the user is attempting to power-off again.)
        """
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON,
                                          target_power_state=states.POWER_OFF)
        task = task_manager.TaskManager(self.context, node.uuid)

        conductor_utils.node_power_action(task, states.POWER_OFF)

        node.refresh()
        self.assertEqual(states.POWER_OFF, node['power_state'])
        self.assertEqual(states.NOSTATE, node['target_power_state'])
        self.assertIsNone(node['last_error'])

    @mock.patch.object(conductor_utils, 'LOG', autospec=True)
    @mock.patch.object(fake.FakePower, 'set_power_state', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_in_same_state(self, get_power_mock,
                                             set_power_mock, log_mock):
        """Test setting node state to its present state.

        Test that we don't try to set the power state if the requested
        state is the same as the current state.
        """
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          last_error='anything but None',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_ON

        conductor_utils.node_power_action(task, states.POWER_ON)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        self.assertFalse(set_power_mock.called,
                         "set_power_state unexpectedly called")
        self.assertEqual(states.POWER_ON, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNone(node['last_error'])
        log_mock.warning.assert_called_once_with(
            u"Not going to change node %(node)s power state because "
            u"current state = requested state = '%(state)s'.",
            {'state': states.POWER_ON, 'node': node.uuid})

    @mock.patch.object(fake.FakePower, 'set_power_state', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_in_same_state_db_not_in_sync(self,
                                                            get_power_mock,
                                                            set_power_mock):
        """Test setting node state to its present state if DB is out of sync.

        Under rare conditions (see bug #1403106) the database might contain
        stale information; make sure we fix it.
        """
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          last_error='anything but None',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_OFF

        conductor_utils.node_power_action(task, states.POWER_OFF)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        self.assertFalse(set_power_mock.called,
                         "set_power_state unexpectedly called")
        self.assertEqual(states.POWER_OFF, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNone(node['last_error'])

    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_failed_getting_state(self, get_power_mock):
        """Test for exception when we can't get the current power state."""
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.side_effect = (
            exception.InvalidParameterValue('failed getting power state'))

        self.assertRaises(exception.InvalidParameterValue,
                          conductor_utils.node_power_action,
                          task,
                          states.POWER_ON)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        self.assertEqual(states.POWER_ON, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNotNone(node['last_error'])

    @mock.patch('ironic.objects.node.NodeSetPowerStateNotification')
    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_failed_getting_state_notify(self,
                                                           get_power_mock,
                                                           mock_notif):
        """Test for notification when we can't get the current power state."""
        self.config(notification_level='info')
        self.config(host='my-host')
        # Required for exception handling
        mock_notif.__name__ = 'NodeSetPowerStateNotification'
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.side_effect = (
            exception.InvalidParameterValue('failed getting power state'))

        self.assertRaises(exception.InvalidParameterValue,
                          conductor_utils.node_power_action,
                          task,
                          states.POWER_ON)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        self.assertEqual(states.POWER_ON, node.power_state)
        self.assertIsNone(node.target_power_state)
        self.assertIsNotNone(node.last_error)

        # 2 notifications should be sent: 1 .start and 1 .error
        self.assertEqual(2, mock_notif.call_count)
        self.assertEqual(2, mock_notif.return_value.emit.call_count)

        first_notif_args = mock_notif.call_args_list[0][1]
        second_notif_args = mock_notif.call_args_list[1][1]

        self.assertNotificationEqual(first_notif_args,
                                     'ironic-conductor', CONF.host,
                                     'baremetal.node.power_set.start',
                                     obj_fields.NotificationLevel.INFO)
        self.assertNotificationEqual(second_notif_args,
                                     'ironic-conductor', CONF.host,
                                     'baremetal.node.power_set.error',
                                     obj_fields.NotificationLevel.ERROR)

    @mock.patch.object(fake.FakePower, 'set_power_state', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_set_power_failure(self, get_power_mock,
                                                 set_power_mock):
        """Test if an exception is thrown when the set_power call fails."""
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_OFF
        set_power_mock.side_effect = exception.IronicException()

        self.assertRaises(
            exception.IronicException,
            conductor_utils.node_power_action,
            task,
            states.POWER_ON)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        set_power_mock.assert_called_once_with(
            mock.ANY, mock.ANY, states.POWER_ON, timeout=None)
        self.assertEqual(states.POWER_OFF, node['power_state'])
        self.assertIsNone(node['target_power_state'])
        self.assertIsNotNone(node['last_error'])

    @mock.patch('ironic.objects.node.NodeSetPowerStateNotification')
    @mock.patch.object(fake.FakePower, 'set_power_state', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
    def test_node_power_action_set_power_failure_notify(self, get_power_mock,
                                                        set_power_mock,
                                                        mock_notif):
        """Test if a notification is sent when the set_power call fails."""
        self.config(notification_level='info')
        self.config(host='my-host')
        # Required for exception handling
        mock_notif.__name__ = 'NodeSetPowerStateNotification'
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        task = task_manager.TaskManager(self.context, node.uuid)

        get_power_mock.return_value = states.POWER_OFF
        set_power_mock.side_effect = exception.IronicException()

        self.assertRaises(
            exception.IronicException,
            conductor_utils.node_power_action,
            task,
            states.POWER_ON)

        node.refresh()
        get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
        set_power_mock.assert_called_once_with(
            mock.ANY, mock.ANY, states.POWER_ON, timeout=None)
        self.assertEqual(states.POWER_OFF, node.power_state)
        self.assertIsNone(node.target_power_state)
        self.assertIsNotNone(node.last_error)

        # 2 notifications should be sent: 1 .start and 1 .error
        self.assertEqual(2, mock_notif.call_count)
        self.assertEqual(2, mock_notif.return_value.emit.call_count)

        first_notif_args = mock_notif.call_args_list[0][1]
        second_notif_args = mock_notif.call_args_list[1][1]

        self.assertNotificationEqual(first_notif_args,
                                     'ironic-conductor', CONF.host,
                                     'baremetal.node.power_set.start',
                                     obj_fields.NotificationLevel.INFO)
        self.assertNotificationEqual(
            second_notif_args, 'ironic-conductor', CONF.host,
            'baremetal.node.power_set.error',
            obj_fields.NotificationLevel.ERROR)
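The notification tests above all assert the same contract: node_power_action emits a `.start` notification before the power change, then either a matching `.end` on success or an `.error` on failure, so exactly two notifications are sent either way. A minimal sketch of that emit-around-action pattern (this is illustrative only, not ironic's actual implementation; `emit` is a hypothetical callback standing in for the notification object):

```python
# Minimal sketch of the .start/.end/.error notification contract the
# tests above assert on. `emit` and `do_change` are hypothetical
# stand-ins, not ironic's real API.
def notify_power_set(emit, do_change, target_state):
    emit('baremetal.node.power_set.start', 'INFO')
    try:
        do_change(target_state)
    except Exception:
        # On failure the second notification is .error, not .end.
        emit('baremetal.node.power_set.error', 'ERROR')
        raise
    emit('baremetal.node.power_set.end', 'INFO')
```

On success exactly two notifications are emitted (`.start` then `.end`), which is what the `assertEqual(2, mock_notif.call_count)` checks in the tests above verify.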
@mock.patch.object(fake.FakeStorage, 'attach_volumes', autospec=True) def test_node_power_action_power_on_storage_attach(self, attach_mock): """Test node_power_action to turn node power on and attach storage.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_OFF, storage_interface="fake", provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) conductor_utils.node_power_action(task, states.POWER_ON) node.refresh() attach_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch.object(fake.FakeStorage, 'attach_volumes', autospec=True) def test_node_power_action_reboot_storage_attach(self, attach_mock): """Test node_power_action to reboot the node and attach storage.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON, storage_interface="fake", provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) conductor_utils.node_power_action(task, states.REBOOT) node.refresh() attach_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch.object(fake.FakeStorage, 'detach_volumes', autospec=True) def test_node_power_action_power_off_storage_detach(self, detach_mock): """Test node_power_action to turn node power off and detach storage.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON, storage_interface="fake", provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) conductor_utils.node_power_action(task, states.POWER_OFF) node.refresh() 
detach_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) def test__calculate_target_state(self): for new_state in (states.POWER_ON, states.REBOOT, states.SOFT_REBOOT): self.assertEqual( states.POWER_ON, conductor_utils._calculate_target_state(new_state)) for new_state in (states.POWER_OFF, states.SOFT_POWER_OFF): self.assertEqual( states.POWER_OFF, conductor_utils._calculate_target_state(new_state)) self.assertIsNone(conductor_utils._calculate_target_state('bad_state')) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test__can_skip_state_change_different_state(self, get_power_mock): """Test setting node state to different state. Test that we should change state if requested state is different from current state. """ node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', last_error='anything but None', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_ON result = conductor_utils._can_skip_state_change( task, states.POWER_OFF) self.assertFalse(result) get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(conductor_utils, 'LOG', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test__can_skip_state_change_same_state(self, get_power_mock, mock_log): """Test setting node state to its present state. Test that we don't try to set the power state if the requested state is the same as the current state. 
""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', last_error='anything but None', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_ON result = conductor_utils._can_skip_state_change( task, states.POWER_ON) self.assertTrue(result) node.refresh() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(states.POWER_ON, node['power_state']) self.assertEqual(states.NOSTATE, node['target_power_state']) self.assertIsNone(node['last_error']) mock_log.warning.assert_called_once_with( u"Not going to change node %(node)s power state because " u"current state = requested state = '%(state)s'.", {'state': states.POWER_ON, 'node': node.uuid}) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test__can_skip_state_change_db_not_in_sync(self, get_power_mock): """Test setting node state to its present state if DB is out of sync. Under rare conditions (see bug #1403106) database might contain stale information, make sure we fix it. """ node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', last_error='anything but None', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_OFF result = conductor_utils._can_skip_state_change(task, states.POWER_OFF) self.assertTrue(result) node.refresh() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(states.POWER_OFF, node['power_state']) self.assertEqual(states.NOSTATE, node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch('ironic.objects.node.NodeSetPowerStateNotification') @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test__can_skip_state_change_failed_getting_state_notify( self, get_power_mock, mock_notif): """Test for notification & exception when can't get power state. 
Test to make sure we generate a notification and also that an exception is raised when we can't get the current power state. """ self.config(notification_level='info') self.config(host='my-host') # Required for exception handling mock_notif.__name__ = 'NodeSetPowerStateNotification' node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.side_effect = ( exception.InvalidParameterValue('failed getting power state')) self.assertRaises(exception.InvalidParameterValue, conductor_utils._can_skip_state_change, task, states.POWER_ON) node.refresh() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(states.POWER_ON, node.power_state) self.assertEqual(states.NOSTATE, node['target_power_state']) self.assertIsNotNone(node.last_error) # 1 notification should be sent for the error self.assertEqual(1, mock_notif.call_count) self.assertEqual(1, mock_notif.return_value.emit.call_count) notif_args = mock_notif.call_args_list[0][1] self.assertNotificationEqual(notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.error', obj_fields.NotificationLevel.ERROR) def test_node_power_action_reboot_no_timeout(self): """Test node reboot using Power Interface with no timeout arg.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', console_interface='no-console', inspect_interface='no-inspect', raid_interface='no-raid', rescue_interface='no-rescue', vendor_interface='no-vendor', bios_interface='no-bios', power_state=states.POWER_ON) self.config(enabled_boot_interfaces=['fake']) self.config(enabled_deploy_interfaces=['fake']) self.config(enabled_management_interfaces=['fake']) self.config(enabled_power_interfaces=['fake']) task = task_manager.TaskManager(self.context, node.uuid) task.driver.power = TestPowerNoTimeout() self.assertRaisesRegex(TypeError, 
'unexpected keyword argument', conductor_utils.node_power_action, task, states.REBOOT) node.refresh() self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertTrue('unexpected keyword argument' in node['last_error']) class NodeSoftPowerActionTestCase(db_base.DbTestCase): @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_node_power_action_power_soft_reboot(self, get_power_mock): """Test for soft reboot a node.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_ON conductor_utils.node_power_action(task, states.SOFT_REBOOT) node.refresh() self.assertFalse(get_power_mock.called) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_node_power_action_power_soft_reboot_timeout(self, get_power_mock): """Test for soft reboot a node.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_ON conductor_utils.node_power_action(task, states.SOFT_REBOOT, timeout=2) node.refresh() self.assertFalse(get_power_mock.called) self.assertEqual(states.POWER_ON, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_node_power_action_soft_power_off(self, get_power_mock): """Test node_power_action to turn node soft power off.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON) task 
= task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_ON conductor_utils.node_power_action(task, states.SOFT_POWER_OFF) node.refresh() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_node_power_action_soft_power_off_timeout(self, get_power_mock): """Test node_power_action to turn node soft power off.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON) task = task_manager.TaskManager(self.context, node.uuid) get_power_mock.return_value = states.POWER_ON conductor_utils.node_power_action(task, states.SOFT_POWER_OFF, timeout=2) node.refresh() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) @mock.patch.object(fake.FakeStorage, 'detach_volumes', autospec=True) def test_node_power_action_soft_power_off_storage_detach(self, detach_mock): """Test node_power_action to soft power off node and detach storage.""" node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', power_state=states.POWER_ON, storage_interface="fake", provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) conductor_utils.node_power_action(task, states.SOFT_POWER_OFF) node.refresh() detach_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.POWER_OFF, node['power_state']) self.assertIsNone(node['target_power_state']) self.assertIsNone(node['last_error']) class DeployingErrorHandlerTestCase(tests_base.TestCase): def setUp(self): super(DeployingErrorHandlerTestCase, self).setUp() self.task = 
mock.Mock(spec=task_manager.TaskManager) self.task.context = self.context self.task.driver = mock.Mock(spec_set=['deploy']) self.task.shared = False self.task.node = mock.Mock(spec_set=objects.Node) self.node = self.task.node self.node.provision_state = states.DEPLOYING self.node.last_error = None self.node.deploy_step = None self.node.driver_internal_info = {} self.logmsg = "log message" self.errmsg = "err message" @mock.patch.object(conductor_utils, 'deploying_error_handler', autospec=True) def test_cleanup_after_timeout(self, mock_handler): conductor_utils.cleanup_after_timeout(self.task) mock_handler.assert_called_once_with(self.task, mock.ANY, mock.ANY) def test_cleanup_after_timeout_shared_lock(self): self.task.shared = True self.assertRaises(exception.ExclusiveLockRequired, conductor_utils.cleanup_after_timeout, self.task) def test_deploying_error_handler(self): info = self.node.driver_internal_info info['deploy_step_index'] = 2 info['deployment_reboot'] = True info['deployment_polling'] = True info['skip_current_deploy_step'] = True info['agent_url'] = 'url' conductor_utils.deploying_error_handler(self.task, self.logmsg, self.errmsg) self.assertEqual([mock.call()] * 2, self.node.save.call_args_list) self.task.driver.deploy.clean_up.assert_called_once_with(self.task) self.assertEqual(self.errmsg, self.node.last_error) self.assertEqual({}, self.node.deploy_step) self.assertNotIn('deploy_step_index', self.node.driver_internal_info) self.assertNotIn('deployment_reboot', self.node.driver_internal_info) self.assertNotIn('deployment_polling', self.node.driver_internal_info) self.assertNotIn('skip_current_deploy_step', self.node.driver_internal_info) self.assertNotIn('agent_url', self.node.driver_internal_info) self.task.process_event.assert_called_once_with('fail') def _test_deploying_error_handler_cleanup(self, exc, expected_str): clean_up_mock = self.task.driver.deploy.clean_up clean_up_mock.side_effect = exc conductor_utils.deploying_error_handler(self.task, 
self.logmsg, self.errmsg) self.task.driver.deploy.clean_up.assert_called_once_with(self.task) self.assertEqual([mock.call()] * 2, self.node.save.call_args_list) self.assertIn(expected_str, self.node.last_error) self.assertEqual({}, self.node.deploy_step) self.assertNotIn('deploy_step_index', self.node.driver_internal_info) self.task.process_event.assert_called_once_with('fail') def test_deploying_error_handler_cleanup_ironic_exception(self): self._test_deploying_error_handler_cleanup( exception.IronicException('moocow'), 'moocow') def test_deploying_error_handler_cleanup_random_exception(self): self._test_deploying_error_handler_cleanup( Exception('moocow'), 'unhandled exception') def test_deploying_error_handler_no_cleanup(self): conductor_utils.deploying_error_handler( self.task, self.logmsg, self.errmsg, clean_up=False) self.assertFalse(self.task.driver.deploy.clean_up.called) self.assertEqual([mock.call()] * 2, self.node.save.call_args_list) self.assertEqual(self.errmsg, self.node.last_error) self.assertEqual({}, self.node.deploy_step) self.assertNotIn('deploy_step_index', self.node.driver_internal_info) self.task.process_event.assert_called_once_with('fail') def test_deploying_error_handler_not_deploy(self): # Not in a deploy state self.node.provision_state = states.AVAILABLE self.node.driver_internal_info['deploy_step_index'] = 2 conductor_utils.deploying_error_handler( self.task, self.logmsg, self.errmsg, clean_up=False) self.assertEqual([mock.call()] * 2, self.node.save.call_args_list) self.assertEqual(self.errmsg, self.node.last_error) self.assertIsNone(self.node.deploy_step) self.assertIn('deploy_step_index', self.node.driver_internal_info) self.task.process_event.assert_called_once_with('fail') class ErrorHandlersTestCase(tests_base.TestCase): def setUp(self): super(ErrorHandlersTestCase, self).setUp() self.task = mock.Mock(spec=task_manager.TaskManager) self.task.driver = mock.Mock(spec_set=['deploy', 'network', 'rescue']) self.task.node = 
mock.Mock(spec_set=objects.Node) self.task.shared = False self.node = self.task.node # NOTE(mariojv) Some of the test cases that use the task below require # strict typing of the node power state fields and would fail if passed # a Mock object in constructors. A task context is also required for # notifications. self.node.configure_mock(power_state=states.POWER_OFF, target_power_state=states.POWER_ON, maintenance=False, maintenance_reason=None) self.task.context = self.context @mock.patch.object(conductor_utils, 'LOG') def test_provision_error_handler_no_worker(self, log_mock): exc = exception.NoFreeConductorWorker() conductor_utils.provisioning_error_handler(exc, self.node, 'state-one', 'state-two') self.node.save.assert_called_once_with() self.assertEqual('state-one', self.node.provision_state) self.assertEqual('state-two', self.node.target_provision_state) self.assertIn('No free conductor workers', self.node.last_error) self.assertTrue(log_mock.warning.called) @mock.patch.object(conductor_utils, 'LOG') def test_provision_error_handler_other_error(self, log_mock): exc = Exception('foo') conductor_utils.provisioning_error_handler(exc, self.node, 'state-one', 'state-two') self.assertFalse(self.node.save.called) self.assertFalse(log_mock.warning.called) @mock.patch.object(conductor_utils, 'cleaning_error_handler') def test_cleanup_cleanwait_timeout_handler_call(self, mock_error_handler): self.node.clean_step = {} conductor_utils.cleanup_cleanwait_timeout(self.task) mock_error_handler.assert_called_once_with( self.task, msg="Timeout reached while cleaning the node. Please " "check if the ramdisk responsible for the cleaning is " "running on the node. Failed on step {}.", set_fail_state=False) def test_cleanup_cleanwait_timeout(self): self.node.provision_state = states.CLEANFAIL target = 'baz' self.node.target_provision_state = target self.node.driver_internal_info = {} self.node.clean_step = {'key': 'val'} clean_error = ("Timeout reached while cleaning the node. 
Please " "check if the ramdisk responsible for the cleaning is " "running on the node. Failed on step {'key': 'val'}.") self.node.driver_internal_info = { 'cleaning_reboot': True, 'clean_step_index': 0} conductor_utils.cleanup_cleanwait_timeout(self.task) self.assertEqual({}, self.node.clean_step) self.assertNotIn('clean_step_index', self.node.driver_internal_info) self.assertFalse(self.task.process_event.called) self.assertTrue(self.node.maintenance) self.assertEqual(clean_error, self.node.maintenance_reason) self.assertEqual('clean failure', self.node.fault) def _test_cleaning_error_handler(self, prov_state=states.CLEANING): self.node.provision_state = prov_state target = 'baz' self.node.target_provision_state = target self.node.clean_step = {'key': 'val'} self.node.driver_internal_info = { 'cleaning_reboot': True, 'cleaning_polling': True, 'skip_current_clean_step': True, 'clean_step_index': 0, 'agent_url': 'url'} msg = 'error bar' conductor_utils.cleaning_error_handler(self.task, msg) self.node.save.assert_called_once_with() self.assertEqual({}, self.node.clean_step) self.assertNotIn('clean_step_index', self.node.driver_internal_info) self.assertNotIn('cleaning_reboot', self.node.driver_internal_info) self.assertNotIn('cleaning_polling', self.node.driver_internal_info) self.assertNotIn('skip_current_clean_step', self.node.driver_internal_info) self.assertEqual(msg, self.node.last_error) self.assertTrue(self.node.maintenance) self.assertEqual(msg, self.node.maintenance_reason) self.assertEqual('clean failure', self.node.fault) driver = self.task.driver.deploy driver.tear_down_cleaning.assert_called_once_with(self.task) if prov_state == states.CLEANFAIL: self.assertFalse(self.task.process_event.called) else: self.task.process_event.assert_called_once_with('fail', target_state=None) self.assertNotIn('agent_url', self.node.driver_internal_info) def test_cleaning_error_handler(self): self._test_cleaning_error_handler() def 
test_cleaning_error_handler_cleanwait(self): self._test_cleaning_error_handler(prov_state=states.CLEANWAIT) def test_cleaning_error_handler_cleanfail(self): self._test_cleaning_error_handler(prov_state=states.CLEANFAIL) def test_cleaning_error_handler_manual(self): target = states.MANAGEABLE self.node.target_provision_state = target conductor_utils.cleaning_error_handler(self.task, 'foo') self.task.process_event.assert_called_once_with('fail', target_state=target) def test_cleaning_error_handler_no_teardown(self): target = states.MANAGEABLE self.node.target_provision_state = target conductor_utils.cleaning_error_handler(self.task, 'foo', tear_down_cleaning=False) self.assertFalse(self.task.driver.deploy.tear_down_cleaning.called) self.task.process_event.assert_called_once_with('fail', target_state=target) def test_cleaning_error_handler_no_fail(self): conductor_utils.cleaning_error_handler(self.task, 'foo', set_fail_state=False) driver = self.task.driver.deploy driver.tear_down_cleaning.assert_called_once_with(self.task) self.assertFalse(self.task.process_event.called) @mock.patch.object(conductor_utils, 'LOG') def test_cleaning_error_handler_tear_down_error(self, log_mock): def _side_effect(task): # simulate overwriting last error by another operation (e.g. 
power) task.node.last_error = None raise Exception('bar') driver = self.task.driver.deploy msg = 'foo' driver.tear_down_cleaning.side_effect = _side_effect conductor_utils.cleaning_error_handler(self.task, msg) self.assertTrue(log_mock.exception.called) self.assertIn(msg, self.node.last_error) self.assertIn(msg, self.node.maintenance_reason) self.assertEqual('clean failure', self.node.fault) def test_abort_on_conductor_take_over_cleaning(self): self.node.provision_state = states.CLEANFAIL conductor_utils.abort_on_conductor_take_over(self.task) self.assertTrue(self.node.maintenance) self.assertIn('take over', self.node.maintenance_reason) self.assertIn('take over', self.node.last_error) self.assertEqual('clean failure', self.node.fault) self.task.driver.deploy.tear_down_cleaning.assert_called_once_with( self.task) self.node.save.assert_called_once_with() def test_abort_on_conductor_take_over_deploying(self): self.node.provision_state = states.DEPLOYFAIL conductor_utils.abort_on_conductor_take_over(self.task) self.assertFalse(self.node.maintenance) self.assertIn('take over', self.node.last_error) self.node.save.assert_called_once_with() @mock.patch.object(conductor_utils, 'LOG') def test_spawn_cleaning_error_handler_no_worker(self, log_mock): exc = exception.NoFreeConductorWorker() conductor_utils.spawn_cleaning_error_handler(exc, self.node) self.node.save.assert_called_once_with() self.assertIn('No free conductor workers', self.node.last_error) self.assertTrue(log_mock.warning.called) @mock.patch.object(conductor_utils, 'LOG') def test_spawn_cleaning_error_handler_other_error(self, log_mock): exc = Exception('foo') conductor_utils.spawn_cleaning_error_handler(exc, self.node) self.assertFalse(self.node.save.called) self.assertFalse(log_mock.warning.called) @mock.patch.object(conductor_utils, 'LOG', autospec=True) def test_spawn_deploying_error_handler_no_worker(self, log_mock): exc = exception.NoFreeConductorWorker() conductor_utils.spawn_deploying_error_handler(exc, 
                                                     self.node)
        self.node.save.assert_called_once_with()
        self.assertIn('No free conductor workers', self.node.last_error)
        self.assertTrue(log_mock.warning.called)

    @mock.patch.object(conductor_utils, 'LOG', autospec=True)
    def test_spawn_deploying_error_handler_other_error(self, log_mock):
        exc = Exception('foo')
        conductor_utils.spawn_deploying_error_handler(exc, self.node)
        self.assertFalse(self.node.save.called)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(conductor_utils, 'LOG')
    def test_spawn_rescue_error_handler_no_worker(self, log_mock):
        exc = exception.NoFreeConductorWorker()
        self.node.instance_info = {'rescue_password': 'pass',
                                   'hashed_rescue_password': '12'}
        conductor_utils.spawn_rescue_error_handler(exc, self.node)
        self.node.save.assert_called_once_with()
        self.assertIn('No free conductor workers', self.node.last_error)
        self.assertTrue(log_mock.warning.called)
        self.assertNotIn('rescue_password', self.node.instance_info)
        self.assertNotIn('hashed_rescue_password', self.node.instance_info)

    @mock.patch.object(conductor_utils, 'LOG')
    def test_spawn_rescue_error_handler_other_error(self, log_mock):
        exc = Exception('foo')
        self.node.instance_info = {'rescue_password': 'pass',
                                   'hashed_rescue_password': '12'}
        conductor_utils.spawn_rescue_error_handler(exc, self.node)
        self.assertFalse(self.node.save.called)
        self.assertFalse(log_mock.warning.called)
        self.assertIn('rescue_password', self.node.instance_info)

    @mock.patch.object(conductor_utils, 'LOG')
    def test_power_state_error_handler_no_worker(self, log_mock):
        exc = exception.NoFreeConductorWorker()
        conductor_utils.power_state_error_handler(exc, self.node, 'newstate')
        self.node.save.assert_called_once_with()
        self.assertEqual('newstate', self.node.power_state)
        self.assertEqual(states.NOSTATE, self.node.target_power_state)
        self.assertIn('No free conductor workers', self.node.last_error)
        self.assertTrue(log_mock.warning.called)

    @mock.patch.object(conductor_utils, 'LOG')
    def test_power_state_error_handler_other_error(self, log_mock):
        exc = Exception('foo')
        conductor_utils.power_state_error_handler(exc, self.node, 'foo')
        self.assertFalse(self.node.save.called)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(conductor_utils, 'LOG')
    @mock.patch.object(conductor_utils, 'node_power_action')
    def test_cleanup_rescuewait_timeout(self, node_power_mock, log_mock):
        conductor_utils.cleanup_rescuewait_timeout(self.task)
        self.assertTrue(log_mock.error.called)
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)
        self.task.driver.rescue.clean_up.assert_called_once_with(self.task)
        self.assertIn('Timeout reached', self.node.last_error)
        self.node.save.assert_called_once_with()

    @mock.patch.object(conductor_utils, 'LOG')
    @mock.patch.object(conductor_utils, 'node_power_action')
    def test_cleanup_rescuewait_timeout_known_exc(
            self, node_power_mock, log_mock):
        clean_up_mock = self.task.driver.rescue.clean_up
        clean_up_mock.side_effect = exception.IronicException('moocow')
        conductor_utils.cleanup_rescuewait_timeout(self.task)
        self.assertEqual(2, log_mock.error.call_count)
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)
        self.task.driver.rescue.clean_up.assert_called_once_with(self.task)
        self.assertIn('moocow', self.node.last_error)
        self.node.save.assert_called_once_with()

    @mock.patch.object(conductor_utils, 'LOG')
    @mock.patch.object(conductor_utils, 'node_power_action')
    def test_cleanup_rescuewait_timeout_unknown_exc(
            self, node_power_mock, log_mock):
        clean_up_mock = self.task.driver.rescue.clean_up
        clean_up_mock.side_effect = Exception('moocow')
        conductor_utils.cleanup_rescuewait_timeout(self.task)
        self.assertTrue(log_mock.error.called)
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)
        self.task.driver.rescue.clean_up.assert_called_once_with(self.task)
        self.assertIn('Rescue failed', self.node.last_error)
        self.node.save.assert_called_once_with()
        self.assertTrue(log_mock.exception.called)
    @mock.patch.object(conductor_utils, 'node_power_action')
    def _test_rescuing_error_handler(self, node_power_mock, set_state=True):
        self.node.provision_state = states.RESCUEWAIT
        self.node.driver_internal_info.update({'agent_url': 'url'})
        conductor_utils.rescuing_error_handler(self.task,
                                               'some exception for node',
                                               set_fail_state=set_state)
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)
        self.task.driver.rescue.clean_up.assert_called_once_with(self.task)
        self.node.save.assert_called_once_with()
        self.assertNotIn('agent_url', self.node.driver_internal_info)
        if set_state:
            self.assertTrue(self.task.process_event.called)
        else:
            self.assertFalse(self.task.process_event.called)

    def test_rescuing_error_handler(self):
        self._test_rescuing_error_handler()

    def test_rescuing_error_handler_set_failed_state_false(self):
        self._test_rescuing_error_handler(set_state=False)

    @mock.patch.object(conductor_utils.LOG, 'error')
    @mock.patch.object(conductor_utils, 'node_power_action')
    def test_rescuing_error_handler_ironic_exc(self, node_power_mock,
                                               log_mock):
        self.node.provision_state = states.RESCUEWAIT
        expected_exc = exception.IronicException('moocow')
        clean_up_mock = self.task.driver.rescue.clean_up
        clean_up_mock.side_effect = expected_exc
        conductor_utils.rescuing_error_handler(self.task,
                                               'some exception for node')
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)
        self.task.driver.rescue.clean_up.assert_called_once_with(self.task)
        log_mock.assert_called_once_with('Rescue operation was unsuccessful, '
                                         'clean up failed for node %(node)s: '
                                         '%(error)s',
                                         {'node': self.node.uuid,
                                          'error': expected_exc})
        self.node.save.assert_called_once_with()

    @mock.patch.object(conductor_utils.LOG, 'exception')
    @mock.patch.object(conductor_utils, 'node_power_action')
    def test_rescuing_error_handler_other_exc(self, node_power_mock,
                                              log_mock):
        self.node.provision_state = states.RESCUEWAIT
        expected_exc = RuntimeError()
        clean_up_mock = self.task.driver.rescue.clean_up
        clean_up_mock.side_effect = expected_exc
        conductor_utils.rescuing_error_handler(self.task,
                                               'some exception for node')
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)
        self.task.driver.rescue.clean_up.assert_called_once_with(self.task)
        log_mock.assert_called_once_with('Rescue failed for node '
                                         '%(node)s, an exception was '
                                         'encountered while aborting.',
                                         {'node': self.node.uuid})
        self.node.save.assert_called_once_with()

    @mock.patch.object(conductor_utils.LOG, 'error')
    @mock.patch.object(conductor_utils, 'node_power_action')
    def test_rescuing_error_handler_bad_state(self, node_power_mock,
                                              log_mock):
        self.node.provision_state = states.RESCUE
        self.task.process_event.side_effect = exception.InvalidState
        expected_exc = exception.IronicException('moocow')
        clean_up_mock = self.task.driver.rescue.clean_up
        clean_up_mock.side_effect = expected_exc
        conductor_utils.rescuing_error_handler(self.task,
                                               'some exception for node')
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)
        self.task.driver.rescue.clean_up.assert_called_once_with(self.task)
        self.task.process_event.assert_called_once_with('fail')
        log_calls = [mock.call('Rescue operation was unsuccessful, clean up '
                               'failed for node %(node)s: %(error)s',
                               {'node': self.node.uuid,
                                'error': expected_exc}),
                     mock.call('Internal error. Node %(node)s in provision '
                               'state "%(state)s" could not transition to a '
                               'failed state.',
                               {'node': self.node.uuid,
                                'state': self.node.provision_state})]
        log_mock.assert_has_calls(log_calls)
        self.node.save.assert_called_once_with()


class ValidatePortPhysnetTestCase(db_base.DbTestCase):

    def setUp(self):
        super(ValidatePortPhysnetTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake-hardware')

    @mock.patch.object(objects.Port, 'obj_what_changed')
    def test_validate_port_physnet_no_portgroup_create(self, mock_owc):
        port = obj_utils.get_test_port(self.context, node_id=self.node.id)
        # NOTE(mgoddard): The port object passed to the conductor will not
        # have a portgroup_id attribute in this case.
        del port.portgroup_id
        with task_manager.acquire(self.context, self.node.uuid) as task:
            conductor_utils.validate_port_physnet(task, port)
        # Verify the early return in the non-portgroup case.
        self.assertFalse(mock_owc.called)

    @mock.patch.object(network, 'get_ports_by_portgroup_id')
    def test_validate_port_physnet_no_portgroup_update(self, mock_gpbpi):
        port = obj_utils.create_test_port(self.context, node_id=self.node.id)
        port.extra = {'foo': 'bar'}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            conductor_utils.validate_port_physnet(task, port)
        # Verify the early return in the no portgroup update case.
        self.assertFalse(mock_gpbpi.called)

    def test_validate_port_physnet_inconsistent_physnets(self):
        # NOTE(mgoddard): This *shouldn't* happen, but let's make sure we can
        # handle it.
        portgroup = obj_utils.create_test_portgroup(self.context,
                                                    node_id=self.node.id)
        obj_utils.create_test_port(self.context, node_id=self.node.id,
                                   portgroup_id=portgroup.id,
                                   address='00:11:22:33:44:55',
                                   physical_network='physnet1',
                                   uuid=uuidutils.generate_uuid())
        obj_utils.create_test_port(self.context, node_id=self.node.id,
                                   portgroup_id=portgroup.id,
                                   address='00:11:22:33:44:56',
                                   physical_network='physnet2',
                                   uuid=uuidutils.generate_uuid())
        port = obj_utils.get_test_port(self.context, node_id=self.node.id,
                                       portgroup_id=portgroup.id,
                                       address='00:11:22:33:44:57',
                                       physical_network='physnet2',
                                       uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PortgroupPhysnetInconsistent,
                              conductor_utils.validate_port_physnet,
                              task, port)

    def test_validate_port_physnet_inconsistent_physnets_fix(self):
        # NOTE(mgoddard): This *shouldn't* happen, but let's make sure that if
        # we do get into this state that it is possible to resolve by setting
        # the physical_network correctly.
        portgroup = obj_utils.create_test_portgroup(self.context,
                                                    node_id=self.node.id)
        obj_utils.create_test_port(self.context, node_id=self.node.id,
                                   portgroup_id=portgroup.id,
                                   address='00:11:22:33:44:55',
                                   physical_network='physnet1',
                                   uuid=uuidutils.generate_uuid())
        port = obj_utils.create_test_port(self.context, node_id=self.node.id,
                                          portgroup_id=portgroup.id,
                                          address='00:11:22:33:44:56',
                                          physical_network='physnet2',
                                          uuid=uuidutils.generate_uuid())
        port.physical_network = 'physnet1'
        with task_manager.acquire(self.context, self.node.uuid) as task:
            conductor_utils.validate_port_physnet(task, port)

    def _test_validate_port_physnet(self, num_current_ports,
                                    current_physnet, new_physnet,
                                    operation, valid=True):
        """Helper method for testing validate_port_physnet.

        :param num_current_ports: Number of existing ports in the portgroup.
        :param current_physnet: Physical network of existing ports in the
            portgroup.
        :param new_physnet: Physical network to set on the port that is being
            created or updated.
        :param operation: The operation to perform. One of 'create', 'update',
            or 'update_add'. 'create' creates a new port and adds it to the
            portgroup. 'update' updates one of the existing ports.
            'update_add' updates a port and adds it to the portgroup.
        :param valid: Whether the operation is expected to succeed.
        """
        # Prepare existing resources - a node, and a portgroup with optional
        # existing ports.
        port = None
        portgroup = obj_utils.create_test_portgroup(self.context,
                                                    node_id=self.node.id)
        macs = ("00:11:22:33:44:%02x" % index
                for index in range(num_current_ports + 1))
        for _ in range(num_current_ports):
            # NOTE: When operation == 'update' we update the last port in the
            # portgroup.
            port = obj_utils.create_test_port(
                self.context, node_id=self.node.id, portgroup_id=portgroup.id,
                address=next(macs), physical_network=current_physnet,
                uuid=uuidutils.generate_uuid())
        # Prepare the port on which we are performing the operation.
        if operation == 'create':
            # NOTE(mgoddard): We use db_utils here rather than obj_utils as it
            # allows us to create a Port without a physical_network field,
            # more closely matching what happens during creation of a port
            # when a physical_network is not specified.
            port = db_utils.get_test_port(
                node_id=self.node.id, portgroup_id=portgroup.id,
                address=next(macs), uuid=uuidutils.generate_uuid(),
                physical_network=new_physnet)
            if new_physnet is None:
                del port["physical_network"]
            port = objects.Port(self.context, **port)
        elif operation == 'update_add':
            port = obj_utils.create_test_port(
                self.context, node_id=self.node.id, portgroup_id=None,
                address=next(macs), physical_network=current_physnet,
                uuid=uuidutils.generate_uuid())
            port.portgroup_id = portgroup.id
        if operation != 'create' and new_physnet != current_physnet:
            port.physical_network = new_physnet
        # Perform the validation.
with task_manager.acquire(self.context, self.node.uuid) as task: if valid: conductor_utils.validate_port_physnet(task, port) else: self.assertRaises(exception.Conflict, conductor_utils.validate_port_physnet, task, port) def _test_validate_port_physnet_create(self, **kwargs): self._test_validate_port_physnet(operation='create', **kwargs) def _test_validate_port_physnet_update(self, **kwargs): self._test_validate_port_physnet(operation='update', **kwargs) def _test_validate_port_physnet_update_add(self, **kwargs): self._test_validate_port_physnet(operation='update_add', **kwargs) # Empty portgroup def test_validate_port_physnet_empty_portgroup_create_1(self): self._test_validate_port_physnet_create( num_current_ports=0, current_physnet=None, new_physnet=None) def test_validate_port_physnet_empty_portgroup_create_2(self): self._test_validate_port_physnet_create( num_current_ports=0, current_physnet=None, new_physnet='physnet1') def test_validate_port_physnet_empty_portgroup_update_1(self): self._test_validate_port_physnet_update_add( num_current_ports=0, current_physnet=None, new_physnet=None) def test_validate_port_physnet_empty_portgroup_update_2(self): self._test_validate_port_physnet_update_add( num_current_ports=0, current_physnet=None, new_physnet='physnet1') # 1-port portgroup, no physnet. 
def test_validate_port_physnet_1_port_portgroup_no_physnet_create_1(self): self._test_validate_port_physnet_create( num_current_ports=1, current_physnet=None, new_physnet=None) def test_validate_port_physnet_1_port_portgroup_no_physnet_create_2(self): self._test_validate_port_physnet_create( num_current_ports=1, current_physnet=None, new_physnet='physnet1', valid=False) def test_validate_port_physnet_1_port_portgroup_no_physnet_update_1(self): self._test_validate_port_physnet_update( num_current_ports=1, current_physnet=None, new_physnet=None) def test_validate_port_physnet_1_port_portgroup_no_physnet_update_2(self): self._test_validate_port_physnet_update( num_current_ports=1, current_physnet=None, new_physnet='physnet1') def test_validate_port_physnet_1_port_portgroup_no_physnet_update_add_1( self): self._test_validate_port_physnet_update_add( num_current_ports=1, current_physnet=None, new_physnet=None) def test_validate_port_physnet_1_port_portgroup_no_physnet_update_add_2( self): self._test_validate_port_physnet_update_add( num_current_ports=1, current_physnet=None, new_physnet='physnet1', valid=False) # 1-port portgroup, with physnet 'physnet1'. 
def test_validate_port_physnet_1_port_portgroup_w_physnet_create_1(self): self._test_validate_port_physnet_create( num_current_ports=1, current_physnet='physnet1', new_physnet='physnet1') def test_validate_port_physnet_1_port_portgroup_w_physnet_create_2(self): self._test_validate_port_physnet_create( num_current_ports=1, current_physnet='physnet1', new_physnet='physnet2', valid=False) def test_validate_port_physnet_1_port_portgroup_w_physnet_create_3(self): self._test_validate_port_physnet_create( num_current_ports=1, current_physnet='physnet1', new_physnet=None, valid=False) def test_validate_port_physnet_1_port_portgroup_w_physnet_update_1(self): self._test_validate_port_physnet_update( num_current_ports=1, current_physnet='physnet1', new_physnet='physnet1') def test_validate_port_physnet_1_port_portgroup_w_physnet_update_2(self): self._test_validate_port_physnet_update( num_current_ports=1, current_physnet='physnet1', new_physnet='physnet2') def test_validate_port_physnet_1_port_portgroup_w_physnet_update_3(self): self._test_validate_port_physnet_update( num_current_ports=1, current_physnet='physnet1', new_physnet=None) def test_validate_port_physnet_1_port_portgroup_w_physnet_update_add_1( self): self._test_validate_port_physnet_update_add( num_current_ports=1, current_physnet='physnet1', new_physnet='physnet1') def test_validate_port_physnet_1_port_portgroup_w_physnet_update_add_2( self): self._test_validate_port_physnet_update_add( num_current_ports=1, current_physnet='physnet1', new_physnet='physnet2', valid=False) def test_validate_port_physnet_1_port_portgroup_w_physnet_update_add_3( self): self._test_validate_port_physnet_update_add( num_current_ports=1, current_physnet='physnet1', new_physnet=None, valid=False) # 2-port portgroup, no physnet def test_validate_port_physnet_2_port_portgroup_no_physnet_update_1(self): self._test_validate_port_physnet_update( num_current_ports=2, current_physnet=None, new_physnet=None) def 
test_validate_port_physnet_2_port_portgroup_no_physnet_update_2(self): self._test_validate_port_physnet_update( num_current_ports=2, current_physnet=None, new_physnet='physnet1', valid=False) # 2-port portgroup, with physnet 'physnet1' def test_validate_port_physnet_2_port_portgroup_w_physnet_update_1(self): self._test_validate_port_physnet_update( num_current_ports=2, current_physnet='physnet1', new_physnet='physnet1') def test_validate_port_physnet_2_port_portgroup_w_physnet_update_2(self): self._test_validate_port_physnet_update( num_current_ports=2, current_physnet='physnet1', new_physnet='physnet2', valid=False) def test_validate_port_physnet_2_port_portgroup_w_physnet_update_3(self): self._test_validate_port_physnet_update( num_current_ports=2, current_physnet='physnet1', new_physnet=None, valid=False) class MiscTestCase(db_base.DbTestCase): def setUp(self): super(MiscTestCase, self).setUp() self.node = obj_utils.create_test_node( self.context, driver='fake-hardware', instance_info={'rescue_password': 'pass'}) def _test_remove_node_rescue_password(self, save=True): conductor_utils.remove_node_rescue_password(self.node, save=save) self.assertNotIn('rescue_password', self.node.instance_info) self.node.refresh() if save: self.assertNotIn('rescue_password', self.node.instance_info) else: self.assertIn('rescue_password', self.node.instance_info) def test_remove_node_rescue_password_save_true(self): self._test_remove_node_rescue_password(save=True) def test_remove_node_rescue_password_save_false(self): self._test_remove_node_rescue_password(save=False) @mock.patch.object(rpcapi.ConductorAPI, 'continue_node_deploy', autospec=True) @mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for', autospec=True) def test_notify_conductor_resume_operation(self, mock_topic, mock_rpc_call): mock_topic.return_value = 'topic' with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: conductor_utils.notify_conductor_resume_operation(task, 'deploy') 
mock_rpc_call.assert_called_once_with( mock.ANY, task.context, self.node.uuid, topic='topic') @mock.patch.object(conductor_utils, 'notify_conductor_resume_operation', autospec=True) def test_notify_conductor_resume_clean(self, mock_resume): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: conductor_utils.notify_conductor_resume_clean(task) mock_resume.assert_called_once_with(task, 'clean') @mock.patch.object(conductor_utils, 'notify_conductor_resume_operation', autospec=True) def test_notify_conductor_resume_deploy(self, mock_resume): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: conductor_utils.notify_conductor_resume_deploy(task) mock_resume.assert_called_once_with(task, 'deploy') @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) @mock.patch.object(drivers_base.NetworkInterface, 'need_power_on') @mock.patch.object(conductor_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(conductor_utils, 'node_power_action', autospec=True) def test_power_on_node_if_needed_true( self, power_action_mock, boot_device_mock, need_power_on_mock, get_power_state_mock, time_mock): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: need_power_on_mock.return_value = True get_power_state_mock.return_value = states.POWER_OFF power_state = conductor_utils.power_on_node_if_needed(task) self.assertEqual(power_state, states.POWER_OFF) boot_device_mock.assert_called_once_with( task, boot_devices.BIOS, persistent=False) power_action_mock.assert_called_once_with(task, states.POWER_ON) @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) @mock.patch.object(drivers_base.NetworkInterface, 'need_power_on') @mock.patch.object(conductor_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(conductor_utils, 'node_power_action', autospec=True) def 
test_power_on_node_if_needed_false_power_on( self, power_action_mock, boot_device_mock, need_power_on_mock, get_power_state_mock, time_mock): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: need_power_on_mock.return_value = True get_power_state_mock.return_value = states.POWER_ON power_state = conductor_utils.power_on_node_if_needed(task) self.assertIsNone(power_state) self.assertEqual(0, boot_device_mock.call_count) self.assertEqual(0, power_action_mock.call_count) @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) @mock.patch.object(drivers_base.NetworkInterface, 'need_power_on') @mock.patch.object(conductor_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(conductor_utils, 'node_power_action', autospec=True) def test_power_on_node_if_needed_false_no_need( self, power_action_mock, boot_device_mock, need_power_on_mock, get_power_state_mock, time_mock): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: need_power_on_mock.return_value = False get_power_state_mock.return_value = states.POWER_OFF power_state = conductor_utils.power_on_node_if_needed(task) self.assertIsNone(power_state) self.assertEqual(0, boot_device_mock.call_count) self.assertEqual(0, power_action_mock.call_count) @mock.patch.object(neutron, 'get_client', autospec=True) @mock.patch.object(neutron, 'wait_for_host_agent', autospec=True) @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) @mock.patch.object(drivers_base.NetworkInterface, 'need_power_on') @mock.patch.object(conductor_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(conductor_utils, 'node_power_action', autospec=True) def test_power_on_node_if_needed_with_smart_nic_port( self, power_action_mock, boot_device_mock, need_power_on_mock, get_power_state_mock, time_mock, wait_agent_mock, get_client_mock): llc = 
{'port_id': 'rep0-0', 'hostname': 'host1'} port = obj_utils.get_test_port(self.context, node_id=self.node.id, is_smartnic=True, local_link_connection=llc) with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: task.ports = [port] need_power_on_mock.return_value = True get_power_state_mock.return_value = states.POWER_OFF power_state = conductor_utils.power_on_node_if_needed(task) self.assertEqual(power_state, states.POWER_OFF) boot_device_mock.assert_called_once_with( task, boot_devices.BIOS, persistent=False) power_action_mock.assert_called_once_with(task, states.POWER_ON) get_client_mock.assert_called_once_with(context=self.context) wait_agent_mock.assert_called_once_with(mock.ANY, 'host1', target_state='down') @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(conductor_utils, 'node_power_action', autospec=True) def test_restore_power_state_if_needed_true( self, power_action_mock, time_mock): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: power_state = states.POWER_OFF conductor_utils.restore_power_state_if_needed(task, power_state) power_action_mock.assert_called_once_with(task, power_state) @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(conductor_utils, 'node_power_action', autospec=True) def test_restore_power_state_if_needed_false( self, power_action_mock, time_mock): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: power_state = None conductor_utils.restore_power_state_if_needed(task, power_state) self.assertEqual(0, power_action_mock.call_count) class ValidateInstanceInfoTraitsTestCase(tests_base.TestCase): def setUp(self): super(ValidateInstanceInfoTraitsTestCase, self).setUp() self.node = obj_utils.get_test_node(self.context, driver='fake-hardware', traits=['trait1', 'trait2']) def test_validate_instance_info_traits_no_instance_traits(self): conductor_utils.validate_instance_info_traits(self.node) def 
test_validate_instance_info_traits_empty_instance_traits(self): self.node.instance_info['traits'] = [] conductor_utils.validate_instance_info_traits(self.node) def test_validate_instance_info_traits_invalid_type(self): self.node.instance_info['traits'] = 'not-a-list' self.assertRaisesRegex(exception.InvalidParameterValue, 'Error parsing traits from Node', conductor_utils.validate_instance_info_traits, self.node) def test_validate_instance_info_traits_invalid_trait_type(self): self.node.instance_info['traits'] = ['trait1', {}] self.assertRaisesRegex(exception.InvalidParameterValue, 'Error parsing traits from Node', conductor_utils.validate_instance_info_traits, self.node) def test_validate_instance_info_traits(self): self.node.instance_info['traits'] = ['trait1', 'trait2'] conductor_utils.validate_instance_info_traits(self.node) def test_validate_instance_info_traits_missing(self): self.node.instance_info['traits'] = ['trait1', 'trait3'] self.assertRaisesRegex(exception.InvalidParameterValue, 'Cannot specify instance traits that are not', conductor_utils.validate_instance_info_traits, self.node) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) class FastTrackTestCase(db_base.DbTestCase): def setUp(self): super(FastTrackTestCase, self).setUp() self.node = obj_utils.create_test_node( self.context, driver='fake-hardware', uuid=uuidutils.generate_uuid(), driver_internal_info={ 'agent_last_heartbeat': str(timeutils.utcnow().isoformat())}) self.config(fast_track=True, group='deploy') def test_is_fast_track(self, mock_get_power): mock_get_power.return_value = states.POWER_ON with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertTrue(conductor_utils.is_fast_track(task)) def test_is_fast_track_config_false(self, mock_get_power): self.config(fast_track=False, group='deploy') mock_get_power.return_value = states.POWER_ON with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: 
self.assertFalse(conductor_utils.is_fast_track(task)) def test_is_fast_track_power_off_false(self, mock_get_power): mock_get_power.return_value = states.POWER_OFF with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertFalse(conductor_utils.is_fast_track(task)) def test_is_fast_track_no_heartbeat(self, mock_get_power): mock_get_power.return_value = states.POWER_ON i_info = self.node.driver_internal_info i_info.pop('agent_last_heartbeat') self.node.driver_internal_info = i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertFalse(conductor_utils.is_fast_track(task)) def test_is_fast_track_error_blocks(self, mock_get_power): mock_get_power.return_value = states.POWER_ON self.node.last_error = "bad things happened" self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertFalse(conductor_utils.is_fast_track(task)) class GetNodeNextStepsTestCase(db_base.DbTestCase): def setUp(self): super(GetNodeNextStepsTestCase, self).setUp() self.power_update = { 'step': 'update_firmware', 'priority': 10, 'interface': 'power'} self.deploy_update = { 'step': 'update_firmware', 'priority': 10, 'interface': 'deploy'} self.deploy_erase = { 'step': 'erase_disks', 'priority': 20, 'interface': 'deploy'} # Automated cleaning should be executed in this order self.clean_steps = [self.deploy_erase, self.power_update, self.deploy_update] self.deploy_start = { 'step': 'deploy_start', 'priority': 50, 'interface': 'deploy'} self.deploy_end = { 'step': 'deploy_end', 'priority': 20, 'interface': 'deploy'} self.deploy_steps = [self.deploy_start, self.deploy_end] def _test_get_node_next_deploy_steps(self, skip=True): driver_internal_info = {'deploy_steps': self.deploy_steps, 'deploy_step_index': 0} node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.DEPLOYWAIT, target_provision_state=states.ACTIVE, 
driver_internal_info=driver_internal_info, last_error=None, deploy_step=self.deploy_steps[0]) with task_manager.acquire(self.context, node.uuid) as task: step_index = conductor_utils.get_node_next_deploy_steps( task, skip_current_step=skip) expected_index = 1 if skip else 0 self.assertEqual(expected_index, step_index) def test_get_node_next_deploy_steps(self): self._test_get_node_next_deploy_steps() def test_get_node_next_deploy_steps_no_skip(self): self._test_get_node_next_deploy_steps(skip=False) def test_get_node_next_deploy_steps_unset_deploy_step(self): driver_internal_info = {'deploy_steps': self.deploy_steps, 'deploy_step_index': None} node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.DEPLOYWAIT, target_provision_state=states.ACTIVE, driver_internal_info=driver_internal_info, last_error=None, deploy_step=None) with task_manager.acquire(self.context, node.uuid) as task: step_index = conductor_utils.get_node_next_deploy_steps(task) self.assertEqual(0, step_index) def test_get_node_next_steps_exception(self): node = obj_utils.create_test_node(self.context) with task_manager.acquire(self.context, node.uuid) as task: self.assertRaises(exception.Invalid, conductor_utils._get_node_next_steps, task, 'foo') def _test_get_node_next_clean_steps(self, skip=True): driver_internal_info = {'clean_steps': self.clean_steps, 'clean_step_index': 0} node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.CLEANWAIT, target_provision_state=states.AVAILABLE, driver_internal_info=driver_internal_info, last_error=None, clean_step=self.clean_steps[0]) with task_manager.acquire(self.context, node.uuid) as task: step_index = conductor_utils.get_node_next_clean_steps( task, skip_current_step=skip) expected_index = 1 if skip else 0 self.assertEqual(expected_index, step_index) def test_get_node_next_clean_steps(self): self._test_get_node_next_clean_steps() def 
test_get_node_next_clean_steps_no_skip(self): self._test_get_node_next_clean_steps(skip=False) def test_get_node_next_clean_steps_unset_clean_step(self): driver_internal_info = {'clean_steps': self.clean_steps, 'clean_step_index': None} node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.CLEANWAIT, target_provision_state=states.AVAILABLE, driver_internal_info=driver_internal_info, last_error=None, clean_step=None) with task_manager.acquire(self.context, node.uuid) as task: step_index = conductor_utils.get_node_next_clean_steps(task) self.assertEqual(0, step_index) class AgentTokenUtilsTestCase(tests_base.TestCase): def setUp(self): super(AgentTokenUtilsTestCase, self).setUp() self.node = obj_utils.get_test_node(self.context, driver='fake-hardware') def test_add_secret_token(self): self.assertNotIn('agent_secret_token', self.node.driver_internal_info) conductor_utils.add_secret_token(self.node) self.assertIn('agent_secret_token', self.node.driver_internal_info) def test_del_secret_token(self): conductor_utils.add_secret_token(self.node) self.assertIn('agent_secret_token', self.node.driver_internal_info) conductor_utils.del_secret_token(self.node) self.assertNotIn('agent_secret_token', self.node.driver_internal_info) def test_is_agent_token_present(self): # This should always be False as the token has not been added yet. 
        self.assertFalse(conductor_utils.is_agent_token_present(self.node))
        conductor_utils.add_secret_token(self.node)
        self.assertTrue(conductor_utils.is_agent_token_present(self.node))

    def test_is_agent_token_supported(self):
        self.assertTrue(
            conductor_utils.is_agent_token_supported('6.1.1.dev39'))
        self.assertTrue(
            conductor_utils.is_agent_token_supported('6.2.1'))
        self.assertFalse(
            conductor_utils.is_agent_token_supported('6.0.0'))

ironic-15.0.0/ironic/tests/unit/conductor/test_rpcapi.py
# coding=utf-8

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Unit Tests for :py:class:`ironic.conductor.rpcapi.ConductorAPI`.
"""

import copy

import mock
from oslo_config import cfg
import oslo_messaging as messaging
from oslo_messaging import _utils as messaging_utils

from ironic.common import boot_devices
from ironic.common import components
from ironic.common import exception
from ironic.common import indicator_states
from ironic.common import release_mappings
from ironic.common import states
from ironic.conductor import manager as conductor_manager
from ironic.conductor import rpcapi as conductor_rpcapi
from ironic import objects
from ironic.tests import base as tests_base
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils

CONF = cfg.CONF


class ConductorRPCAPITestCase(tests_base.TestCase):

    def test_versions_in_sync(self):
        self.assertEqual(
            conductor_manager.ConductorManager.RPC_API_VERSION,
            conductor_rpcapi.ConductorAPI.RPC_API_VERSION)

    @mock.patch('ironic.common.rpc.get_client')
    def test_version_cap(self, mock_get_client):
        conductor_rpcapi.ConductorAPI()
        self.assertEqual(conductor_rpcapi.ConductorAPI.RPC_API_VERSION,
                         mock_get_client.call_args[1]['version_cap'])

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING')
    @mock.patch('ironic.common.rpc.get_client')
    def test_version_capped(self, mock_get_client, mock_release_mapping):
        CONF.set_override('pin_release_version',
                          release_mappings.RELEASE_VERSIONS[0])
        mock_release_mapping.get.return_value = {'rpc': '3'}
        conductor_rpcapi.ConductorAPI()
        self.assertEqual('3', mock_get_client.call_args[1]['version_cap'])


class RPCAPITestCase(db_base.DbTestCase):

    def setUp(self):
        super(RPCAPITestCase, self).setUp()
        self.fake_node = db_utils.get_test_node(driver='fake-driver')
        self.fake_node_obj = objects.Node._from_db_object(
            self.context, objects.Node(), self.fake_node)
        self.fake_portgroup = db_utils.get_test_portgroup()

    def test_serialized_instance_has_uuid(self):
        self.assertIn('uuid', self.fake_node)

    def test_get_topic_for_known_driver(self):
        CONF.set_override('host', 'fake-host')
        c = self.dbapi.register_conductor({'hostname': 'fake-host',
                                           'drivers': []})
        self.dbapi.register_conductor_hardware_interfaces(
            c.id, 'fake-driver', 'deploy', ['iscsi', 'direct'], 'iscsi')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        expected_topic = 'fake-topic.fake-host'
        self.assertEqual(expected_topic,
                         rpcapi.get_topic_for(self.fake_node_obj))

    def test_get_topic_for_unknown_driver(self):
        CONF.set_override('host', 'fake-host')
        c = self.dbapi.register_conductor({'hostname': 'fake-host',
                                           'drivers': []})
        self.dbapi.register_conductor_hardware_interfaces(
            c.id, 'other-driver', 'deploy', ['iscsi', 'direct'], 'iscsi')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        self.assertRaises(exception.NoValidHost,
                          rpcapi.get_topic_for,
                          self.fake_node_obj)

    def test_get_topic_doesnt_cache(self):
        CONF.set_override('host', 'fake-host')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        self.assertRaises(exception.TemporaryFailure,
                          rpcapi.get_topic_for,
                          self.fake_node_obj)

        c = self.dbapi.register_conductor({'hostname': 'fake-host',
                                           'drivers': []})
        self.dbapi.register_conductor_hardware_interfaces(
            c.id, 'fake-driver', 'deploy', ['iscsi', 'direct'], 'iscsi')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        expected_topic = 'fake-topic.fake-host'
        self.assertEqual(expected_topic,
                         rpcapi.get_topic_for(self.fake_node_obj))

    def test_get_topic_for_driver_known_driver(self):
        CONF.set_override('host', 'fake-host')
        c = self.dbapi.register_conductor({
            'hostname': 'fake-host',
            'drivers': [],
        })
        self.dbapi.register_conductor_hardware_interfaces(
            c.id, 'fake-driver', 'deploy', ['iscsi', 'direct'], 'iscsi')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        self.assertEqual('fake-topic.fake-host',
                         rpcapi.get_topic_for_driver('fake-driver'))

    def test_get_topic_for_driver_unknown_driver(self):
        CONF.set_override('host', 'fake-host')
        c = self.dbapi.register_conductor({
            'hostname': 'fake-host',
            'drivers': [],
        })
        self.dbapi.register_conductor_hardware_interfaces(
            c.id, 'fake-driver', 'deploy', ['iscsi', 'direct'], 'iscsi')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        self.assertRaises(exception.DriverNotFound,
                          rpcapi.get_topic_for_driver, 'fake-driver-2')

    def test_get_topic_for_driver_doesnt_cache(self):
        CONF.set_override('host', 'fake-host')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        self.assertRaises(exception.DriverNotFound,
                          rpcapi.get_topic_for_driver, 'fake-driver')

        c = self.dbapi.register_conductor({
            'hostname': 'fake-host',
            'drivers': [],
        })
        self.dbapi.register_conductor_hardware_interfaces(
            c.id, 'fake-driver', 'deploy', ['iscsi', 'direct'], 'iscsi')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        self.assertEqual('fake-topic.fake-host',
                         rpcapi.get_topic_for_driver('fake-driver'))

    def test_get_conductor_for(self):
        CONF.set_override('host', 'fake-host')
        c = self.dbapi.register_conductor({'hostname': 'fake-host',
                                           'drivers': []})
        self.dbapi.register_conductor_hardware_interfaces(
            c.id, 'fake-driver', 'deploy', ['iscsi', 'direct'], 'iscsi')

        rpcapi = conductor_rpcapi.ConductorAPI()
        self.assertEqual(rpcapi.get_conductor_for(self.fake_node_obj),
                         'fake-host')

    def test_get_random_topic(self):
        CONF.set_override('host', 'fake-host')
        self.dbapi.register_conductor({'hostname': 'fake-host',
                                       'drivers': []})

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        expected_topic = 'fake-topic.fake-host'
        self.assertEqual(expected_topic, rpcapi.get_random_topic())

    def test_get_random_topic_no_conductors(self):
        CONF.set_override('host', 'fake-host')

        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        self.assertRaises(exception.TemporaryFailure,
                          rpcapi.get_random_topic)

    def _test_can_send_create_port(self, can_send):
        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        with mock.patch.object(rpcapi.client,
                               "can_send_version") as mock_can_send_version:
            mock_can_send_version.return_value = can_send
            result = rpcapi.can_send_create_port()
            self.assertEqual(can_send, result)
            mock_can_send_version.assert_called_once_with("1.41")

    def test_can_send_create_port_True(self):
        self._test_can_send_create_port(True)

    def test_can_send_create_port_False(self):
        self._test_can_send_create_port(False)

    def _test_rpcapi(self, method, rpc_method, **kwargs):
        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')

        expected_retval = 'hello world' if rpc_method == 'call' else None

        expected_topic = 'fake-topic'
        if 'host' in kwargs:
            expected_topic += ".%s" % kwargs['host']

        target = {
            "topic": expected_topic,
            "version": kwargs.pop('version', rpcapi.RPC_API_VERSION)
        }
        expected_msg = copy.deepcopy(kwargs)

        self.fake_args = None
        self.fake_kwargs = None

        def _fake_can_send_version_method(version):
            return messaging_utils.version_is_compatible(
                rpcapi.RPC_API_VERSION, version)

        def _fake_prepare_method(*args, **kwargs):
            for kwd in kwargs:
                self.assertEqual(kwargs[kwd], target[kwd])
            return rpcapi.client

        def _fake_rpc_method(*args, **kwargs):
            self.fake_args = args
            self.fake_kwargs = kwargs
            if expected_retval:
                return expected_retval

        with mock.patch.object(rpcapi.client,
                               "can_send_version") as mock_can_send_version:
            mock_can_send_version.side_effect = _fake_can_send_version_method
            with mock.patch.object(rpcapi.client,
                                   "prepare") as mock_prepared:
                mock_prepared.side_effect = _fake_prepare_method
                with mock.patch.object(rpcapi.client,
                                       rpc_method) as mock_method:
                    mock_method.side_effect = _fake_rpc_method
                    retval = getattr(rpcapi, method)(self.context, **kwargs)
                    self.assertEqual(retval, expected_retval)
                    expected_args = [self.context, method, expected_msg]
                    for arg, expected_arg in zip(self.fake_args,
                                                 expected_args):
                        self.assertEqual(arg, expected_arg)

    def test_update_node(self):
        self._test_rpcapi('update_node', 'call', version='1.1',
                          node_obj=self.fake_node)

    def test_change_node_power_state(self):
        self._test_rpcapi('change_node_power_state', 'call', version='1.39',
                          node_id=self.fake_node['uuid'],
                          new_state=states.POWER_ON)

    def test_vendor_passthru(self):
        self._test_rpcapi('vendor_passthru', 'call', version='1.20',
                          node_id=self.fake_node['uuid'],
                          driver_method='test-driver-method',
                          http_method='test-http-method',
                          info={"test_info": "test_value"})

    def test_driver_vendor_passthru(self):
        self._test_rpcapi('driver_vendor_passthru', 'call', version='1.20',
                          driver_name='test-driver-name',
                          driver_method='test-driver-method',
                          http_method='test-http-method',
                          info={'test_key': 'test_value'})

    def test_do_node_deploy(self):
        self._test_rpcapi('do_node_deploy', 'call', version='1.22',
                          node_id=self.fake_node['uuid'],
                          rebuild=False, configdrive=None)

    def test_do_node_tear_down(self):
        self._test_rpcapi('do_node_tear_down', 'call', version='1.6',
                          node_id=self.fake_node['uuid'])

    def test_validate_driver_interfaces(self):
        self._test_rpcapi('validate_driver_interfaces', 'call', version='1.5',
                          node_id=self.fake_node['uuid'])

    def test_destroy_node(self):
        self._test_rpcapi('destroy_node', 'call', version='1.9',
                          node_id=self.fake_node['uuid'])

    def test_get_console_information(self):
        self._test_rpcapi('get_console_information', 'call', version='1.11',
                          node_id=self.fake_node['uuid'])

    def test_set_console_mode(self):
        self._test_rpcapi('set_console_mode', 'call', version='1.11',
                          node_id=self.fake_node['uuid'], enabled=True)

    def test_create_port(self):
        fake_port = db_utils.get_test_port()
        self._test_rpcapi('create_port', 'call', version='1.41',
                          port_obj=fake_port)

    def test_update_port(self):
        fake_port = db_utils.get_test_port()
        self._test_rpcapi('update_port', 'call', version='1.13',
                          port_obj=fake_port)

    def test_get_driver_properties(self):
        self._test_rpcapi('get_driver_properties', 'call', version='1.16',
                          driver_name='fake-driver')

    def test_set_boot_device(self):
        self._test_rpcapi('set_boot_device', 'call', version='1.17',
                          node_id=self.fake_node['uuid'],
                          device=boot_devices.DISK, persistent=False)

    def test_get_boot_device(self):
        self._test_rpcapi('get_boot_device', 'call', version='1.17',
                          node_id=self.fake_node['uuid'])

    def test_inject_nmi(self):
        self._test_rpcapi('inject_nmi', 'call', version='1.40',
                          node_id=self.fake_node['uuid'])

    def test_get_supported_boot_devices(self):
        self._test_rpcapi('get_supported_boot_devices', 'call',
                          version='1.17',
                          node_id=self.fake_node['uuid'])

    def test_set_indicator_state(self):
        self._test_rpcapi('set_indicator_state', 'call', version='1.50',
                          node_id=self.fake_node['uuid'],
                          component=components.CHASSIS,
                          indicator='led', state=indicator_states.ON)

    def test_get_indicator_state(self):
        self._test_rpcapi('get_indicator_state', 'call', version='1.50',
                          node_id=self.fake_node['uuid'],
                          component=components.CHASSIS, indicator='led')

    def test_get_supported_indicators(self):
        self._test_rpcapi('get_supported_indicators', 'call', version='1.50',
                          node_id=self.fake_node['uuid'])

    def test_get_node_vendor_passthru_methods(self):
        self._test_rpcapi('get_node_vendor_passthru_methods', 'call',
                          version='1.21',
                          node_id=self.fake_node['uuid'])

    def test_get_driver_vendor_passthru_methods(self):
        self._test_rpcapi('get_driver_vendor_passthru_methods', 'call',
                          version='1.21',
                          driver_name='fake-driver')

    def test_inspect_hardware(self):
        self._test_rpcapi('inspect_hardware', 'call', version='1.24',
                          node_id=self.fake_node['uuid'])

    def test_continue_node_clean(self):
        self._test_rpcapi('continue_node_clean', 'cast', version='1.27',
                          node_id=self.fake_node['uuid'])

    def test_continue_node_deploy(self):
        self._test_rpcapi('continue_node_deploy', 'cast', version='1.45',
                          node_id=self.fake_node['uuid'])

    def test_get_raid_logical_disk_properties(self):
        self._test_rpcapi('get_raid_logical_disk_properties', 'call',
                          version='1.30',
                          driver_name='fake-driver')

    def test_set_target_raid_config(self):
        self._test_rpcapi('set_target_raid_config', 'call', version='1.30',
                          node_id=self.fake_node['uuid'],
                          target_raid_config='config')

    def test_do_node_clean(self):
        clean_steps = [{'step': 'upgrade_firmware', 'interface': 'deploy'},
                       {'step': 'upgrade_bmc', 'interface': 'management'}]
        self._test_rpcapi('do_node_clean', 'call', version='1.32',
                          node_id=self.fake_node['uuid'],
                          clean_steps=clean_steps)

    def test_object_action(self):
        self._test_rpcapi('object_action', 'call', version='1.31',
                          objinst='fake-object', objmethod='foo',
                          args=tuple(), kwargs=dict())

    def test_object_class_action_versions(self):
        self._test_rpcapi('object_class_action_versions', 'call',
                          version='1.31',
                          objname='fake-object', objmethod='foo',
                          object_versions={'fake-object': '1.0'},
                          args=tuple(), kwargs=dict())

    def test_object_backport_versions(self):
        self._test_rpcapi('object_backport_versions', 'call', version='1.31',
                          objinst='fake-object',
                          object_versions={'fake-object': '1.0'})

    @mock.patch.object(messaging.RPCClient, 'can_send_version',
                       autospec=True)
    def test_object_action_invalid_version(self, mock_send):
        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        mock_send.return_value = False
        self.assertRaises(NotImplementedError,
                          rpcapi.object_action, self.context,
                          objinst='fake-object', objmethod='foo',
                          args=tuple(), kwargs=dict())

    @mock.patch.object(messaging.RPCClient, 'can_send_version',
                       autospec=True)
    def test_object_class_action_versions_invalid_version(self, mock_send):
        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        mock_send.return_value = False
        self.assertRaises(NotImplementedError,
                          rpcapi.object_class_action_versions, self.context,
                          objname='fake-object', objmethod='foo',
                          object_versions={'fake-object': '1.0'},
                          args=tuple(), kwargs=dict())

    @mock.patch.object(messaging.RPCClient, 'can_send_version',
                       autospec=True)
    def test_object_backport_versions_invalid_version(self, mock_send):
        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        mock_send.return_value = False
        self.assertRaises(NotImplementedError,
                          rpcapi.object_backport_versions, self.context,
                          objinst='fake-object',
                          object_versions={'fake-object': '1.0'})

    def test_update_portgroup(self):
        self._test_rpcapi('update_portgroup', 'call', version='1.33',
                          portgroup_obj=self.fake_portgroup)

    def test_destroy_portgroup(self):
        self._test_rpcapi('destroy_portgroup', 'call', version='1.33',
                          portgroup=self.fake_portgroup)

    def test_heartbeat(self):
        self._test_rpcapi('heartbeat', 'call', node_id='fake-node',
                          callback_url='http://ramdisk.url:port',
                          agent_version=None, version='1.49')

    def test_heartbeat_agent_token(self):
        self._test_rpcapi('heartbeat', 'call', node_id='fake-node',
                          callback_url='http://ramdisk.url:port',
                          agent_version=None, agent_token='xyz1',
                          version='1.49')

    def test_destroy_volume_connector(self):
        fake_volume_connector = db_utils.get_test_volume_connector()
        self._test_rpcapi('destroy_volume_connector', 'call', version='1.35',
                          connector=fake_volume_connector)

    def test_update_volume_connector(self):
        fake_volume_connector = db_utils.get_test_volume_connector()
        self._test_rpcapi('update_volume_connector', 'call', version='1.35',
                          connector=fake_volume_connector)

    def test_create_node(self):
        self._test_rpcapi('create_node', 'call', version='1.36',
                          node_obj=self.fake_node)

    def test_destroy_volume_target(self):
        fake_volume_target = db_utils.get_test_volume_target()
        self._test_rpcapi('destroy_volume_target', 'call', version='1.37',
                          target=fake_volume_target)

    def test_update_volume_target(self):
        fake_volume_target = db_utils.get_test_volume_target()
        self._test_rpcapi('update_volume_target', 'call', version='1.37',
                          target=fake_volume_target)

    def test_vif_attach(self):
        self._test_rpcapi('vif_attach', 'call', node_id='fake-node',
                          vif_info={"id": "vif"}, version='1.38')

    def test_vif_detach(self):
        self._test_rpcapi('vif_detach', 'call', node_id='fake-node',
                          vif_id="vif", version='1.38')

    def test_vif_list(self):
        self._test_rpcapi('vif_list', 'call', node_id='fake-node',
                          version='1.38')

    def test_do_node_rescue(self):
        self._test_rpcapi('do_node_rescue', 'call', version='1.43',
                          node_id=self.fake_node['uuid'],
                          rescue_password="password")

    def test_do_node_unrescue(self):
        self._test_rpcapi('do_node_unrescue', 'call', version='1.43',
                          node_id=self.fake_node['uuid'])

    def test_get_node_with_token(self):
        self._test_rpcapi('get_node_with_token', 'call', version='1.49',
                          node_id=self.fake_node['uuid'])

    def _test_can_send_rescue(self, can_send):
        rpcapi = conductor_rpcapi.ConductorAPI(topic='fake-topic')
        with mock.patch.object(rpcapi.client,
                               "can_send_version") as mock_can_send_version:
            mock_can_send_version.return_value = can_send
            result = rpcapi.can_send_rescue()
            self.assertEqual(can_send, result)
            mock_can_send_version.assert_called_once_with("1.43")

    def test_can_send_rescue_true(self):
        self._test_can_send_rescue(True)

    def test_can_send_rescue_false(self):
        self._test_can_send_rescue(False)

    def test_add_node_traits(self):
        self._test_rpcapi('add_node_traits', 'call', node_id='fake-node',
                          traits=['trait1'], version='1.44')

    def test_add_node_traits_replace(self):
        self._test_rpcapi('add_node_traits', 'call', node_id='fake-node',
                          traits=['trait1'], replace=True, version='1.44')

    def test_remove_node_traits(self):
        self._test_rpcapi('remove_node_traits', 'call', node_id='fake-node',
                          traits=['trait1'], version='1.44')

    def test_remove_node_traits_all(self):
        self._test_rpcapi('remove_node_traits', 'call', node_id='fake-node',
                          traits=None, version='1.44')

    def test_create_allocation(self):
        self._test_rpcapi('create_allocation', 'call',
                          allocation='fake-allocation', version='1.48')

    def test_destroy_allocation(self):
        self._test_rpcapi('destroy_allocation', 'call',
                          allocation='fake-allocation', version='1.48')

ironic-15.0.0/ironic/tests/unit/conductor/test_cleaning.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for cleaning bits."""

import mock
from oslo_config import cfg
from oslo_utils import uuidutils

from ironic.common import exception
from ironic.common import states
from ironic.conductor import cleaning
from ironic.conductor import steps as conductor_steps
from ironic.conductor import task_manager
from ironic.drivers.modules import fake
from ironic.drivers.modules.network import flat as n_flat
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as obj_utils

CONF = cfg.CONF


class DoNodeCleanTestCase(db_base.DbTestCase):

    def setUp(self):
        super(DoNodeCleanTestCase, self).setUp()
        self.config(automated_clean=True, group='conductor')
        self.power_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'power'}
        self.deploy_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'deploy'}
        self.deploy_erase = {
            'step': 'erase_disks', 'priority': 20, 'interface': 'deploy'}
        # Automated cleaning should be executed in this order
        self.clean_steps = [self.deploy_erase, self.power_update,
                            self.deploy_update]
        self.next_clean_step_index = 1
        # Manual clean step
        self.deploy_raid = {
            'step': 'build_raid', 'priority': 0, 'interface': 'deploy'}

    def __do_node_clean_validate_fail(self, mock_validate, clean_steps=None):
        # InvalidParameterValue should cause node to go to CLEANFAIL
        mock_validate.side_effect = exception.InvalidParameterValue('error')

        tgt_prov_state = states.MANAGEABLE if clean_steps else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task, clean_steps=clean_steps)
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        mock_validate.assert_called_once_with(mock.ANY, mock.ANY)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test__do_node_clean_automated_power_validate_fail(self,
                                                          mock_validate):
        self.__do_node_clean_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test__do_node_clean_manual_power_validate_fail(self, mock_validate):
        self.__do_node_clean_validate_fail(mock_validate, clean_steps=[])

    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    def test__do_node_clean_automated_network_validate_fail(self,
                                                            mock_validate):
        self.__do_node_clean_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    def test__do_node_clean_manual_network_validate_fail(self, mock_validate):
        self.__do_node_clean_validate_fail(mock_validate, clean_steps=[])

    @mock.patch.object(cleaning, 'LOG', autospec=True)
    @mock.patch.object(conductor_steps, 'set_node_cleaning_steps',
                       autospec=True)
    @mock.patch.object(cleaning, 'do_next_clean_step', autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare_cleaning',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakeBIOS.cache_bios_settings',
                autospec=True)
    def _test__do_node_clean_cache_bios(self, mock_bios, mock_validate,
                                        mock_prep, mock_next_step,
                                        mock_steps, mock_log,
                                        clean_steps=None,
                                        enable_unsupported=False,
                                        enable_exception=False):
        if enable_unsupported:
            mock_bios.side_effect = exception.UnsupportedDriverExtension('')
        elif enable_exception:
            mock_bios.side_effect = exception.IronicException('test')
        mock_prep.return_value = states.NOSTATE
        tgt_prov_state = states.MANAGEABLE if clean_steps else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task, clean_steps=clean_steps)
            node.refresh()
            mock_bios.assert_called_once_with(mock.ANY, task)
            if clean_steps:
                self.assertEqual(states.CLEANING, node.provision_state)
                self.assertEqual(tgt_prov_state, node.target_provision_state)
            else:
                self.assertEqual(states.CLEANING, node.provision_state)
                self.assertEqual(states.AVAILABLE,
                                 node.target_provision_state)
            mock_validate.assert_called_once_with(mock.ANY, task)
        if enable_exception:
            mock_log.exception.assert_called_once_with(
                'Caching of bios settings failed on node {}. '
                'Continuing with node cleaning.'
                .format(node.uuid))

    def test__do_node_clean_manual_cache_bios(self):
        self._test__do_node_clean_cache_bios(clean_steps=[self.deploy_raid])

    def test__do_node_clean_automated_cache_bios(self):
        self._test__do_node_clean_cache_bios()

    def test__do_node_clean_manual_cache_bios_exception(self):
        self._test__do_node_clean_cache_bios(clean_steps=[self.deploy_raid],
                                             enable_exception=True)

    def test__do_node_clean_automated_cache_bios_exception(self):
        self._test__do_node_clean_cache_bios(enable_exception=True)

    def test__do_node_clean_manual_cache_bios_unsupported(self):
        self._test__do_node_clean_cache_bios(clean_steps=[self.deploy_raid],
                                             enable_unsupported=True)

    def test__do_node_clean_automated_cache_bios_unsupported(self):
        self._test__do_node_clean_cache_bios(enable_unsupported=True)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test__do_node_clean_automated_disabled(self, mock_validate):
        self.config(automated_clean=False, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task)
        node.refresh()

        # Assert that the node was moved to available without cleaning
        self.assertFalse(mock_validate.called)
        self.assertEqual(states.AVAILABLE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertNotIn('clean_steps', node.driver_internal_info)
        self.assertNotIn('clean_step_index', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    def test__do_node_clean_automated_disabled_individual_enabled(
            self, mock_network, mock_validate):
        self.config(automated_clean=False, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None, automated_clean=True)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task)
        node.refresh()

        # Assert that the node clean was called
        self.assertTrue(mock_validate.called)
        self.assertIn('clean_steps', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test__do_node_clean_automated_disabled_individual_disabled(
            self, mock_validate):
        self.config(automated_clean=False, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None, automated_clean=False)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task)
        node.refresh()

        # Assert that the node was moved to available without cleaning
        self.assertFalse(mock_validate.called)
        self.assertEqual(states.AVAILABLE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertNotIn('clean_steps', node.driver_internal_info)
        self.assertNotIn('clean_step_index', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    def test__do_node_clean_automated_enabled(self, mock_validate,
                                              mock_network):
        self.config(automated_clean=True, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None,
            driver_internal_info={'agent_url': 'url'})
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task)
        node.refresh()

        # Assert that the node was cleaned
        self.assertTrue(mock_validate.called)
        self.assertIn('clean_steps', node.driver_internal_info)
        self.assertNotIn('agent_url', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    def test__do_node_clean_automated_enabled_individual_enabled(
            self, mock_network, mock_validate):
        self.config(automated_clean=True, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None, automated_clean=True)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task)
        node.refresh()

        # Assert that the node was cleaned
        self.assertTrue(mock_validate.called)
        self.assertIn('clean_steps', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    def test__do_node_clean_automated_enabled_individual_none(
            self, mock_validate, mock_network):
        self.config(automated_clean=True, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None, automated_clean=None)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task)
        node.refresh()

        # Assert that the node was cleaned
        self.assertTrue(mock_validate.called)
        self.assertIn('clean_steps', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.tear_down_cleaning',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare_cleaning',
                autospec=True)
    def test__do_node_clean_maintenance(self, mock_prep, mock_tear_down):
        CONF.set_override('allow_provisioning_in_maintenance', False,
                          group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            maintenance=True,
            maintenance_reason='Original reason')
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task)
            node.refresh()
            self.assertEqual(states.CLEANFAIL, node.provision_state)
            self.assertEqual(states.AVAILABLE, node.target_provision_state)
            self.assertIn('is not allowed', node.last_error)
            self.assertTrue(node.maintenance)
            self.assertEqual('Original reason', node.maintenance_reason)
        self.assertFalse(mock_prep.called)
        self.assertFalse(mock_tear_down.called)

    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare_cleaning',
                autospec=True)
    def __do_node_clean_prepare_clean_fail(self, mock_prep, mock_validate,
                                           clean_steps=None):
        # Exception from task.driver.deploy.prepare_cleaning should cause node
        # to go to CLEANFAIL
        mock_prep.side_effect = exception.InvalidParameterValue('error')
        tgt_prov_state = states.MANAGEABLE if clean_steps else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task, clean_steps=clean_steps)
            node.refresh()
            self.assertEqual(states.CLEANFAIL, node.provision_state)
            self.assertEqual(tgt_prov_state, node.target_provision_state)
            mock_prep.assert_called_once_with(mock.ANY, task)
            mock_validate.assert_called_once_with(mock.ANY, task)

    def test__do_node_clean_automated_prepare_clean_fail(self):
        self.__do_node_clean_prepare_clean_fail()

    def test__do_node_clean_manual_prepare_clean_fail(self):
        self.__do_node_clean_prepare_clean_fail(clean_steps=[self.deploy_raid])

    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare_cleaning',
                autospec=True)
    def __do_node_clean_prepare_clean_wait(self, mock_prep, mock_validate,
                                           clean_steps=None):
        mock_prep.return_value = states.CLEANWAIT

        tgt_prov_state = states.MANAGEABLE if clean_steps else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task, clean_steps=clean_steps)
        node.refresh()
        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        mock_prep.assert_called_once_with(mock.ANY, mock.ANY)
        mock_validate.assert_called_once_with(mock.ANY, mock.ANY)

    def test__do_node_clean_automated_prepare_clean_wait(self):
        self.__do_node_clean_prepare_clean_wait()

    def test__do_node_clean_manual_prepare_clean_wait(self):
        self.__do_node_clean_prepare_clean_wait(clean_steps=[self.deploy_raid])

    @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True)
    @mock.patch.object(conductor_steps, 'set_node_cleaning_steps',
                       autospec=True)
    def __do_node_clean_steps_fail(self, mock_steps, mock_validate,
                                   clean_steps=None, invalid_exc=True):
        if invalid_exc:
            mock_steps.side_effect = exception.InvalidParameterValue('invalid')
        else:
            mock_steps.side_effect = exception.NodeCleaningFailure('failure')
        tgt_prov_state = states.MANAGEABLE if clean_steps else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            uuid=uuidutils.generate_uuid(),
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task, clean_steps=clean_steps)
            mock_validate.assert_called_once_with(mock.ANY, task)
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        mock_steps.assert_called_once_with(mock.ANY)

    def test__do_node_clean_automated_steps_fail(self):
        for invalid in (True, False):
            self.__do_node_clean_steps_fail(invalid_exc=invalid)

    def test__do_node_clean_manual_steps_fail(self):
        for invalid in (True, False):
            self.__do_node_clean_steps_fail(clean_steps=[self.deploy_raid],
                                            invalid_exc=invalid)

    @mock.patch.object(conductor_steps, 'set_node_cleaning_steps',
                       autospec=True)
    @mock.patch.object(cleaning, 'do_next_clean_step', autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def __do_node_clean(self, mock_power_valid, mock_network_valid,
                        mock_next_step, mock_steps, clean_steps=None):
        if clean_steps:
            tgt_prov_state = states.MANAGEABLE
            driver_info = {}
        else:
            tgt_prov_state = states.AVAILABLE
            driver_info = {'clean_steps': self.clean_steps}

        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            power_state=states.POWER_OFF,
            driver_internal_info=driver_info)

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_node_clean(task, clean_steps=clean_steps)

            node.refresh()

            mock_power_valid.assert_called_once_with(mock.ANY, task)
            mock_network_valid.assert_called_once_with(mock.ANY, task)
            mock_next_step.assert_called_once_with(task, 0)
            mock_steps.assert_called_once_with(task)
            if clean_steps:
                self.assertEqual(clean_steps,
                                 node.driver_internal_info['clean_steps'])

        # Check that state didn't change
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)

    def test__do_node_clean_automated(self):
        self.__do_node_clean()

    def test__do_node_clean_manual(self):
        self.__do_node_clean(clean_steps=[self.deploy_raid])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step',
                autospec=True)
    def _do_next_clean_step_first_step_async(self, return_state, mock_execute,
                                             clean_steps=None):
        # Execute the first async clean step on a node
        driver_internal_info = {'clean_step_index': None}
        if clean_steps:
            tgt_prov_state = states.MANAGEABLE
            driver_internal_info['clean_steps'] = clean_steps
        else:
            tgt_prov_state = states.AVAILABLE
            driver_internal_info['clean_steps'] = self.clean_steps

        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info=driver_internal_info,
            clean_step={})
        mock_execute.return_value = return_state
        expected_first_step = node.driver_internal_info['clean_steps'][0]

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_next_clean_step(task, 0)

        node.refresh()

        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual(expected_first_step, node.clean_step)
        self.assertEqual(0, node.driver_internal_info['clean_step_index'])
mock_execute.assert_called_once_with( mock.ANY, mock.ANY, expected_first_step) def test_do_next_clean_step_automated_first_step_async(self): self._do_next_clean_step_first_step_async(states.CLEANWAIT) def test_do_next_clean_step_manual_first_step_async(self): self._do_next_clean_step_first_step_async( states.CLEANWAIT, clean_steps=[self.deploy_raid]) @mock.patch('ironic.drivers.modules.fake.FakePower.execute_clean_step', autospec=True) def _do_next_clean_step_continue_from_last_cleaning(self, return_state, mock_execute, manual=False): # Resume an in-progress cleaning after the first async step tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.CLEANING, target_provision_state=tgt_prov_state, last_error=None, driver_internal_info={'clean_steps': self.clean_steps, 'clean_step_index': 0}, clean_step=self.clean_steps[0]) mock_execute.return_value = return_state with task_manager.acquire( self.context, node.uuid, shared=False) as task: cleaning.do_next_clean_step(task, self.next_clean_step_index) node.refresh() self.assertEqual(states.CLEANWAIT, node.provision_state) self.assertEqual(tgt_prov_state, node.target_provision_state) self.assertEqual(self.clean_steps[1], node.clean_step) self.assertEqual(1, node.driver_internal_info['clean_step_index']) mock_execute.assert_called_once_with( mock.ANY, mock.ANY, self.clean_steps[1]) def test_do_next_clean_step_continue_from_last_cleaning(self): self._do_next_clean_step_continue_from_last_cleaning(states.CLEANWAIT) def test_do_next_clean_step_manual_continue_from_last_cleaning(self): self._do_next_clean_step_continue_from_last_cleaning(states.CLEANWAIT, manual=True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step', autospec=True) def _do_next_clean_step_last_step_noop(self, mock_execute, manual=False, retired=False): # Resume where last_step is the last cleaning step, should be noop 
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        info = {'clean_steps': self.clean_steps,
                'clean_step_index': len(self.clean_steps) - 1}
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info=info,
            clean_step=self.clean_steps[-1],
            retired=retired)

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_next_clean_step(task, None)

        node.refresh()

        # retired nodes move to manageable upon cleaning
        if retired:
            tgt_prov_state = states.MANAGEABLE

        # Cleaning should be complete without calling additional steps
        self.assertEqual(tgt_prov_state, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertNotIn('clean_step_index', node.driver_internal_info)
        self.assertIsNone(node.driver_internal_info['clean_steps'])
        self.assertFalse(mock_execute.called)

    def test__do_next_clean_step_automated_last_step_noop(self):
        self._do_next_clean_step_last_step_noop()

    def test__do_next_clean_step_manual_last_step_noop(self):
        self._do_next_clean_step_last_step_noop(manual=True)

    def test__do_next_clean_step_retired_last_step_change_tgt_state(self):
        self._do_next_clean_step_last_step_noop(retired=True)

    @mock.patch('ironic.drivers.modules.fake.FakePower.execute_clean_step',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step',
                autospec=True)
    def _do_next_clean_step_all(self, mock_deploy_execute,
                                mock_power_execute, manual=False):
        # Run all steps from start to finish (all synchronous)
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None},
            clean_step={})

        def fake_deploy(conductor_obj, task, step):
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['goober'] = 'test'
            task.node.driver_internal_info = driver_internal_info
            task.node.save()

        mock_deploy_execute.side_effect = fake_deploy
        mock_power_execute.return_value = None

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_next_clean_step(task, 0)

        node.refresh()

        # Cleaning should be complete
        self.assertEqual(tgt_prov_state, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertNotIn('clean_step_index', node.driver_internal_info)
        self.assertEqual('test', node.driver_internal_info['goober'])
        self.assertIsNone(node.driver_internal_info['clean_steps'])
        mock_power_execute.assert_called_once_with(mock.ANY, mock.ANY,
                                                   self.clean_steps[1])
        mock_deploy_execute.assert_has_calls(
            [mock.call(mock.ANY, mock.ANY, self.clean_steps[0]),
             mock.call(mock.ANY, mock.ANY, self.clean_steps[2])])

    def test_do_next_clean_step_automated_all(self):
        self._do_next_clean_step_all()

    def test_do_next_clean_step_manual_all(self):
        self._do_next_clean_step_all(manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step',
                autospec=True)
    @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
    def _do_next_clean_step_execute_fail(self, tear_mock, mock_execute,
                                         manual=False):
        # When a clean step fails, go to CLEANFAIL
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None},
            clean_step={})
        mock_execute.side_effect = Exception()

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_next_clean_step(task, 0)
            tear_mock.assert_called_once_with(task.driver.deploy, task)

        node.refresh()

        # Make sure we go to CLEANFAIL, clear clean_steps
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertNotIn('clean_step_index', node.driver_internal_info)
        self.assertIsNotNone(node.last_error)
        self.assertTrue(node.maintenance)
        mock_execute.assert_called_once_with(
            mock.ANY, mock.ANY, self.clean_steps[0])

    def test__do_next_clean_step_automated_execute_fail(self):
        self._do_next_clean_step_execute_fail()

    def test__do_next_clean_step_manual_execute_fail(self):
        self._do_next_clean_step_execute_fail(manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step',
                autospec=True)
    def test_do_next_clean_step_oob_reboot(self, mock_execute):
        # When a clean step fails, go to CLEANWAIT
        tgt_prov_state = states.MANAGEABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None,
                                  'cleaning_reboot': True},
            clean_step={})
        mock_execute.side_effect = exception.AgentConnectionFailed(
            reason='failed')

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_next_clean_step(task, 0)

        node.refresh()

        # Make sure we go to CLEANWAIT
        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual(self.clean_steps[0], node.clean_step)
        self.assertEqual(0, node.driver_internal_info['clean_step_index'])
        self.assertFalse(node.driver_internal_info['skip_current_clean_step'])
        mock_execute.assert_called_once_with(
            mock.ANY, mock.ANY, self.clean_steps[0])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step',
                autospec=True)
    def test_do_next_clean_step_oob_reboot_last_step(self, mock_execute):
        # Resume where last_step is the last cleaning step
        tgt_prov_state = states.MANAGEABLE
        info = {'clean_steps': self.clean_steps,
                'cleaning_reboot': True,
                'clean_step_index': len(self.clean_steps) - 1}
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info=info,
            clean_step=self.clean_steps[-1])

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_next_clean_step(task, None)

        node.refresh()

        # Cleaning should be complete without calling additional steps
        self.assertEqual(tgt_prov_state, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.clean_step)
        self.assertNotIn('clean_step_index', node.driver_internal_info)
        self.assertNotIn('cleaning_reboot', node.driver_internal_info)
        self.assertIsNone(node.driver_internal_info['clean_steps'])
        self.assertFalse(mock_execute.called)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step',
                autospec=True)
    @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
    def test_do_next_clean_step_oob_reboot_fail(self, tear_mock,
                                                mock_execute):
        # When a clean step fails with no reboot requested go to CLEANFAIL
        tgt_prov_state = states.MANAGEABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'clean_steps': self.clean_steps,
                                  'clean_step_index': None},
            clean_step={})
        mock_execute.side_effect = exception.AgentConnectionFailed(
            reason='failed')

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            cleaning.do_next_clean_step(task, 0)
            tear_mock.assert_called_once_with(task.driver.deploy, task)

        node.refresh()

        # Make sure we go to CLEANFAIL, clear clean_steps
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state,
node.target_provision_state) self.assertEqual({}, node.clean_step) self.assertNotIn('clean_step_index', node.driver_internal_info) self.assertNotIn('skip_current_clean_step', node.driver_internal_info) self.assertIsNotNone(node.last_error) self.assertTrue(node.maintenance) mock_execute.assert_called_once_with( mock.ANY, mock.ANY, self.clean_steps[0]) @mock.patch.object(cleaning, 'LOG', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakePower.execute_clean_step', autospec=True) @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True) def _do_next_clean_step_fail_in_tear_down_cleaning( self, tear_mock, power_exec_mock, deploy_exec_mock, log_mock, manual=True): tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.CLEANING, target_provision_state=tgt_prov_state, last_error=None, driver_internal_info={'clean_steps': self.clean_steps, 'clean_step_index': None}, clean_step={}) deploy_exec_mock.return_value = None power_exec_mock.return_value = None tear_mock.side_effect = Exception('boom') with task_manager.acquire( self.context, node.uuid, shared=False) as task: cleaning.do_next_clean_step(task, 0) node.refresh() # Make sure we go to CLEANFAIL, clear clean_steps self.assertEqual(states.CLEANFAIL, node.provision_state) self.assertEqual(tgt_prov_state, node.target_provision_state) self.assertEqual({}, node.clean_step) self.assertNotIn('clean_step_index', node.driver_internal_info) self.assertIsNotNone(node.last_error) self.assertEqual(1, tear_mock.call_count) self.assertTrue(node.maintenance) deploy_exec_calls = [ mock.call(mock.ANY, mock.ANY, self.clean_steps[0]), mock.call(mock.ANY, mock.ANY, self.clean_steps[2]), ] self.assertEqual(deploy_exec_calls, deploy_exec_mock.call_args_list) power_exec_calls = [ mock.call(mock.ANY, mock.ANY, 
self.clean_steps[1]), ] self.assertEqual(power_exec_calls, power_exec_mock.call_args_list) log_mock.exception.assert_called_once_with( 'Failed to tear down from cleaning for node {}, reason: boom' .format(node.uuid)) def test__do_next_clean_step_automated_fail_in_tear_down_cleaning(self): self._do_next_clean_step_fail_in_tear_down_cleaning() def test__do_next_clean_step_manual_fail_in_tear_down_cleaning(self): self._do_next_clean_step_fail_in_tear_down_cleaning(manual=True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step', autospec=True) def _do_next_clean_step_no_steps(self, mock_execute, manual=False, fast_track=False): if fast_track: self.config(fast_track=True, group='deploy') for info in ({'clean_steps': None, 'clean_step_index': None, 'agent_url': 'test-url'}, {'clean_steps': None, 'agent_url': 'test-url'}): # Resume where there are no steps, should be a noop tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE node = obj_utils.create_test_node( self.context, driver='fake-hardware', uuid=uuidutils.generate_uuid(), provision_state=states.CLEANING, target_provision_state=tgt_prov_state, last_error=None, driver_internal_info=info, clean_step={}) with task_manager.acquire( self.context, node.uuid, shared=False) as task: cleaning.do_next_clean_step(task, None) node.refresh() # Cleaning should be complete without calling additional steps self.assertEqual(tgt_prov_state, node.provision_state) self.assertEqual(states.NOSTATE, node.target_provision_state) self.assertEqual({}, node.clean_step) self.assertNotIn('clean_step_index', node.driver_internal_info) self.assertFalse(mock_execute.called) if fast_track: self.assertEqual('test-url', node.driver_internal_info.get('agent_url')) else: self.assertNotIn('agent_url', node.driver_internal_info) mock_execute.reset_mock() def test__do_next_clean_step_automated_no_steps(self): self._do_next_clean_step_no_steps() def test__do_next_clean_step_manual_no_steps(self): 
self._do_next_clean_step_no_steps(manual=True) def test__do_next_clean_step_fast_track(self): self._do_next_clean_step_no_steps(fast_track=True) @mock.patch('ironic.drivers.modules.fake.FakePower.execute_clean_step', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_clean_step', autospec=True) def _do_next_clean_step_bad_step_return_value( self, deploy_exec_mock, power_exec_mock, manual=False): # When a clean step fails, go to CLEANFAIL tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.CLEANING, target_provision_state=tgt_prov_state, last_error=None, driver_internal_info={'clean_steps': self.clean_steps, 'clean_step_index': None}, clean_step={}) deploy_exec_mock.return_value = "foo" with task_manager.acquire( self.context, node.uuid, shared=False) as task: cleaning.do_next_clean_step(task, 0) node.refresh() # Make sure we go to CLEANFAIL, clear clean_steps self.assertEqual(states.CLEANFAIL, node.provision_state) self.assertEqual(tgt_prov_state, node.target_provision_state) self.assertEqual({}, node.clean_step) self.assertNotIn('clean_step_index', node.driver_internal_info) self.assertIsNotNone(node.last_error) self.assertTrue(node.maintenance) deploy_exec_mock.assert_called_once_with(mock.ANY, mock.ANY, self.clean_steps[0]) # Make sure we don't execute any other step and return self.assertFalse(power_exec_mock.called) def test__do_next_clean_step_automated_bad_step_return_value(self): self._do_next_clean_step_bad_step_return_value() def test__do_next_clean_step_manual_bad_step_return_value(self): self._do_next_clean_step_bad_step_return_value(manual=True) class DoNodeCleanAbortTestCase(db_base.DbTestCase): @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True) def _test__do_node_clean_abort(self, step_name, tear_mock): node = obj_utils.create_test_node( self.context, driver='fake-hardware', 
            provision_state=states.CLEANFAIL,
            target_provision_state=states.AVAILABLE,
            clean_step={'step': 'foo', 'abortable': True},
            driver_internal_info={
                'clean_step_index': 2,
                'cleaning_reboot': True,
                'cleaning_polling': True,
                'skip_current_clean_step': True})

        with task_manager.acquire(self.context, node.uuid) as task:
            cleaning.do_node_clean_abort(task, step_name=step_name)
            self.assertIsNotNone(task.node.last_error)
            tear_mock.assert_called_once_with(task.driver.deploy, task)
            if step_name:
                self.assertIn(step_name, task.node.last_error)
            # assert node's clean_step and metadata was cleaned up
            self.assertEqual({}, task.node.clean_step)
            self.assertNotIn('clean_step_index',
                             task.node.driver_internal_info)
            self.assertNotIn('cleaning_reboot',
                             task.node.driver_internal_info)
            self.assertNotIn('cleaning_polling',
                             task.node.driver_internal_info)
            self.assertNotIn('skip_current_clean_step',
                             task.node.driver_internal_info)

    def test__do_node_clean_abort(self):
        self._test__do_node_clean_abort(None)

    def test__do_node_clean_abort_with_step_name(self):
        self._test__do_node_clean_abort('foo')

    @mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
    def test__do_node_clean_abort_tear_down_fail(self, tear_mock):
        tear_mock.side_effect = Exception('Surprise')

        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANFAIL,
            target_provision_state=states.AVAILABLE,
            clean_step={'step': 'foo', 'abortable': True})

        with task_manager.acquire(self.context, node.uuid) as task:
            cleaning.do_node_clean_abort(task)
            tear_mock.assert_called_once_with(task.driver.deploy, task)
            self.assertIsNotNone(task.node.last_error)
            self.assertIsNotNone(task.node.maintenance_reason)
            self.assertTrue(task.node.maintenance)
            self.assertEqual('clean failure', task.node.fault)
ironic-15.0.0/ironic/tests/unit/conductor/mgr_utils.py
# coding=utf-8

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test utils for Ironic Managers."""

from futurist import periodics
import mock
from oslo_utils import strutils
from oslo_utils import uuidutils

from ironic.common import exception
from ironic.common import states
from ironic.conductor import manager
from ironic import objects


class CommonMixIn(object):
    @staticmethod
    def _create_node(**kwargs):
        attrs = {'id': 1,
                 'uuid': uuidutils.generate_uuid(),
                 'power_state': states.POWER_OFF,
                 'target_power_state': None,
                 'maintenance': False,
                 'reservation': None}
        attrs.update(kwargs)
        node = mock.Mock(spec_set=objects.Node)
        for attr in attrs:
            setattr(node, attr, attrs[attr])
        return node

    def _create_task(self, node=None, node_attrs=None):
        if node_attrs is None:
            node_attrs = {}
        if node is None:
            node = self._create_node(**node_attrs)
        task = mock.Mock(spec_set=['node', 'release_resources',
                                   'spawn_after', 'process_event',
                                   'driver'])
        task.node = node
        return task

    def _get_nodeinfo_list_response(self, nodes=None):
        if nodes is None:
            nodes = [self.node]
        elif not isinstance(nodes, (list, tuple)):
            nodes = [nodes]
        return [tuple(getattr(n, c) for c in self.columns) for n in nodes]

    def _get_acquire_side_effect(self, task_infos):
        """Helper method to generate a task_manager.acquire() side effect.

        This accepts a list of information about task mocks to return.
        task_infos can be a single entity or a list.
Each task_info can be a single entity, the task to return, or it can be a tuple of (task, exception_to_raise_on_exit). 'task' can be an exception to raise on __enter__. Examples: _get_acquire_side_effect(self, task): Yield task _get_acquire_side_effect(self, [task, enter_exception(), (task2, exit_exception())]) Yield task on first call to acquire() raise enter_exception() in __enter__ on 2nd call to acquire() Yield task2 on 3rd call to acquire(), but raise exit_exception() on __exit__() """ tasks = [] exit_exceptions = [] if not isinstance(task_infos, list): task_infos = [task_infos] for task_info in task_infos: if isinstance(task_info, tuple): task, exc = task_info else: task = task_info exc = None tasks.append(task) exit_exceptions.append(exc) class FakeAcquire(object): def __init__(fa_self, context, node_id, *args, **kwargs): # We actually verify these arguments via # acquire_mock.call_args_list(). However, this stores the # node_id so we can assert we're returning the correct node # in __enter__(). fa_self.node_id = node_id def __enter__(fa_self): task = tasks.pop(0) if isinstance(task, Exception): raise task # NOTE(comstud): Not ideal to throw this into # a helper, however it's the cleanest way # to verify we're dealing with the correct task/node. 
                if strutils.is_int_like(fa_self.node_id):
                    self.assertEqual(fa_self.node_id, task.node.id)
                else:
                    self.assertEqual(fa_self.node_id, task.node.uuid)
                return task

            def __exit__(fa_self, exc_typ, exc_val, exc_tb):
                exc = exit_exceptions.pop(0)
                if exc_typ is None and exc is not None:
                    raise exc

        return FakeAcquire


class ServiceSetUpMixin(object):
    def setUp(self):
        super(ServiceSetUpMixin, self).setUp()
        self.hostname = 'test-host'
        self.config(node_locked_retry_attempts=1, group='conductor')
        self.config(node_locked_retry_interval=0, group='conductor')
        self.service = manager.ConductorManager(self.hostname, 'test-topic')

    def _stop_service(self):
        try:
            objects.Conductor.get_by_hostname(self.context, self.hostname)
        except exception.ConductorNotFound:
            return
        self.service.del_host()

    def _start_service(self, start_periodic_tasks=False):
        if start_periodic_tasks:
            self.service.init_host()
        else:
            with mock.patch.object(periodics, 'PeriodicWorker',
                                   autospec=True):
                self.service.init_host()
        self.addCleanup(self._stop_service)


def mock_record_keepalive(func_or_class):
    return mock.patch.object(
        manager.ConductorManager,
        '_conductor_service_record_keepalive',
        lambda _: None)(func_or_class)
ironic-15.0.0/ironic/tests/unit/conductor/test_steps.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from oslo_config import cfg from oslo_utils import uuidutils from ironic.common import exception from ironic.common import states from ironic.conductor import steps as conductor_steps from ironic.conductor import task_manager from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF class NodeDeployStepsTestCase(db_base.DbTestCase): def setUp(self): super(NodeDeployStepsTestCase, self).setUp() self.deploy_start = { 'step': 'deploy_start', 'priority': 50, 'interface': 'deploy'} self.power_one = { 'step': 'power_one', 'priority': 40, 'interface': 'power'} self.deploy_middle = { 'step': 'deploy_middle', 'priority': 40, 'interface': 'deploy'} self.deploy_end = { 'step': 'deploy_end', 'priority': 20, 'interface': 'deploy'} self.power_disable = { 'step': 'power_disable', 'priority': 0, 'interface': 'power'} self.deploy_core = { 'step': 'deploy', 'priority': 100, 'interface': 'deploy'} # enabled steps self.deploy_steps = [self.deploy_start, self.power_one, self.deploy_middle, self.deploy_end] # Deploy step with argsinfo. self.deploy_raid = { 'step': 'build_raid', 'priority': 0, 'interface': 'deploy', 'argsinfo': {'arg1': {'description': 'desc1', 'required': True}, 'arg2': {'description': 'desc2'}}} self.node = obj_utils.create_test_node( self.context, driver='fake-hardware') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_deploy_steps', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakePower.get_deploy_steps', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakeManagement.get_deploy_steps', autospec=True) def test__get_deployment_steps(self, mock_mgt_steps, mock_power_steps, mock_deploy_steps): # Test getting deploy steps, with one driver returning None, two # conflicting priorities, and asserting they are ordered properly. 
mock_power_steps.return_value = [self.power_disable, self.power_one] mock_deploy_steps.return_value = [ self.deploy_start, self.deploy_middle, self.deploy_end] expected = self.deploy_steps + [self.power_disable] with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: steps = conductor_steps._get_deployment_steps(task, enabled=False) self.assertEqual(expected, steps) mock_mgt_steps.assert_called_once_with(mock.ANY, task) mock_power_steps.assert_called_once_with(mock.ANY, task) mock_deploy_steps.assert_called_once_with(mock.ANY, task) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_deploy_steps', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakePower.get_deploy_steps', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakeManagement.get_deploy_steps', autospec=True) def test__get_deploy_steps_unsorted(self, mock_mgt_steps, mock_power_steps, mock_deploy_steps): mock_deploy_steps.return_value = [self.deploy_end, self.deploy_start, self.deploy_middle] with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: steps = conductor_steps._get_deployment_steps(task, enabled=False, sort=False) self.assertEqual(mock_deploy_steps.return_value, steps) mock_mgt_steps.assert_called_once_with(mock.ANY, task) mock_power_steps.assert_called_once_with(mock.ANY, task) mock_deploy_steps.assert_called_once_with(mock.ANY, task) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_deploy_steps', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakePower.get_deploy_steps', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakeManagement.get_deploy_steps', autospec=True) def test__get_deployment_steps_only_enabled( self, mock_mgt_steps, mock_power_steps, mock_deploy_steps): # Test getting only deploy steps, with one driver returning None, two # conflicting priorities, and asserting they are ordered properly. # Should discard zero-priority deploy step. 
        mock_power_steps.return_value = [self.power_one, self.power_disable]
        mock_deploy_steps.return_value = [self.deploy_end, self.deploy_middle,
                                          self.deploy_start]
        with task_manager.acquire(
                self.context, self.node.uuid, shared=True) as task:
            steps = conductor_steps._get_deployment_steps(task, enabled=True)
            self.assertEqual(self.deploy_steps, steps)
            mock_mgt_steps.assert_called_once_with(mock.ANY, task)
            mock_power_steps.assert_called_once_with(mock.ANY, task)
            mock_deploy_steps.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(objects.DeployTemplate, 'list_by_names')
    def test__get_deployment_templates_no_traits(self, mock_list):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            templates = conductor_steps._get_deployment_templates(task)
            self.assertEqual([], templates)
            self.assertFalse(mock_list.called)

    @mock.patch.object(objects.DeployTemplate, 'list_by_names')
    def test__get_deployment_templates(self, mock_list):
        traits = ['CUSTOM_DT1', 'CUSTOM_DT2']
        node = obj_utils.create_test_node(
            self.context, uuid=uuidutils.generate_uuid(),
            instance_info={'traits': traits})
        template1 = obj_utils.get_test_deploy_template(self.context)
        template2 = obj_utils.get_test_deploy_template(
            self.context, name='CUSTOM_DT2', uuid=uuidutils.generate_uuid(),
            steps=[{'interface': 'bios', 'step': 'apply_configuration',
                    'args': {}, 'priority': 1}])
        mock_list.return_value = [template1, template2]
        expected = [template1, template2]
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            templates = conductor_steps._get_deployment_templates(task)
            self.assertEqual(expected, templates)
            mock_list.assert_called_once_with(task.context, traits)

    def test__get_steps_from_deployment_templates(self):
        template1 = obj_utils.get_test_deploy_template(self.context)
        template2 = obj_utils.get_test_deploy_template(
            self.context, name='CUSTOM_DT2', uuid=uuidutils.generate_uuid(),
            steps=[{'interface': 'bios', 'step': 'apply_configuration',
                    'args': {}, 'priority': 1}])
        step1 = template1.steps[0]
        step2 = template2.steps[0]
        expected = [
            {
                'interface': step1['interface'],
                'step': step1['step'],
                'args': step1['args'],
                'priority': step1['priority'],
            },
            {
                'interface': step2['interface'],
                'step': step2['step'],
                'args': step2['args'],
                'priority': step2['priority'],
            }
        ]
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            steps = conductor_steps._get_steps_from_deployment_templates(
                task, [template1, template2])
            self.assertEqual(expected, steps)

    @mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
                       autospec=True)
    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def _test__get_all_deployment_steps(self, user_steps, driver_steps,
                                        expected_steps, mock_steps,
                                        mock_validated):
        mock_validated.return_value = user_steps
        mock_steps.return_value = driver_steps
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            steps = conductor_steps._get_all_deployment_steps(task)
            self.assertEqual(expected_steps, steps)
            mock_validated.assert_called_once_with(task, skip_missing=False)
            mock_steps.assert_called_once_with(task, enabled=True, sort=False)

    def test__get_all_deployment_steps_no_steps(self):
        # Nothing in -> nothing out.
        user_steps = []
        driver_steps = []
        expected_steps = []
        self._test__get_all_deployment_steps(user_steps, driver_steps,
                                             expected_steps)

    def test__get_all_deployment_steps_no_user_steps(self):
        # Only driver steps in -> only driver steps out.
        user_steps = []
        driver_steps = self.deploy_steps
        expected_steps = self.deploy_steps
        self._test__get_all_deployment_steps(user_steps, driver_steps,
                                             expected_steps)

    def test__get_all_deployment_steps_no_driver_steps(self):
        # Only user steps in -> only user steps out.
        user_steps = self.deploy_steps
        driver_steps = []
        expected_steps = self.deploy_steps
        self._test__get_all_deployment_steps(user_steps, driver_steps,
                                             expected_steps)

    def test__get_all_deployment_steps_user_and_driver_steps(self):
        # Driver and user steps in -> driver and user steps out.
        user_steps = self.deploy_steps[:2]
        driver_steps = self.deploy_steps[2:]
        expected_steps = self.deploy_steps
        self._test__get_all_deployment_steps(user_steps, driver_steps,
                                             expected_steps)

    @mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
                       autospec=True)
    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__get_all_deployment_steps_skip_missing(self, mock_steps,
                                                    mock_validated):
        user_steps = self.deploy_steps[:2]
        driver_steps = self.deploy_steps[2:]
        expected_steps = self.deploy_steps
        mock_validated.return_value = user_steps
        mock_steps.return_value = driver_steps
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            steps = conductor_steps._get_all_deployment_steps(
                task, skip_missing=True)
            self.assertEqual(expected_steps, steps)
            mock_validated.assert_called_once_with(task, skip_missing=True)
            mock_steps.assert_called_once_with(task, enabled=True, sort=False)

    def test__get_all_deployment_steps_disable_core_steps(self):
        # User steps can disable core driver steps.
        user_steps = [self.deploy_core.copy()]
        user_steps[0].update({'priority': 0})
        driver_steps = [self.deploy_core]
        expected_steps = []
        self._test__get_all_deployment_steps(user_steps, driver_steps,
                                             expected_steps)

    def test__get_all_deployment_steps_override_driver_steps(self):
        # User steps override non-core driver steps.
        user_steps = [step.copy() for step in self.deploy_steps[:2]]
        user_steps[0].update({'priority': 200})
        user_steps[1].update({'priority': 100})
        driver_steps = self.deploy_steps
        expected_steps = user_steps + self.deploy_steps[2:]
        self._test__get_all_deployment_steps(user_steps, driver_steps,
                                             expected_steps)

    def test__get_all_deployment_steps_duplicate_user_steps(self):
        # Duplicate user steps override non-core driver steps.
        # NOTE(mgoddard): This case is currently prevented by the API and
        # conductor - the interface/step must be unique across all enabled
        # steps. This test ensures that we can support this case, in case we
        # choose to allow it in future.
        user_steps = [self.deploy_start.copy(), self.deploy_start.copy()]
        user_steps[0].update({'priority': 200})
        user_steps[1].update({'priority': 100})
        driver_steps = self.deploy_steps
        # Each user invocation of the deploy_start step should be included,
        # but not the default deploy_start from the driver.
        expected_steps = user_steps + self.deploy_steps[1:]
        self._test__get_all_deployment_steps(user_steps, driver_steps,
                                             expected_steps)

    @mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
                       autospec=True)
    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__get_all_deployment_steps_error(self, mock_steps,
                                             mock_validated):
        mock_validated.side_effect = exception.InvalidParameterValue('foo')
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              conductor_steps._get_all_deployment_steps, task)
            mock_validated.assert_called_once_with(task, skip_missing=False)
            self.assertFalse(mock_steps.called)

    @mock.patch.object(conductor_steps, '_get_all_deployment_steps',
                       autospec=True)
    def test_set_node_deployment_steps(self, mock_steps):
        mock_steps.return_value = self.deploy_steps
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            conductor_steps.set_node_deployment_steps(task)
            self.node.refresh()
            self.assertEqual(self.deploy_steps,
                             self.node.driver_internal_info['deploy_steps'])
            self.assertEqual({}, self.node.deploy_step)
            self.assertIsNone(
                self.node.driver_internal_info['deploy_step_index'])
            mock_steps.assert_called_once_with(task, skip_missing=False)

    @mock.patch.object(conductor_steps, '_get_all_deployment_steps',
                       autospec=True)
    def test_set_node_deployment_steps_skip_missing(self, mock_steps):
        mock_steps.return_value = self.deploy_steps
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            conductor_steps.set_node_deployment_steps(task, skip_missing=True)
            self.node.refresh()
            self.assertEqual(self.deploy_steps,
                             self.node.driver_internal_info['deploy_steps'])
            self.assertEqual({}, self.node.deploy_step)
            self.assertIsNone(
                self.node.driver_internal_info['deploy_step_index'])
            mock_steps.assert_called_once_with(task, skip_missing=True)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps(self, mock_steps):
        mock_steps.return_value = self.deploy_steps
        user_steps = [{'step': 'deploy_start', 'interface': 'deploy',
                       'priority': 100},
                      {'step': 'power_one', 'interface': 'power',
                       'priority': 200}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = conductor_steps._validate_user_deploy_steps(task,
                                                                 user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)
        self.assertEqual(user_steps, result)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_no_steps(self, mock_steps):
        mock_steps.return_value = self.deploy_steps
        with task_manager.acquire(self.context, self.node.uuid) as task:
            conductor_steps._validate_user_deploy_steps(task, [])
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_get_steps_exception(self,
                                                             mock_steps):
        mock_steps.side_effect = exception.InstanceDeployFailure('bad')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InstanceDeployFailure,
                              conductor_steps._validate_user_deploy_steps,
                              task, [])
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_not_supported(self, mock_steps):
        mock_steps.return_value = self.deploy_steps
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'priority': 200},
                      {'step': 'bad_step', 'interface': 'deploy',
                       'priority': 100}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "does not support.*bad_step",
                                   conductor_steps._validate_user_deploy_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_skip_missing(self, mock_steps):
        mock_steps.return_value = self.deploy_steps
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'priority': 200},
                      {'step': 'bad_step', 'interface': 'deploy',
                       'priority': 100}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = conductor_steps._validate_user_deploy_steps(
                task, user_steps, skip_missing=True)
        self.assertEqual(user_steps[:1], result)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_invalid_arg(self, mock_steps):
        mock_steps.return_value = self.deploy_steps
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'args': {'arg1': 'val1', 'arg2': 'val2'},
                       'priority': 200}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "power_one.*unexpected.*arg1",
                                   conductor_steps._validate_user_deploy_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_missing_required_arg(self,
                                                              mock_steps):
        mock_steps.return_value = [self.power_one, self.deploy_raid]
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'priority': 200},
                      {'step': 'build_raid', 'interface': 'deploy',
                       'priority': 100}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "build_raid.*missing.*arg1",
                                   conductor_steps._validate_user_deploy_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_disable_non_core(self, mock_steps):
        # Required arguments don't apply to disabled steps.
        mock_steps.return_value = [self.power_one, self.deploy_raid]
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'priority': 200},
                      {'step': 'build_raid', 'interface': 'deploy',
                       'priority': 0}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = conductor_steps._validate_user_deploy_steps(task,
                                                                 user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)
        self.assertEqual(user_steps, result)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_disable_core(self, mock_steps):
        mock_steps.return_value = [self.power_one, self.deploy_core]
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'priority': 200},
                      {'step': 'deploy', 'interface': 'deploy',
                       'priority': 0}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = conductor_steps._validate_user_deploy_steps(task,
                                                                 user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)
        self.assertEqual(user_steps, result)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_override_core(self, mock_steps):
        mock_steps.return_value = [self.power_one, self.deploy_core]
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'priority': 200},
                      {'step': 'deploy', 'interface': 'deploy',
                       'priority': 200}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "deploy.*is a core step",
                                   conductor_steps._validate_user_deploy_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_deployment_steps',
                       autospec=True)
    def test__validate_user_deploy_steps_duplicates(self, mock_steps):
        mock_steps.return_value = [self.power_one, self.deploy_core]
        user_steps = [{'step': 'power_one', 'interface': 'power',
                       'priority': 200},
                      {'step': 'power_one', 'interface': 'power',
                       'priority': 100}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "Duplicate deploy steps for "
                                   "power.power_one",
                                   conductor_steps._validate_user_deploy_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)


class NodeCleaningStepsTestCase(db_base.DbTestCase):
    def setUp(self):
        super(NodeCleaningStepsTestCase, self).setUp()

        self.power_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'power'}
        self.deploy_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'deploy'}
        self.deploy_erase = {
            'step': 'erase_disks', 'priority': 20, 'interface': 'deploy',
            'abortable': True}
        # Automated cleaning should be executed in this order
        self.clean_steps = [self.deploy_erase, self.power_update,
                            self.deploy_update]
        # Manual clean step
        self.deploy_raid = {
            'step': 'build_raid', 'priority': 0, 'interface': 'deploy',
            'argsinfo': {'arg1': {'description': 'desc1', 'required': True},
                         'arg2': {'description': 'desc2'}}}

    @mock.patch('ironic.drivers.modules.fake.FakeBIOS.get_clean_steps',
                lambda self, task: [])
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps')
    @mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps')
    def test__get_cleaning_steps(self, mock_power_steps, mock_deploy_steps):
        # Test getting cleaning steps, with one driver returning None, two
        # conflicting priorities, and asserting they are ordered properly.
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE)
        mock_power_steps.return_value = [self.power_update]
        mock_deploy_steps.return_value = [self.deploy_erase,
                                          self.deploy_update]
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            steps = conductor_steps._get_cleaning_steps(task, enabled=False)
        self.assertEqual(self.clean_steps, steps)

    @mock.patch('ironic.drivers.modules.fake.FakeBIOS.get_clean_steps',
                lambda self, task: [])
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps')
    @mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps')
    def test__get_cleaning_steps_unsorted(self, mock_power_steps,
                                          mock_deploy_steps):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.MANAGEABLE)
        mock_deploy_steps.return_value = [self.deploy_raid,
                                          self.deploy_update,
                                          self.deploy_erase]
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            steps = conductor_steps._get_cleaning_steps(task, enabled=False,
                                                        sort=False)
        self.assertEqual(mock_deploy_steps.return_value, steps)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps')
    @mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps')
    def test__get_cleaning_steps_only_enabled(self, mock_power_steps,
                                              mock_deploy_steps):
        # Test getting only cleaning steps, with one driver returning None,
        # two conflicting priorities, and asserting they are ordered properly.
        # Should discard zero-priority (manual) clean step
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE)
        mock_power_steps.return_value = [self.power_update]
        mock_deploy_steps.return_value = [self.deploy_erase,
                                          self.deploy_update,
                                          self.deploy_raid]
        with task_manager.acquire(
                self.context, node.uuid, shared=True) as task:
            steps = conductor_steps._get_cleaning_steps(task, enabled=True)
        self.assertEqual(self.clean_steps, steps)

    @mock.patch.object(conductor_steps, '_validate_user_clean_steps')
    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test_set_node_cleaning_steps_automated(self, mock_steps,
                                               mock_validate_user_steps):
        mock_steps.return_value = self.clean_steps
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.AVAILABLE,
            last_error=None, clean_step=None)
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            conductor_steps.set_node_cleaning_steps(task)
            node.refresh()
            self.assertEqual(self.clean_steps,
                             node.driver_internal_info['clean_steps'])
            self.assertEqual({}, node.clean_step)
            mock_steps.assert_called_once_with(task, enabled=True)
            self.assertFalse(mock_validate_user_steps.called)

    @mock.patch.object(conductor_steps, '_validate_user_clean_steps')
    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test_set_node_cleaning_steps_manual(self, mock_steps,
                                            mock_validate_user_steps):
        clean_steps = [self.deploy_raid]
        mock_steps.return_value = self.clean_steps
        mock_validate_user_steps.return_value = clean_steps
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANING,
            target_provision_state=states.MANAGEABLE,
            last_error=None, clean_step=None,
            driver_internal_info={'clean_steps': clean_steps})
        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            conductor_steps.set_node_cleaning_steps(task)
            node.refresh()
            self.assertEqual(clean_steps,
                             node.driver_internal_info['clean_steps'])
            self.assertEqual({}, node.clean_step)
            self.assertFalse(mock_steps.called)
            mock_validate_user_steps.assert_called_once_with(task,
                                                             clean_steps)

    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test__validate_user_clean_steps(self, mock_steps):
        node = obj_utils.create_test_node(self.context)
        mock_steps.return_value = self.clean_steps

        user_steps = [{'step': 'update_firmware', 'interface': 'power'},
                      {'step': 'erase_disks', 'interface': 'deploy'}]
        with task_manager.acquire(self.context, node.uuid) as task:
            result = conductor_steps._validate_user_clean_steps(task,
                                                                user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

        expected = [{'step': 'update_firmware', 'interface': 'power',
                     'priority': 10, 'abortable': False},
                    {'step': 'erase_disks', 'interface': 'deploy',
                     'priority': 20, 'abortable': True}]
        self.assertEqual(expected, result)

    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test__validate_user_clean_steps_no_steps(self, mock_steps):
        node = obj_utils.create_test_node(self.context)
        mock_steps.return_value = self.clean_steps

        with task_manager.acquire(self.context, node.uuid) as task:
            conductor_steps._validate_user_clean_steps(task, [])
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test__validate_user_clean_steps_get_steps_exception(self, mock_steps):
        node = obj_utils.create_test_node(self.context)
        mock_steps.side_effect = exception.NodeCleaningFailure('bad')

        with task_manager.acquire(self.context, node.uuid) as task:
            self.assertRaises(exception.NodeCleaningFailure,
                              conductor_steps._validate_user_clean_steps,
                              task, [])
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test__validate_user_clean_steps_not_supported(self, mock_steps):
        node = obj_utils.create_test_node(self.context)
        mock_steps.return_value = [self.power_update, self.deploy_raid]
        user_steps = [{'step': 'update_firmware', 'interface': 'power'},
                      {'step': 'bad_step', 'interface': 'deploy'}]

        with task_manager.acquire(self.context, node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "does not support.*bad_step",
                                   conductor_steps._validate_user_clean_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test__validate_user_clean_steps_invalid_arg(self, mock_steps):
        node = obj_utils.create_test_node(self.context)
        mock_steps.return_value = self.clean_steps
        user_steps = [{'step': 'update_firmware', 'interface': 'power',
                       'args': {'arg1': 'val1', 'arg2': 'val2'}},
                      {'step': 'erase_disks', 'interface': 'deploy'}]

        with task_manager.acquire(self.context, node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "update_firmware.*unexpected.*arg1",
                                   conductor_steps._validate_user_clean_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)

    @mock.patch.object(conductor_steps, '_get_cleaning_steps')
    def test__validate_user_clean_steps_missing_required_arg(self,
                                                             mock_steps):
        node = obj_utils.create_test_node(self.context)
        mock_steps.return_value = [self.power_update, self.deploy_raid]
        user_steps = [{'step': 'update_firmware', 'interface': 'power'},
                      {'step': 'build_raid', 'interface': 'deploy'}]

        with task_manager.acquire(self.context, node.uuid) as task:
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "build_raid.*missing.*arg1",
                                   conductor_steps._validate_user_clean_steps,
                                   task, user_steps)
            mock_steps.assert_called_once_with(task, enabled=False,
                                               sort=False)


@mock.patch.object(conductor_steps, '_get_deployment_templates',
                   autospec=True)
@mock.patch.object(conductor_steps, '_get_steps_from_deployment_templates',
                   autospec=True)
@mock.patch.object(conductor_steps, '_validate_user_deploy_steps',
                   autospec=True)
class GetValidatedStepsFromTemplatesTestCase(db_base.DbTestCase):
    def setUp(self):
        super(GetValidatedStepsFromTemplatesTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake-hardware')
        self.template = obj_utils.get_test_deploy_template(self.context)

    def test_ok(self, mock_validate, mock_steps, mock_templates):
        mock_templates.return_value = [self.template]
        steps = [db_utils.get_test_deploy_template_step()]
        mock_steps.return_value = steps
        mock_validate.return_value = steps
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            result = conductor_steps._get_validated_steps_from_templates(task)
            self.assertEqual(steps, result)
            mock_templates.assert_called_once_with(task)
            mock_steps.assert_called_once_with(task, [self.template])
            mock_validate.assert_called_once_with(task, steps, mock.ANY,
                                                  skip_missing=False)

    def test_skip_missing(self, mock_validate, mock_steps, mock_templates):
        mock_templates.return_value = [self.template]
        steps = [db_utils.get_test_deploy_template_step()]
        mock_steps.return_value = steps
        mock_validate.return_value = steps
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            result = conductor_steps._get_validated_steps_from_templates(
                task, skip_missing=True)
            self.assertEqual(steps, result)
            mock_templates.assert_called_once_with(task)
            mock_steps.assert_called_once_with(task, [self.template])
            mock_validate.assert_called_once_with(task, steps, mock.ANY,
                                                  skip_missing=True)

    def test_invalid_parameter_value(self, mock_validate, mock_steps,
                                     mock_templates):
        mock_templates.return_value = [self.template]
        mock_validate.side_effect = exception.InvalidParameterValue('fake')
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.assertRaises(
                exception.InvalidParameterValue,
                conductor_steps._get_validated_steps_from_templates, task)

    def test_instance_deploy_failure(self, mock_validate, mock_steps,
                                     mock_templates):
        mock_templates.return_value = [self.template]
        mock_validate.side_effect = exception.InstanceDeployFailure('foo')
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.assertRaises(
                exception.InstanceDeployFailure,
                conductor_steps._get_validated_steps_from_templates, task)


@mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
                   autospec=True)
class ValidateDeployTemplatesTestCase(db_base.DbTestCase):
    def setUp(self):
        super(ValidateDeployTemplatesTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake-hardware')

    def test_ok(self, mock_validated):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            result = conductor_steps.validate_deploy_templates(task)
            self.assertIsNone(result)
            mock_validated.assert_called_once_with(task, skip_missing=False)

    def test_skip_missing(self, mock_validated):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            result = conductor_steps.validate_deploy_templates(
                task, skip_missing=True)
            self.assertIsNone(result)
            mock_validated.assert_called_once_with(task, skip_missing=True)

    def test_error(self, mock_validated):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            mock_validated.side_effect = exception.InvalidParameterValue('foo')
            self.assertRaises(exception.InvalidParameterValue,
                              conductor_steps.validate_deploy_templates,
                              task)
            mock_validated.assert_called_once_with(task, skip_missing=False)


ironic-15.0.0/ironic/tests/unit/conductor/test_allocations.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Unit tests for functionality related to allocations."""

import mock
import oslo_messaging as messaging
from oslo_utils import uuidutils

from ironic.common import exception
from ironic.conductor import allocations
from ironic.conductor import manager
from ironic.conductor import task_manager
from ironic import objects
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


@mgr_utils.mock_record_keepalive
class AllocationTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):
    @mock.patch.object(manager.ConductorManager, '_spawn_worker',
                       autospec=True)
    def test_create_allocation(self, mock_spawn):
        # In this test we mock spawn_worker, so that the actual processing
        # does not happen, and the allocation stays in the "allocating" state.
        allocation = obj_utils.get_test_allocation(self.context,
                                                   extra={'test': 'one'})
        self._start_service()
        mock_spawn.assert_any_call(self.service,
                                   self.service._resume_allocations,
                                   mock.ANY)
        mock_spawn.reset_mock()

        res = self.service.create_allocation(self.context, allocation)

        self.assertEqual({'test': 'one'}, res['extra'])
        self.assertEqual('allocating', res['state'])
        self.assertIsNotNone(res['uuid'])
        self.assertEqual(self.service.conductor.id, res['conductor_affinity'])
        res = objects.Allocation.get_by_uuid(self.context,
                                             allocation['uuid'])
        self.assertEqual({'test': 'one'}, res['extra'])
        self.assertEqual('allocating', res['state'])
        self.assertIsNotNone(res['uuid'])
        self.assertEqual(self.service.conductor.id, res['conductor_affinity'])

        mock_spawn.assert_called_once_with(self.service,
                                           allocations.do_allocate,
                                           self.context, mock.ANY)

    @mock.patch.object(manager.ConductorManager, '_spawn_worker', mock.Mock())
    @mock.patch.object(allocations, 'backfill_allocation', autospec=True)
    def test_create_allocation_with_node_id(self, mock_backfill):
        node = obj_utils.create_test_node(self.context)
        allocation = obj_utils.get_test_allocation(self.context,
                                                   node_id=node.id)
        self._start_service()

        res = self.service.create_allocation(self.context, allocation)
        mock_backfill.assert_called_once_with(self.context,
                                              allocation,
                                              node.id)

        self.assertEqual('allocating', res['state'])
        self.assertIsNotNone(res['uuid'])
        self.assertEqual(self.service.conductor.id, res['conductor_affinity'])
        # create_allocation purges node_id, and since we stub out
        # backfill_allocation, it does not get populated.
        self.assertIsNone(res['node_id'])

        res = objects.Allocation.get_by_uuid(self.context,
                                             allocation['uuid'])
        self.assertEqual('allocating', res['state'])
        self.assertIsNotNone(res['uuid'])
        self.assertEqual(self.service.conductor.id, res['conductor_affinity'])

    def test_destroy_allocation_without_node(self):
        allocation = obj_utils.create_test_allocation(self.context)
        self.service.destroy_allocation(self.context, allocation)
        self.assertRaises(exception.AllocationNotFound,
                          objects.Allocation.get_by_uuid,
                          self.context, allocation['uuid'])

    def test_destroy_allocation_with_node(self):
        node = obj_utils.create_test_node(self.context)
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=node['id'])
        node.instance_uuid = allocation['uuid']
        node.allocation_id = allocation['id']
        node.save()
        self.service.destroy_allocation(self.context, allocation)
        self.assertRaises(exception.AllocationNotFound,
                          objects.Allocation.get_by_uuid,
                          self.context, allocation['uuid'])
        node = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertIsNone(node['instance_uuid'])
        self.assertIsNone(node['allocation_id'])

    def test_destroy_allocation_with_active_node(self):
        node = obj_utils.create_test_node(self.context,
                                          provision_state='active')
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=node['id'])
        node.instance_uuid = allocation['uuid']
        node.allocation_id = allocation['id']
        node.save()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_allocation,
                                self.context, allocation)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidState, exc.exc_info[0])
        objects.Allocation.get_by_uuid(self.context, allocation['uuid'])
        node = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertEqual(allocation['uuid'], node['instance_uuid'])
        self.assertEqual(allocation['id'], node['allocation_id'])

    def test_destroy_allocation_with_transient_node(self):
        node = obj_utils.create_test_node(self.context,
                                          target_provision_state='active',
                                          provision_state='deploying')
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=node['id'])
        node.instance_uuid = allocation['uuid']
        node.allocation_id = allocation['id']
        node.save()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_allocation,
                                self.context, allocation)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidState, exc.exc_info[0])
        objects.Allocation.get_by_uuid(self.context, allocation['uuid'])
        node = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertEqual(allocation['uuid'], node['instance_uuid'])
        self.assertEqual(allocation['id'], node['allocation_id'])

    def test_destroy_allocation_with_node_in_maintenance(self):
        node = obj_utils.create_test_node(self.context,
                                          provision_state='active',
                                          maintenance=True)
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=node['id'])
        node.instance_uuid = allocation['uuid']
        node.allocation_id = allocation['id']
        node.save()
        self.service.destroy_allocation(self.context, allocation)
        self.assertRaises(exception.AllocationNotFound,
                          objects.Allocation.get_by_uuid,
                          self.context, allocation['uuid'])
        node = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertIsNone(node['instance_uuid'])
        self.assertIsNone(node['allocation_id'])

    @mock.patch.object(allocations, 'do_allocate', autospec=True)
    def test_resume_allocations(self, mock_allocate):
        another_conductor = obj_utils.create_test_conductor(
            self.context, id=42, hostname='another-host')

        self._start_service()

        obj_utils.create_test_allocation(
            self.context, state='active',
            conductor_affinity=self.service.conductor.id)
        obj_utils.create_test_allocation(
            self.context, state='allocating',
            conductor_affinity=another_conductor.id)
        allocation = obj_utils.create_test_allocation(
            self.context, state='allocating',
            conductor_affinity=self.service.conductor.id)

        self.service._resume_allocations(self.context)
        mock_allocate.assert_called_once_with(self.context, mock.ANY)
        actual = mock_allocate.call_args[0][1]
        self.assertEqual(allocation.uuid, actual.uuid)
        self.assertIsInstance(allocation, objects.Allocation)

    @mock.patch.object(allocations, 'do_allocate', autospec=True)
    def test_check_orphaned_allocations(self, mock_allocate):
        alive_conductor = obj_utils.create_test_conductor(
            self.context, id=42, hostname='alive')
        dead_conductor = obj_utils.create_test_conductor(
            self.context, id=43, hostname='dead')

        obj_utils.create_test_allocation(
            self.context, state='allocating',
            conductor_affinity=alive_conductor.id)
        allocation = obj_utils.create_test_allocation(
            self.context, state='allocating',
            conductor_affinity=dead_conductor.id)

        self._start_service()
        with mock.patch.object(self.dbapi, 'get_offline_conductors',
                               autospec=True) as mock_conds:
            mock_conds.return_value = [dead_conductor.id]
            self.service._check_orphan_allocations(self.context)

        mock_allocate.assert_called_once_with(self.context, mock.ANY)
        actual = mock_allocate.call_args[0][1]
        self.assertEqual(allocation.uuid, actual.uuid)
        self.assertIsInstance(allocation, objects.Allocation)

        allocation = self.dbapi.get_allocation_by_id(allocation.id)
        self.assertEqual(self.service.conductor.id,
                         allocation.conductor_affinity)


@mock.patch('time.sleep', lambda _: None)
class DoAllocateTestCase(db_base.DbTestCase):
    def test_success(self):
        node = obj_utils.create_test_node(self.context,
                                          power_state='power on',
                                          resource_class='x-large',
                                          provision_state='available')
        allocation = obj_utils.create_test_allocation(
            self.context, resource_class='x-large')

        allocations.do_allocate(self.context, allocation)

        allocation = objects.Allocation.get_by_uuid(self.context,
                                                    allocation['uuid'])
        self.assertIsNone(allocation['last_error'])
        self.assertEqual('active', allocation['state'])

        node = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertEqual(allocation['uuid'], node['instance_uuid'])
        self.assertEqual(allocation['id'], node['allocation_id'])
def test_with_traits(self): obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), power_state='power on', resource_class='x-large', provision_state='available') node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), power_state='power on', resource_class='x-large', provision_state='available') db_utils.create_test_node_traits(['tr1', 'tr2'], node_id=node.id) allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large', traits=['tr2']) allocations.do_allocate(self.context, allocation) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertIsNone(allocation['last_error']) self.assertEqual('active', allocation['state']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual(allocation['uuid'], node['instance_uuid']) self.assertEqual(allocation['id'], node['allocation_id']) self.assertEqual(allocation['traits'], ['tr2']) def test_with_candidates(self): obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), power_state='power on', resource_class='x-large', provision_state='available') node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), power_state='power on', resource_class='x-large', provision_state='available') allocation = obj_utils.create_test_allocation( self.context, resource_class='x-large', candidate_nodes=[node['uuid']]) allocations.do_allocate(self.context, allocation) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertIsNone(allocation['last_error']) self.assertEqual('active', allocation['state']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual(allocation['uuid'], node['instance_uuid']) self.assertEqual(allocation['id'], node['allocation_id']) self.assertEqual([node['uuid']], allocation['candidate_nodes']) @mock.patch.object(task_manager, 'acquire', autospec=True, side_effect=task_manager.acquire) def 
test_nodes_filtered_out(self, mock_acquire): # Resource class does not match obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), resource_class='x-small', power_state='power off', provision_state='available') # Provision state is not available obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), resource_class='x-large', power_state='power off', provision_state='manageable') # Power state is undefined obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), resource_class='x-large', power_state=None, provision_state='available') # Maintenance mode is on obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), maintenance=True, resource_class='x-large', power_state='power off', provision_state='available') # Already associated obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), instance_uuid=uuidutils.generate_uuid(), resource_class='x-large', power_state='power off', provision_state='available') allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large') allocations.do_allocate(self.context, allocation) self.assertIn('no available nodes', allocation['last_error']) self.assertIn('x-large', allocation['last_error']) self.assertEqual('error', allocation['state']) # All nodes are filtered out on the database level. 
self.assertFalse(mock_acquire.called) @mock.patch.object(task_manager, 'acquire', autospec=True, side_effect=task_manager.acquire) def test_nodes_filtered_out_project(self, mock_acquire): # Owner and lessee do not match obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), owner='54321', resource_class='x-large', power_state='power off', provision_state='available') obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), lessee='54321', resource_class='x-large', power_state='power off', provision_state='available') allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large', owner='12345') allocations.do_allocate(self.context, allocation) self.assertIn('no available nodes', allocation['last_error']) self.assertIn('x-large', allocation['last_error']) self.assertEqual('error', allocation['state']) # All nodes are filtered out on the database level. self.assertFalse(mock_acquire.called) @mock.patch.object(task_manager, 'acquire', autospec=True, side_effect=task_manager.acquire) def test_nodes_locked(self, mock_acquire): self.config(node_locked_retry_attempts=2, group='conductor') node1 = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), maintenance=False, resource_class='x-large', power_state='power off', provision_state='available', reservation='example.com') node2 = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), resource_class='x-large', power_state='power off', provision_state='available', reservation='example.com') allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large') allocations.do_allocate(self.context, allocation) self.assertIn('could not reserve any of 2', allocation['last_error']) self.assertEqual('error', allocation['state']) self.assertEqual(6, mock_acquire.call_count) # NOTE(dtantsur): nodes are tried in random order by design, so we # cannot directly use assert_has_calls.
Check that all nodes are tried # before going into retries (rather than each tried 3 times in a row). nodes = [call[0][1] for call in mock_acquire.call_args_list] for offset in (0, 2, 4): self.assertEqual(set(nodes[offset:offset + 2]), {node1.uuid, node2.uuid}) @mock.patch.object(task_manager, 'acquire', autospec=True) def test_nodes_changed_after_lock(self, mock_acquire): nodes = [obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), resource_class='x-large', power_state='power off', provision_state='available') for _ in range(5)] for node in nodes: db_utils.create_test_node_trait(trait='tr1', node_id=node.id) # Modify nodes in-memory so that they no longer match the allocation: # Resource class does not match nodes[0].resource_class = 'x-small' # Provision state is not available nodes[1].provision_state = 'deploying' # Maintenance mode is on nodes[2].maintenance = True # Already associated nodes[3].instance_uuid = uuidutils.generate_uuid() # Traits changed nodes[4].traits.objects[:] = [] mock_acquire.side_effect = [ mock.MagicMock(**{'__enter__.return_value.node': node}) for node in nodes ] allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large', traits=['tr1']) allocations.do_allocate(self.context, allocation) self.assertIn('all nodes were filtered out', allocation['last_error']) self.assertEqual('error', allocation['state']) # No retries for these failures. 
self.assertEqual(5, mock_acquire.call_count) @mock.patch.object(task_manager, 'acquire', autospec=True, side_effect=task_manager.acquire) def test_nodes_candidates_do_not_match(self, mock_acquire): obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), resource_class='x-large', power_state='power off', provision_state='available') # Resource class does not match node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), power_state='power on', resource_class='x-small', provision_state='available') allocation = obj_utils.create_test_allocation( self.context, resource_class='x-large', candidate_nodes=[node['uuid']]) allocations.do_allocate(self.context, allocation) self.assertIn('none of the requested nodes', allocation['last_error']) self.assertIn('x-large', allocation['last_error']) self.assertEqual('error', allocation['state']) # All nodes are filtered out on the database level. self.assertFalse(mock_acquire.called) class BackfillAllocationTestCase(db_base.DbTestCase): def test_with_associated_node(self): uuid = uuidutils.generate_uuid() node = obj_utils.create_test_node(self.context, instance_uuid=uuid, resource_class='x-large', provision_state='active') allocation = obj_utils.create_test_allocation(self.context, uuid=uuid, resource_class='x-large') allocations.backfill_allocation(self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertIsNone(allocation['last_error']) self.assertEqual('active', allocation['state']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual(allocation['uuid'], node['instance_uuid']) self.assertEqual(allocation['id'], node['allocation_id']) def test_with_unassociated_node(self): node = obj_utils.create_test_node(self.context, instance_uuid=None, resource_class='x-large', provision_state='active') allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large') 
allocations.backfill_allocation(self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertIsNone(allocation['last_error']) self.assertEqual('active', allocation['state']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual(allocation['uuid'], node['instance_uuid']) self.assertEqual(allocation['id'], node['allocation_id']) def test_with_candidate_nodes(self): node = obj_utils.create_test_node(self.context, instance_uuid=None, resource_class='x-large', provision_state='active') allocation = obj_utils.create_test_allocation( self.context, candidate_nodes=[node.uuid], resource_class='x-large') allocations.backfill_allocation(self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertIsNone(allocation['last_error']) self.assertEqual('active', allocation['state']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual(allocation['uuid'], node['instance_uuid']) self.assertEqual(allocation['id'], node['allocation_id']) def test_without_resource_class(self): uuid = uuidutils.generate_uuid() node = obj_utils.create_test_node(self.context, instance_uuid=uuid, resource_class='x-large', provision_state='active') allocation = obj_utils.create_test_allocation(self.context, uuid=uuid, resource_class=None) allocations.backfill_allocation(self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertIsNone(allocation['last_error']) self.assertEqual('active', allocation['state']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual(allocation['uuid'], node['instance_uuid']) self.assertEqual(allocation['id'], node['allocation_id']) def test_node_associated_with_another_instance(self): other_uuid = uuidutils.generate_uuid() node = obj_utils.create_test_node(self.context, instance_uuid=other_uuid, resource_class='x-large', 
provision_state='active') allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large') self.assertRaises(exception.NodeAssociated, allocations.backfill_allocation, self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertEqual('error', allocation['state']) self.assertIn('associated', allocation['last_error']) self.assertIsNone(allocation['node_id']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual(other_uuid, node['instance_uuid']) self.assertIsNone(node['allocation_id']) def test_non_existing_node(self): allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large') self.assertRaises(exception.NodeNotFound, allocations.backfill_allocation, self.context, allocation, 42) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertEqual('error', allocation['state']) self.assertIn('Node 42 could not be found', allocation['last_error']) self.assertIsNone(allocation['node_id']) def test_uuid_associated_with_another_instance(self): uuid = uuidutils.generate_uuid() obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), instance_uuid=uuid, resource_class='x-large', provision_state='active') node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), resource_class='x-large', provision_state='active') allocation = obj_utils.create_test_allocation(self.context, uuid=uuid, resource_class='x-large') self.assertRaises(exception.InstanceAssociated, allocations.backfill_allocation, self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertEqual('error', allocation['state']) self.assertIn('associated', allocation['last_error']) self.assertIsNone(allocation['node_id']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertIsNone(node['instance_uuid']) self.assertIsNone(node['allocation_id']) 
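The `backfill_allocation` failure tests above all follow the same error-path shape: the call raises, the allocation flips to the `error` state with a message recorded in `last_error`, and the node stays untouched. A stripped-down, self-contained sketch of that shape (the `backfill` helper and `Allocation` class here are hypothetical stand-ins, not the real Ironic code):

```python
# Sketch of the error-path contract exercised by the tests above: a
# rejected backfill must raise, record the failure in last_error, set
# state to 'error', and leave the allocation unclaimed (node_id is None).
class Allocation:
    """Hypothetical stand-in for objects.Allocation."""
    def __init__(self):
        self.state = 'allocating'
        self.last_error = None
        self.node_id = None


def backfill(allocation, node_instance_uuid):
    """Refuse to claim a node that already hosts another instance."""
    if node_instance_uuid is not None:
        allocation.state = 'error'
        allocation.last_error = 'node is already associated'
        raise ValueError('associated')
    allocation.state = 'active'


alloc = Allocation()
try:
    backfill(alloc, 'other-instance-uuid')
except ValueError:
    pass

# The failure is recorded, and no node was claimed.
assert alloc.state == 'error'
assert 'associated' in alloc.last_error
assert alloc.node_id is None
```

Checking both the raised exception and the persisted side effects, as the tests above do, guards against a regression where the error is raised but the allocation record is left in a stale `allocating` state.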
def test_resource_class_mismatch(self): node = obj_utils.create_test_node(self.context, resource_class='x-small', provision_state='active') allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large') self.assertRaises(exception.AllocationFailed, allocations.backfill_allocation, self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertEqual('error', allocation['state']) self.assertIn('resource class', allocation['last_error']) self.assertIsNone(allocation['node_id']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertIsNone(node['instance_uuid']) self.assertIsNone(node['allocation_id']) def test_traits_mismatch(self): node = obj_utils.create_test_node(self.context, resource_class='x-large', provision_state='active') db_utils.create_test_node_traits(['tr1', 'tr2'], node_id=node.id) allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large', traits=['tr1', 'tr3']) self.assertRaises(exception.AllocationFailed, allocations.backfill_allocation, self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertEqual('error', allocation['state']) self.assertIn('traits', allocation['last_error']) self.assertIsNone(allocation['node_id']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertIsNone(node['instance_uuid']) self.assertIsNone(node['allocation_id']) def test_state_not_active(self): node = obj_utils.create_test_node(self.context, resource_class='x-large', provision_state='available') allocation = obj_utils.create_test_allocation(self.context, resource_class='x-large') self.assertRaises(exception.AllocationFailed, allocations.backfill_allocation, self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertEqual('error', allocation['state']) self.assertIn('must be in the "active" state', 
allocation['last_error']) self.assertIsNone(allocation['node_id']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertIsNone(node['instance_uuid']) self.assertIsNone(node['allocation_id']) def test_candidate_nodes_mismatch(self): node = obj_utils.create_test_node(self.context, resource_class='x-large', provision_state='active') allocation = obj_utils.create_test_allocation( self.context, candidate_nodes=[uuidutils.generate_uuid()], resource_class='x-large') self.assertRaises(exception.AllocationFailed, allocations.backfill_allocation, self.context, allocation, node.id) allocation = objects.Allocation.get_by_uuid(self.context, allocation['uuid']) self.assertEqual('error', allocation['state']) self.assertIn('Candidate nodes', allocation['last_error']) self.assertIsNone(allocation['node_id']) node = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertIsNone(node['instance_uuid']) self.assertIsNone(node['allocation_id'])
ironic-15.0.0/ironic/tests/unit/conductor/test_task_manager.py
# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
"""Tests for :class:`ironic.conductor.task_manager`.""" import futurist import mock from oslo_utils import uuidutils from ironic.common import driver_factory from ironic.common import exception from ironic.common import fsm from ironic.common import states from ironic.conductor import notification_utils from ironic.conductor import task_manager from ironic import objects from ironic.objects import fields from ironic.tests import base as tests_base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils @mock.patch.object(objects.Node, 'get') @mock.patch.object(objects.Node, 'release') @mock.patch.object(objects.Node, 'reserve') @mock.patch.object(driver_factory, 'build_driver_for_task') @mock.patch.object(objects.Port, 'list_by_node_id') @mock.patch.object(objects.Portgroup, 'list_by_node_id') @mock.patch.object(objects.VolumeConnector, 'list_by_node_id') @mock.patch.object(objects.VolumeTarget, 'list_by_node_id') class TaskManagerTestCase(db_base.DbTestCase): def setUp(self): super(TaskManagerTestCase, self).setUp() self.host = 'test-host' self.config(host=self.host) self.config(node_locked_retry_attempts=1, group='conductor') self.config(node_locked_retry_interval=0, group='conductor') self.node = obj_utils.create_test_node(self.context) self.future_mock = mock.Mock(spec=['cancel', 'add_done_callback']) def test_excl_lock(self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id') as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(get_volconn_mock.return_value, task.volume_connectors) self.assertEqual(get_voltgt_mock.return_value, task.volume_targets) 
self.assertEqual(build_driver_mock.return_value, task.driver) self.assertFalse(task.shared) build_driver_mock.assert_called_once_with(task) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) get_volconn_mock.assert_called_once_with(self.context, self.node.id) get_voltgt_mock.assert_called_once_with(self.context, self.node.id) release_mock.assert_called_once_with(self.context, self.host, self.node.id) def test_no_driver(self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id', load_driver=False) as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(get_volconn_mock.return_value, task.volume_connectors) self.assertEqual(get_voltgt_mock.return_value, task.volume_targets) self.assertIsNone(task.driver) self.assertFalse(task.shared) self.assertFalse(build_driver_mock.called) def test_excl_nested_acquire( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node2 = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware') reserve_mock.return_value = self.node get_ports_mock.return_value = mock.sentinel.ports1 get_portgroups_mock.return_value = mock.sentinel.portgroups1 get_volconn_mock.return_value = mock.sentinel.volconn1 get_voltgt_mock.return_value = mock.sentinel.voltgt1 build_driver_mock.return_value = mock.sentinel.driver1 with 
task_manager.TaskManager(self.context, 'node-id1') as task: reserve_mock.return_value = node2 get_ports_mock.return_value = mock.sentinel.ports2 get_portgroups_mock.return_value = mock.sentinel.portgroups2 get_volconn_mock.return_value = mock.sentinel.volconn2 get_voltgt_mock.return_value = mock.sentinel.voltgt2 build_driver_mock.return_value = mock.sentinel.driver2 with task_manager.TaskManager(self.context, 'node-id2') as task2: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(mock.sentinel.ports1, task.ports) self.assertEqual(mock.sentinel.portgroups1, task.portgroups) self.assertEqual(mock.sentinel.volconn1, task.volume_connectors) self.assertEqual(mock.sentinel.voltgt1, task.volume_targets) self.assertEqual(mock.sentinel.driver1, task.driver) self.assertFalse(task.shared) self.assertEqual(self.context, task2.context) self.assertEqual(node2, task2.node) self.assertEqual(mock.sentinel.ports2, task2.ports) self.assertEqual(mock.sentinel.portgroups2, task2.portgroups) self.assertEqual(mock.sentinel.volconn2, task2.volume_connectors) self.assertEqual(mock.sentinel.voltgt2, task2.volume_targets) self.assertEqual(mock.sentinel.driver2, task2.driver) self.assertFalse(task2.shared) self.assertEqual([mock.call(task), mock.call(task2)], build_driver_mock.call_args_list) self.assertEqual([mock.call(self.context, 'node-id1'), mock.call(self.context, 'node-id2')], node_get_mock.call_args_list) self.assertEqual([mock.call(self.context, self.host, 'node-id1'), mock.call(self.context, self.host, 'node-id2')], reserve_mock.call_args_list) self.assertEqual([mock.call(self.context, self.node.id), mock.call(self.context, node2.id)], get_ports_mock.call_args_list) # release should be in reverse order self.assertEqual([mock.call(self.context, self.host, node2.id), mock.call(self.context, self.host, self.node.id)], release_mock.call_args_list) def test_excl_lock_exception_then_lock( self, get_voltgt_mock, get_volconn_mock, 
get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): retry_attempts = 3 self.config(node_locked_retry_attempts=retry_attempts, group='conductor') # Fail on the first lock attempt, succeed on the second. reserve_mock.side_effect = [exception.NodeLocked(node='foo', host='foo'), self.node] with task_manager.TaskManager(self.context, 'fake-node-id') as task: self.assertFalse(task.shared) expected_calls = [mock.call(self.context, self.host, 'fake-node-id')] * 2 reserve_mock.assert_has_calls(expected_calls) self.assertEqual(2, reserve_mock.call_count) def test_excl_lock_exception_no_retries( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): retry_attempts = 3 self.config(node_locked_retry_attempts=retry_attempts, group='conductor') # Fail on the first lock attempt, succeed on the second. reserve_mock.side_effect = [exception.NodeLocked(node='foo', host='foo'), self.node] self.assertRaises(exception.NodeLocked, task_manager.TaskManager, self.context, 'fake-node-id', retry=False) reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') def test_excl_lock_reserve_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): retry_attempts = 3 self.config(node_locked_retry_attempts=retry_attempts, group='conductor') reserve_mock.side_effect = exception.NodeLocked(node='foo', host='foo') self.assertRaises(exception.NodeLocked, task_manager.TaskManager, self.context, 'fake-node-id') node_get_mock.assert_called_with(self.context, 'fake-node-id') reserve_mock.assert_called_with(self.context, self.host, 'fake-node-id') self.assertEqual(retry_attempts, reserve_mock.call_count) self.assertFalse(get_ports_mock.called) self.assertFalse(get_portgroups_mock.called) self.assertFalse(get_volconn_mock.called) 
self.assertFalse(get_voltgt_mock.called) self.assertFalse(build_driver_mock.called) self.assertFalse(release_mock.called) def test_excl_lock_get_ports_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node get_ports_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id') node_get_mock.assert_called_once_with(self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) release_mock.assert_called_once_with(self.context, self.host, self.node.id) def test_excl_lock_get_portgroups_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node get_portgroups_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id') node_get_mock.assert_called_once_with(self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_portgroups_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) release_mock.assert_called_once_with(self.context, self.host, self.node.id) def test_excl_lock_get_volconn_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node get_volconn_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id') 
reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_volconn_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(get_voltgt_mock.called) release_mock.assert_called_once_with(self.context, self.host, self.node.id) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') def test_excl_lock_get_voltgt_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node get_voltgt_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_voltgt_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) release_mock.assert_called_once_with(self.context, self.host, self.node.id) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') def test_excl_lock_build_driver_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node build_driver_mock.side_effect = ( exception.DriverNotFound(driver_name='foo')) self.assertRaises(exception.DriverNotFound, task_manager.TaskManager, self.context, 'fake-node-id') node_get_mock.assert_called_once_with(self.context, 'fake-node-id') reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) build_driver_mock.assert_called_once_with(mock.ANY) release_mock.assert_called_once_with(self.context, self.host, self.node.id) def test_shared_lock( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, 
release_mock, node_get_mock): node_get_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id', shared=True) as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(get_volconn_mock.return_value, task.volume_connectors) self.assertEqual(get_voltgt_mock.return_value, task.volume_targets) self.assertEqual(build_driver_mock.return_value, task.driver) self.assertTrue(task.shared) build_driver_mock.assert_called_once_with(task) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) get_volconn_mock.assert_called_once_with(self.context, self.node.id) get_voltgt_mock.assert_called_once_with(self.context, self.node.id) def test_shared_lock_node_get_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.side_effect = exception.NodeNotFound(node='foo') self.assertRaises(exception.NodeNotFound, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') self.assertFalse(get_ports_mock.called) self.assertFalse(get_portgroups_mock.called) self.assertFalse(get_volconn_mock.called) self.assertFalse(get_voltgt_mock.called) self.assertFalse(build_driver_mock.called) def test_shared_lock_get_ports_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node 
get_ports_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) def test_shared_lock_get_portgroups_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node get_portgroups_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_portgroups_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) def test_shared_lock_get_volconn_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node get_volconn_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_volconn_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(get_voltgt_mock.called) def test_shared_lock_get_voltgt_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node 
get_voltgt_mock.side_effect = exception.IronicException('foo') self.assertRaises(exception.IronicException, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_voltgt_mock.assert_called_once_with(self.context, self.node.id) self.assertFalse(build_driver_mock.called) def test_shared_lock_build_driver_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node build_driver_mock.side_effect = ( exception.DriverNotFound(driver_name='foo')) self.assertRaises(exception.DriverNotFound, task_manager.TaskManager, self.context, 'fake-node-id', shared=True) self.assertFalse(reserve_mock.called) self.assertFalse(release_mock.called) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) get_volconn_mock.assert_called_once_with(self.context, self.node.id) get_voltgt_mock.assert_called_once_with(self.context, self.node.id) build_driver_mock.assert_called_once_with(mock.ANY) def test_upgrade_lock( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): node_get_mock.return_value = self.node reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'fake-node-id', shared=True, purpose='ham') as task: self.assertEqual(self.context, task.context) self.assertEqual(self.node, task.node) self.assertEqual(get_ports_mock.return_value, task.ports) self.assertEqual(get_portgroups_mock.return_value, task.portgroups) self.assertEqual(get_volconn_mock.return_value, task.volume_connectors) self.assertEqual(get_voltgt_mock.return_value, 
task.volume_targets) self.assertEqual(build_driver_mock.return_value, task.driver) self.assertTrue(task.shared) self.assertFalse(reserve_mock.called) task.upgrade_lock() self.assertFalse(task.shared) self.assertEqual('ham', task._purpose) # second upgrade does nothing except changes the purpose task.upgrade_lock(purpose='spam') self.assertFalse(task.shared) self.assertEqual('spam', task._purpose) build_driver_mock.assert_called_once_with(mock.ANY) # make sure reserve() was called only once reserve_mock.assert_called_once_with(self.context, self.host, 'fake-node-id') release_mock.assert_called_once_with(self.context, self.host, self.node.id) node_get_mock.assert_called_once_with(self.context, 'fake-node-id') get_ports_mock.assert_called_once_with(self.context, self.node.id) get_portgroups_mock.assert_called_once_with(self.context, self.node.id) get_volconn_mock.assert_called_once_with(self.context, self.node.id) get_voltgt_mock.assert_called_once_with(self.context, self.node.id) def test_upgrade_lock_refreshes_fsm(self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): reserve_mock.return_value = self.node node_get_mock.return_value = self.node with task_manager.acquire(self.context, 'fake-node-id', shared=True) as task1: self.assertEqual(states.AVAILABLE, task1.node.provision_state) with task_manager.acquire(self.context, 'fake-node-id', shared=False) as task2: # move the node to manageable task2.process_event('manage') self.assertEqual(states.MANAGEABLE, task1.node.provision_state) # now upgrade our shared task and try to go to cleaning # this will explode if task1's FSM doesn't get refreshed task1.upgrade_lock() task1.process_event('provide') self.assertEqual(states.CLEANING, task1.node.provision_state) @mock.patch.object(task_manager.TaskManager, '_notify_provision_state_change', autospec=True) def test_spawn_after( self, notify_mock, get_voltgt_mock, get_volconn_mock, 
get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): spawn_mock = mock.Mock(return_value=self.future_mock) task_release_mock = mock.Mock() reserve_mock.return_value = self.node with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') self.future_mock.add_done_callback.assert_called_once_with( task._thread_release_resources) self.assertFalse(self.future_mock.cancel.called) # Since we mocked link(), we're testing that __exit__ didn't # release resources pending the finishing of the background # thread self.assertFalse(task_release_mock.called) notify_mock.assert_called_once_with(task) def test_spawn_after_exception_while_yielded( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): spawn_mock = mock.Mock() task_release_mock = mock.Mock() reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock raise exception.IronicException('foo') self.assertRaises(exception.IronicException, _test_it) self.assertFalse(spawn_mock.called) task_release_mock.assert_called_once_with() @mock.patch.object(task_manager.TaskManager, '_notify_provision_state_change', autospec=True) def test_spawn_after_spawn_fails( self, notify_mock, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): spawn_mock = mock.Mock(side_effect=exception.IronicException('foo')) task_release_mock = mock.Mock() reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', 
cat='meow') task.release_resources = task_release_mock self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') task_release_mock.assert_called_once_with() self.assertFalse(notify_mock.called) def test_spawn_after_link_fails( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): self.future_mock.add_done_callback.side_effect = ( exception.IronicException('foo')) spawn_mock = mock.Mock(return_value=self.future_mock) task_release_mock = mock.Mock() thr_release_mock = mock.Mock(spec_set=[]) reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task._thread_release_resources = thr_release_mock task.release_resources = task_release_mock self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') self.future_mock.add_done_callback.assert_called_once_with( thr_release_mock) self.future_mock.cancel.assert_called_once_with() task_release_mock.assert_called_once_with() def test_spawn_after_on_error_hook( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): expected_exception = exception.IronicException('foo') spawn_mock = mock.Mock(side_effect=expected_exception) task_release_mock = mock.Mock() on_error_handler = mock.Mock() reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.set_spawn_error_hook(on_error_handler, 'fake-argument') task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') task_release_mock.assert_called_once_with() 
on_error_handler.assert_called_once_with(expected_exception, 'fake-argument') def test_spawn_after_on_error_hook_exception( self, get_voltgt_mock, get_volconn_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): expected_exception = exception.IronicException('foo') spawn_mock = mock.Mock(side_effect=expected_exception) task_release_mock = mock.Mock() # Raise an exception within the on_error handler on_error_handler = mock.Mock(side_effect=Exception('unexpected')) on_error_handler.__name__ = 'foo_method' reserve_mock.return_value = self.node def _test_it(): with task_manager.TaskManager(self.context, 'node-id') as task: task.set_spawn_error_hook(on_error_handler, 'fake-argument') task.spawn_after(spawn_mock, 1, 2, foo='bar', cat='meow') task.release_resources = task_release_mock # Make sure the original exception is the one raised self.assertRaises(exception.IronicException, _test_it) spawn_mock.assert_called_once_with(1, 2, foo='bar', cat='meow') task_release_mock.assert_called_once_with() on_error_handler.assert_called_once_with(expected_exception, 'fake-argument') @mock.patch.object(states.machine, 'copy') def test_init_prepares_fsm( self, copy_mock, get_volconn_mock, get_voltgt_mock, get_portgroups_mock, get_ports_mock, build_driver_mock, reserve_mock, release_mock, node_get_mock): m = mock.Mock(spec=fsm.FSM) reserve_mock.return_value = self.node copy_mock.return_value = m t = task_manager.TaskManager('fake', 'fake') copy_mock.assert_called_once_with() self.assertIs(m, t.fsm) m.initialize.assert_called_once_with( start_state=self.node.provision_state, target_state=self.node.target_provision_state) class TaskManagerStateModelTestCases(tests_base.TestCase): def setUp(self): super(TaskManagerStateModelTestCases, self).setUp() self.fsm = mock.Mock(spec=fsm.FSM) self.node = mock.Mock(spec=objects.Node) self.task = mock.Mock(spec=task_manager.TaskManager) self.task.fsm = self.fsm self.task.node = self.node def 
test_release_clears_resources(self): t = self.task t.release_resources = task_manager.TaskManager.release_resources t.driver = mock.Mock() t.ports = mock.Mock() t.portgroups = mock.Mock() t.volume_connectors = mock.Mock() t.volume_targets = mock.Mock() t.shared = True t._purpose = 'purpose' t._debug_timer = mock.Mock() t._debug_timer.elapsed.return_value = 3.14 t.release_resources(t) self.assertIsNone(t.node) self.assertIsNone(t.driver) self.assertIsNone(t.ports) self.assertIsNone(t.portgroups) self.assertIsNone(t.volume_connectors) self.assertIsNone(t.volume_targets) self.assertIsNone(t.fsm) def test_process_event_fsm_raises(self): self.task.process_event = task_manager.TaskManager.process_event self.fsm.process_event.side_effect = exception.InvalidState('test') self.assertRaises( exception.InvalidState, self.task.process_event, self.task, 'fake') self.assertEqual(0, self.task.spawn_after.call_count) self.assertFalse(self.task.node.save.called) def test_process_event_sets_callback(self): cb = mock.Mock() arg = mock.Mock() kwarg = mock.Mock() self.task.process_event = task_manager.TaskManager.process_event self.task.process_event( self.task, 'fake', callback=cb, call_args=[arg], call_kwargs={'mock': kwarg}) self.fsm.process_event.assert_called_once_with('fake', target_state=None) self.task.spawn_after.assert_called_with(cb, arg, mock=kwarg) self.assertEqual(1, self.task.node.save.call_count) self.assertIsNone(self.node.last_error) def test_process_event_sets_callback_and_error_handler(self): arg = mock.Mock() cb = mock.Mock() er = mock.Mock() kwarg = mock.Mock() provision_state = 'provision_state' target_provision_state = 'target' self.node.provision_state = provision_state self.node.target_provision_state = target_provision_state self.task.process_event = task_manager.TaskManager.process_event self.task.process_event( self.task, 'fake', callback=cb, call_args=[arg], call_kwargs={'mock': kwarg}, err_handler=er) 
self.task.set_spawn_error_hook.assert_called_once_with( er, self.node, provision_state, target_provision_state) self.fsm.process_event.assert_called_once_with('fake', target_state=None) self.task.spawn_after.assert_called_with(cb, arg, mock=kwarg) self.assertEqual(1, self.task.node.save.call_count) self.assertIsNone(self.node.last_error) self.assertNotEqual(provision_state, self.node.provision_state) self.assertNotEqual(target_provision_state, self.node.target_provision_state) def test_process_event_sets_target_state(self): event = 'fake' tgt_state = 'target' provision_state = 'provision_state' target_provision_state = 'target_provision_state' self.node.provision_state = provision_state self.node.target_provision_state = target_provision_state self.task.process_event = task_manager.TaskManager.process_event self.task.process_event(self.task, event, target_state=tgt_state) self.fsm.process_event.assert_called_once_with(event, target_state=tgt_state) self.assertEqual(1, self.task.node.save.call_count) self.assertNotEqual(provision_state, self.node.provision_state) self.assertNotEqual(target_provision_state, self.node.target_provision_state) def test_process_event_callback_stable_state(self): callback = mock.Mock() for state in states.STABLE_STATES: self.node.provision_state = state self.node.target_provision_state = 'target' self.task.process_event = task_manager.TaskManager.process_event self.task.process_event(self.task, 'fake', callback=callback) # assert the target state is set when callback is passed self.assertNotEqual(states.NOSTATE, self.task.node.target_provision_state) def test_process_event_no_callback_stable_state(self): for state in states.STABLE_STATES: self.node.provision_state = state self.node.target_provision_state = 'target' self.task.process_event = task_manager.TaskManager.process_event self.task.process_event(self.task, 'fake') # assert the target state was cleared when moving to a # stable state self.assertEqual(states.NOSTATE, 
self.task.node.target_provision_state) def test_process_event_no_callback_notify(self): self.task.process_event = task_manager.TaskManager.process_event self.task.process_event(self.task, 'fake') self.task._notify_provision_state_change.assert_called_once_with() @task_manager.require_exclusive_lock def _req_excl_lock_method(*args, **kwargs): return (args, kwargs) class ExclusiveLockDecoratorTestCase(tests_base.TestCase): def setUp(self): super(ExclusiveLockDecoratorTestCase, self).setUp() self.task = mock.Mock(spec=task_manager.TaskManager) self.task.context = self.context self.args_task_first = (self.task, 1, 2) self.args_task_second = (1, self.task, 2) self.kwargs = dict(cat='meow', dog='wuff') def test_with_excl_lock_task_first_arg(self): self.task.shared = False (args, kwargs) = _req_excl_lock_method(*self.args_task_first, **self.kwargs) self.assertEqual(self.args_task_first, args) self.assertEqual(self.kwargs, kwargs) def test_with_excl_lock_task_second_arg(self): self.task.shared = False (args, kwargs) = _req_excl_lock_method(*self.args_task_second, **self.kwargs) self.assertEqual(self.args_task_second, args) self.assertEqual(self.kwargs, kwargs) def test_with_shared_lock_task_first_arg(self): self.task.shared = True self.assertRaises(exception.ExclusiveLockRequired, _req_excl_lock_method, *self.args_task_first, **self.kwargs) def test_with_shared_lock_task_second_arg(self): self.task.shared = True self.assertRaises(exception.ExclusiveLockRequired, _req_excl_lock_method, *self.args_task_second, **self.kwargs) class ThreadExceptionTestCase(tests_base.TestCase): def setUp(self): super(ThreadExceptionTestCase, self).setUp() self.node = mock.Mock(spec=objects.Node) self.node.last_error = None self.task = mock.Mock(spec=task_manager.TaskManager) self.task.node = self.node self.task._write_exception = task_manager.TaskManager._write_exception self.future_mock = mock.Mock(spec_set=['exception']) def async_method_foo(): pass self.task._spawn_args = 
(async_method_foo,) def test_set_node_last_error(self): self.future_mock.exception.return_value = Exception('fiasco') self.task._write_exception(self.task, self.future_mock) self.node.save.assert_called_once_with() self.assertIn('fiasco', self.node.last_error) self.assertIn('async_method_foo', self.node.last_error) def test_set_node_last_error_exists(self): self.future_mock.exception.return_value = Exception('fiasco') self.node.last_error = 'oops' self.task._write_exception(self.task, self.future_mock) self.assertFalse(self.node.save.called) self.assertFalse(self.future_mock.exception.called) self.assertEqual('oops', self.node.last_error) def test_set_node_last_error_no_error(self): self.future_mock.exception.return_value = None self.task._write_exception(self.task, self.future_mock) self.assertFalse(self.node.save.called) self.future_mock.exception.assert_called_once_with() self.assertIsNone(self.node.last_error) @mock.patch.object(task_manager.LOG, 'exception', spec_set=True, autospec=True) def test_set_node_last_error_cancelled(self, log_mock): self.future_mock.exception.side_effect = futurist.CancelledError() self.task._write_exception(self.task, self.future_mock) self.assertFalse(self.node.save.called) self.future_mock.exception.assert_called_once_with() self.assertIsNone(self.node.last_error) self.assertTrue(log_mock.called) @mock.patch.object(notification_utils, 'emit_provision_set_notification', autospec=True) class ProvisionNotifyTestCase(tests_base.TestCase): def setUp(self): super(ProvisionNotifyTestCase, self).setUp() self.node = mock.Mock(spec=objects.Node) self.task = mock.Mock(spec=task_manager.TaskManager) self.task.node = self.node notifier = task_manager.TaskManager._notify_provision_state_change self.task.notifier = notifier self.task._prev_target_provision_state = 'oldtarget' self.task._event = 'event' def test_notify_no_state_change(self, emit_mock): self.task._event = None self.task.notifier(self.task) self.assertFalse(emit_mock.called) def 
test_notify_error_state(self, emit_mock): self.task._event = 'fail' self.task._prev_provision_state = 'fake' self.task.notifier(self.task) emit_mock.assert_called_once_with(self.task, fields.NotificationLevel.ERROR, fields.NotificationStatus.ERROR, 'fake', 'oldtarget', 'fail') self.assertIsNone(self.task._event) def test_notify_unstable_to_unstable(self, emit_mock): self.node.provision_state = states.DEPLOYING self.task._prev_provision_state = states.DEPLOYWAIT self.task.notifier(self.task) emit_mock.assert_called_once_with(self.task, fields.NotificationLevel.INFO, fields.NotificationStatus.SUCCESS, states.DEPLOYWAIT, 'oldtarget', 'event') def test_notify_stable_to_unstable(self, emit_mock): self.node.provision_state = states.DEPLOYING self.task._prev_provision_state = states.AVAILABLE self.task.notifier(self.task) emit_mock.assert_called_once_with(self.task, fields.NotificationLevel.INFO, fields.NotificationStatus.START, states.AVAILABLE, 'oldtarget', 'event') def test_notify_unstable_to_stable(self, emit_mock): self.node.provision_state = states.ACTIVE self.task._prev_provision_state = states.DEPLOYING self.task.notifier(self.task) emit_mock.assert_called_once_with(self.task, fields.NotificationLevel.INFO, fields.NotificationStatus.END, states.DEPLOYING, 'oldtarget', 'event') def test_notify_stable_to_stable(self, emit_mock): self.node.provision_state = states.MANAGEABLE self.task._prev_provision_state = states.AVAILABLE self.task.notifier(self.task) emit_mock.assert_called_once_with(self.task, fields.NotificationLevel.INFO, fields.NotificationStatus.SUCCESS, states.AVAILABLE, 'oldtarget', 'event') def test_notify_resource_released(self, emit_mock): node = mock.Mock(spec=objects.Node) node.provision_state = states.DEPLOYING node.target_provision_state = states.ACTIVE task = mock.Mock(spec=task_manager.TaskManager) task._prev_provision_state = states.AVAILABLE task._prev_target_provision_state = states.NOSTATE task._event = 'event' task.node = None 
        task._saved_node = node
        notifier = task_manager.TaskManager._notify_provision_state_change
        task.notifier = notifier
        task.notifier(task)
        task_arg = emit_mock.call_args[0][0]
        self.assertEqual(node, task_arg.node)
        self.assertIsNot(task, task_arg)

    def test_notify_only_once(self, emit_mock):
        self.node.provision_state = states.DEPLOYING
        self.task._prev_provision_state = states.AVAILABLE
        self.task.notifier(self.task)
        self.assertIsNone(self.task._event)
        self.task.notifier(self.task)
        self.assertEqual(1, emit_mock.call_count)
        self.assertIsNone(self.task._event)
ironic-15.0.0/ironic/tests/unit/conductor/test_notification_utils.py
# Copyright 2016 Rackspace, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for ironic-conductor notification utilities.""" import mock from oslo_versionedobjects.exception import VersionedObjectsException from ironic.common import exception from ironic.common import states from ironic.conductor import notification_utils as notif_utils from ironic.conductor import task_manager from ironic.objects import fields from ironic.objects import node as node_objects from ironic.objects import notification from ironic.tests import base as tests_base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class TestNotificationUtils(db_base.DbTestCase): def setUp(self): super(TestNotificationUtils, self).setUp() self.config(notification_level='debug') self.node = obj_utils.create_test_node(self.context) self.task = mock.Mock(spec_set=['context', 'driver', 'node', 'upgrade_lock', 'shared']) self.task.node = self.node @mock.patch.object(notif_utils, '_emit_conductor_node_notification') def test_emit_power_state_corrected_notification(self, mock_cond_emit): notif_utils.emit_power_state_corrected_notification( self.task, states.POWER_ON) mock_cond_emit.assert_called_once_with( self.task, node_objects.NodeCorrectedPowerStateNotification, node_objects.NodeCorrectedPowerStatePayload, 'power_state_corrected', fields.NotificationLevel.INFO, fields.NotificationStatus.SUCCESS, from_power=states.POWER_ON ) @mock.patch.object(notif_utils, '_emit_conductor_node_notification') def test_emit_power_set_notification(self, mock_cond_emit): notif_utils.emit_power_set_notification( self.task, fields.NotificationLevel.DEBUG, fields.NotificationStatus.END, states.POWER_ON) mock_cond_emit.assert_called_once_with( self.task, node_objects.NodeSetPowerStateNotification, node_objects.NodeSetPowerStatePayload, 'power_set', fields.NotificationLevel.DEBUG, fields.NotificationStatus.END, to_power=states.POWER_ON ) @mock.patch.object(notif_utils, '_emit_conductor_node_notification') def 
test_emit_console_notification(self, mock_cond_emit): notif_utils.emit_console_notification( self.task, 'console_set', fields.NotificationStatus.END) mock_cond_emit.assert_called_once_with( self.task, node_objects.NodeConsoleNotification, node_objects.NodePayload, 'console_set', fields.NotificationLevel.INFO, fields.NotificationStatus.END, ) @mock.patch.object(notif_utils, '_emit_conductor_node_notification') def test_emit_console_notification_error_status(self, mock_cond_emit): notif_utils.emit_console_notification( self.task, 'console_set', fields.NotificationStatus.ERROR) mock_cond_emit.assert_called_once_with( self.task, node_objects.NodeConsoleNotification, node_objects.NodePayload, 'console_set', fields.NotificationLevel.ERROR, fields.NotificationStatus.ERROR, ) @mock.patch.object(notification, 'mask_secrets') def test__emit_conductor_node_notification(self, mock_secrets): mock_notify_method = mock.Mock() # Required for exception handling mock_notify_method.__name__ = 'MockNotificationConstructor' mock_payload_method = mock.Mock() mock_payload_method.__name__ = 'MockPayloadConstructor' mock_kwargs = {'mock0': mock.Mock(), 'mock1': mock.Mock()} notif_utils._emit_conductor_node_notification( self.task, mock_notify_method, mock_payload_method, 'fake_action', fields.NotificationLevel.INFO, fields.NotificationStatus.SUCCESS, **mock_kwargs ) mock_payload_method.assert_called_once_with( self.task.node, **mock_kwargs) mock_secrets.assert_called_once_with(mock_payload_method.return_value) mock_notify_method.assert_called_once_with( publisher=mock.ANY, event_type=mock.ANY, level=fields.NotificationLevel.INFO, payload=mock_payload_method.return_value ) mock_notify_method.return_value.emit.assert_called_once_with( self.task.context) def test__emit_conductor_node_notification_known_payload_exc(self): """Test exception caught for a known payload exception.""" mock_notify_method = mock.Mock() # Required for exception handling mock_notify_method.__name__ = 
'MockNotificationConstructor' mock_payload_method = mock.Mock() mock_payload_method.__name__ = 'MockPayloadConstructor' mock_kwargs = {'mock0': mock.Mock(), 'mock1': mock.Mock()} mock_payload_method.side_effect = exception.NotificationSchemaKeyError notif_utils._emit_conductor_node_notification( self.task, mock_notify_method, mock_payload_method, 'fake_action', fields.NotificationLevel.INFO, fields.NotificationStatus.SUCCESS, **mock_kwargs ) self.assertFalse(mock_notify_method.called) @mock.patch.object(notification, 'mask_secrets') def test__emit_conductor_node_notification_known_notify_exc(self, mock_secrets): """Test exception caught for a known notification exception.""" mock_notify_method = mock.Mock() # Required for exception handling mock_notify_method.__name__ = 'MockNotificationConstructor' mock_payload_method = mock.Mock() mock_payload_method.__name__ = 'MockPayloadConstructor' mock_kwargs = {'mock0': mock.Mock(), 'mock1': mock.Mock()} mock_notify_method.side_effect = VersionedObjectsException notif_utils._emit_conductor_node_notification( self.task, mock_notify_method, mock_payload_method, 'fake_action', fields.NotificationLevel.INFO, fields.NotificationStatus.SUCCESS, **mock_kwargs ) self.assertFalse(mock_notify_method.return_value.emit.called) class ProvisionNotifyTestCase(tests_base.TestCase): @mock.patch('ironic.objects.node.NodeSetProvisionStateNotification') def test_emit_notification(self, provision_mock): provision_mock.__name__ = 'NodeSetProvisionStateNotification' self.config(host='fake-host') node = obj_utils.get_test_node(self.context, provision_state='fake state', target_provision_state='fake target', instance_info={'foo': 'baz'}) task = mock.Mock(spec=task_manager.TaskManager) task.node = node test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.SUCCESS notif_utils.emit_provision_set_notification( task, test_level, test_status, 'fake_old', 'fake_old_target', 'event') init_kwargs = provision_mock.call_args[1] 
        publisher = init_kwargs['publisher']
        event_type = init_kwargs['event_type']
        level = init_kwargs['level']
        payload = init_kwargs['payload']
        self.assertEqual('fake-host', publisher.host)
        self.assertEqual('ironic-conductor', publisher.service)
        self.assertEqual('node', event_type.object)
        self.assertEqual('provision_set', event_type.action)
        self.assertEqual(test_status, event_type.status)
        self.assertEqual(test_level, level)
        self.assertEqual(node.uuid, payload.uuid)
        self.assertEqual('fake state', payload.provision_state)
        self.assertEqual('fake target', payload.target_provision_state)
        self.assertEqual('fake_old', payload.previous_provision_state)
        self.assertEqual('fake_old_target',
                         payload.previous_target_provision_state)
        self.assertEqual({'foo': 'baz'}, payload.instance_info)

    def test_mask_secrets(self):
        test_info = {'configdrive': 'fake_drive', 'image_url': 'fake-url',
                     'some_value': 'fake-value'}
        node = obj_utils.get_test_node(self.context,
                                       instance_info=test_info)
        notification.mask_secrets(node)
        self.assertEqual('******', node.instance_info['configdrive'])
        self.assertEqual('******', node.instance_info['image_url'])
        self.assertEqual('fake-value', node.instance_info['some_value'])
ironic-15.0.0/ironic/tests/unit/conductor/test_base_manager.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for Ironic BaseConductorManager.""" import collections import uuid import eventlet import futurist from futurist import periodics from ironic_lib import mdns import mock from oslo_config import cfg from oslo_db import exception as db_exception from oslo_utils import uuidutils from ironic.common import driver_factory from ironic.common import exception from ironic.common import states from ironic.conductor import base_manager from ironic.conductor import manager from ironic.conductor import notification_utils from ironic.conductor import task_manager from ironic.drivers import fake_hardware from ironic.drivers import generic from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import fake from ironic import objects from ironic.objects import fields from ironic.tests import base as tests_base from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF @mgr_utils.mock_record_keepalive class StartStopTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test_start_registers_conductor(self): self.assertRaises(exception.ConductorNotFound, objects.Conductor.get_by_hostname, self.context, self.hostname) self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) def test_start_clears_conductor_locks(self): node = obj_utils.create_test_node(self.context, reservation=self.hostname) node.save() self._start_service() node.refresh() self.assertIsNone(node.reservation) def test_stop_clears_conductor_locks(self): node = obj_utils.create_test_node(self.context, reservation=self.hostname) node.save() self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) self.service.del_host() node.refresh() self.assertIsNone(node.reservation) def 
test_stop_unregisters_conductor(self): self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) self.service.del_host() self.assertRaises(exception.ConductorNotFound, objects.Conductor.get_by_hostname, self.context, self.hostname) def test_stop_doesnt_unregister_conductor(self): self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) self.service.del_host(deregister=False) res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) @mock.patch.object(manager.ConductorManager, 'init_host') def test_stop_uninitialized_conductor(self, mock_init): self._start_service() self.service.del_host() @mock.patch.object(driver_factory.HardwareTypesFactory, '__getitem__', lambda *args: mock.MagicMock()) @mock.patch.object(driver_factory, 'default_interface', autospec=True) def test_start_registers_driver_names(self, mock_def_iface): init_names = ['fake1', 'fake2'] restart_names = ['fake3', 'fake4'] mock_def_iface.return_value = 'fake' df = driver_factory.HardwareTypesFactory() with mock.patch.object(df._extension_manager, 'names') as mock_names: # verify driver names are registered self.config(enabled_hardware_types=init_names) mock_names.return_value = init_names self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(init_names, res['drivers']) self._stop_service() # verify that restart registers new driver names self.config(enabled_hardware_types=restart_names) mock_names.return_value = restart_names self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(restart_names, res['drivers']) @mock.patch.object(base_manager.BaseConductorManager, '_register_and_validate_hardware_interfaces', autospec=True) @mock.patch.object(driver_factory, 'all_interfaces', 
autospec=True) @mock.patch.object(driver_factory, 'hardware_types', autospec=True) def test_start_registers_driver_specific_tasks(self, mock_hw_types, mock_ifaces, mock_reg_hw_ifaces): class TestHwType(generic.GenericHardware): @property def supported_management_interfaces(self): return [] @property def supported_power_interfaces(self): return [] # This should not be collected, since we don't collect periodic # tasks from hardware types @periodics.periodic(spacing=100500) def task(self): pass class TestInterface(object): @periodics.periodic(spacing=100500) def iface(self): pass class TestInterface2(object): @periodics.periodic(spacing=100500) def iface(self): pass hw_type = TestHwType() iface1 = TestInterface() iface2 = TestInterface2() expected = [iface1.iface, iface2.iface] mock_hw_types.return_value = {'fake1': hw_type} mock_ifaces.return_value = { 'management': {'fake1': iface1}, 'power': {'fake2': iface2} } self._start_service(start_periodic_tasks=True) tasks = {c[0] for c in self.service._periodic_task_callables} for item in expected: self.assertTrue(periodics.is_periodic(item)) self.assertIn(item, tasks) # no periodic tasks from the hardware type self.assertTrue(periodics.is_periodic(hw_type.task)) self.assertNotIn(hw_type.task, tasks) @mock.patch.object(driver_factory.HardwareTypesFactory, '__init__') def test_start_fails_on_missing_driver(self, mock_df): mock_df.side_effect = exception.DriverNotFound('test') with mock.patch.object(self.dbapi, 'register_conductor') as mock_reg: self.assertRaises(exception.DriverNotFound, self.service.init_host) self.assertTrue(mock_df.called) self.assertFalse(mock_reg.called) def test_start_fails_on_no_enabled_interfaces(self): self.config(enabled_boot_interfaces=[]) self.assertRaisesRegex(exception.ConfigInvalid, 'options enabled_boot_interfaces', self.service.init_host) @mock.patch.object(base_manager, 'LOG') @mock.patch.object(driver_factory, 'HardwareTypesFactory') def test_start_fails_on_hw_types(self, ht_mock, 
log_mock): driver_factory_mock = mock.MagicMock(names=[]) ht_mock.return_value = driver_factory_mock self.assertRaises(exception.NoDriversLoaded, self.service.init_host) self.assertTrue(log_mock.error.called) ht_mock.assert_called_once_with() @mock.patch.object(base_manager, 'LOG') @mock.patch.object(base_manager.BaseConductorManager, '_register_and_validate_hardware_interfaces') @mock.patch.object(base_manager.BaseConductorManager, 'del_host') def test_start_fails_hw_type_register(self, del_mock, reg_mock, log_mock): reg_mock.side_effect = exception.DriverNotFound('hw-type') self.assertRaises(exception.DriverNotFound, self.service.init_host) self.assertTrue(log_mock.error.called) del_mock.assert_called_once_with() def test_prevent_double_start(self): self._start_service() self.assertRaisesRegex(RuntimeError, 'already running', self.service.init_host) def test_start_recover_nodes_stuck(self): state_trans = [ (states.DEPLOYING, states.DEPLOYFAIL), (states.CLEANING, states.CLEANFAIL), (states.VERIFYING, states.ENROLL), (states.INSPECTING, states.INSPECTFAIL), (states.ADOPTING, states.ADOPTFAIL), (states.RESCUING, states.RESCUEFAIL), (states.UNRESCUING, states.UNRESCUEFAIL), ] nodes = [obj_utils.create_test_node(self.context, uuid=uuid.uuid4(), driver='fake-hardware', provision_state=state[0]) for state in state_trans] self._start_service() for node, state in zip(nodes, state_trans): node.refresh() self.assertEqual(state[1], node.provision_state, 'Test failed when recovering from %s' % state[0]) @mock.patch.object(base_manager, 'LOG') def test_warning_on_low_workers_pool(self, log_mock): CONF.set_override('workers_pool_size', 3, 'conductor') self._start_service() self.assertTrue(log_mock.warning.called) @mock.patch.object(eventlet.greenpool.GreenPool, 'waitall') def test_del_host_waits_on_workerpool(self, wait_mock): self._start_service() self.service.del_host() self.assertTrue(wait_mock.called) def test_conductor_shutdown_flag(self): self._start_service() 
self.assertFalse(self.service._shutdown) self.service.del_host() self.assertTrue(self.service._shutdown) @mock.patch.object(deploy_utils, 'get_ironic_api_url', autospec=True) @mock.patch.object(mdns, 'Zeroconf', autospec=True) def test_start_with_mdns(self, mock_zc, mock_api_url): CONF.set_override('debug', False) CONF.set_override('enable_mdns', True, 'conductor') self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) mock_zc.return_value.register_service.assert_called_once_with( 'baremetal', mock_api_url.return_value, params={}) @mock.patch.object(deploy_utils, 'get_ironic_api_url', autospec=True) @mock.patch.object(mdns, 'Zeroconf', autospec=True) def test_start_with_mdns_and_debug(self, mock_zc, mock_api_url): CONF.set_override('debug', True) CONF.set_override('enable_mdns', True, 'conductor') self._start_service() res = objects.Conductor.get_by_hostname(self.context, self.hostname) self.assertEqual(self.hostname, res['hostname']) mock_zc.return_value.register_service.assert_called_once_with( 'baremetal', mock_api_url.return_value, params={'ipa_debug': True}) def test_del_host_with_mdns(self): mock_zc = mock.Mock(spec=mdns.Zeroconf) self.service._zeroconf = mock_zc self._start_service() self.service.del_host() mock_zc.close.assert_called_once_with() self.assertIsNone(self.service._zeroconf) class CheckInterfacesTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test__check_enabled_interfaces_success(self): base_manager._check_enabled_interfaces() def test__check_enabled_interfaces_failure(self): self.config(enabled_boot_interfaces=[]) self.assertRaisesRegex(exception.ConfigInvalid, 'options enabled_boot_interfaces', base_manager._check_enabled_interfaces) class KeepAliveTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test__conductor_service_record_keepalive(self): self._start_service() # avoid wasting time at the event.wait() 
CONF.set_override('heartbeat_interval', 0, 'conductor') with mock.patch.object(self.dbapi, 'touch_conductor') as mock_touch: with mock.patch.object(self.service._keepalive_evt, 'is_set') as mock_is_set: mock_is_set.side_effect = [False, True] self.service._conductor_service_record_keepalive() mock_touch.assert_called_once_with(self.hostname) def test__conductor_service_record_keepalive_failed_db_conn(self): self._start_service() # avoid wasting time at the event.wait() CONF.set_override('heartbeat_interval', 0, 'conductor') with mock.patch.object(self.dbapi, 'touch_conductor') as mock_touch: mock_touch.side_effect = [None, db_exception.DBConnectionError(), None] with mock.patch.object(self.service._keepalive_evt, 'is_set') as mock_is_set: mock_is_set.side_effect = [False, False, False, True] self.service._conductor_service_record_keepalive() self.assertEqual(3, mock_touch.call_count) def test__conductor_service_record_keepalive_failed_error(self): self._start_service() # avoid wasting time at the event.wait() CONF.set_override('heartbeat_interval', 0, 'conductor') with mock.patch.object(self.dbapi, 'touch_conductor') as mock_touch: mock_touch.side_effect = [None, Exception(), None] with mock.patch.object(self.service._keepalive_evt, 'is_set') as mock_is_set: mock_is_set.side_effect = [False, False, False, True] self.service._conductor_service_record_keepalive() self.assertEqual(3, mock_touch.call_count) class ManagerSpawnWorkerTestCase(tests_base.TestCase): def setUp(self): super(ManagerSpawnWorkerTestCase, self).setUp() self.service = manager.ConductorManager('hostname', 'test-topic') self.executor = mock.Mock(spec=futurist.GreenThreadPoolExecutor) self.service._executor = self.executor def test__spawn_worker(self): self.service._spawn_worker('fake', 1, 2, foo='bar', cat='meow') self.executor.submit.assert_called_once_with( 'fake', 1, 2, foo='bar', cat='meow') def test__spawn_worker_none_free(self): self.executor.submit.side_effect = futurist.RejectedSubmission() 
self.assertRaises(exception.NoFreeConductorWorker, self.service._spawn_worker, 'fake') @mock.patch.object(objects.Conductor, 'unregister_all_hardware_interfaces', autospec=True) @mock.patch.object(objects.Conductor, 'register_hardware_interfaces', autospec=True) @mock.patch.object(driver_factory, 'default_interface', autospec=True) @mock.patch.object(driver_factory, 'enabled_supported_interfaces', autospec=True) @mgr_utils.mock_record_keepalive class RegisterInterfacesTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def setUp(self): super(RegisterInterfacesTestCase, self).setUp() self._start_service() def test__register_and_validate_hardware_interfaces(self, esi_mock, default_mock, reg_mock, unreg_mock): # these must be same order as esi_mock side effect hardware_types = collections.OrderedDict(( ('fake-hardware', fake_hardware.FakeHardware()), ('manual-management', generic.ManualManagementHardware), )) esi_mock.side_effect = [ collections.OrderedDict(( ('management', ['fake', 'noop']), ('deploy', ['agent', 'iscsi']), )), collections.OrderedDict(( ('management', ['fake']), ('deploy', ['agent', 'fake']), )), ] default_mock.side_effect = ('fake', 'agent', 'fake', 'agent') expected_calls = [ mock.call(mock.ANY, 'fake-hardware', 'management', ['fake', 'noop'], 'fake'), mock.call(mock.ANY, 'fake-hardware', 'deploy', ['agent', 'iscsi'], 'agent'), mock.call(mock.ANY, 'manual-management', 'management', ['fake'], 'fake'), mock.call(mock.ANY, 'manual-management', 'deploy', ['agent', 'fake'], 'agent'), ] self.service._register_and_validate_hardware_interfaces(hardware_types) unreg_mock.assert_called_once_with(mock.ANY) # we're iterating over dicts, don't worry about order reg_mock.assert_has_calls(expected_calls) def test__register_and_validate_no_valid_default(self, esi_mock, default_mock, reg_mock, unreg_mock): # these must be same order as esi_mock side effect hardware_types = collections.OrderedDict(( ('fake-hardware', fake_hardware.FakeHardware()), )) 
esi_mock.side_effect = [ collections.OrderedDict(( ('management', ['fake', 'noop']), ('deploy', ['agent', 'iscsi']), )), ] default_mock.side_effect = exception.NoValidDefaultForInterface("boo") self.assertRaises( exception.NoValidDefaultForInterface, self.service._register_and_validate_hardware_interfaces, hardware_types) default_mock.assert_called_once_with( hardware_types['fake-hardware'], mock.ANY, driver_name='fake-hardware') unreg_mock.assert_called_once_with(mock.ANY) self.assertFalse(reg_mock.called) @mock.patch.object(fake.FakeConsole, 'start_console', autospec=True) @mock.patch.object(notification_utils, 'emit_console_notification') class StartConsolesTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test__start_consoles(self, mock_notify, mock_start_console): obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware', console_enabled=True ) obj_utils.create_test_node( self.context, uuid=uuidutils.generate_uuid(), driver='fake-hardware' ) self._start_service() self.service._start_consoles(self.context) self.assertEqual(2, mock_start_console.call_count) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_restore', fields.NotificationStatus.START), mock.call(mock.ANY, 'console_restore', fields.NotificationStatus.END)]) def test__start_consoles_no_console_enabled(self, mock_notify, mock_start_console): obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=False) self._start_service() self.service._start_consoles(self.context) self.assertFalse(mock_start_console.called) self.assertFalse(mock_notify.called) def test__start_consoles_failed(self, mock_notify, mock_start_console): test_node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) self._start_service() mock_start_console.side_effect = Exception() self.service._start_consoles(self.context) 
mock_start_console.assert_called_once_with(mock.ANY, mock.ANY) test_node.refresh() self.assertFalse(test_node.console_enabled) self.assertIsNotNone(test_node.last_error) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_restore', fields.NotificationStatus.START), mock.call(mock.ANY, 'console_restore', fields.NotificationStatus.ERROR)]) @mock.patch.object(base_manager, 'LOG') def test__start_consoles_node_locked(self, log_mock, mock_notify, mock_start_console): test_node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True, reservation='fake-host') self._start_service() self.service._start_consoles(self.context) self.assertFalse(mock_start_console.called) test_node.refresh() self.assertTrue(test_node.console_enabled) self.assertIsNone(test_node.last_error) self.assertTrue(log_mock.warning.called) self.assertFalse(mock_notify.called) @mock.patch.object(base_manager, 'LOG') def test__start_consoles_node_not_found(self, log_mock, mock_notify, mock_start_console): test_node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) self._start_service() with mock.patch.object(task_manager, 'acquire') as mock_acquire: mock_acquire.side_effect = exception.NodeNotFound(node='not found') self.service._start_consoles(self.context) self.assertFalse(mock_start_console.called) test_node.refresh() self.assertTrue(test_node.console_enabled) self.assertIsNone(test_node.last_error) self.assertTrue(log_mock.warning.called) self.assertFalse(mock_notify.called) class MiscTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def setUp(self): super(MiscTestCase, self).setUp() self._start_service() def test__fail_transient_state(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DEPLOYING) self.service._fail_transient_state(states.DEPLOYING, 'unknown err') node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) def 
test__fail_transient_state_maintenance(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', maintenance=True, provision_state=states.DEPLOYING) self.service._fail_transient_state(states.DEPLOYING, 'unknown err') node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) ironic-15.0.0/ironic/tests/unit/conductor/test_manager.py # coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # Copyright 2013 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
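A note on a pattern used throughout the tests in this file: when a method is patched with `mock.patch.object(..., autospec=True)`, the autospecced mock records the instance as its first positional argument, which is why assertions here routinely pass `mock.ANY` for it. A minimal standalone sketch (the `Power` class below is a hypothetical stand-in, not ironic's driver interface):

```python
import unittest.mock as mock


class Power:
    """Hypothetical driver-style class with an instance method."""

    def get_power_state(self, task):
        return "power on"


with mock.patch.object(Power, "get_power_state", autospec=True) as m:
    m.return_value = "power off"
    p = Power()
    state = p.get_power_state("task1")
    # With autospec=True, the bound instance is captured as the first
    # positional argument of the recorded call; assertions must include
    # it (or use mock.ANY, as the tests in this file do).
    m.assert_called_once_with(p, "task1")

assert state == "power off"
```

Without `autospec=True` the instance would not appear in the recorded call, and `assert_called_once_with("task1")` would be the correct form instead.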
"""Test class for Ironic ManagerService.""" from collections import namedtuple import datetime import queue import re import eventlet from futurist import waiters import mock from oslo_config import cfg import oslo_messaging as messaging from oslo_utils import uuidutils from oslo_versionedobjects import base as ovo_base from oslo_versionedobjects import fields from ironic.common import boot_devices from ironic.common import components from ironic.common import driver_factory from ironic.common import exception from ironic.common import images from ironic.common import indicator_states from ironic.common import nova from ironic.common import states from ironic.conductor import cleaning from ironic.conductor import deployments from ironic.conductor import manager from ironic.conductor import notification_utils from ironic.conductor import steps as conductor_steps from ironic.conductor import task_manager from ironic.conductor import utils as conductor_utils from ironic.db import api as dbapi from ironic.drivers import base as drivers_base from ironic.drivers.modules import fake from ironic.drivers.modules.network import flat as n_flat from ironic import objects from ironic.objects import base as obj_base from ironic.objects import fields as obj_fields from ironic.tests.unit.conductor import mgr_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF @mgr_utils.mock_record_keepalive class ChangeNodePowerStateTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_change_node_power_state_power_on(self, get_power_mock): # Test change_node_power_state including integration with # conductor.utils.node_power_action and lower. 
get_power_mock.return_value = states.POWER_OFF node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=states.POWER_OFF) self._start_service() self.service.change_node_power_state(self.context, node.uuid, states.POWER_ON) self._stop_service() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) node.refresh() self.assertEqual(states.POWER_ON, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNone(node.last_error) # Verify the reservation has been cleared by # background task's link callback. self.assertIsNone(node.reservation) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_change_node_power_state_soft_power_off_timeout(self, get_power_mock): # Test change_node_power_state with timeout optional parameter # including integration with conductor.utils.node_power_action and # lower. get_power_mock.return_value = states.POWER_ON node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=states.POWER_ON) self._start_service() self.service.change_node_power_state(self.context, node.uuid, states.SOFT_POWER_OFF, timeout=2) self._stop_service() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) node.refresh() self.assertEqual(states.POWER_OFF, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNone(node.last_error) # Verify the reservation has been cleared by # background task's link callback. self.assertIsNone(node.reservation) @mock.patch.object(conductor_utils, 'node_power_action') def test_change_node_power_state_node_already_locked(self, pwr_act_mock): # Test change_node_power_state with mocked # conductor.utils.node_power_action. 
fake_reservation = 'fake-reserv' pwr_state = states.POWER_ON node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=pwr_state, reservation=fake_reservation) self._start_service() exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.POWER_ON) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) # In this test the worker should not be spawned; we wait anyway to make # sure the pwr_act_mock assertion below is valid. self._stop_service() self.assertFalse(pwr_act_mock.called, 'node_power_action has been ' 'unexpectedly called.') # Verify existing reservation wasn't broken. node.refresh() self.assertEqual(fake_reservation, node.reservation) def test_change_node_power_state_worker_pool_full(self): # Test change_node_power_state including integration with # conductor.utils.node_power_action and lower. initial_state = states.POWER_OFF node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=initial_state) self._start_service() with mock.patch.object(self.service, '_spawn_worker') as spawn_mock: spawn_mock.side_effect = exception.NoFreeConductorWorker() exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.POWER_ON) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0]) spawn_mock.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY, timeout=mock.ANY) node.refresh() self.assertEqual(initial_state, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNotNone(node.last_error) # Verify the node's reservation has been cleared due to the full pool. 
self.assertIsNone(node.reservation) @mock.patch.object(fake.FakePower, 'set_power_state', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) def test_change_node_power_state_exception_in_background_task( self, get_power_mock, set_power_mock): # Test change_node_power_state including integration with # conductor.utils.node_power_action and lower. initial_state = states.POWER_OFF node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=initial_state) self._start_service() get_power_mock.return_value = states.POWER_OFF new_state = states.POWER_ON set_power_mock.side_effect = exception.PowerStateFailure( pstate=new_state ) self.service.change_node_power_state(self.context, node.uuid, new_state) self._stop_service() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) set_power_mock.assert_called_once_with(mock.ANY, mock.ANY, new_state, timeout=None) node.refresh() self.assertEqual(initial_state, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNotNone(node.last_error) # Verify the reservation has been cleared by background task's # link callback despite exception in background task. 
self.assertIsNone(node.reservation) @mock.patch.object(fake.FakePower, 'validate', autospec=True) def test_change_node_power_state_validate_fail(self, validate_mock): # Test change_node_power_state where task.driver.power.validate # fails and raises an exception initial_state = states.POWER_ON node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=initial_state) self._start_service() validate_mock.side_effect = exception.InvalidParameterValue( 'wrong power driver info') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.POWER_ON) self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) node.refresh() validate_mock.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(states.POWER_ON, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNone(node.last_error) @mock.patch('ironic.objects.node.NodeSetPowerStateNotification') def test_node_set_power_state_notif_success(self, mock_notif): # Test that successfully changing a node's power state sends the # correct .start and .end notifications self.config(notification_level='info') self.config(host='my-host') # Required for exception handling mock_notif.__name__ = 'NodeSetPowerStateNotification' node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=states.POWER_OFF) self._start_service() self.service.change_node_power_state(self.context, node.uuid, states.POWER_ON) # Give async worker a chance to finish self._stop_service() # 2 notifications should be sent: 1 .start and 1 .end self.assertEqual(2, mock_notif.call_count) self.assertEqual(2, mock_notif.return_value.emit.call_count) first_notif_args = mock_notif.call_args_list[0][1] second_notif_args = mock_notif.call_args_list[1][1] self.assertNotificationEqual(first_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.start', obj_fields.NotificationLevel.INFO) 
self.assertNotificationEqual(second_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.end', obj_fields.NotificationLevel.INFO) @mock.patch.object(fake.FakePower, 'get_power_state', autospec=True) @mock.patch('ironic.objects.node.NodeSetPowerStateNotification') def test_node_set_power_state_notif_get_power_fail(self, mock_notif, get_power_mock): # Test that correct notifications are sent when changing node power # state and retrieving the node's current power state fails self.config(notification_level='info') self.config(host='my-host') # Required for exception handling mock_notif.__name__ = 'NodeSetPowerStateNotification' node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=states.POWER_OFF) self._start_service() get_power_mock.side_effect = Exception('I have failed') self.service.change_node_power_state(self.context, node.uuid, states.POWER_ON) # Give async worker a chance to finish self._stop_service() get_power_mock.assert_called_once_with(mock.ANY, mock.ANY) # 2 notifications should be sent: 1 .start and 1 .error self.assertEqual(2, mock_notif.call_count) self.assertEqual(2, mock_notif.return_value.emit.call_count) first_notif_args = mock_notif.call_args_list[0][1] second_notif_args = mock_notif.call_args_list[1][1] self.assertNotificationEqual(first_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.start', obj_fields.NotificationLevel.INFO) self.assertNotificationEqual(second_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.error', obj_fields.NotificationLevel.ERROR) @mock.patch.object(fake.FakePower, 'set_power_state', autospec=True) @mock.patch('ironic.objects.node.NodeSetPowerStateNotification') def test_node_set_power_state_notif_set_power_fail(self, mock_notif, set_power_mock): # Test that correct notifications are sent when changing node power # state and setting the node's power state fails self.config(notification_level='info') self.config(host='my-host') # 
Required for exception handling mock_notif.__name__ = 'NodeSetPowerStateNotification' node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=states.POWER_OFF) self._start_service() set_power_mock.side_effect = Exception('I have failed') self.service.change_node_power_state(self.context, node.uuid, states.POWER_ON) # Give async worker a chance to finish self._stop_service() set_power_mock.assert_called_once_with(mock.ANY, mock.ANY, states.POWER_ON, timeout=None) # 2 notifications should be sent: 1 .start and 1 .error self.assertEqual(2, mock_notif.call_count) self.assertEqual(2, mock_notif.return_value.emit.call_count) first_notif_args = mock_notif.call_args_list[0][1] second_notif_args = mock_notif.call_args_list[1][1] self.assertNotificationEqual(first_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.start', obj_fields.NotificationLevel.INFO) self.assertNotificationEqual(second_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.error', obj_fields.NotificationLevel.ERROR) @mock.patch('ironic.objects.node.NodeSetPowerStateNotification') def test_node_set_power_state_notif_spawn_fail(self, mock_notif): # Test that failure notification is not sent when spawning the # background conductor worker fails self.config(notification_level='info') self.config(host='my-host') # Required for exception handling mock_notif.__name__ = 'NodeSetPowerStateNotification' node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=states.POWER_OFF) self._start_service() with mock.patch.object(self.service, '_spawn_worker') as spawn_mock: spawn_mock.side_effect = exception.NoFreeConductorWorker() self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.POWER_ON) spawn_mock.assert_called_once_with( conductor_utils.node_power_action, mock.ANY, states.POWER_ON, timeout=None) self.assertFalse(mock_notif.called) 
@mock.patch('ironic.objects.node.NodeSetPowerStateNotification') def test_node_set_power_state_notif_no_state_change(self, mock_notif): # Test that correct notifications are sent when changing node power # state and no state change is necessary self.config(notification_level='info') self.config(host='my-host') # Required for exception handling mock_notif.__name__ = 'NodeSetPowerStateNotification' node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=states.POWER_OFF) self._start_service() self.service.change_node_power_state(self.context, node.uuid, states.POWER_OFF) # Give async worker a chance to finish self._stop_service() # 2 notifications should be sent: 1 .start and 1 .end self.assertEqual(2, mock_notif.call_count) self.assertEqual(2, mock_notif.return_value.emit.call_count) first_notif_args = mock_notif.call_args_list[0][1] second_notif_args = mock_notif.call_args_list[1][1] self.assertNotificationEqual(first_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.start', obj_fields.NotificationLevel.INFO) self.assertNotificationEqual(second_notif_args, 'ironic-conductor', CONF.host, 'baremetal.node.power_set.end', obj_fields.NotificationLevel.INFO) @mock.patch.object(fake.FakePower, 'get_supported_power_states', autospec=True) def test_change_node_power_state_unsupported_state(self, supported_mock): # Test change_node_power_state where unsupported power state raises # an exception initial_state = states.POWER_ON node = obj_utils.create_test_node(self.context, driver='fake-hardware', power_state=initial_state) self._start_service() supported_mock.return_value = [ states.POWER_ON, states.POWER_OFF, states.REBOOT] exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.change_node_power_state, self.context, node.uuid, states.SOFT_POWER_OFF) self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) node.refresh() supported_mock.assert_called_once_with(mock.ANY, mock.ANY) 
self.assertEqual(states.POWER_ON, node.power_state) self.assertIsNone(node.target_power_state) self.assertIsNone(node.last_error) @mgr_utils.mock_record_keepalive class CreateNodeTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test_create_node(self): node = obj_utils.get_test_node(self.context, driver='fake-hardware', extra={'test': 'one'}) res = self.service.create_node(self.context, node) self.assertEqual({'test': 'one'}, res['extra']) res = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertEqual({'test': 'one'}, res['extra']) @mock.patch.object(driver_factory, 'check_and_update_node_interfaces', autospec=True) def test_create_node_validation_fails(self, mock_validate): node = obj_utils.get_test_node(self.context, driver='fake-hardware', extra={'test': 'one'}) mock_validate.side_effect = exception.InterfaceNotFoundInEntrypoint( 'boom') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.create_node, self.context, node) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InterfaceNotFoundInEntrypoint, exc.exc_info[0]) self.assertRaises(exception.NotFound, objects.Node.get_by_uuid, self.context, node['uuid']) @mgr_utils.mock_record_keepalive class UpdateNodeTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test_update_node(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', extra={'test': 'one'}) # check that ManagerService.update_node actually updates the node node.extra = {'test': 'two'} res = self.service.update_node(self.context, node) self.assertEqual({'test': 'two'}, res['extra']) def test_update_node_maintenance_set_false(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', maintenance=True, fault='clean failure', maintenance_reason='reason') # check that ManagerService.update_node actually updates the node node.maintenance = False res = self.service.update_node(self.context, node) 
self.assertFalse(res['maintenance']) self.assertIsNone(res['maintenance_reason']) self.assertIsNone(res['fault']) def test_update_node_protected_set(self): for state in ('active', 'rescue'): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), provision_state=state) node.protected = True res = self.service.update_node(self.context, node) self.assertTrue(res['protected']) self.assertIsNone(res['protected_reason']) def test_update_node_protected_unset(self): # NOTE(dtantsur): we allow unsetting protected in any state to make # sure a node cannot get stuck in it. for state in ('active', 'rescue', 'rescue failed'): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid(), provision_state=state, protected=True, protected_reason='reason') # check that ManagerService.update_node actually updates the node node.protected = False res = self.service.update_node(self.context, node) self.assertFalse(res['protected']) self.assertIsNone(res['protected_reason']) def test_update_node_protected_invalid_state(self): node = obj_utils.create_test_node(self.context, provision_state='available') node.protected = True exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_node, self.context, node) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidState, exc.exc_info[0]) res = objects.Node.get_by_uuid(self.context, node['uuid']) self.assertFalse(res['protected']) self.assertIsNone(res['protected_reason']) def test_update_node_protected_reason_without_protected(self): node = obj_utils.create_test_node(self.context, provision_state='active') node.protected_reason = 'reason!' 
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

        res = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertFalse(res['protected'])
        self.assertIsNone(res['protected_reason'])

    def test_update_node_retired_set(self):
        for state in ('active', 'rescue', 'manageable'):
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid(),
                                              provision_state=state)
            node.retired = True
            res = self.service.update_node(self.context, node)
            self.assertTrue(res['retired'])
            self.assertIsNone(res['retired_reason'])

    def test_update_node_retired_invalid_state(self):
        # NOTE(arne_wiebalck): nodes in available cannot be 'retired'.
        # This is to ensure backwards compatibility.
        node = obj_utils.create_test_node(self.context,
                                          provision_state='available')

        node.retired = True
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidState, exc.exc_info[0])

        res = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertFalse(res['retired'])
        self.assertIsNone(res['retired_reason'])

    def test_update_node_retired_unset(self):
        for state in ('active', 'manageable', 'rescue', 'rescue failed'):
            node = obj_utils.create_test_node(self.context,
                                              uuid=uuidutils.generate_uuid(),
                                              provision_state=state,
                                              retired=True,
                                              retired_reason='EOL')

            # check that ManagerService.update_node actually updates the node
            node.retired = False
            res = self.service.update_node(self.context, node)
            self.assertFalse(res['retired'])
            self.assertIsNone(res['retired_reason'])

    def test_update_node_retired_reason_without_retired(self):
        node = obj_utils.create_test_node(self.context,
                                          provision_state='active')

        node.retired_reason = 'warranty expired'
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

        res = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertFalse(res['retired'])
        self.assertIsNone(res['retired_reason'])

    def test_update_node_already_locked(self):
        node = obj_utils.create_test_node(self.context, driver='fake-hardware',
                                          extra={'test': 'one'})

        # check that it fails if something else has locked it already
        with task_manager.acquire(self.context, node['id'], shared=False):
            node.extra = {'test': 'two'}
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.update_node,
                                    self.context, node)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.NodeLocked, exc.exc_info[0])

        # verify change did not happen
        res = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertEqual({'test': 'one'}, res['extra'])

    def test_update_node_already_associated(self):
        old_instance = uuidutils.generate_uuid()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          instance_uuid=old_instance)
        node.instance_uuid = uuidutils.generate_uuid()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeAssociated, exc.exc_info[0])

        # verify change did not happen
        res = objects.Node.get_by_uuid(self.context, node['uuid'])
        self.assertEqual(old_instance, res['instance_uuid'])

    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    def _test_associate_node(self, power_state, mock_get_power_state):
        mock_get_power_state.return_value = power_state
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          instance_uuid=None,
                                          power_state=states.NOSTATE)
        uuid1 = uuidutils.generate_uuid()
        uuid2 = uuidutils.generate_uuid()
        node.instance_uuid = uuid1
        self.service.update_node(self.context, node)

        # Check if the change was applied
        node.instance_uuid = uuid2
        node.refresh()
        self.assertEqual(uuid1, node.instance_uuid)

    def test_associate_node_powered_off(self):
        self._test_associate_node(states.POWER_OFF)

    def test_associate_node_powered_on(self):
        self._test_associate_node(states.POWER_ON)

    def test_update_node_invalid_driver(self):
        existing_driver = 'fake-hardware'
        wrong_driver = 'wrong-driver'
        node = obj_utils.create_test_node(self.context,
                                          driver=existing_driver,
                                          extra={'test': 'one'},
                                          instance_uuid=None)
        # check that it fails because driver not found
        node.driver = wrong_driver
        node.driver_info = {}
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        self.assertEqual(exception.DriverNotFound, exc.exc_info[0])

        # verify change did not happen
        node.refresh()
        self.assertEqual(existing_driver, node.driver)

    def test_update_node_from_invalid_driver(self):
        existing_driver = 'fake-hardware'
        wrong_driver = 'wrong-driver'
        node = obj_utils.create_test_node(self.context, driver=wrong_driver)
        node.driver = existing_driver
        result = self.service.update_node(self.context, node)
        self.assertEqual(existing_driver, result.driver)
        node.refresh()
        self.assertEqual(existing_driver, node.driver)

    UpdateInterfaces = namedtuple('UpdateInterfaces', ('old', 'new'))
    # NOTE(dtantsur): "old" interfaces here do not match the defaults, so that
    # we can test resetting them.
    IFACE_UPDATE_DICT = {
        'boot_interface': UpdateInterfaces('pxe', 'fake'),
        'console_interface': UpdateInterfaces('no-console', 'fake'),
        'deploy_interface': UpdateInterfaces('iscsi', 'fake'),
        'inspect_interface': UpdateInterfaces('no-inspect', 'fake'),
        'management_interface': UpdateInterfaces(None, 'fake'),
        'network_interface': UpdateInterfaces('noop', 'flat'),
        'power_interface': UpdateInterfaces(None, 'fake'),
        'raid_interface': UpdateInterfaces('no-raid', 'fake'),
        'rescue_interface': UpdateInterfaces('no-rescue', 'fake'),
        'storage_interface': UpdateInterfaces('fake', 'noop'),
    }

    def _create_node_with_interfaces(self, prov_state, maintenance=False):
        old_ifaces = {}
        for iface_name, ifaces in self.IFACE_UPDATE_DICT.items():
            old_ifaces[iface_name] = ifaces.old
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          uuid=uuidutils.generate_uuid(),
                                          provision_state=prov_state,
                                          maintenance=maintenance,
                                          **old_ifaces)
        return node

    def _test_update_node_interface_allowed(self, node, iface_name,
                                            new_iface):
        setattr(node, iface_name, new_iface)
        self.service.update_node(self.context, node)
        node.refresh()
        self.assertEqual(new_iface, getattr(node, iface_name))

    def _test_update_node_interface_in_allowed_state(self, prov_state,
                                                     maintenance=False):
        node = self._create_node_with_interfaces(prov_state,
                                                 maintenance=maintenance)
        for iface_name, ifaces in self.IFACE_UPDATE_DICT.items():
            self._test_update_node_interface_allowed(node, iface_name,
                                                     ifaces.new)
        node.destroy()

    def test_update_node_interface_in_allowed_state(self):
        for state in [states.ENROLL, states.MANAGEABLE, states.INSPECTING,
                      states.INSPECTWAIT, states.AVAILABLE]:
            self._test_update_node_interface_in_allowed_state(state)

    def test_update_node_interface_in_maintenance(self):
        self._test_update_node_interface_in_allowed_state(states.ACTIVE,
                                                          maintenance=True)

    def _test_update_node_interface_not_allowed(self, node, iface_name,
                                                new_iface):
        old_iface = getattr(node, iface_name)
        setattr(node, iface_name, new_iface)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        self.assertEqual(exception.InvalidState, exc.exc_info[0])
        node.refresh()
        self.assertEqual(old_iface, getattr(node, iface_name))

    def _test_update_node_interface_in_not_allowed_state(self, prov_state):
        node = self._create_node_with_interfaces(prov_state)
        for iface_name, ifaces in self.IFACE_UPDATE_DICT.items():
            self._test_update_node_interface_not_allowed(node, iface_name,
                                                         ifaces.new)
        node.destroy()

    def test_update_node_interface_in_not_allowed_state(self):
        for state in [states.ACTIVE, states.DELETING]:
            self._test_update_node_interface_in_not_allowed_state(state)

    def _test_update_node_interface_invalid(self, node, iface_name):
        old_iface = getattr(node, iface_name)
        setattr(node, iface_name, 'invalid')
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        self.assertEqual(exception.InterfaceNotFoundInEntrypoint,
                         exc.exc_info[0])
        node.refresh()
        self.assertEqual(old_iface, getattr(node, iface_name))

    def test_update_node_interface_invalid(self):
        node = self._create_node_with_interfaces(states.MANAGEABLE)
        for iface_name in self.IFACE_UPDATE_DICT:
            self._test_update_node_interface_invalid(node, iface_name)

    def test_update_node_with_reset_interfaces(self):
        # Modify only one interface at a time
        for iface_name, ifaces in self.IFACE_UPDATE_DICT.items():
            node = self._create_node_with_interfaces(states.AVAILABLE)
            setattr(node, iface_name, ifaces.new)
            # Updating a driver is mandatory for reset_interfaces to work
            node.driver = 'fake-hardware'
            self.service.update_node(self.context, node,
                                     reset_interfaces=True)
            node.refresh()
            self.assertEqual(ifaces.new, getattr(node, iface_name))
            # Other interfaces must be reset to their defaults
            for other_iface_name, ifaces in self.IFACE_UPDATE_DICT.items():
                if other_iface_name == iface_name:
                    continue
                # For this to work, the "old" interfaces in IFACE_UPDATE_DICT
                # must not match the defaults.
                self.assertNotEqual(ifaces.old,
                                    getattr(node, other_iface_name),
                                    "%s does not match the default after "
                                    "reset with setting %s: %s" %
                                    (other_iface_name, iface_name,
                                     getattr(node, other_iface_name)))

    def _test_update_node_change_resource_class(self, state,
                                                resource_class=None,
                                                new_resource_class='new',
                                                expect_error=False,
                                                maintenance=False):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          uuid=uuidutils.generate_uuid(),
                                          provision_state=state,
                                          resource_class=resource_class,
                                          maintenance=maintenance)
        self.addCleanup(node.destroy)

        node.resource_class = new_resource_class
        if expect_error:
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.update_node,
                                    self.context, node)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.InvalidState, exc.exc_info[0])

            expected_msg_regex = \
                (r'^Node {} can not have resource_class updated unless it is '
                 r'in one of allowed \(.*\) states.$').format(
                    re.escape(node.uuid))
            self.assertRegex(str(exc.exc_info[1]), expected_msg_regex)

            # verify change did not happen
            res = objects.Node.get_by_uuid(self.context, node['uuid'])
            self.assertEqual(resource_class, res['resource_class'])
        else:
            self.service.update_node(self.context, node)

            res = objects.Node.get_by_uuid(self.context, node['uuid'])
            self.assertEqual('new', res['resource_class'])

    def test_update_resource_class_allowed_state(self):
        for state in [states.ENROLL, states.MANAGEABLE, states.INSPECTING,
                      states.AVAILABLE]:
            self._test_update_node_change_resource_class(
                state, resource_class='old', expect_error=False)

    def test_update_resource_class_no_previous_value(self):
        for state in [states.ENROLL, states.MANAGEABLE, states.INSPECTING,
                      states.AVAILABLE, states.ACTIVE]:
            self._test_update_node_change_resource_class(
                state, resource_class=None, expect_error=False)

    def test_update_resource_class_not_allowed(self):
        self._test_update_node_change_resource_class(
            states.ACTIVE, resource_class='old', new_resource_class='new',
            expect_error=True)
        self._test_update_node_change_resource_class(
            states.ACTIVE, resource_class='old', new_resource_class=None,
            expect_error=True)
        self._test_update_node_change_resource_class(
            states.ACTIVE, resource_class='old', new_resource_class=None,
            expect_error=True, maintenance=True)

    def test_update_node_hardware_type(self):
        existing_hardware = 'fake-hardware'
        existing_interface = 'fake'
        new_hardware = 'manual-management'
        new_interface = 'pxe'
        node = obj_utils.create_test_node(self.context,
                                          driver=existing_hardware,
                                          boot_interface=existing_interface)
        node.driver = new_hardware
        node.boot_interface = new_interface
        self.service.update_node(self.context, node)
        node.refresh()
        self.assertEqual(new_hardware, node.driver)
        self.assertEqual(new_interface, node.boot_interface)

    def test_update_node_deleting_allocation(self):
        node = obj_utils.create_test_node(self.context)
        alloc = obj_utils.create_test_allocation(self.context)
        # Establish cross-linking between the node and the allocation
        alloc.node_id = node.id
        alloc.save()
        node.refresh()
        self.assertEqual(alloc.id, node.allocation_id)
        self.assertEqual(alloc.uuid, node.instance_uuid)

        node.instance_uuid = None
        res = self.service.update_node(self.context, node)
        self.assertRaises(exception.AllocationNotFound,
                          objects.Allocation.get_by_id,
                          self.context, alloc.id)
        self.assertIsNone(res['instance_uuid'])
        self.assertIsNone(res['allocation_id'])

        node.refresh()
        self.assertIsNone(node.instance_uuid)
        self.assertIsNone(node.allocation_id)

    def test_update_node_deleting_allocation_forbidden(self):
        node = obj_utils.create_test_node(self.context,
                                          provision_state='active',
                                          maintenance=False)
        alloc = obj_utils.create_test_allocation(self.context)
        # Establish cross-linking between the node and the allocation
        alloc.node_id = node.id
        alloc.save()
        node.refresh()
        self.assertEqual(alloc.id, node.allocation_id)
        self.assertEqual(alloc.uuid, node.instance_uuid)

        node.instance_uuid = None
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_node,
                                self.context, node)
        self.assertEqual(exception.InvalidState, exc.exc_info[0])

        node.refresh()
        self.assertEqual(alloc.id, node.allocation_id)
        self.assertEqual(alloc.uuid, node.instance_uuid)

    def test_update_node_deleting_allocation_in_maintenance(self):
        node = obj_utils.create_test_node(self.context,
                                          provision_state='active',
                                          maintenance=True)
        alloc = obj_utils.create_test_allocation(self.context)
        # Establish cross-linking between the node and the allocation
        alloc.node_id = node.id
        alloc.save()
        node.refresh()
        self.assertEqual(alloc.id, node.allocation_id)
        self.assertEqual(alloc.uuid, node.instance_uuid)

        node.instance_uuid = None
        res = self.service.update_node(self.context, node)
        self.assertRaises(exception.AllocationNotFound,
                          objects.Allocation.get_by_id,
                          self.context, alloc.id)
        self.assertIsNone(res['instance_uuid'])
        self.assertIsNone(res['allocation_id'])

        node.refresh()
        self.assertIsNone(node.instance_uuid)
        self.assertIsNone(node.allocation_id)


@mgr_utils.mock_record_keepalive
class VendorPassthruTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):

    @mock.patch.object(task_manager.TaskManager, 'upgrade_lock')
    @mock.patch.object(task_manager.TaskManager, 'spawn_after')
    def test_vendor_passthru_async(self, mock_spawn, mock_upgrade):
        node = obj_utils.create_test_node(self.context,
                                          vendor_interface='fake')
        info = {'bar': 'baz'}
        self._start_service()

        response = self.service.vendor_passthru(self.context, node.uuid,
                                                'second_method', 'POST',
                                                info)
        # Waiting to make sure the below assertions are valid.
        self._stop_service()

        # Assert spawn_after was called
        self.assertTrue(mock_spawn.called)
        self.assertIsNone(response['return'])
        self.assertTrue(response['async'])

        # Assert lock was upgraded to an exclusive one
        self.assertEqual(1, mock_upgrade.call_count)

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch.object(task_manager.TaskManager, 'upgrade_lock')
    @mock.patch.object(task_manager.TaskManager, 'spawn_after')
    def test_vendor_passthru_sync(self, mock_spawn, mock_upgrade):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        info = {'bar': 'meow'}
        self._start_service()

        response = self.service.vendor_passthru(self.context, node.uuid,
                                                'third_method_sync', 'POST',
                                                info)
        # Waiting to make sure the below assertions are valid.
        self._stop_service()

        # Assert no workers were used
        self.assertFalse(mock_spawn.called)
        self.assertTrue(response['return'])
        self.assertFalse(response['async'])

        # Assert lock was upgraded to an exclusive one
        self.assertEqual(1, mock_upgrade.call_count)

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch.object(task_manager.TaskManager, 'upgrade_lock')
    @mock.patch.object(task_manager.TaskManager, 'spawn_after')
    def test_vendor_passthru_shared_lock(self, mock_spawn, mock_upgrade):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        info = {'bar': 'woof'}
        self._start_service()

        response = self.service.vendor_passthru(self.context, node.uuid,
                                                'fourth_method_shared_lock',
                                                'POST', info)
        # Waiting to make sure the below assertions are valid.
        self._stop_service()

        # Assert spawn_after was called
        self.assertTrue(mock_spawn.called)
        self.assertIsNone(response['return'])
        self.assertTrue(response['async'])

        # Assert lock was never upgraded to an exclusive one
        self.assertFalse(mock_upgrade.called)

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify there's no reservation on the node
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_http_method_not_supported(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        self._start_service()

        # GET not supported by first_method
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'second_method', 'GET', {})
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_node_already_locked(self):
        fake_reservation = 'test_reserv'
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          reservation=fake_reservation)
        info = {'bar': 'baz'}
        self._start_service()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru, self.context,
                                node.uuid, 'second_method', 'POST', info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify the existing reservation is not broken.
        self.assertEqual(fake_reservation, node.reservation)

    def test_vendor_passthru_unsupported_method(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        info = {'bar': 'baz'}
        self._start_service()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'unsupported_method', 'POST', info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_missing_method_parameters(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        info = {'invalid_param': 'whatever'}
        self._start_service()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.vendor_passthru,
                                self.context, node.uuid,
                                'second_method', 'POST', info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.MissingParameterValue, exc.exc_info[0])

        node.refresh()
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def test_vendor_passthru_worker_pool_full(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        info = {'bar': 'baz'}
        self._start_service()

        with mock.patch.object(self.service,
                               '_spawn_worker') as spawn_mock:
            spawn_mock.side_effect = exception.NoFreeConductorWorker()

            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.vendor_passthru,
                                    self.context, node.uuid,
                                    'second_method', 'POST', info)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.NoFreeConductorWorker,
                             exc.exc_info[0])

            # Waiting to make sure the below assertions are valid.
            self._stop_service()

            node.refresh()
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)

    @mock.patch.object(driver_factory, 'get_interface', autospec=True)
    def test_get_node_vendor_passthru_methods(self, mock_iface):
        fake_routes = {'test_method': {'async': True,
                                       'description': 'foo',
                                       'http_methods': ['POST'],
                                       'func': None}}
        mock_iface.return_value.vendor_routes = fake_routes
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        self._start_service()

        data = self.service.get_node_vendor_passthru_methods(self.context,
                                                             node.uuid)
        # The function reference should not be returned
        del fake_routes['test_method']['func']
        self.assertEqual(fake_routes, data)

    @mock.patch.object(driver_factory, 'get_interface')
    @mock.patch.object(manager.ConductorManager, '_spawn_worker')
    def test_driver_vendor_passthru_sync(self, mock_spawn, mock_get_if):
        expected = {'foo': 'bar'}
        vendor_mock = mock.Mock(spec=drivers_base.VendorInterface)
        mock_get_if.return_value = vendor_mock
        driver_name = 'fake-hardware'
        test_method = mock.MagicMock(return_value=expected)
        vendor_mock.driver_routes = {
            'test_method': {'func': test_method,
                            'async': False,
                            'attach': False,
                            'http_methods': ['POST']}}
        self.service.init_host()
        # init_host() called _spawn_worker because of the heartbeat
        mock_spawn.reset_mock()
        # init_host() called get_interface during driver loading
        mock_get_if.reset_mock()

        vendor_args = {'test': 'arg'}
        response = self.service.driver_vendor_passthru(
            self.context, driver_name, 'test_method', 'POST', vendor_args)

        # Assert that the vendor interface has no custom
        # driver_vendor_passthru()
        self.assertFalse(hasattr(vendor_mock, 'driver_vendor_passthru'))
        self.assertEqual(expected, response['return'])
        self.assertFalse(response['async'])
        test_method.assert_called_once_with(self.context, **vendor_args)
        # No worker was spawned
        self.assertFalse(mock_spawn.called)
        mock_get_if.assert_called_once_with(mock.ANY, 'vendor', 'fake')

    @mock.patch.object(driver_factory, 'get_interface', autospec=True)
    @mock.patch.object(manager.ConductorManager, '_spawn_worker',
                       autospec=True)
    def test_driver_vendor_passthru_async(self, mock_spawn, mock_iface):
        test_method = mock.MagicMock()
        mock_iface.return_value.driver_routes = {
            'test_sync_method': {'func': test_method,
                                 'async': True,
                                 'attach': False,
                                 'http_methods': ['POST']}}
        self.service.init_host()
        # init_host() called _spawn_worker because of the heartbeat
        mock_spawn.reset_mock()

        vendor_args = {'test': 'arg'}
        response = self.service.driver_vendor_passthru(
            self.context, 'fake-hardware', 'test_sync_method', 'POST',
            vendor_args)

        self.assertIsNone(response['return'])
        self.assertTrue(response['async'])
        mock_spawn.assert_called_once_with(self.service, test_method,
                                           self.context, **vendor_args)

    @mock.patch.object(driver_factory, 'get_interface', autospec=True)
    def test_driver_vendor_passthru_http_method_not_supported(self,
                                                              mock_iface):
        mock_iface.return_value.driver_routes = {
            'test_method': {'func': mock.MagicMock(),
                            'async': True,
                            'http_methods': ['POST']}}
        self.service.init_host()
        # GET not supported by test_method
        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake-hardware',
                                'test_method', 'GET', {})
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

    def test_driver_vendor_passthru_method_not_supported(self):
        # Test for when the vendor interface is set, but hasn't passed a
        # driver_passthru_mapping to MixinVendorInterface
        self.service.init_host()
        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake-hardware',
                                'test_method', 'POST', {})
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

    def test_driver_vendor_passthru_driver_not_found(self):
        self.service.init_host()
        self.assertRaises(messaging.ExpectedException,
                          self.service.driver_vendor_passthru,
                          self.context, 'does_not_exist',
                          'test_method', 'POST', {})

    @mock.patch.object(driver_factory, 'default_interface', autospec=True)
    def test_driver_vendor_passthru_no_default_interface(self,
                                                         mock_def_iface):
        self.service.init_host()
        # NOTE(rloo): service.init_host() will call
        # driver_factory.default_interface() and we want these to
        # succeed, so we set the side effect *after* that call.
        mock_def_iface.reset_mock()
        mock_def_iface.side_effect = exception.NoValidDefaultForInterface('no')
        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake-hardware',
                                'test_method', 'POST', {})
        mock_def_iface.assert_called_once_with(mock.ANY, 'vendor',
                                               driver_name='fake-hardware')
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoValidDefaultForInterface,
                         exc.exc_info[0])

    @mock.patch.object(driver_factory, 'get_interface', autospec=True)
    def test_get_driver_vendor_passthru_methods(self, mock_get_if):
        vendor_mock = mock.Mock(spec=drivers_base.VendorInterface)
        mock_get_if.return_value = vendor_mock
        driver_name = 'fake-hardware'
        fake_routes = {'test_method': {'async': True,
                                       'description': 'foo',
                                       'http_methods': ['POST'],
                                       'func': None}}
        vendor_mock.driver_routes = fake_routes
        self.service.init_host()

        # init_host() will call get_interface
        mock_get_if.reset_mock()

        data = self.service.get_driver_vendor_passthru_methods(self.context,
                                                               driver_name)
        # The function reference should not be returned
        del fake_routes['test_method']['func']
        self.assertEqual(fake_routes, data)

        mock_get_if.assert_called_once_with(mock.ANY, 'vendor', 'fake')

    @mock.patch.object(driver_factory, 'default_interface', autospec=True)
    def test_get_driver_vendor_passthru_methods_no_default_interface(
            self, mock_def_iface):
        self.service.init_host()
        # NOTE(rloo): service.init_host() will call
        # driver_factory.default_interface() and we want these to
        # succeed, so we set the side effect *after* that call.
        mock_def_iface.reset_mock()
        mock_def_iface.side_effect = exception.NoValidDefaultForInterface('no')
        exc = self.assertRaises(
            messaging.rpc.ExpectedException,
            self.service.get_driver_vendor_passthru_methods,
            self.context, 'fake-hardware')
        mock_def_iface.assert_called_once_with(mock.ANY, 'vendor',
                                               driver_name='fake-hardware')
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoValidDefaultForInterface,
                         exc.exc_info[0])

    @mock.patch.object(driver_factory, 'get_interface', autospec=True)
    def test_driver_vendor_passthru_validation_failed(self, mock_iface):
        mock_iface.return_value.driver_validate.side_effect = (
            exception.MissingParameterValue('error'))
        test_method = mock.Mock()
        mock_iface.return_value.driver_routes = {
            'test_method': {'func': test_method,
                            'async': False,
                            'http_methods': ['POST']}}
        self.service.init_host()
        exc = self.assertRaises(messaging.ExpectedException,
                                self.service.driver_vendor_passthru,
                                self.context, 'fake-hardware',
                                'test_method', 'POST', {})
        self.assertEqual(exception.MissingParameterValue,
                         exc.exc_info[0])
        self.assertFalse(test_method.called)


@mgr_utils.mock_record_keepalive
@mock.patch.object(images, 'is_whole_disk_image')
class ServiceDoNodeDeployTestCase(mgr_utils.ServiceSetUpMixin,
                                  db_base.DbTestCase):
    def test_do_node_deploy_invalid_state(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # test that node deploy fails if the node is already provisioned
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node['uuid'])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])
        # This is a sync operation last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        self.assertFalse(mock_iwdi.called)
        self.assertNotIn('is_whole_disk_image', node.driver_internal_info)

    def test_do_node_deploy_maintenance(self, mock_iwdi):
        mock_iwdi.return_value = False
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          maintenance=True)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node['uuid'])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeInMaintenance, exc.exc_info[0])
        # This is a sync operation last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        self.assertFalse(mock_iwdi.called)

    def _test_do_node_deploy_validate_fail(self, mock_validate, mock_iwdi):
        mock_iwdi.return_value = False
        # InvalidParameterValue should be re-raised as InstanceDeployFailure
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InstanceDeployFailure, exc.exc_info[0])
        self.assertEqual(exc.exc_info[1].code, 400)
        # Check the message of InstanceDeployFailure. In a
        # messaging.rpc.ExpectedException sys.exc_info() is stored in exc_info
        # in the exception object. So InstanceDeployFailure will be in
        # exc_info[1]
        self.assertIn(r'node 1be26c0b-03f2-4d2e-ae87-c02d7f33c123',
                      str(exc.exc_info[1]))
        # This is a sync operation last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)
        self.assertNotIn('is_whole_disk_image', node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.validate')
    def test_do_node_deploy_validate_fail(self, mock_validate, mock_iwdi):
        self._test_do_node_deploy_validate_fail(mock_validate, mock_iwdi)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_deploy_power_validate_fail(self, mock_validate,
                                                mock_iwdi):
        self._test_do_node_deploy_validate_fail(mock_validate, mock_iwdi)

    @mock.patch.object(conductor_utils, 'validate_instance_info_traits')
    def test_do_node_deploy_traits_validate_fail(self, mock_validate,
                                                 mock_iwdi):
        self._test_do_node_deploy_validate_fail(mock_validate, mock_iwdi)

    @mock.patch.object(conductor_steps, 'validate_deploy_templates')
    def test_do_node_deploy_validate_template_fail(self, mock_validate,
                                                   mock_iwdi):
        self._test_do_node_deploy_validate_fail(mock_validate, mock_iwdi)

    def test_do_node_deploy_partial_ok(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        thread = self.service._spawn_worker(lambda: None)
        with mock.patch.object(self.service, '_spawn_worker',
                               autospec=True) as mock_spawn:
            mock_spawn.return_value = thread

            node = obj_utils.create_test_node(
                self.context, driver='fake-hardware',
                provision_state=states.AVAILABLE,
                driver_internal_info={'agent_url': 'url'})

            self.service.do_node_deploy(self.context, node.uuid)
            self._stop_service()
            node.refresh()
            self.assertEqual(states.DEPLOYING, node.provision_state)
            self.assertEqual(states.ACTIVE, node.target_provision_state)
            # This is a sync operation last_error should be None.
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_spawn.assert_called_once_with(mock.ANY, mock.ANY,
                                               mock.ANY, None)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(
                node.driver_internal_info['is_whole_disk_image'])
            self.assertNotIn('agent_url', node.driver_internal_info)

    def test_do_node_deploy_rebuild_active_state_error(self, mock_iwdi):
        # Tests manager.do_node_deploy() & deployments.do_next_deploy_step(),
        # when getting an unexpected state returned from a deploy_step.
        mock_iwdi.return_value = True
        self._start_service()
        # NOTE(rloo): We have to mock this here as opposed to using a
        # decorator. With a decorator, when initialization is done, the
        # mocked deploy() method isn't considered a deploy step. So we defer
        # mock'ing until after the init is done.
        with mock.patch.object(fake.FakeDeploy,
                               'deploy', autospec=True) as mock_deploy:
            mock_deploy.return_value = states.DEPLOYING
            node = obj_utils.create_test_node(
                self.context, driver='fake-hardware',
                provision_state=states.ACTIVE,
                target_provision_state=states.NOSTATE,
                instance_info={'image_source': uuidutils.generate_uuid(),
                               'kernel': 'aaaa', 'ramdisk': 'bbbb'},
                driver_internal_info={'is_whole_disk_image': False})

            self.service.do_node_deploy(self.context, node.uuid,
                                        rebuild=True)
            self._stop_service()
            node.refresh()
            self.assertEqual(states.DEPLOYFAIL, node.provision_state)
            self.assertEqual(states.ACTIVE, node.target_provision_state)
            self.assertIsNotNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_deploy.assert_called_once_with(mock.ANY, mock.ANY)
            # Verify instance_info values have been cleared.
            self.assertNotIn('kernel', node.instance_info)
            self.assertNotIn('ramdisk', node.instance_info)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            # Verify is_whole_disk_image reflects correct value on rebuild.
            self.assertTrue(node.driver_internal_info['is_whole_disk_image'])
            self.assertIsNone(node.driver_internal_info['deploy_steps'])

    def test_do_node_deploy_rebuild_active_state_waiting(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # NOTE(rloo): We have to mock this here as opposed to using a
        # decorator. With a decorator, when initialization is done, the
        # mocked deploy() method isn't considered a deploy step. So we defer
        # mock'ing until after the init is done.
        with mock.patch.object(fake.FakeDeploy, 'deploy',
                               autospec=True) as mock_deploy:
            mock_deploy.return_value = states.DEPLOYWAIT
            node = obj_utils.create_test_node(
                self.context, driver='fake-hardware',
                provision_state=states.ACTIVE,
                target_provision_state=states.NOSTATE,
                instance_info={'image_source': uuidutils.generate_uuid()})

            self.service.do_node_deploy(self.context, node.uuid,
                                        rebuild=True)
            self._stop_service()
            node.refresh()
            self.assertEqual(states.DEPLOYWAIT, node.provision_state)
            self.assertEqual(states.ACTIVE, node.target_provision_state)
            # last_error should be None.
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_deploy.assert_called_once_with(mock.ANY, mock.ANY)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(
                node.driver_internal_info['is_whole_disk_image'])
            self.assertEqual(
                1, len(node.driver_internal_info['deploy_steps']))

    def test_do_node_deploy_rebuild_active_state_done(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # NOTE(rloo): We have to mock this here as opposed to using a
        # decorator. With a decorator, when initialization is done, the
        # mocked deploy() method isn't considered a deploy step. So we defer
        # mock'ing until after the init is done.
        with mock.patch.object(fake.FakeDeploy, 'deploy',
                               autospec=True) as mock_deploy:
            mock_deploy.return_value = None
            node = obj_utils.create_test_node(
                self.context, driver='fake-hardware',
                provision_state=states.ACTIVE,
                target_provision_state=states.NOSTATE)

            self.service.do_node_deploy(self.context, node.uuid,
                                        rebuild=True)
            self._stop_service()
            node.refresh()
            self.assertEqual(states.ACTIVE, node.provision_state)
            self.assertEqual(states.NOSTATE, node.target_provision_state)
            # last_error should be None.
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_deploy.assert_called_once_with(mock.ANY, mock.ANY)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(
                node.driver_internal_info['is_whole_disk_image'])
            self.assertIsNone(node.driver_internal_info['deploy_steps'])

    def test_do_node_deploy_rebuild_deployfail_state(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # NOTE(rloo): We have to mock this here as opposed to using a
        # decorator. With a decorator, when initialization is done, the
        # mocked deploy() method isn't considered a deploy step. So we defer
        # mock'ing until after the init is done.
        with mock.patch.object(fake.FakeDeploy, 'deploy',
                               autospec=True) as mock_deploy:
            mock_deploy.return_value = None
            node = obj_utils.create_test_node(
                self.context, driver='fake-hardware',
                provision_state=states.DEPLOYFAIL,
                target_provision_state=states.NOSTATE)

            self.service.do_node_deploy(self.context, node.uuid,
                                        rebuild=True)
            self._stop_service()
            node.refresh()
            self.assertEqual(states.ACTIVE, node.provision_state)
            self.assertEqual(states.NOSTATE, node.target_provision_state)
            # last_error should be None.
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_deploy.assert_called_once_with(mock.ANY, mock.ANY)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(
                node.driver_internal_info['is_whole_disk_image'])
            self.assertIsNone(node.driver_internal_info['deploy_steps'])

    def test_do_node_deploy_rebuild_error_state(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # NOTE(rloo): We have to mock this here as opposed to using a
        # decorator. With a decorator, when initialization is done, the
        # mocked deploy() method isn't considered a deploy step. So we defer
        # mock'ing until after the init is done.
        with mock.patch.object(fake.FakeDeploy, 'deploy',
                               autospec=True) as mock_deploy:
            mock_deploy.return_value = None
            node = obj_utils.create_test_node(
                self.context, driver='fake-hardware',
                provision_state=states.ERROR,
                target_provision_state=states.NOSTATE)

            self.service.do_node_deploy(self.context, node.uuid,
                                        rebuild=True)
            self._stop_service()
            node.refresh()
            self.assertEqual(states.ACTIVE, node.provision_state)
            self.assertEqual(states.NOSTATE, node.target_provision_state)
            # last_error should be None.
            self.assertIsNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_deploy.assert_called_once_with(mock.ANY, mock.ANY)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(
                node.driver_internal_info['is_whole_disk_image'])
            self.assertIsNone(node.driver_internal_info['deploy_steps'])

    def test_do_node_deploy_rebuild_from_available_state(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        # test node will not rebuild if state is AVAILABLE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.AVAILABLE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node['uuid'], rebuild=True)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])
        # Last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        self.assertFalse(mock_iwdi.called)
        self.assertNotIn('is_whole_disk_image', node.driver_internal_info)

    def test_do_node_deploy_rebuild_protected(self, mock_iwdi):
        mock_iwdi.return_value = False
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.ACTIVE,
                                          protected=True)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_deploy,
                                self.context, node['uuid'], rebuild=True)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeProtected, exc.exc_info[0])
        # Last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)
        self.assertFalse(mock_iwdi.called)

    def test_do_node_deploy_worker_pool_full(self, mock_iwdi):
        mock_iwdi.return_value = False
        prv_state = states.AVAILABLE
        tgt_prv_state = states.NOSTATE
        node = obj_utils.create_test_node(self.context,
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None,
                                          driver='fake-hardware')
        self._start_service()

        with mock.patch.object(self.service, '_spawn_worker',
                               autospec=True) as mock_spawn:
            mock_spawn.side_effect = exception.NoFreeConductorWorker()

            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.do_node_deploy,
                                    self.context, node.uuid)
            # Compare true exception hidden by @messaging.expected_exceptions
            self.assertEqual(exception.NoFreeConductorWorker,
                             exc.exc_info[0])
            self._stop_service()
            node.refresh()
            # Make sure things were rolled back
            self.assertEqual(prv_state, node.provision_state)
            self.assertEqual(tgt_prv_state, node.target_provision_state)
            self.assertIsNotNone(node.last_error)
            # Verify reservation has been cleared.
            self.assertIsNone(node.reservation)
            mock_iwdi.assert_called_once_with(self.context,
                                              node.instance_info)
            self.assertFalse(node.driver_internal_info['is_whole_disk_image'])


@mgr_utils.mock_record_keepalive
class ContinueNodeDeployTestCase(mgr_utils.ServiceSetUpMixin,
                                 db_base.DbTestCase):
    def setUp(self):
        super(ContinueNodeDeployTestCase, self).setUp()
        self.deploy_start = {
            'step': 'deploy_start', 'priority': 50, 'interface': 'deploy'}
        self.deploy_end = {
            'step': 'deploy_end', 'priority': 20, 'interface': 'deploy'}
        self.in_band_step = {
            'step': 'deploy_middle', 'priority': 30, 'interface': 'deploy'}
        self.deploy_steps = [self.deploy_start, self.deploy_end]

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_deploy_worker_pool_full(self, mock_spawn):
        # Test the appropriate exception is raised if the worker pool is full
        prv_state = states.DEPLOYWAIT
        tgt_prv_state = states.ACTIVE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None)
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        self.assertRaises(exception.NoFreeConductorWorker,
                          self.service.continue_node_deploy,
                          self.context, node.uuid)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_deploy_wrong_state(self, mock_spawn):
        # Test the appropriate exception is raised if node isn't already
        # in DEPLOYWAIT state
        prv_state = states.DEPLOYFAIL
        tgt_prv_state = states.ACTIVE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None)
        self._start_service()

        self.assertRaises(exception.InvalidStateRequested,
                          self.service.continue_node_deploy,
                          self.context, node.uuid)

        self._stop_service()
        node.refresh()
        # Make sure node wasn't modified
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_deploy(self, mock_spawn):
        # test a node can continue deploying via RPC
        prv_state = states.DEPLOYWAIT
        tgt_prv_state = states.ACTIVE
        driver_info = {'deploy_steps': self.deploy_steps,
                       'deploy_step_index': 0,
                       'steps_validated': True}
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None,
                                          driver_internal_info=driver_info,
                                          deploy_step=self.deploy_steps[0])
        self._start_service()
        self.service.continue_node_deploy(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.DEPLOYING, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        mock_spawn.assert_called_with(mock.ANY,
                                      deployments.do_next_deploy_step,
                                      mock.ANY, 1, mock.ANY)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_deploy_steps',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_deploy_first_agent_boot(self, mock_spawn,
                                                   mock_get_steps):
        new_steps = [self.deploy_start, self.in_band_step, self.deploy_end]
        mock_get_steps.return_value = new_steps
        prv_state = states.DEPLOYWAIT
        tgt_prv_state = states.ACTIVE
        driver_info = {'deploy_steps': self.deploy_steps,
                       'deploy_step_index': 0}
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None,
                                          driver_internal_info=driver_info,
                                          deploy_step=self.deploy_steps[0])
        self._start_service()
        self.service.continue_node_deploy(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.DEPLOYING, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        self.assertTrue(node.driver_internal_info['steps_validated'])
        self.assertEqual(new_steps, node.driver_internal_info['deploy_steps'])
        mock_spawn.assert_called_with(mock.ANY,
                                      deployments.do_next_deploy_step,
                                      mock.ANY, 1, mock.ANY)

    @mock.patch.object(task_manager.TaskManager, 'process_event',
                       autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_deploy_deprecated(self, mock_spawn, mock_event):
        # TODO(rloo): delete this when we remove support for handling
        # deploy steps; node will always be in DEPLOYWAIT then.
        # test a node can continue deploying via RPC
        prv_state = states.DEPLOYING
        tgt_prv_state = states.ACTIVE
        driver_info = {'deploy_steps': self.deploy_steps,
                       'deploy_step_index': 0,
                       'steps_validated': True}
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None,
                                          driver_internal_info=driver_info,
                                          deploy_step=self.deploy_steps[0])
        self.service.continue_node_deploy(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.DEPLOYING, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        mock_spawn.assert_called_with(mock.ANY,
                                      deployments.do_next_deploy_step,
                                      mock.ANY, 1, mock.ANY)
        self.assertFalse(mock_event.called)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def _continue_node_deploy_skip_step(self, mock_spawn, skip=True):
        # test that skipping current step mechanism works
        driver_info = {'deploy_steps': self.deploy_steps,
                       'deploy_step_index': 0,
                       'steps_validated': True}
        if not skip:
            driver_info['skip_current_deploy_step'] = skip
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYWAIT,
            target_provision_state=states.MANAGEABLE,
            driver_internal_info=driver_info,
            deploy_step=self.deploy_steps[0])
        self._start_service()
        self.service.continue_node_deploy(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        if skip:
            expected_step_index = 1
        else:
            self.assertNotIn(
                'skip_current_deploy_step', node.driver_internal_info)
            expected_step_index = 0
        mock_spawn.assert_called_with(mock.ANY,
                                      deployments.do_next_deploy_step,
                                      mock.ANY, expected_step_index, mock.ANY)

    def test_continue_node_deploy_skip_step(self):
        self._continue_node_deploy_skip_step()

    def test_continue_node_deploy_no_skip_step(self):
        self._continue_node_deploy_skip_step(skip=False)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_deploy_polling(self, mock_spawn):
        # test that deployment_polling flag is cleared
        driver_info = {'deploy_steps': self.deploy_steps,
                       'deploy_step_index': 0,
                       'deployment_polling': True,
                       'steps_validated': True}
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYWAIT,
            target_provision_state=states.MANAGEABLE,
            driver_internal_info=driver_info,
            deploy_step=self.deploy_steps[0])
        self._start_service()
        self.service.continue_node_deploy(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertNotIn('deployment_polling', node.driver_internal_info)
        mock_spawn.assert_called_with(mock.ANY,
                                      deployments.do_next_deploy_step,
                                      mock.ANY, 1, mock.ANY)


@mgr_utils.mock_record_keepalive
class CheckTimeoutsTestCase(mgr_utils.ServiceSetUpMixin,
                            db_base.DbTestCase):
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.clean_up')
    def test__check_deploy_timeouts(self, mock_cleanup):
        self._start_service()
        CONF.set_override('deploy_callback_timeout', 1, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYWAIT,
            target_provision_state=states.ACTIVE,
            provision_updated_at=datetime.datetime(2000, 1, 1, 0, 0))

        self.service._check_deploy_timeouts(self.context)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.DEPLOYFAIL,
                         node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        mock_cleanup.assert_called_once_with(mock.ANY)

    def _check_cleanwait_timeouts(self, manual=False):
        self._start_service()
        CONF.set_override('clean_callback_timeout', 1, group='conductor')
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state,
            provision_updated_at=datetime.datetime(2000, 1, 1, 0, 0),
            clean_step={
                'interface': 'deploy',
                'step': 'erase_devices'},
            driver_internal_info={
                'cleaning_reboot': manual,
                'clean_step_index': 0})

        self.service._check_cleanwait_timeouts(self.context)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Test that cleaning parameters have been purged in order
        # to prevent looping of the cleaning sequence
        self.assertEqual({}, node.clean_step)
        self.assertNotIn('clean_step_index', node.driver_internal_info)
        self.assertNotIn('cleaning_reboot', node.driver_internal_info)

    def test__check_cleanwait_timeouts_automated_clean(self):
        self._check_cleanwait_timeouts()

    def test__check_cleanwait_timeouts_manual_clean(self):
        self._check_cleanwait_timeouts(manual=True)

    @mock.patch('ironic.drivers.modules.fake.FakeRescue.clean_up')
    @mock.patch.object(conductor_utils, 'node_power_action')
    def test_check_rescuewait_timeouts(self, node_power_mock,
                                       mock_clean_up):
        self._start_service()
        CONF.set_override('rescue_callback_timeout', 1, group='conductor')
        tgt_prov_state = states.RESCUE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            rescue_interface='fake',
            network_interface='flat',
            provision_state=states.RESCUEWAIT,
            target_provision_state=tgt_prov_state,
            provision_updated_at=datetime.datetime(2000, 1, 1, 0, 0))
        self.service._check_rescuewait_timeouts(self.context)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.RESCUEFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        self.assertIn('Timeout reached while waiting for rescue ramdisk',
                      node.last_error)
        mock_clean_up.assert_called_once_with(mock.ANY)
        node_power_mock.assert_called_once_with(mock.ANY, states.POWER_OFF)


@mgr_utils.mock_record_keepalive
class DoNodeTearDownTestCase(mgr_utils.ServiceSetUpMixin,
                             db_base.DbTestCase):
    def test_do_node_tear_down_invalid_state(self):
        self._start_service()
        # test node.provision_state is incorrect for tear_down
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.AVAILABLE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_tear_down,
                                self.context, node['uuid'])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])

    def test_do_node_tear_down_protected(self):
        self._start_service()
        # test node.provision_state is incorrect for tear_down
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.ACTIVE,
                                          protected=True)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_tear_down,
                                self.context, node['uuid'])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeProtected, exc.exc_info[0])

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_tear_down_validate_fail(self, mock_validate):
        # InvalidParameterValue should be re-raised as InstanceDeployFailure
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_tear_down,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InstanceDeployFailure, exc.exc_info[0])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.tear_down')
    def test_do_node_tear_down_driver_raises_error(self, mock_tear_down):
        # test when driver.deploy.tear_down raises exception
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DELETING,
            target_provision_state=states.AVAILABLE,
            instance_info={'foo': 'bar'},
            driver_internal_info={'is_whole_disk_image': False})

        task = task_manager.TaskManager(self.context, node.uuid)
        self._start_service()
        mock_tear_down.side_effect = exception.InstanceDeployFailure('test')
        self.assertRaises(exception.InstanceDeployFailure,
                          self.service._do_node_tear_down, task,
                          node.provision_state)
        node.refresh()
        self.assertEqual(states.ERROR, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Assert instance_info was erased
        self.assertEqual({}, node.instance_info)
        mock_tear_down.assert_called_once_with(task)

    @mock.patch('ironic.drivers.modules.fake.FakeConsole.stop_console')
    def test_do_node_tear_down_console_raises_error(self, mock_console):
        # test when _set_console_mode raises exception
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DELETING,
            target_provision_state=states.AVAILABLE,
            instance_info={'foo': 'bar'},
            console_enabled=True,
            driver_internal_info={'is_whole_disk_image': False})

        task = task_manager.TaskManager(self.context, node.uuid)
        self._start_service()
        mock_console.side_effect = exception.ConsoleError('test')
        self.assertRaises(exception.ConsoleError,
                          self.service._do_node_tear_down, task,
                          node.provision_state)
        node.refresh()
        self.assertEqual(states.ERROR, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Assert instance_info was erased
        self.assertEqual({}, node.instance_info)
        mock_console.assert_called_once_with(task)

    # TODO(TheJulia): Since we're functionally bound to neutron support
    # by default, the fake drivers still invoke neutron.
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.stop_console')
    @mock.patch('ironic.common.neutron.unbind_neutron_port')
    @mock.patch('ironic.conductor.cleaning.do_node_clean')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.tear_down')
    def _test__do_node_tear_down_ok(self, mock_tear_down, mock_clean,
                                    mock_unbind, mock_console,
                                    enabled_console=False,
                                    with_allocation=False):
        # test when driver.deploy.tear_down succeeds
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DELETING,
            target_provision_state=states.AVAILABLE,
            instance_uuid=(uuidutils.generate_uuid()
                           if not with_allocation else None),
            instance_info={'foo': 'bar'},
            console_enabled=enabled_console,
            driver_internal_info={'is_whole_disk_image': False,
                                  'clean_steps': {},
                                  'root_uuid_or_disk_id': 'foo',
                                  'instance': {'ephemeral_gb': 10}})
        port = obj_utils.create_test_port(
            self.context, node_id=node.id,
            internal_info={'tenant_vif_port_id': 'foo'})

        if with_allocation:
            alloc = obj_utils.create_test_allocation(self.context)
            # Establish cross-linking between the node and the allocation
            alloc.node_id = node.id
            alloc.save()
            node.refresh()

        task = task_manager.TaskManager(self.context, node.uuid)
        self._start_service()
        self.service._do_node_tear_down(task, node.provision_state)
        node.refresh()
        port.refresh()
        # Node will be moved to AVAILABLE after cleaning, not tested here
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        self.assertIsNone(node.instance_uuid)
        self.assertIsNone(node.allocation_id)
        self.assertEqual({}, node.instance_info)
        self.assertNotIn('instance', node.driver_internal_info)
        self.assertNotIn('clean_steps',
                         node.driver_internal_info)
        self.assertNotIn('root_uuid_or_disk_id', node.driver_internal_info)
        self.assertNotIn('is_whole_disk_image', node.driver_internal_info)
        mock_tear_down.assert_called_once_with(task)
        mock_clean.assert_called_once_with(task)
        self.assertEqual({}, port.internal_info)
        mock_unbind.assert_called_once_with('foo', context=mock.ANY)
        if enabled_console:
            mock_console.assert_called_once_with(task)
        else:
            self.assertFalse(mock_console.called)
        if with_allocation:
            self.assertRaises(exception.AllocationNotFound,
                              objects.Allocation.get_by_id,
                              self.context, alloc.id)

    def test__do_node_tear_down_ok_without_console(self):
        self._test__do_node_tear_down_ok(enabled_console=False)

    def test__do_node_tear_down_ok_with_console(self):
        self._test__do_node_tear_down_ok(enabled_console=True)

    def test__do_node_tear_down_with_allocation(self):
        self._test__do_node_tear_down_ok(with_allocation=True)

    @mock.patch('ironic.drivers.modules.fake.FakeRescue.clean_up')
    @mock.patch('ironic.conductor.cleaning.do_node_clean')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.tear_down')
    def _test_do_node_tear_down_from_state(self, init_state, is_rescue_state,
                                           mock_tear_down, mock_clean,
                                           mock_rescue_clean):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            uuid=uuidutils.generate_uuid(),
            provision_state=init_state,
            target_provision_state=states.AVAILABLE,
            driver_internal_info={'is_whole_disk_image': False})

        self._start_service()
        self.service.do_node_tear_down(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        # Node will be moved to AVAILABLE after cleaning, not tested here
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        self.assertEqual({}, node.instance_info)
        mock_tear_down.assert_called_once_with(mock.ANY)
        mock_clean.assert_called_once_with(mock.ANY)
        if is_rescue_state:
            mock_rescue_clean.assert_called_once_with(mock.ANY)
        else:
            self.assertFalse(mock_rescue_clean.called)

    def test__do_node_tear_down_from_valid_states(self):
        valid_states = [states.ACTIVE, states.DEPLOYWAIT, states.DEPLOYFAIL,
                        states.ERROR]
        for state in valid_states:
            self._test_do_node_tear_down_from_state(state, False)

        valid_rescue_states = [states.RESCUEWAIT, states.RESCUE,
                               states.UNRESCUEFAIL, states.RESCUEFAIL]
        for state in valid_rescue_states:
            self._test_do_node_tear_down_from_state(state, True)

    # NOTE(tenbrae): partial tear-down was broken. A node left in a state of
    #                DELETING could not have tear_down called on it a second
    #                time. Thus, I have removed the unit test, which faultily
    #                asserted only that a node could be left in a state of
    #                incomplete deletion -- not that such a node's deletion
    #                could later be completed.

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_do_node_tear_down_worker_pool_full(self, mock_spawn):
        prv_state = states.ACTIVE
        tgt_prv_state = states.NOSTATE
        fake_instance_info = {'foo': 'bar'}
        driver_internal_info = {'is_whole_disk_image': False}
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state,
            instance_info=fake_instance_info,
            driver_internal_info=driver_internal_info, last_error=None)
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_tear_down,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])
        self._stop_service()
        node.refresh()
        # Assert instance_info/driver_internal_info was not touched
        self.assertEqual(fake_instance_info, node.instance_info)
        self.assertEqual(driver_internal_info, node.driver_internal_info)
        # Make sure things were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)


@mgr_utils.mock_record_keepalive
class DoProvisioningActionTestCase(mgr_utils.ServiceSetUpMixin,
                                   db_base.DbTestCase):
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_do_provisioning_action_worker_pool_full(self, mock_spawn):
        prv_state = states.MANAGEABLE
        tgt_prv_state = states.NOSTATE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None)
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_provisioning_action,
                                self.context, node.uuid, 'provide')
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])
        self._stop_service()
        node.refresh()
        # Make sure things were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_do_provision_action_provide(self, mock_spawn):
        # test when a node is cleaned going from manageable to available
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.AVAILABLE)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid,
                                            'provide')
        node.refresh()
        # Node will be moved to AVAILABLE after cleaning, not tested here
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(self.service,
                                      cleaning.do_node_clean, mock.ANY)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_do_provision_action_provide_in_maintenance(self, mock_spawn):
        CONF.set_override('allow_provisioning_in_maintenance', False,
                          group='conductor')
        # test when a node is cleaned going from manageable to available
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.MANAGEABLE,
            target_provision_state=None,
            maintenance=True)

        self._start_service()
        mock_spawn.reset_mock()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_provisioning_action,
                                self.context, node.uuid, 'provide')
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeInMaintenance, exc.exc_info[0])
        node.refresh()
        self.assertEqual(states.MANAGEABLE, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertIsNone(node.last_error)
        self.assertFalse(mock_spawn.called)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_do_provision_action_manage(self, mock_spawn):
        # test when a node is verified going from enroll to manageable
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ENROLL,
            target_provision_state=states.MANAGEABLE)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'manage')
        node.refresh()
        # Node will be moved to MANAGEABLE after verification, not tested here
        self.assertEqual(states.VERIFYING, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(self.service,
                                      self.service._do_node_verify, mock.ANY)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def _do_provision_action_abort(self, mock_spawn, manual=False):
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state)
        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'abort')
        node.refresh()
        # Node will be moved to tgt_prov_state after cleaning, not tested here
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(
            self.service, cleaning.do_node_clean_abort, mock.ANY)

    def test_do_provision_action_abort_automated_clean(self):
        self._do_provision_action_abort()

    def test_do_provision_action_abort_manual_clean(self):
        self._do_provision_action_abort(manual=True)

    def test_do_provision_action_abort_clean_step_not_abortable(self):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.AVAILABLE,
            clean_step={'step': 'foo', 'abortable': False})
        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'abort')
        node.refresh()
        # Assert the current clean step was marked to be aborted later
        self.assertIn('abort_after', node.clean_step)
        self.assertTrue(node.clean_step['abort_after'])
        # Make sure things stay as they were before
        self.assertEqual(states.CLEANWAIT, node.provision_state)
        self.assertEqual(states.AVAILABLE, node.target_provision_state)


@mgr_utils.mock_record_keepalive
class DoNodeCleanTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):
    def setUp(self):
        super(DoNodeCleanTestCase, self).setUp()
        self.config(automated_clean=True, group='conductor')
        self.power_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'power'}
        self.deploy_update = {
            'step': 'update_firmware', 'priority': 10, 'interface': 'deploy'}
        self.deploy_erase = {
            'step': 'erase_disks', 'priority': 20, 'interface': 'deploy'}
        # Automated cleaning should be executed in this order
        self.clean_steps = [self.deploy_erase, self.power_update,
                            self.deploy_update]
        self.next_clean_step_index = 1
        # Manual clean step
        self.deploy_raid = {
            'step': 'build_raid', 'priority': 0, 'interface': 'deploy'}

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test_do_node_clean_maintenance(self, mock_validate):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.NOSTATE,
            maintenance=True, maintenance_reason='reason')
        self._start_service()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, [])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeInMaintenance, exc.exc_info[0])
        self.assertFalse(mock_validate.called)

    @mock.patch('ironic.conductor.task_manager.TaskManager.process_event',
                autospec=True)
    def _test_do_node_clean_validate_fail(self, mock_validate, mock_process):
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.NOSTATE)
        self._start_service()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, [])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])
        mock_validate.assert_called_once_with(mock.ANY, mock.ANY)
        self.assertFalse(mock_process.called)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test_do_node_clean_power_validate_fail(self, mock_validate):
        self._test_do_node_clean_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    def test_do_node_clean_network_validate_fail(self, mock_validate):
        self._test_do_node_clean_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test_do_node_clean_invalid_state(self, mock_power_valid,
                                         mock_network_valid):
        # test node.provision_state is incorrect for clean
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ENROLL,
            target_provision_state=states.NOSTATE)
        self._start_service()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, [])
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])
        mock_power_valid.assert_called_once_with(mock.ANY, mock.ANY)
        mock_network_valid.assert_called_once_with(mock.ANY, mock.ANY)
        node.refresh()
        self.assertNotIn('clean_steps', node.driver_internal_info)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test_do_node_clean_ok(self, mock_power_valid, mock_network_valid,
                              mock_spawn):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.NOSTATE, last_error='old error')
        self._start_service()
        clean_steps = [self.deploy_raid]
        self.service.do_node_clean(self.context, node.uuid, clean_steps)
        mock_power_valid.assert_called_once_with(mock.ANY, mock.ANY)
        mock_network_valid.assert_called_once_with(mock.ANY, mock.ANY)
        mock_spawn.assert_called_with(
            self.service, cleaning.do_node_clean, mock.ANY, clean_steps)
        node.refresh()
        # Node will be moved to CLEANING
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertNotIn('clean_steps', node.driver_internal_info)
        self.assertIsNone(node.last_error)

    @mock.patch('ironic.conductor.utils.remove_agent_url')
    @mock.patch('ironic.conductor.utils.is_fast_track')
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test_do_node_clean_ok_fast_track(
            self, mock_power_valid, mock_network_valid, mock_spawn,
            mock_is_fast_track, mock_remove_agent_url):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.MANAGEABLE,
            driver_internal_info={'agent_url': 'meow'})
        mock_is_fast_track.return_value = True
        self._start_service()
        clean_steps = [self.deploy_raid]
        self.service.do_node_clean(self.context, node.uuid, clean_steps)
        mock_power_valid.assert_called_once_with(mock.ANY, mock.ANY)
        mock_network_valid.assert_called_once_with(mock.ANY, mock.ANY)
        mock_spawn.assert_called_with(
            self.service, cleaning.do_node_clean, mock.ANY, clean_steps)
        node.refresh()
        # Node will be moved to CLEANING
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertNotIn('clean_steps', node.driver_internal_info)
        mock_is_fast_track.assert_called_once_with(mock.ANY)
        mock_remove_agent_url.assert_not_called()

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate',
                autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate',
                autospec=True)
    def test_do_node_clean_worker_pool_full(self, mock_power_valid,
                                            mock_network_valid,
                                            mock_spawn):
        prv_state = states.MANAGEABLE
        tgt_prv_state = states.NOSTATE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=prv_state,
            target_provision_state=tgt_prv_state)
        self._start_service()
        clean_steps = [self.deploy_raid]
        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_clean,
                                self.context, node.uuid, clean_steps)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])

        self._stop_service()
        mock_power_valid.assert_called_once_with(mock.ANY, mock.ANY)
        mock_network_valid.assert_called_once_with(mock.ANY, mock.ANY)
        mock_spawn.assert_called_with(
            self.service, cleaning.do_node_clean, mock.ANY, clean_steps)

        node.refresh()
        # Make sure states were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)

        self.assertIsNotNone(node.last_error)
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_clean_worker_pool_full(self, mock_spawn):
        # Test the appropriate exception is raised if the worker pool is full
        prv_state = states.CLEANWAIT
        tgt_prv_state = states.AVAILABLE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None)
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        self.assertRaises(exception.NoFreeConductorWorker,
                          self.service.continue_node_clean,
                          self.context, node.uuid)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_clean_wrong_state(self, mock_spawn):
        # Test the appropriate exception is raised if node isn't already
        # in CLEANWAIT state
        prv_state = states.DELETING
        tgt_prv_state = states.AVAILABLE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None)
        self._start_service()

        self.assertRaises(exception.InvalidStateRequested,
                          self.service.continue_node_clean,
                          self.context, node.uuid)

        self._stop_service()
        node.refresh()
        # Make sure things were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def _continue_node_clean(self, return_state, mock_spawn, manual=False):
        # test a node can continue cleaning via RPC
        prv_state = return_state
        tgt_prv_state = states.MANAGEABLE if manual else states.AVAILABLE
        driver_info = {'clean_steps': self.clean_steps,
                       'clean_step_index': 0}
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None,
                                          driver_internal_info=driver_info,
                                          clean_step=self.clean_steps[0])
        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.CLEANING, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        mock_spawn.assert_called_with(self.service,
                                      cleaning.do_next_clean_step,
                                      mock.ANY, self.next_clean_step_index)

    def test_continue_node_clean_automated(self):
        self._continue_node_clean(states.CLEANWAIT)
    def test_continue_node_clean_manual(self):
        self._continue_node_clean(states.CLEANWAIT, manual=True)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def _continue_node_clean_skip_step(self, mock_spawn, skip=True):
        # test that skipping current step mechanism works
        driver_info = {'clean_steps': self.clean_steps,
                       'clean_step_index': 0}
        if not skip:
            driver_info['skip_current_clean_step'] = skip
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.MANAGEABLE,
            driver_internal_info=driver_info, clean_step=self.clean_steps[0])
        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        if skip:
            expected_step_index = 1
        else:
            self.assertNotIn(
                'skip_current_clean_step', node.driver_internal_info)
            expected_step_index = 0
        mock_spawn.assert_called_with(self.service,
                                      cleaning.do_next_clean_step,
                                      mock.ANY, expected_step_index)

    def test_continue_node_clean_skip_step(self):
        self._continue_node_clean_skip_step()

    def test_continue_node_clean_no_skip_step(self):
        self._continue_node_clean_skip_step(skip=False)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_continue_node_clean_polling(self, mock_spawn):
        # test that cleaning_polling flag is cleared
        driver_info = {'clean_steps': self.clean_steps,
                       'clean_step_index': 0,
                       'cleaning_polling': True}
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANWAIT,
            target_provision_state=states.MANAGEABLE,
            driver_internal_info=driver_info, clean_step=self.clean_steps[0])
        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertNotIn('cleaning_polling', node.driver_internal_info)
        mock_spawn.assert_called_with(self.service,
                                      cleaning.do_next_clean_step,
                                      mock.ANY, 1)

    def _continue_node_clean_abort(self, manual=False):
        last_clean_step = self.clean_steps[0]
        last_clean_step['abortable'] = False
        last_clean_step['abort_after'] = True
        driver_info = {'clean_steps': self.clean_steps,
                       'clean_step_index': 0}
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state, last_error=None,
            driver_internal_info=driver_info, clean_step=self.clean_steps[0])

        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(states.CLEANFAIL, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # assert the clean step name is in the last error message
        self.assertIn(self.clean_steps[0]['step'], node.last_error)

    def test_continue_node_clean_automated_abort(self):
        self._continue_node_clean_abort()

    def test_continue_node_clean_manual_abort(self):
        self._continue_node_clean_abort(manual=True)

    def _continue_node_clean_abort_last_clean_step(self, manual=False):
        last_clean_step = self.clean_steps[0]
        last_clean_step['abortable'] = False
        last_clean_step['abort_after'] = True
        driver_info = {'clean_steps': [self.clean_steps[0]],
                       'clean_step_index': 0}
        tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.CLEANWAIT,
            target_provision_state=tgt_prov_state, last_error=None,
            driver_internal_info=driver_info, clean_step=self.clean_steps[0])

        self._start_service()
        self.service.continue_node_clean(self.context, node.uuid)
        self._stop_service()
        node.refresh()
        self.assertEqual(tgt_prov_state, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertIsNone(node.last_error)

    def test_continue_node_clean_automated_abort_last_clean_step(self):
        self._continue_node_clean_abort_last_clean_step()

    def test_continue_node_clean_manual_abort_last_clean_step(self):
        self._continue_node_clean_abort_last_clean_step(manual=True)


class DoNodeRescueTestCase(mgr_utils.CommonMixIn, mgr_utils.ServiceSetUpMixin,
                           db_base.DbTestCase):
    @mock.patch('ironic.conductor.task_manager.acquire', autospec=True)
    def test_do_node_rescue(self, mock_acquire):
        self._start_service()
        task = self._create_task(
            node_attrs=dict(driver='fake-hardware',
                            provision_state=states.ACTIVE,
                            instance_info={},
                            driver_internal_info={'agent_url': 'url'}))
        mock_acquire.side_effect = self._get_acquire_side_effect(task)
        self.service.do_node_rescue(self.context, task.node.uuid,
                                    "password")
        task.process_event.assert_called_once_with(
            'rescue',
            callback=self.service._spawn_worker,
            call_args=(self.service._do_node_rescue, task),
            err_handler=conductor_utils.spawn_rescue_error_handler)
        self.assertIn('rescue_password', task.node.instance_info)
        self.assertIn('hashed_rescue_password', task.node.instance_info)
        self.assertNotIn('agent_url', task.node.driver_internal_info)

    def test_do_node_rescue_invalid_state(self):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          network_interface='noop',
                                          provision_state=states.AVAILABLE,
                                          instance_info={})
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_rescue,
                                self.context, node.uuid, "password")
        node.refresh()
        self.assertNotIn('rescue_password', node.instance_info)
        self.assertNotIn('hashed_rescue_password', node.instance_info)
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])

    def _test_do_node_rescue_when_validate_fail(self, mock_validate):
        # InvalidParameterValue should be re-raised as InstanceRescueFailure
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ACTIVE,
            target_provision_state=states.NOSTATE,
            instance_info={})
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_rescue,
                                self.context, node.uuid, "password")
        node.refresh()
        self.assertNotIn('hashed_rescue_password', node.instance_info)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InstanceRescueFailure, exc.exc_info[0])

    @mock.patch('ironic.drivers.modules.fake.FakeRescue.validate')
    def test_do_node_rescue_when_rescue_validate_fail(self, mock_validate):
        self._test_do_node_rescue_when_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_rescue_when_power_validate_fail(self, mock_validate):
        self._test_do_node_rescue_when_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.validate')
    def test_do_node_rescue_when_network_validate_fail(self, mock_validate):
        self._test_do_node_rescue_when_validate_fail(mock_validate)

    def test_do_node_rescue_maintenance(self):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            network_interface='noop',
            provision_state=states.ACTIVE,
            maintenance=True,
            target_provision_state=states.NOSTATE,
            instance_info={})
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_rescue,
                                self.context, node['uuid'], "password")
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeInMaintenance, exc.exc_info[0])
        # This is a sync operation; last_error should be None.
        self.assertIsNone(node.last_error)

    @mock.patch('ironic.drivers.modules.fake.FakeRescue.rescue')
    def test__do_node_rescue_returns_rescuewait(self, mock_rescue):
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUING,
            instance_info={'rescue_password': 'password',
                           'hashed_rescue_password': '1234'})
        with task_manager.TaskManager(self.context, node.uuid) as task:
            mock_rescue.return_value = states.RESCUEWAIT
            self.service._do_node_rescue(task)
        node.refresh()
        self.assertEqual(states.RESCUEWAIT, node.provision_state)
        self.assertEqual(states.RESCUE, node.target_provision_state)
        self.assertIn('rescue_password', node.instance_info)
        self.assertIn('hashed_rescue_password', node.instance_info)

    @mock.patch('ironic.drivers.modules.fake.FakeRescue.rescue')
    def test__do_node_rescue_returns_rescue(self, mock_rescue):
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUING,
            instance_info={'rescue_password': 'password',
                           'hashed_rescue_password': '1234'})
        with task_manager.TaskManager(self.context, node.uuid) as task:
            mock_rescue.return_value = states.RESCUE
            self.service._do_node_rescue(task)
        node.refresh()
        self.assertEqual(states.RESCUE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIn('rescue_password', node.instance_info)
        self.assertIn('hashed_rescue_password', node.instance_info)

    @mock.patch.object(manager, 'LOG')
    @mock.patch('ironic.drivers.modules.fake.FakeRescue.rescue')
    def test__do_node_rescue_errors(self, mock_rescue, mock_log):
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUING,
            instance_info={'rescue_password': 'password',
                           'hashed_rescue_password': '1234'})
        mock_rescue.side_effect = exception.InstanceRescueFailure(
            'failed to rescue')
        with task_manager.TaskManager(self.context, node.uuid) as task:
            self.assertRaises(exception.InstanceRescueFailure,
                              self.service._do_node_rescue, task)
        node.refresh()
        self.assertEqual(states.RESCUEFAIL, node.provision_state)
        self.assertEqual(states.RESCUE, node.target_provision_state)
        self.assertNotIn('rescue_password', node.instance_info)
        self.assertNotIn('hashed_rescue_password', node.instance_info)
        self.assertTrue(node.last_error.startswith('Failed to rescue'))
        self.assertTrue(mock_log.error.called)

    @mock.patch.object(manager, 'LOG')
    @mock.patch('ironic.drivers.modules.fake.FakeRescue.rescue')
    def test__do_node_rescue_bad_state(self, mock_rescue, mock_log):
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUING,
            instance_info={'rescue_password': 'password',
                           'hashed_rescue_password': '1234'})
        mock_rescue.return_value = states.ACTIVE
        with task_manager.TaskManager(self.context, node.uuid) as task:
            self.service._do_node_rescue(task)
        node.refresh()
        self.assertEqual(states.RESCUEFAIL, node.provision_state)
        self.assertEqual(states.RESCUE, node.target_provision_state)
        self.assertNotIn('rescue_password', node.instance_info)
        self.assertNotIn('hashed_rescue_password', node.instance_info)
        self.assertTrue(node.last_error.startswith('Failed to rescue'))
        self.assertTrue(mock_log.error.called)

    @mock.patch('ironic.conductor.task_manager.acquire', autospec=True)
    def test_do_node_unrescue(self, mock_acquire):
        self._start_service()
        task = self._create_task(
            node_attrs=dict(driver='fake-hardware',
                            provision_state=states.RESCUE,
                            driver_internal_info={'agent_url': 'url'}))
        mock_acquire.side_effect = self._get_acquire_side_effect(task)
        self.service.do_node_unrescue(self.context, task.node.uuid)
        task.node.refresh()
        self.assertNotIn('agent_url', task.node.driver_internal_info)
        task.process_event.assert_called_once_with(
            'unrescue',
            callback=self.service._spawn_worker,
            call_args=(self.service._do_node_unrescue, task),
            err_handler=conductor_utils.provisioning_error_handler)

    def test_do_node_unrescue_invalid_state(self):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.AVAILABLE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_unrescue,
                                self.context, node.uuid)
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_do_node_unrescue_validate_fail(self, mock_validate):
        # InvalidParameterValue should be re-raised as InstanceUnrescueFailure
        mock_validate.side_effect = exception.InvalidParameterValue('error')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUE,
            target_provision_state=states.NOSTATE)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_unrescue,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InstanceUnrescueFailure, exc.exc_info[0])

    def test_do_node_unrescue_maintenance(self):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUE,
            maintenance=True,
            target_provision_state=states.NOSTATE,
            instance_info={})
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_node_unrescue,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeInMaintenance, exc.exc_info[0])
        # This is a sync operation; last_error should be None.
        node.refresh()
        self.assertIsNone(node.last_error)

    @mock.patch('ironic.drivers.modules.fake.FakeRescue.unrescue')
    def test__do_node_unrescue(self, mock_unrescue):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.UNRESCUING,
                                          target_provision_state=states.ACTIVE,
                                          instance_info={})
        with task_manager.TaskManager(self.context, node.uuid) as task:
            mock_unrescue.return_value = states.ACTIVE
            self.service._do_node_unrescue(task)
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)

    @mock.patch.object(manager, 'LOG')
    @mock.patch('ironic.drivers.modules.fake.FakeRescue.unrescue')
    def test__do_node_unrescue_ironic_error(self, mock_unrescue, mock_log):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.UNRESCUING,
                                          target_provision_state=states.ACTIVE,
                                          instance_info={})
        mock_unrescue.side_effect = exception.InstanceUnrescueFailure(
            'Unable to unrescue')
        with task_manager.TaskManager(self.context, node.uuid) as task:
            self.assertRaises(exception.InstanceUnrescueFailure,
                              self.service._do_node_unrescue, task)
        node.refresh()
        self.assertEqual(states.UNRESCUEFAIL, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertTrue('Unable to unrescue' in node.last_error)
        self.assertTrue(mock_log.error.called)

    @mock.patch.object(manager, 'LOG')
    @mock.patch('ironic.drivers.modules.fake.FakeRescue.unrescue')
    def test__do_node_unrescue_other_error(self, mock_unrescue, mock_log):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.UNRESCUING,
                                          target_provision_state=states.ACTIVE,
                                          instance_info={})
        mock_unrescue.side_effect = RuntimeError('Some failure')
        with task_manager.TaskManager(self.context, node.uuid) as task:
            self.assertRaises(RuntimeError,
                              self.service._do_node_unrescue, task)
        node.refresh()
        self.assertEqual(states.UNRESCUEFAIL, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertTrue('Some failure' in node.last_error)
        self.assertTrue(mock_log.exception.called)

    @mock.patch('ironic.drivers.modules.fake.FakeRescue.unrescue')
    def test__do_node_unrescue_bad_state(self, mock_unrescue):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.UNRESCUING,
                                          instance_info={})
        mock_unrescue.return_value = states.RESCUEWAIT
        with task_manager.TaskManager(self.context, node.uuid) as task:
            self.service._do_node_unrescue(task)
        node.refresh()
        self.assertEqual(states.UNRESCUEFAIL, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertTrue('Driver returned unexpected state' in node.last_error)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_provision_rescue_abort(self, mock_spawn):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUEWAIT,
            target_provision_state=states.RESCUE,
            instance_info={'rescue_password': 'password'})
        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'abort')
        node.refresh()
        self.assertEqual(states.RESCUEFAIL, node.provision_state)
        self.assertIsNone(node.last_error)
        self.assertNotIn('rescue_password', node.instance_info)
        mock_spawn.assert_called_with(
            self.service, self.service._do_node_rescue_abort, mock.ANY)

    @mock.patch.object(fake.FakeRescue, 'clean_up', autospec=True)
    def test__do_node_rescue_abort(self, clean_up_mock):
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUEFAIL,
            target_provision_state=states.RESCUE,
            driver_internal_info={'agent_url': 'url'})
        with task_manager.acquire(self.context, node.uuid) as task:
            self.service._do_node_rescue_abort(task)
            clean_up_mock.assert_called_once_with(task.driver.rescue, task)
            self.assertIsNotNone(task.node.last_error)
            self.assertFalse(task.node.maintenance)
            self.assertNotIn('agent_url', task.node.driver_internal_info)

    @mock.patch.object(fake.FakeRescue, 'clean_up', autospec=True)
    def test__do_node_rescue_abort_clean_up_fail(self, clean_up_mock):
        clean_up_mock.side_effect = Exception('Surprise')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.RESCUEFAIL)
        with task_manager.acquire(self.context, node.uuid) as task:
            self.service._do_node_rescue_abort(task)
            clean_up_mock.assert_called_once_with(task.driver.rescue, task)
            self.assertIsNotNone(task.node.last_error)
            self.assertIsNotNone(task.node.maintenance_reason)
            self.assertTrue(task.node.maintenance)
            self.assertEqual('rescue abort failure', task.node.fault)


@mgr_utils.mock_record_keepalive
class DoNodeVerifyTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):
    @mock.patch('ironic.objects.node.NodeCorrectedPowerStateNotification')
    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test__do_node_verify(self, mock_validate, mock_get_power_state,
                             mock_notif):
        self._start_service()
        mock_get_power_state.return_value = states.POWER_OFF
        # Required for exception handling
        mock_notif.__name__ = 'NodeCorrectedPowerStateNotification'
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.VERIFYING,
            target_provision_state=states.MANAGEABLE,
            last_error=None,
            power_state=states.NOSTATE)

        with task_manager.acquire(
                self.context, node['id'], shared=False) as task:
            self.service._do_node_verify(task)

        self._stop_service()

        # 1 notification should be sent -
        # baremetal.node.power_state_corrected.success
        mock_notif.assert_called_once_with(publisher=mock.ANY,
                                           event_type=mock.ANY,
                                           level=mock.ANY,
                                           payload=mock.ANY)
        mock_notif.return_value.emit.assert_called_once_with(mock.ANY)

        node.refresh()
        mock_validate.assert_called_once_with(task)
        mock_get_power_state.assert_called_once_with(task)
        self.assertEqual(states.MANAGEABLE, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertIsNone(node.last_error)
        self.assertEqual(states.POWER_OFF, node.power_state)

    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test__do_node_verify_validation_fails(self, mock_validate,
                                              mock_get_power_state):
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.VERIFYING,
            target_provision_state=states.MANAGEABLE,
            last_error=None,
            power_state=states.NOSTATE)

        mock_validate.side_effect = RuntimeError("boom")

        with task_manager.acquire(
                self.context, node['id'], shared=False) as task:
            self.service._do_node_verify(task)

        self._stop_service()
        node.refresh()

        mock_validate.assert_called_once_with(task)

        self.assertEqual(states.ENROLL, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertTrue(node.last_error)
        self.assertFalse(mock_get_power_state.called)

    @mock.patch('ironic.drivers.modules.fake.FakePower.get_power_state')
    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test__do_node_verify_get_state_fails(self, mock_validate,
                                             mock_get_power_state):
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.VERIFYING,
            target_provision_state=states.MANAGEABLE,
            last_error=None,
            power_state=states.NOSTATE)

        mock_get_power_state.side_effect = RuntimeError("boom")

        with task_manager.acquire(
                self.context, node['id'], shared=False) as task:
            self.service._do_node_verify(task)

        self._stop_service()
        node.refresh()

        mock_get_power_state.assert_called_once_with(task)

        self.assertEqual(states.ENROLL, node.provision_state)
        self.assertIsNone(node.target_provision_state)
        self.assertTrue(node.last_error)


@mgr_utils.mock_record_keepalive
class MiscTestCase(mgr_utils.ServiceSetUpMixin, mgr_utils.CommonMixIn,
                   db_base.DbTestCase):
    def test__mapped_to_this_conductor(self):
        self._start_service()
        n = db_utils.get_test_node()
        self.assertTrue(self.service._mapped_to_this_conductor(
            n['uuid'], 'fake-hardware', ''))
        self.assertFalse(self.service._mapped_to_this_conductor(
            n['uuid'], 'fake-hardware', 'foogroup'))
        self.assertFalse(self.service._mapped_to_this_conductor(n['uuid'],
                                                                'otherdriver',
                                                                ''))

    @mock.patch.object(images, 'is_whole_disk_image')
    def test_validate_dynamic_driver_interfaces(self, mock_iwdi):
        mock_iwdi.return_value = False
        target_raid_config = {'logical_disks': [{'size_gb': 1,
                                                 'raid_level': '1'}]}
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            target_raid_config=target_raid_config,
            network_interface='noop')
        ret = self.service.validate_driver_interfaces(self.context,
                                                      node.uuid)
        expected = {'console': {'result': True},
                    'power': {'result': True},
                    'inspect': {'result': True},
                    'management': {'result': True},
                    'boot': {'result': True},
                    'raid': {'result': True},
                    'deploy': {'result': True},
                    'network': {'result': True},
                    'storage': {'result': True},
                    'rescue': {'result': True},
                    'bios': {'result': True}}
        self.assertEqual(expected, ret)
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)

    @mock.patch.object(fake.FakeDeploy, 'validate', autospec=True)
    @mock.patch.object(images, 'is_whole_disk_image')
    def test_validate_driver_interfaces_validation_fail(self, mock_iwdi,
                                                        mock_val):
        mock_iwdi.return_value = False
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          network_interface='noop')
        reason = 'fake reason'
        mock_val.side_effect = exception.InvalidParameterValue(reason)
        ret = self.service.validate_driver_interfaces(self.context,
                                                      node.uuid)
        self.assertFalse(ret['deploy']['result'])
        self.assertEqual(reason, ret['deploy']['reason'])
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)

    @mock.patch.object(fake.FakeDeploy, 'validate', autospec=True)
    @mock.patch.object(images, 'is_whole_disk_image')
    def test_validate_driver_interfaces_validation_fail_unexpected(
            self, mock_iwdi, mock_val):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        mock_val.side_effect = Exception('boom')
        ret = self.service.validate_driver_interfaces(self.context,
                                                      node.uuid)
        reason = ('Unexpected exception, traceback saved '
                  'into log by ironic conductor service '
                  'that is running on test-host: boom')
        self.assertFalse(ret['deploy']['result'])
        self.assertEqual(reason, ret['deploy']['reason'])
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)

    @mock.patch.object(images, 'is_whole_disk_image')
    def test_validate_driver_interfaces_validation_fail_instance_traits(
            self, mock_iwdi):
        mock_iwdi.return_value = False
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          network_interface='noop')
        with mock.patch(
                'ironic.conductor.utils.validate_instance_info_traits'
        ) as ii_traits:
            reason = 'fake reason'
            ii_traits.side_effect = exception.InvalidParameterValue(reason)
            ret = self.service.validate_driver_interfaces(self.context,
                                                          node.uuid)
            self.assertFalse(ret['deploy']['result'])
            self.assertEqual(reason, ret['deploy']['reason'])
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)

    @mock.patch.object(images, 'is_whole_disk_image')
    def test_validate_driver_interfaces_validation_fail_deploy_templates(
            self, mock_iwdi):
        mock_iwdi.return_value = False
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          network_interface='noop')
        with mock.patch(
                'ironic.conductor.steps.validate_deploy_templates'
        ) as mock_validate:
            reason = 'fake reason'
            mock_validate.side_effect = exception.InvalidParameterValue(reason)
            ret = self.service.validate_driver_interfaces(self.context,
                                                          node.uuid)
            self.assertFalse(ret['deploy']['result'])
            self.assertEqual(reason, ret['deploy']['reason'])
        mock_iwdi.assert_called_once_with(self.context, node.instance_info)

    @mock.patch.object(manager.ConductorManager, '_fail_if_in_state',
                       autospec=True)
    @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
    @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
    def test_iter_nodes(self, mock_nodeinfo_list, mock_mapped,
                        mock_fail_if_state):
        self._start_service()
        self.columns = ['uuid', 'driver', 'conductor_group', 'id']
        nodes = [self._create_node(id=i, driver='fake-hardware',
                                   conductor_group='') for i in range(2)]
        mock_nodeinfo_list.return_value = self._get_nodeinfo_list_response(
            nodes)
        mock_mapped.side_effect = [True, False]

        result = list(self.service.iter_nodes(fields=['id'],
                                              filters=mock.sentinel.filters))
        self.assertEqual([(nodes[0].uuid, 'fake-hardware', '', 0)], result)
        mock_nodeinfo_list.assert_called_once_with(
            columns=self.columns, filters=mock.sentinel.filters)
        expected_calls = [mock.call(mock.ANY, mock.ANY,
                                    {'provision_state': 'deploying',
                                     'reserved': False},
                                    'deploying',
                                    'provision_updated_at',
                                    last_error=mock.ANY),
                          mock.call(mock.ANY, mock.ANY,
                                    {'provision_state': 'cleaning',
                                     'reserved': False},
                                    'cleaning',
                                    'provision_updated_at',
                                    last_error=mock.ANY)]
        mock_fail_if_state.assert_has_calls(expected_calls)

    @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
    def test_iter_nodes_shutdown(self, mock_nodeinfo_list):
        self._start_service()
        self.columns = ['uuid', 'driver', 'conductor_group', 'id']
        nodes = [self._create_node(driver='fake-hardware')]
        mock_nodeinfo_list.return_value = self._get_nodeinfo_list_response(
            nodes)
        self.service._shutdown = True

        result = list(self.service.iter_nodes(fields=['id'],
                                              filters=mock.sentinel.filters))
        self.assertEqual([], result)


@mgr_utils.mock_record_keepalive
class ConsoleTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):
    def test_set_console_mode_worker_pool_full(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        self._start_service()
        with mock.patch.object(self.service,
                               '_spawn_worker') as spawn_mock:
            spawn_mock.side_effect = exception.NoFreeConductorWorker()
            exc =
self.assertRaises(messaging.rpc.ExpectedException, self.service.set_console_mode, self.context, node.uuid, True) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0]) self._stop_service() spawn_mock.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) @mock.patch.object(notification_utils, 'emit_console_notification') def test_set_console_mode_enabled(self, mock_notify): node = obj_utils.create_test_node(self.context, driver='fake-hardware') self._start_service() self.service.set_console_mode(self.context, node.uuid, True) self._stop_service() node.refresh() self.assertTrue(node.console_enabled) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.START), mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.END)]) @mock.patch.object(notification_utils, 'emit_console_notification') def test_set_console_mode_disabled(self, mock_notify): node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) self._start_service() self.service.set_console_mode(self.context, node.uuid, False) self._stop_service() node.refresh() self.assertFalse(node.console_enabled) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.START), mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.END)]) @mock.patch.object(fake.FakeConsole, 'validate', autospec=True) def test_set_console_mode_validation_fail(self, mock_val): node = obj_utils.create_test_node(self.context, driver='fake-hardware', last_error=None) self._start_service() mock_val.side_effect = exception.InvalidParameterValue('error') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.set_console_mode, self.context, node.uuid, True) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) @mock.patch.object(fake.FakeConsole, 
'start_console', autospec=True) @mock.patch.object(notification_utils, 'emit_console_notification') def test_set_console_mode_start_fail(self, mock_notify, mock_sc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', last_error=None, console_enabled=False) self._start_service() mock_sc.side_effect = exception.IronicException('test-error') self.service.set_console_mode(self.context, node.uuid, True) self._stop_service() mock_sc.assert_called_once_with(mock.ANY, mock.ANY) node.refresh() self.assertIsNotNone(node.last_error) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.START), mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.ERROR)]) @mock.patch.object(fake.FakeConsole, 'stop_console', autospec=True) @mock.patch.object(notification_utils, 'emit_console_notification') def test_set_console_mode_stop_fail(self, mock_notify, mock_sc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', last_error=None, console_enabled=True) self._start_service() mock_sc.side_effect = exception.IronicException('test-error') self.service.set_console_mode(self.context, node.uuid, False) self._stop_service() mock_sc.assert_called_once_with(mock.ANY, mock.ANY) node.refresh() self.assertIsNotNone(node.last_error) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.START), mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.ERROR)]) @mock.patch.object(fake.FakeConsole, 'start_console', autospec=True) @mock.patch.object(notification_utils, 'emit_console_notification') def test_enable_console_already_enabled(self, mock_notify, mock_sc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) self._start_service() self.service.set_console_mode(self.context, node.uuid, True) self._stop_service() self.assertFalse(mock_sc.called) self.assertFalse(mock_notify.called) @mock.patch.object(fake.FakeConsole, 
'stop_console', autospec=True) @mock.patch.object(notification_utils, 'emit_console_notification') def test_disable_console_already_disabled(self, mock_notify, mock_sc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=False) self._start_service() self.service.set_console_mode(self.context, node.uuid, False) self._stop_service() self.assertFalse(mock_sc.called) self.assertFalse(mock_notify.called) @mock.patch.object(fake.FakeConsole, 'get_console', autospec=True) def test_get_console(self, mock_gc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) console_info = {'test': 'test info'} mock_gc.return_value = console_info data = self.service.get_console_information(self.context, node.uuid) self.assertEqual(console_info, data) def test_get_console_disabled(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=False) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.get_console_information, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeConsoleNotEnabled, exc.exc_info[0]) @mock.patch.object(fake.FakeConsole, 'validate', autospec=True) def test_get_console_validate_fail(self, mock_val): node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) mock_val.side_effect = exception.InvalidParameterValue('error') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.get_console_information, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) @mgr_utils.mock_record_keepalive class DestroyNodeTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test_destroy_node(self): self._start_service() for state in states.DELETE_ALLOWED_STATES: node = obj_utils.create_test_node(self.context, 
provision_state=state) self.service.destroy_node(self.context, node.uuid) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_uuid, node.uuid) def test_destroy_node_reserved(self): self._start_service() fake_reservation = 'fake-reserv' node = obj_utils.create_test_node(self.context, reservation=fake_reservation) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_node, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) # Verify existing reservation wasn't broken. node.refresh() self.assertEqual(fake_reservation, node.reservation) def test_destroy_node_associated(self): self._start_service() node = obj_utils.create_test_node( self.context, instance_uuid=uuidutils.generate_uuid()) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_node, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeAssociated, exc.exc_info[0]) # Verify reservation was released. 
node.refresh() self.assertIsNone(node.reservation) def test_destroy_node_with_allocation(self): # Nodes with allocations can be deleted in maintenance node = obj_utils.create_test_node(self.context, provision_state=states.ACTIVE, maintenance=True) alloc = obj_utils.create_test_allocation(self.context) # Establish cross-linking between the node and the allocation alloc.node_id = node.id alloc.save() node.refresh() self.service.destroy_node(self.context, node.uuid) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_uuid, node.uuid) self.assertRaises(exception.AllocationNotFound, self.dbapi.get_allocation_by_id, alloc.id) def test_destroy_node_invalid_provision_state(self): self._start_service() node = obj_utils.create_test_node(self.context, provision_state=states.ACTIVE) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_node, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidState, exc.exc_info[0]) # Verify reservation was released. node.refresh() self.assertIsNone(node.reservation) def test_destroy_node_protected_provision_state_available(self): CONF.set_override('allow_deleting_available_nodes', False, group='conductor') self._start_service() node = obj_utils.create_test_node(self.context, provision_state=states.AVAILABLE) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_node, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidState, exc.exc_info[0]) # Verify reservation was released. 
node.refresh() self.assertIsNone(node.reservation) def test_destroy_node_protected(self): self._start_service() node = obj_utils.create_test_node(self.context, provision_state=states.ACTIVE, protected=True, # Even in maintenance the protected # nodes are not deleted maintenance=True) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_node, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeProtected, exc.exc_info[0]) # Verify reservation was released. node.refresh() self.assertIsNone(node.reservation) def test_destroy_node_allowed_in_maintenance(self): self._start_service() node = obj_utils.create_test_node( self.context, instance_uuid=uuidutils.generate_uuid(), provision_state=states.ACTIVE, maintenance=True) self.service.destroy_node(self.context, node.uuid) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_uuid, node.uuid) def test_destroy_node_power_off(self): self._start_service() node = obj_utils.create_test_node(self.context, power_state=states.POWER_OFF) self.service.destroy_node(self.context, node.uuid) @mock.patch.object(fake.FakeConsole, 'stop_console', autospec=True) @mock.patch.object(notification_utils, 'emit_console_notification') def test_destroy_node_console_enabled(self, mock_notify, mock_sc): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) self.service.destroy_node(self.context, node.uuid) mock_sc.assert_called_once_with(mock.ANY, mock.ANY) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_uuid, node.uuid) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.START), mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.END)]) @mock.patch.object(fake.FakeConsole, 'stop_console', autospec=True) @mock.patch.object(notification_utils, 'emit_console_notification') def 
test_destroy_node_console_disable_fail(self, mock_notify, mock_sc): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware', console_enabled=True) mock_sc.side_effect = Exception() self.service.destroy_node(self.context, node.uuid) mock_sc.assert_called_once_with(mock.ANY, mock.ANY) self.assertRaises(exception.NodeNotFound, self.dbapi.get_node_by_uuid, node.uuid) mock_notify.assert_has_calls( [mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.START), mock.call(mock.ANY, 'console_set', obj_fields.NotificationStatus.ERROR)]) @mock.patch.object(fake.FakePower, 'set_power_state', autospec=True) def test_destroy_node_adopt_failed_no_power_change(self, mock_power): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.ADOPTFAIL) self.service.destroy_node(self.context, node.uuid) self.assertFalse(mock_power.called) @mgr_utils.mock_record_keepalive class CreatePortTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): @mock.patch.object(conductor_utils, 'validate_port_physnet') def test_create_port(self, mock_validate): node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.get_test_port(self.context, node_id=node.id, extra={'foo': 'bar'}) res = self.service.create_port(self.context, port) self.assertEqual({'foo': 'bar'}, res.extra) res = objects.Port.get_by_uuid(self.context, port['uuid']) self.assertEqual({'foo': 'bar'}, res.extra) mock_validate.assert_called_once_with(mock.ANY, port) def test_create_port_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') port = obj_utils.get_test_port(self.context, node_id=node.id) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.create_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) 
self.assertRaises(exception.PortNotFound, port.get_by_uuid, self.context, port.uuid) @mock.patch.object(conductor_utils, 'validate_port_physnet') def test_create_port_mac_exists(self, mock_validate): node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.create_test_port(self.context, node_id=node.id) port = obj_utils.get_test_port(self.context, node_id=node.id, uuid=uuidutils.generate_uuid()) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.create_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.MACAlreadyExists, exc.exc_info[0]) self.assertRaises(exception.PortNotFound, port.get_by_uuid, self.context, port.uuid) @mock.patch.object(conductor_utils, 'validate_port_physnet') def test_create_port_physnet_validation_failure_conflict(self, mock_validate): mock_validate.side_effect = exception.Conflict node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.get_test_port(self.context, node_id=node.id) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.create_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.Conflict, exc.exc_info[0]) self.assertRaises(exception.PortNotFound, port.get_by_uuid, self.context, port.uuid) @mock.patch.object(conductor_utils, 'validate_port_physnet') def test_create_port_physnet_validation_failure_inconsistent( self, mock_validate): mock_validate.side_effect = exception.PortgroupPhysnetInconsistent( portgroup='pg1', physical_networks='physnet1, physnet2') node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.get_test_port(self.context, node_id=node.id) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.create_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions 
self.assertEqual(exception.PortgroupPhysnetInconsistent, exc.exc_info[0]) self.assertRaises(exception.PortNotFound, port.get_by_uuid, self.context, port.uuid) @mgr_utils.mock_record_keepalive class UpdatePortTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): @mock.patch.object(conductor_utils, 'validate_port_physnet') @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port(self, mock_val, mock_pc, mock_vpp): node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'foo': 'bar'}) new_extra = {'foo': 'baz'} port.extra = new_extra res = self.service.update_port(self.context, port) self.assertEqual(new_extra, res.extra) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) mock_vpp.assert_called_once_with(mock.ANY, port) def test_update_port_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') port = obj_utils.create_test_port(self.context, node_id=node.id) port.extra = {'foo': 'baz'} exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_port_changed_failure(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.create_test_port(self.context, node_id=node.id) old_address = port.address port.address = '11:22:33:44:55:bb' mock_pc.side_effect = (exception.FailedToUpdateMacOnPort('boom')) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, 
self.context, port) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) mock_val.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(exception.FailedToUpdateMacOnPort, exc.exc_info[0]) port.refresh() self.assertEqual(old_address, port.address) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_address_active_node(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', instance_uuid=None, provision_state='active') port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'vif_port_id': 'fake-id'}) old_address = port.address new_address = '11:22:33:44:55:bb' port.address = new_address exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidState, exc.exc_info[0]) port.refresh() self.assertEqual(old_address, port.address) self.assertFalse(mock_pc.called) self.assertFalse(mock_val.called) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_address_maintenance(self, mock_val, mock_pc): node = obj_utils.create_test_node( self.context, driver='fake-hardware', maintenance=True, instance_uuid=uuidutils.generate_uuid(), provision_state='active') port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'vif_port_id': 'fake-id'}) new_address = '11:22:33:44:55:bb' port.address = new_address res = self.service.update_port(self.context, port) self.assertEqual(new_address, res.address) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def 
test_update_port_portgroup_active_node(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', instance_uuid=None, provision_state='active') pg1 = obj_utils.create_test_portgroup(self.context, node_id=node.id) pg2 = obj_utils.create_test_portgroup( self.context, node_id=node.id, name='bar', address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid()) port = obj_utils.create_test_port(self.context, node_id=node.id, portgroup_id=pg1.id) port.portgroup_id = pg2.id exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidState, exc.exc_info[0]) port.refresh() self.assertEqual(pg1.id, port.portgroup_id) self.assertFalse(mock_pc.called) self.assertFalse(mock_val.called) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_portgroup_enroll_node(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', instance_uuid=None, provision_state='enroll') pg1 = obj_utils.create_test_portgroup(self.context, node_id=node.id) pg2 = obj_utils.create_test_portgroup( self.context, node_id=node.id, name='bar', address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid()) port = obj_utils.create_test_port(self.context, node_id=node.id, portgroup_id=pg1.id) port.portgroup_id = pg2.id self.service.update_port(self.context, port) port.refresh() self.assertEqual(pg2.id, port.portgroup_id) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) mock_val.assert_called_once_with(mock.ANY, mock.ANY) def test_update_port_node_deleting_state(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DELETING) port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'foo': 'bar'}) old_pxe = port.pxe_enabled 
port.pxe_enabled = True exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) self.assertEqual(exception.InvalidState, exc.exc_info[0]) port.refresh() self.assertEqual(old_pxe, port.pxe_enabled) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_node_manageable_state(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.MANAGEABLE) port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'foo': 'bar'}) port.pxe_enabled = True self.service.update_port(self.context, port) port.refresh() self.assertEqual(True, port.pxe_enabled) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_to_node_in_inspect_wait_state(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.INSPECTWAIT) port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'foo': 'bar'}) port.pxe_enabled = True self.service.update_port(self.context, port) port.refresh() self.assertEqual(True, port.pxe_enabled) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_node_active_state_and_maintenance(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.ACTIVE, maintenance=True) port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'foo': 'bar'}) port.pxe_enabled = True 
self.service.update_port(self.context, port) port.refresh() self.assertEqual(True, port.pxe_enabled) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) @mock.patch.object(n_flat.FlatNetwork, 'port_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_port_physnet_maintenance(self, mock_val, mock_pc): node = obj_utils.create_test_node( self.context, driver='fake-hardware', maintenance=True, instance_uuid=uuidutils.generate_uuid(), provision_state='active') port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'vif_port_id': 'fake-id'}) new_physnet = 'physnet1' port.physical_network = new_physnet res = self.service.update_port(self.context, port) self.assertEqual(new_physnet, res.physical_network) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, port) def test_update_port_physnet_node_deleting_state(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DELETING) port = obj_utils.create_test_port(self.context, node_id=node.id, extra={'foo': 'bar'}) old_physnet = port.physical_network port.physical_network = 'physnet1' exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) self.assertEqual(exception.InvalidState, exc.exc_info[0]) port.refresh() self.assertEqual(old_physnet, port.physical_network) @mock.patch.object(conductor_utils, 'validate_port_physnet') def test_update_port_physnet_validation_failure_conflict(self, mock_validate): mock_validate.side_effect = exception.Conflict node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.create_test_port(self.context, node_id=node.id, uuid=uuidutils.generate_uuid()) port.extra = {'foo': 'bar'} exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) # Compare true 
exception hidden by @messaging.expected_exceptions self.assertEqual(exception.Conflict, exc.exc_info[0]) mock_validate.assert_called_once_with(mock.ANY, port) @mock.patch.object(conductor_utils, 'validate_port_physnet') def test_update_port_physnet_validation_failure_inconsistent( self, mock_validate): mock_validate.side_effect = exception.PortgroupPhysnetInconsistent( portgroup='pg1', physical_networks='physnet1, physnet2') node = obj_utils.create_test_node(self.context, driver='fake-hardware') port = obj_utils.create_test_port(self.context, node_id=node.id, uuid=uuidutils.generate_uuid()) port.extra = {'foo': 'bar'} exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_port, self.context, port) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.PortgroupPhysnetInconsistent, exc.exc_info[0]) mock_validate.assert_called_once_with(mock.ANY, port) @mgr_utils.mock_record_keepalive class SensorsTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test__filter_out_unsupported_types_all(self): self._start_service() CONF.set_override('send_sensor_data_types', ['All'], group='conductor') fake_sensors_data = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}} actual_result = ( self.service._filter_out_unsupported_types(fake_sensors_data)) expected_result = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}} self.assertEqual(expected_result, actual_result) def test__filter_out_unsupported_types_part(self): self._start_service() CONF.set_override('send_sensor_data_types', ['t1'], group='conductor') fake_sensors_data = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}} actual_result = ( self.service._filter_out_unsupported_types(fake_sensors_data)) expected_result = {"t1": {'f1': 'v1'}} self.assertEqual(expected_result, actual_result) def test__filter_out_unsupported_types_non(self): self._start_service() CONF.set_override('send_sensor_data_types', ['t3'], group='conductor') fake_sensors_data = {"t1": {'f1': 'v1'}, "t2": {'f1': 'v1'}} 
actual_result = ( self.service._filter_out_unsupported_types(fake_sensors_data)) expected_result = {} self.assertEqual(expected_result, actual_result) @mock.patch.object(messaging.Notifier, 'info', autospec=True) @mock.patch.object(task_manager, 'acquire') def test_send_sensor_task(self, acquire_mock, notifier_mock): nodes = queue.Queue() for i in range(5): nodes.put_nowait(('fake_uuid-%d' % i, 'fake-hardware', '', None)) self._start_service() CONF.set_override('send_sensor_data', True, group='conductor') task = acquire_mock.return_value.__enter__.return_value task.node.maintenance = False task.node.driver = 'fake' task.node.name = 'fake_node' get_sensors_data_mock = task.driver.management.get_sensors_data validate_mock = task.driver.management.validate get_sensors_data_mock.return_value = 'fake-sensor-data' self.service._sensors_nodes_task(self.context, nodes) self.assertEqual(5, acquire_mock.call_count) self.assertEqual(5, validate_mock.call_count) self.assertEqual(5, get_sensors_data_mock.call_count) self.assertEqual(5, notifier_mock.call_count) n_call = mock.call(mock.ANY, mock.ANY, 'hardware.fake.metrics', {'event_type': 'hardware.fake.metrics.update', 'node_name': 'fake_node', 'timestamp': mock.ANY, 'message_id': mock.ANY, 'payload': 'fake-sensor-data', 'node_uuid': mock.ANY, 'instance_uuid': None}) notifier_mock.assert_has_calls([n_call, n_call, n_call, n_call, n_call]) @mock.patch.object(task_manager, 'acquire') def test_send_sensor_task_shutdown(self, acquire_mock): nodes = queue.Queue() nodes.put_nowait(('fake_uuid', 'fake-hardware', '', None)) self._start_service() self.service._shutdown = True CONF.set_override('send_sensor_data', True, group='conductor') self.service._sensors_nodes_task(self.context, nodes) acquire_mock.__enter__.assert_not_called() @mock.patch.object(task_manager, 'acquire', autospec=True) def test_send_sensor_task_no_management(self, acquire_mock): nodes = queue.Queue() nodes.put_nowait(('fake_uuid', 'fake-hardware', '', None)) 
CONF.set_override('send_sensor_data', True, group='conductor') self._start_service() task = acquire_mock.return_value.__enter__.return_value task.node.maintenance = False task.driver.management = None self.service._sensors_nodes_task(self.context, nodes) self.assertTrue(acquire_mock.called) @mock.patch.object(manager.LOG, 'debug', autospec=True) @mock.patch.object(task_manager, 'acquire', autospec=True) def test_send_sensor_task_maintenance(self, acquire_mock, debug_log): nodes = queue.Queue() nodes.put_nowait(('fake_uuid', 'fake-hardware', '', None)) self._start_service() CONF.set_override('send_sensor_data', True, group='conductor') task = acquire_mock.return_value.__enter__.return_value task.node.maintenance = True get_sensors_data_mock = task.driver.management.get_sensors_data validate_mock = task.driver.management.validate self.service._sensors_nodes_task(self.context, nodes) self.assertTrue(acquire_mock.called) self.assertFalse(validate_mock.called) self.assertFalse(get_sensors_data_mock.called) self.assertTrue(debug_log.called) @mock.patch.object(manager.ConductorManager, '_spawn_worker', autospec=True) @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor') @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list') def test___send_sensor_data(self, get_nodeinfo_list_mock, _mapped_to_this_conductor_mock, mock_spawn): self._start_service() CONF.set_override('send_sensor_data', True, group='conductor') # NOTE(galyna): do not wait for threads to be finished in unittests CONF.set_override('send_sensor_data_wait_timeout', 0, group='conductor') _mapped_to_this_conductor_mock.return_value = True get_nodeinfo_list_mock.return_value = [('fake_uuid', 'fake', None)] self.service._send_sensor_data(self.context) mock_spawn.assert_called_with(self.service, self.service._sensors_nodes_task, self.context, mock.ANY) @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker', autospec=True) @mock.patch.object(manager.ConductorManager, 
'_mapped_to_this_conductor') @mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list') def test___send_sensor_data_multiple_workers( self, get_nodeinfo_list_mock, _mapped_to_this_conductor_mock, mock_spawn): self._start_service() mock_spawn.reset_mock() number_of_workers = 8 CONF.set_override('send_sensor_data', True, group='conductor') CONF.set_override('send_sensor_data_workers', number_of_workers, group='conductor') # NOTE(galyna): do not wait for threads to be finished in unittests CONF.set_override('send_sensor_data_wait_timeout', 0, group='conductor') _mapped_to_this_conductor_mock.return_value = True get_nodeinfo_list_mock.return_value = [('fake_uuid', 'fake', None)] * 20 self.service._send_sensor_data(self.context) self.assertEqual(number_of_workers, mock_spawn.call_count) # TODO(TheJulia): At some point, we should add a test to validate # that a modified filter to return all nodes actually works, although # the way the sensor tests are written, the list is all mocked. @mgr_utils.mock_record_keepalive class BootDeviceTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): @mock.patch.object(fake.FakeManagement, 'set_boot_device', autospec=True) @mock.patch.object(fake.FakeManagement, 'validate', autospec=True) def test_set_boot_device(self, mock_val, mock_sbd): node = obj_utils.create_test_node(self.context, driver='fake-hardware') self.service.set_boot_device(self.context, node.uuid, boot_devices.PXE) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_sbd.assert_called_once_with(mock.ANY, mock.ANY, boot_devices.PXE, persistent=False) def test_set_boot_device_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.set_boot_device, self.context, node.uuid, boot_devices.DISK) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) 
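Throughout these tests, assertions on methods patched with `autospec=True` lead with `mock.ANY`. That first placeholder stands for `self`: an autospecced method mock keeps the descriptor protocol, so calls made through an instance record the instance as the first positional argument. A minimal standalone sketch of that behavior (the `Management` class here is a hypothetical stand-in, not ironic code):

```python
from unittest import mock

class Management:
    """Hypothetical stand-in for a driver management interface."""
    def validate(self, task):
        return 'ok'

with mock.patch.object(Management, 'validate', autospec=True) as mock_val:
    mgmt = Management()
    mgmt.validate('fake-task')
    # autospec records `self` as the first positional argument,
    # which is why the real tests assert mock.ANY in that slot.
    mock_val.assert_called_once_with(mgmt, 'fake-task')
    mock_val.assert_called_once_with(mock.ANY, 'fake-task')
```

Without `autospec=True`, the same call would record only `('fake-task',)`, which is why a misspelled keyword such as `autpspec` silently changes what the assertions must look like.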
@mock.patch.object(fake.FakeManagement, 'validate', autospec=True) def test_set_boot_device_validate_fail(self, mock_val): node = obj_utils.create_test_node(self.context, driver='fake-hardware') mock_val.side_effect = exception.InvalidParameterValue('error') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.set_boot_device, self.context, node.uuid, boot_devices.DISK) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) def test_get_boot_device(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware') bootdev = self.service.get_boot_device(self.context, node.uuid) expected = {'boot_device': boot_devices.PXE, 'persistent': False} self.assertEqual(expected, bootdev) def test_get_boot_device_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.get_boot_device, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) @mock.patch.object(fake.FakeManagement, 'validate', autospec=True) def test_get_boot_device_validate_fail(self, mock_val): node = obj_utils.create_test_node(self.context, driver='fake-hardware') mock_val.side_effect = exception.InvalidParameterValue('error') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.get_boot_device, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) def test_get_supported_boot_devices(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware') bootdevs = self.service.get_supported_boot_devices(self.context, node.uuid) self.assertEqual([boot_devices.PXE], bootdevs) @mgr_utils.mock_record_keepalive class IndicatorsTestCase(mgr_utils.ServiceSetUpMixin, 
db_base.DbTestCase): @mock.patch.object(fake.FakeManagement, 'set_indicator_state', autospec=True) @mock.patch.object(fake.FakeManagement, 'validate', autospec=True) def test_set_indicator_state(self, mock_val, mock_sbd): node = obj_utils.create_test_node(self.context, driver='fake-hardware') self.service.set_indicator_state( self.context, node.uuid, components.CHASSIS, 'led', indicator_states.ON) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_sbd.assert_called_once_with( mock.ANY, mock.ANY, components.CHASSIS, 'led', indicator_states.ON) def test_get_indicator_state(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware') state = self.service.get_indicator_state( self.context, node.uuid, components.CHASSIS, 'led-0') expected = indicator_states.ON self.assertEqual(expected, state) def test_get_supported_indicators(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware') indicators = self.service.get_supported_indicators( self.context, node.uuid) expected = { 'chassis': { 'led-0': { 'readonly': True, 'states': [ indicator_states.OFF, indicator_states.ON ] } }, 'system': { 'led': { 'readonly': False, 'states': [ indicator_states.BLINKING, indicator_states.OFF, indicator_states.ON ] } } } self.assertEqual(expected, indicators) @mgr_utils.mock_record_keepalive class NmiTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): @mock.patch.object(fake.FakeManagement, 'inject_nmi', autospec=True) @mock.patch.object(fake.FakeManagement, 'validate', autospec=True) def test_inject_nmi(self, mock_val, mock_nmi): node = obj_utils.create_test_node(self.context, driver='fake-hardware') self.service.inject_nmi(self.context, node.uuid) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_nmi.assert_called_once_with(mock.ANY, mock.ANY) def test_inject_nmi_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') exc = 
self.assertRaises(messaging.rpc.ExpectedException, self.service.inject_nmi, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) @mock.patch.object(fake.FakeManagement, 'validate', autospec=True) def test_inject_nmi_validate_invalid_param(self, mock_val): node = obj_utils.create_test_node(self.context, driver='fake-hardware') mock_val.side_effect = exception.InvalidParameterValue('error') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.inject_nmi, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) @mock.patch.object(fake.FakeManagement, 'validate', autospec=True) def test_inject_nmi_validate_missing_param(self, mock_val): node = obj_utils.create_test_node(self.context, driver='fake-hardware') mock_val.side_effect = exception.MissingParameterValue('error') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.inject_nmi, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.MissingParameterValue, exc.exc_info[0]) def test_inject_nmi_not_implemented(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.inject_nmi, self.context, node.uuid) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.UnsupportedDriverExtension, exc.exc_info[0]) @mgr_utils.mock_record_keepalive @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) class VifTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def setUp(self): super(VifTestCase, self).setUp() self.vif = {'id': 'fake'} @mock.patch.object(n_flat.FlatNetwork, 'vif_list', autospec=True) def test_vif_list(self, mock_list, mock_valid): mock_list.return_value = ['VIF_ID'] node = 
obj_utils.create_test_node(self.context, driver='fake-hardware') data = self.service.vif_list(self.context, node.uuid) mock_list.assert_called_once_with(mock.ANY, mock.ANY) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) self.assertEqual(mock_list.return_value, data) @mock.patch.object(n_flat.FlatNetwork, 'vif_attach', autospec=True) def test_vif_attach(self, mock_attach, mock_valid): node = obj_utils.create_test_node(self.context, driver='fake-hardware') self.service.vif_attach(self.context, node.uuid, self.vif) mock_attach.assert_called_once_with(mock.ANY, mock.ANY, self.vif) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(n_flat.FlatNetwork, 'vif_attach', autospec=True) def test_vif_attach_node_locked(self, mock_attach, mock_valid): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_attach, self.context, node.uuid, self.vif) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) self.assertFalse(mock_attach.called) self.assertFalse(mock_valid.called) @mock.patch.object(n_flat.FlatNetwork, 'vif_attach', autospec=True) def test_vif_attach_raises_network_error(self, mock_attach, mock_valid): mock_attach.side_effect = exception.NetworkError("BOOM") node = obj_utils.create_test_node(self.context, driver='fake-hardware') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_attach, self.context, node.uuid, self.vif) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NetworkError, exc.exc_info[0]) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) mock_attach.assert_called_once_with(mock.ANY, mock.ANY, self.vif) @mock.patch.object(n_flat.FlatNetwork, 'vif_attach', autospec=True) def test_vif_attach_raises_portgroup_physnet_inconsistent( self, mock_attach, mock_valid): 
mock_valid.side_effect = exception.PortgroupPhysnetInconsistent( portgroup='fake-pg', physical_networks='fake-physnet') node = obj_utils.create_test_node(self.context, driver='fake-hardware') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_attach, self.context, node.uuid, self.vif) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.PortgroupPhysnetInconsistent, exc.exc_info[0]) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) self.assertFalse(mock_attach.called) @mock.patch.object(n_flat.FlatNetwork, 'vif_attach', autospec=True) def test_vif_attach_raises_vif_invalid_for_attach( self, mock_attach, mock_valid): mock_valid.side_effect = exception.VifInvalidForAttach( node='fake-node', vif='fake-vif', reason='fake-reason') node = obj_utils.create_test_node(self.context, driver='fake-hardware') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_attach, self.context, node.uuid, self.vif) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.VifInvalidForAttach, exc.exc_info[0]) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) self.assertFalse(mock_attach.called) @mock.patch.object(n_flat.FlatNetwork, 'vif_attach', autospec=True) def test_vif_attach_validate_error(self, mock_attach, mock_valid): mock_valid.side_effect = exception.MissingParameterValue("BOOM") node = obj_utils.create_test_node(self.context, driver='fake-hardware') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_attach, self.context, node.uuid, self.vif) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.MissingParameterValue, exc.exc_info[0]) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) self.assertFalse(mock_attach.called) @mock.patch.object(n_flat.FlatNetwork, 'vif_detach', autospec=True) def test_vif_detach(self, mock_detach, mock_valid): node = obj_utils.create_test_node(self.context, 
driver='fake-hardware') self.service.vif_detach(self.context, node.uuid, "interface") mock_detach.assert_called_once_with(mock.ANY, mock.ANY, "interface") mock_valid.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(n_flat.FlatNetwork, 'vif_detach', autospec=True) def test_vif_detach_node_locked(self, mock_detach, mock_valid): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_detach, self.context, node.uuid, "interface") # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) self.assertFalse(mock_detach.called) self.assertFalse(mock_valid.called) @mock.patch.object(n_flat.FlatNetwork, 'vif_detach', autospec=True) def test_vif_detach_raises_network_error(self, mock_detach, mock_valid): mock_detach.side_effect = exception.NetworkError("BOOM") node = obj_utils.create_test_node(self.context, driver='fake-hardware') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_detach, self.context, node.uuid, "interface") # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NetworkError, exc.exc_info[0]) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) mock_detach.assert_called_once_with(mock.ANY, mock.ANY, "interface") @mock.patch.object(n_flat.FlatNetwork, 'vif_detach', autospec=True) def test_vif_detach_validate_error(self, mock_detach, mock_valid): mock_valid.side_effect = exception.MissingParameterValue("BOOM") node = obj_utils.create_test_node(self.context, driver='fake-hardware') exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.vif_detach, self.context, node.uuid, "interface") # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.MissingParameterValue, exc.exc_info[0]) mock_valid.assert_called_once_with(mock.ANY, mock.ANY) self.assertFalse(mock_detach.called) 
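The recurring comment "Compare true exception hidden by @messaging.expected_exceptions" reflects how oslo.messaging reports server-side errors: the decorator wraps allowed exceptions in `ExpectedException`, preserving the original `sys.exc_info()` tuple, so tests inspect `exc.exc_info[0]` instead of catching the driver exception directly. A minimal standalone sketch of that mechanism (simplified stand-ins, not the real oslo.messaging implementation):

```python
import sys

class ExpectedException(Exception):
    """Simplified stand-in for messaging.rpc.ExpectedException."""
    def __init__(self):
        super().__init__()
        # Captures the exception currently being handled.
        self.exc_info = sys.exc_info()

def expected_exceptions(*allowed):
    """Simplified stand-in for @messaging.expected_exceptions(...)."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except allowed:
                # The true exception type survives in exc_info[0].
                raise ExpectedException()
        return wrapper
    return decorator

class NodeLocked(Exception):
    pass

@expected_exceptions(NodeLocked)
def update_port():
    raise NodeLocked('node is locked')

try:
    update_port()
except ExpectedException as exc:
    # Exactly the pattern the assertions in these tests rely on.
    assert exc.exc_info[0] is NodeLocked
```

This is why `assertRaises(messaging.rpc.ExpectedException, ...)` is paired with an `assertEqual(..., exc.exc_info[0])` in nearly every negative test above.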
@mgr_utils.mock_record_keepalive class UpdatePortgroupTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): @mock.patch.object(n_flat.FlatNetwork, 'portgroup_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_portgroup(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id, extra={'foo': 'bar'}) new_extra = {'foo': 'baz'} portgroup.extra = new_extra self.service.update_portgroup(self.context, portgroup) portgroup.refresh() self.assertEqual(new_extra, portgroup.extra) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, portgroup) @mock.patch.object(n_flat.FlatNetwork, 'portgroup_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_portgroup_failure(self, mock_val, mock_pc): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id, extra={'foo': 'bar'}) old_extra = portgroup.extra new_extra = {'foo': 'baz'} portgroup.extra = new_extra mock_pc.side_effect = (exception.FailedToUpdateMacOnPort('boom')) self.assertRaises(messaging.rpc.ExpectedException, self.service.update_portgroup, self.context, portgroup) portgroup.refresh() self.assertEqual(old_extra, portgroup.extra) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pc.assert_called_once_with(mock.ANY, mock.ANY, portgroup) def test_update_portgroup_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id) old_extra = portgroup.extra portgroup.extra = {'foo': 'baz'} exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_portgroup, self.context, portgroup) # Compare true exception hidden 
by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) portgroup.refresh() self.assertEqual(old_extra, portgroup.extra) def test_update_portgroup_to_node_in_deleting_state(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id, extra={'foo': 'bar'}) update_node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.DELETING, uuid=uuidutils.generate_uuid()) old_node_id = portgroup.node_id portgroup.node_id = update_node.id exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_portgroup, self.context, portgroup) self.assertEqual(exception.InvalidState, exc.exc_info[0]) portgroup.refresh() self.assertEqual(old_node_id, portgroup.node_id) @mock.patch.object(dbapi.IMPL, 'get_ports_by_portgroup_id') @mock.patch.object(n_flat.FlatNetwork, 'portgroup_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_portgroup_to_node_in_manageable_state(self, mock_val, mock_pgc, mock_get_ports): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id, extra={'foo': 'bar'}) update_node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.MANAGEABLE, uuid=uuidutils.generate_uuid()) mock_get_ports.return_value = [] self._start_service() portgroup.node_id = update_node.id self.service.update_portgroup(self.context, portgroup) portgroup.refresh() self.assertEqual(update_node.id, portgroup.node_id) mock_get_ports.assert_called_once_with(portgroup.uuid) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pgc.assert_called_once_with(mock.ANY, mock.ANY, portgroup) @mock.patch.object(dbapi.IMPL, 'get_ports_by_portgroup_id') @mock.patch.object(n_flat.FlatNetwork, 'portgroup_changed', autospec=True) 
@mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_portgroup_to_node_in_inspect_wait_state(self, mock_val, mock_pgc, mock_get_ports): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id, extra={'foo': 'bar'}) update_node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.INSPECTWAIT, uuid=uuidutils.generate_uuid()) mock_get_ports.return_value = [] self._start_service() portgroup.node_id = update_node.id self.service.update_portgroup(self.context, portgroup) portgroup.refresh() self.assertEqual(update_node.id, portgroup.node_id) mock_get_ports.assert_called_once_with(portgroup.uuid) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pgc.assert_called_once_with(mock.ANY, mock.ANY, portgroup) @mock.patch.object(dbapi.IMPL, 'get_ports_by_portgroup_id') @mock.patch.object(n_flat.FlatNetwork, 'portgroup_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_portgroup_to_node_in_active_state_and_maintenance( self, mock_val, mock_pgc, mock_get_ports): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id, extra={'foo': 'bar'}) update_node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.ACTIVE, maintenance=True, uuid=uuidutils.generate_uuid()) mock_get_ports.return_value = [] self._start_service() portgroup.node_id = update_node.id self.service.update_portgroup(self.context, portgroup) portgroup.refresh() self.assertEqual(update_node.id, portgroup.node_id) mock_get_ports.assert_called_once_with(portgroup.uuid) mock_val.assert_called_once_with(mock.ANY, mock.ANY) mock_pgc.assert_called_once_with(mock.ANY, mock.ANY, portgroup) @mock.patch.object(dbapi.IMPL, 'get_ports_by_portgroup_id') @mock.patch.object(n_flat.FlatNetwork, 
'portgroup_changed', autospec=True) @mock.patch.object(n_flat.FlatNetwork, 'validate', autospec=True) def test_update_portgroup_association_with_ports(self, mock_val, mock_pgc, mock_get_ports): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id, extra={'foo': 'bar'}) update_node = obj_utils.create_test_node( self.context, driver='fake-hardware', maintenance=True, uuid=uuidutils.generate_uuid()) mock_get_ports.return_value = ['test_port'] self._start_service() old_node_id = portgroup.node_id portgroup.node_id = update_node.id exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.update_portgroup, self.context, portgroup) self.assertEqual(exception.PortgroupNotEmpty, exc.exc_info[0]) portgroup.refresh() self.assertEqual(old_node_id, portgroup.node_id) mock_get_ports.assert_called_once_with(portgroup.uuid) self.assertFalse(mock_val.called) self.assertFalse(mock_pgc.called) @mgr_utils.mock_record_keepalive class RaidTestCases(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): driver_name = 'fake-hardware' raid_interface = None def setUp(self): super(RaidTestCases, self).setUp() self.node = obj_utils.create_test_node( self.context, driver=self.driver_name, raid_interface=self.raid_interface, provision_state=states.MANAGEABLE) def test_get_raid_logical_disk_properties(self): self._start_service() properties = self.service.get_raid_logical_disk_properties( self.context, self.driver_name) self.assertIn('raid_level', properties) self.assertIn('size_gb', properties) def test_set_target_raid_config(self): raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': '1'}]} self.service.set_target_raid_config( self.context, self.node.uuid, raid_config) self.node.refresh() self.assertEqual(raid_config, self.node.target_raid_config) def test_set_target_raid_config_empty(self): self.node.target_raid_config = {'foo': 'bar'} self.node.save() raid_config = {} 
self.service.set_target_raid_config( self.context, self.node.uuid, raid_config) self.node.refresh() self.assertEqual({}, self.node.target_raid_config) def test_set_target_raid_config_invalid_parameter_value(self): # Missing raid_level in the below raid config. raid_config = {'logical_disks': [{'size_gb': 100}]} self.node.target_raid_config = {'foo': 'bar'} self.node.save() exc = self.assertRaises( messaging.rpc.ExpectedException, self.service.set_target_raid_config, self.context, self.node.uuid, raid_config) self.node.refresh() self.assertEqual({'foo': 'bar'}, self.node.target_raid_config) self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0]) @mgr_utils.mock_record_keepalive class RaidHardwareTypeTestCases(RaidTestCases): driver_name = 'fake-hardware' raid_interface = 'fake' def test_get_raid_logical_disk_properties_iface_not_supported(self): # NOTE(jroll) we don't run this test as get_logical_disk_properties # is supported on all RAID implementations, and we cannot have a # null interface for a hardware type pass def test_set_target_raid_config_iface_not_supported(self): # NOTE(jroll): it's impossible for a dynamic driver to have a null # interface (e.g. node.driver.raid), so this instead tests that # if validation fails, we blow up properly. 
# need a different raid interface and a hardware type that supports it self.node = obj_utils.create_test_node( self.context, driver='manual-management', raid_interface='no-raid', uuid=uuidutils.generate_uuid(), provision_state=states.MANAGEABLE) raid_config = {'logical_disks': [{'size_gb': 100, 'raid_level': '1'}]} exc = self.assertRaises( messaging.rpc.ExpectedException, self.service.set_target_raid_config, self.context, self.node.uuid, raid_config) self.node.refresh() self.assertEqual({}, self.node.target_raid_config) self.assertEqual(exception.UnsupportedDriverExtension, exc.exc_info[0]) self.assertIn('manual-management', str(exc.exc_info[1])) @mock.patch.object(conductor_utils, 'node_power_action') class ManagerDoSyncPowerStateTestCase(db_base.DbTestCase): def setUp(self): super(ManagerDoSyncPowerStateTestCase, self).setUp() self.service = manager.ConductorManager('hostname', 'test-topic') self.driver = mock.Mock(spec_set=drivers_base.BareDriver) self.power = self.driver.power self.node = obj_utils.create_test_node( self.context, driver='fake-hardware', maintenance=False, provision_state=states.AVAILABLE, instance_uuid=uuidutils.generate_uuid()) self.task = mock.Mock(spec_set=['context', 'driver', 'node', 'upgrade_lock', 'shared']) self.task.context = self.context self.task.driver = self.driver self.task.node = self.node self.task.shared = False self.config(force_power_state_during_sync=False, group='conductor') def _do_sync_power_state(self, old_power_state, new_power_states, fail_validate=False): self.node.power_state = old_power_state if not isinstance(new_power_states, (list, tuple)): new_power_states = [new_power_states] if fail_validate: exc = exception.InvalidParameterValue('error') self.power.validate.side_effect = exc for new_power_state in new_power_states: self.node.power_state = old_power_state if isinstance(new_power_state, Exception): self.power.get_power_state.side_effect = new_power_state else: self.power.get_power_state.return_value = new_power_state 
count = manager.do_sync_power_state( self.task, self.service.power_state_sync_count[self.node.uuid]) self.service.power_state_sync_count[self.node.uuid] = count def test_state_unchanged(self, node_power_action): self._do_sync_power_state('fake-power', 'fake-power') self.assertFalse(self.power.validate.called) self.power.get_power_state.assert_called_once_with(self.task) self.assertEqual('fake-power', self.node.power_state) self.assertFalse(node_power_action.called) self.assertFalse(self.task.upgrade_lock.called) @mock.patch.object(nova, 'power_update', autospec=True) def test_state_not_set(self, mock_power_update, node_power_action): self._do_sync_power_state(None, states.POWER_ON) self.power.validate.assert_called_once_with(self.task) self.power.get_power_state.assert_called_once_with(self.task) self.assertFalse(node_power_action.called) self.assertEqual(states.POWER_ON, self.node.power_state) self.task.upgrade_lock.assert_called_once_with() mock_power_update.assert_called_once_with( self.task.context, self.node.instance_uuid, states.POWER_ON) def test_validate_fail(self, node_power_action): self._do_sync_power_state(None, states.POWER_ON, fail_validate=True) self.power.validate.assert_called_once_with(self.task) self.assertFalse(self.power.get_power_state.called) self.assertFalse(node_power_action.called) self.assertIsNone(self.node.power_state) def test_get_power_state_fail(self, node_power_action): self._do_sync_power_state('fake', exception.IronicException('foo')) self.assertFalse(self.power.validate.called) self.power.get_power_state.assert_called_once_with(self.task) self.assertFalse(node_power_action.called) self.assertEqual('fake', self.node.power_state) self.assertEqual(1, self.service.power_state_sync_count[self.node.uuid]) def test_get_power_state_error(self, node_power_action): self._do_sync_power_state('fake', states.ERROR) self.assertFalse(self.power.validate.called) self.power.get_power_state.assert_called_once_with(self.task) 
        self.assertFalse(node_power_action.called)
        self.assertEqual('fake', self.node.power_state)
        self.assertEqual(1,
                         self.service.power_state_sync_count[self.node.uuid])

    @mock.patch.object(nova, 'power_update', autospec=True)
    def test_state_changed_no_sync(self, mock_power_update,
                                   node_power_action):
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        self.task.upgrade_lock.assert_called_once_with()
        mock_power_update.assert_called_once_with(
            self.task.context, self.node.instance_uuid, states.POWER_OFF)

    @mock.patch('ironic.objects.node.NodeCorrectedPowerStateNotification')
    @mock.patch.object(nova, 'power_update', autospec=True)
    def test_state_changed_no_sync_notify(self, mock_power_update,
                                          mock_notif, node_power_action):
        # Required for exception handling
        mock_notif.__name__ = 'NodeCorrectedPowerStateNotification'
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        self.task.upgrade_lock.assert_called_once_with()
        # 1 notification should be sent:
        # baremetal.node.power_state_corrected.success, indicating the DB was
        # updated to reflect the actual node power state
        mock_notif.assert_called_once_with(publisher=mock.ANY,
                                           event_type=mock.ANY,
                                           level=mock.ANY,
                                           payload=mock.ANY)
        mock_notif.return_value.emit.assert_called_once_with(mock.ANY)

        notif_args = mock_notif.call_args[1]
        self.assertNotificationEqual(
            notif_args, 'ironic-conductor', CONF.host,
            'baremetal.node.power_state_corrected.success',
            obj_fields.NotificationLevel.INFO)
        mock_power_update.assert_called_once_with(
            self.task.context, self.node.instance_uuid, states.POWER_OFF)

    def test_state_changed_sync(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=1, group='conductor')
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        node_power_action.assert_called_once_with(self.task, states.POWER_ON)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.task.upgrade_lock.assert_called_once_with()

    def test_state_changed_sync_failed(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        node_power_action.side_effect = exception.IronicException('test')
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)
        # Just testing that this test doesn't raise.
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        node_power_action.assert_called_once_with(self.task, states.POWER_ON)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertEqual(1,
                         self.service.power_state_sync_count[self.node.uuid])

    @mock.patch.object(nova, 'power_update', autospec=True)
    def test_max_retries_exceeded(self, mock_power_update,
                                  node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=1, group='conductor')
        self._do_sync_power_state(states.POWER_ON,
                                  [states.POWER_OFF, states.POWER_OFF])
        self.assertFalse(self.power.validate.called)
        power_exp_calls = [mock.call(self.task)] * 2
        self.assertEqual(power_exp_calls,
                         self.power.get_power_state.call_args_list)
        node_power_action.assert_called_once_with(self.task, states.POWER_ON)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        self.assertEqual(2,
                         self.service.power_state_sync_count[self.node.uuid])
        self.assertTrue(self.node.maintenance)
        self.assertIsNotNone(self.node.maintenance_reason)
        self.assertEqual('power failure', self.node.fault)
        mock_power_update.assert_called_once_with(
            self.task.context, self.node.instance_uuid, states.POWER_OFF)

    @mock.patch.object(nova, 'power_update', autospec=True)
    def test_max_retries_exceeded2(self, mock_power_update,
                                   node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=2, group='conductor')
        self._do_sync_power_state(states.POWER_ON,
                                  [states.POWER_OFF, states.POWER_OFF,
                                   states.POWER_OFF])
        self.assertFalse(self.power.validate.called)
        power_exp_calls = [mock.call(self.task)] * 3
        self.assertEqual(power_exp_calls,
                         self.power.get_power_state.call_args_list)
        npa_exp_calls = [mock.call(self.task, states.POWER_ON)] * 2
        self.assertEqual(npa_exp_calls, node_power_action.call_args_list)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        self.assertEqual(3,
                         self.service.power_state_sync_count[self.node.uuid])
        self.assertTrue(self.node.maintenance)
        self.assertEqual('power failure', self.node.fault)
        mock_power_update.assert_called_once_with(
            self.task.context, self.node.instance_uuid, states.POWER_OFF)

    @mock.patch('ironic.objects.node.NodeCorrectedPowerStateNotification')
    @mock.patch.object(nova, 'power_update', autospec=True)
    def test_max_retries_exceeded_notify(self, mock_power_update,
                                         mock_notif, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=1, group='conductor')
        # Required for exception handling
        mock_notif.__name__ = 'NodeCorrectedPowerStateNotification'
        self._do_sync_power_state(states.POWER_ON,
                                  [states.POWER_OFF, states.POWER_OFF])
        # 1 notification should be sent:
        # baremetal.node.power_state_corrected.success, indicating
        # the DB was updated to reflect the actual node power state
        mock_notif.assert_called_once_with(publisher=mock.ANY,
                                           event_type=mock.ANY,
                                           level=mock.ANY,
                                           payload=mock.ANY)
        mock_notif.return_value.emit.assert_called_once_with(mock.ANY)

        notif_args = mock_notif.call_args[1]
        self.assertNotificationEqual(
            notif_args, 'ironic-conductor', CONF.host,
            'baremetal.node.power_state_corrected.success',
            obj_fields.NotificationLevel.INFO)
        mock_power_update.assert_called_once_with(
            self.task.context, self.node.instance_uuid, states.POWER_OFF)

    def test_retry_then_success(self, node_power_action):
        self.config(force_power_state_during_sync=True, group='conductor')
        self.config(power_state_sync_max_retries=2, group='conductor')
        self._do_sync_power_state(states.POWER_ON,
                                  [states.POWER_OFF, states.POWER_OFF,
                                   states.POWER_ON])
        self.assertFalse(self.power.validate.called)
        power_exp_calls = [mock.call(self.task)] * 3
        self.assertEqual(power_exp_calls,
                         self.power.get_power_state.call_args_list)
        npa_exp_calls = [mock.call(self.task, states.POWER_ON)] * 2
        self.assertEqual(npa_exp_calls, node_power_action.call_args_list)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertEqual(0,
                         self.service.power_state_sync_count[self.node.uuid])

    def test_power_state_sync_max_retries_gps_exception(
            self, node_power_action):
        self.config(power_state_sync_max_retries=2, group='conductor')
        self.service.power_state_sync_count[self.node.uuid] = 2
        node_power_action.side_effect = exception.IronicException('test')
        self._do_sync_power_state('fake',
                                  exception.IronicException('SpongeBob'))
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertIsNone(self.node.power_state)
        self.assertTrue(self.node.maintenance)
        self.assertFalse(node_power_action.called)
        # make sure the actual error is in the last_error attribute
        self.assertIn('SpongeBob', self.node.last_error)

    def test_maintenance_on_upgrade_lock(self, node_power_action):
        self.node.maintenance = True
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertFalse(node_power_action.called)
        self.task.upgrade_lock.assert_called_once_with()

    def test_wrong_provision_state_on_upgrade_lock(self, node_power_action):
        self.node.provision_state = states.DEPLOYWAIT
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertEqual(states.POWER_ON, self.node.power_state)
        self.assertFalse(node_power_action.called)
        self.task.upgrade_lock.assert_called_once_with()

    def test_correct_power_state_on_upgrade_lock(self, node_power_action):
        def _fake_upgrade():
            self.node.power_state = states.POWER_OFF

        self.task.upgrade_lock.side_effect = _fake_upgrade
        self._do_sync_power_state(states.POWER_ON, states.POWER_OFF)
        self.assertFalse(self.power.validate.called)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(node_power_action.called)
        self.task.upgrade_lock.assert_called_once_with()


@mock.patch.object(waiters, 'wait_for_all',
                   new=mock.MagicMock(return_value=(0, 0)))
@mock.patch.object(manager.ConductorManager, '_spawn_worker',
                   new=lambda self, fun, *args: fun(*args))
@mock.patch.object(manager, 'do_sync_power_state')
@mock.patch.object(task_manager, 'acquire')
@mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
@mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
class ManagerSyncPowerStatesTestCase(mgr_utils.CommonMixIn,
                                     db_base.DbTestCase):
    def setUp(self):
        super(ManagerSyncPowerStatesTestCase, self).setUp()
        self.service = manager.ConductorManager('hostname', 'test-topic')
        self.service.dbapi = self.dbapi
        self.node = self._create_node()
        self.filters = {'maintenance': False}
        self.columns = ['uuid', 'driver', 'conductor_group', 'id']

    def test_node_not_mapped(self, get_nodeinfo_mock,
                             mapped_mock, acquire_mock, sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = False
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        self.assertFalse(acquire_mock.called)
        self.assertFalse(sync_mock.called)

    def test_node_locked_on_acquire(self, get_nodeinfo_mock,
                                    mapped_mock, acquire_mock, sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        task = self._create_task(
            node_attrs=dict(reservation='host1', uuid=self.node.uuid))
        acquire_mock.side_effect = self._get_acquire_side_effect(task)
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(sync_mock.called)

    def test_node_in_deploywait_on_acquire(self, get_nodeinfo_mock,
                                           mapped_mock, acquire_mock,
                                           sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        task = self._create_task(
            node_attrs=dict(provision_state=states.DEPLOYWAIT,
                            target_provision_state=states.ACTIVE,
                            uuid=self.node.uuid))
        acquire_mock.side_effect = self._get_acquire_side_effect(task)
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(sync_mock.called)

    def test_node_in_enroll_on_acquire(self, get_nodeinfo_mock, mapped_mock,
                                       acquire_mock, sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        task = self._create_task(
            node_attrs=dict(provision_state=states.ENROLL,
                            target_provision_state=states.NOSTATE,
                            uuid=self.node.uuid))
        acquire_mock.side_effect = self._get_acquire_side_effect(task)
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(sync_mock.called)

    def test_node_in_power_transition_on_acquire(self, get_nodeinfo_mock,
                                                 mapped_mock, acquire_mock,
                                                 sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        task = self._create_task(
            node_attrs=dict(target_power_state=states.POWER_ON,
                            uuid=self.node.uuid))
        acquire_mock.side_effect = self._get_acquire_side_effect(task)
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(sync_mock.called)

    def test_node_in_maintenance_on_acquire(self, get_nodeinfo_mock,
                                            mapped_mock, acquire_mock,
                                            sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        task = self._create_task(
            node_attrs=dict(maintenance=True, uuid=self.node.uuid))
        acquire_mock.side_effect = self._get_acquire_side_effect(task)
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(sync_mock.called)

    def test_node_disappears_on_acquire(self, get_nodeinfo_mock,
                                        mapped_mock, acquire_mock, sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = exception.NodeNotFound(node=self.node.uuid,
                                                          host='fake')
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(sync_mock.called)

    def test_single_node(self, get_nodeinfo_mock,
                         mapped_mock, acquire_mock, sync_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        task = self._create_task(node_attrs=dict(uuid=self.node.uuid))
        acquire_mock.side_effect = self._get_acquire_side_effect(task)
        self.service._sync_power_states(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        sync_mock.assert_called_once_with(task, mock.ANY)

    def test__sync_power_state_multiple_nodes(self, get_nodeinfo_mock,
                                              mapped_mock, acquire_mock,
                                              sync_mock):
        # Create 7 nodes (the loop below runs for ids 1-7):
        # 1st node: Should acquire and try to sync
        # 2nd node: Not mapped to this conductor
        # 3rd node: In DEPLOYWAIT provision_state
        # 4th node: In maintenance mode
        # 5th node: Is in power transition
        # 6th node: Disappears after getting nodeinfo list
        # 7th node: Should acquire and try to sync
        nodes = []
        node_attrs = {}
        mapped_map = {}
        for i in range(1, 8):
            attrs = {'id': i,
                     'uuid': uuidutils.generate_uuid()}
            if i == 3:
                attrs['provision_state'] = states.DEPLOYWAIT
                attrs['target_provision_state'] = states.ACTIVE
            elif i == 4:
                attrs['maintenance'] = True
            elif i == 5:
                attrs['target_power_state'] = states.POWER_ON

            n = self._create_node(**attrs)
            nodes.append(n)
            node_attrs[n.uuid] = attrs
            mapped_map[n.uuid] = False if i == 2 else True

        tasks = [self._create_task(node_attrs=node_attrs[x.uuid])
                 for x in nodes if x.id != 2]
        # not found during acquire (4 = index of Node6 after removing Node2)
        tasks[4] = exception.NodeNotFound(node=6)
        sync_results = [0] * 7 + [exception.NodeLocked(node=8, host='')]

        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response(nodes))
        mapped_mock.side_effect = lambda x, y, z: mapped_map[x]
        acquire_mock.side_effect = self._get_acquire_side_effect(tasks)
        sync_mock.side_effect = sync_results

        with mock.patch.object(eventlet, 'sleep') as sleep_mock:
            self.service._sync_power_states(self.context)
            # Ensure we've yielded on every iteration, except for node
            # not mapped to this conductor
            self.assertEqual(len(nodes) - 1, sleep_mock.call_count)

        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_calls = [mock.call(x.uuid, x.driver, x.conductor_group)
                        for x in nodes]
        self.assertEqual(mapped_calls, mapped_mock.call_args_list)
        acquire_calls = [mock.call(self.context, x.uuid,
                                   purpose=mock.ANY, shared=True)
                         for x in nodes if x.id != 2]
        self.assertEqual(acquire_calls, acquire_mock.call_args_list)
        # Nodes 1 and 7 (5 = index of Node7 after removing Node2)
        sync_calls = [mock.call(tasks[0], mock.ANY),
                      mock.call(tasks[5], mock.ANY)]
        self.assertEqual(sync_calls, sync_mock.call_args_list)


@mock.patch.object(task_manager, 'acquire')
@mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
@mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
class ManagerPowerRecoveryTestCase(mgr_utils.CommonMixIn,
                                   db_base.DbTestCase):
    def setUp(self):
        super(ManagerPowerRecoveryTestCase, self).setUp()
        self.service = manager.ConductorManager('hostname', 'test-topic')
        self.service.dbapi = self.dbapi
        self.driver = mock.Mock(spec_set=drivers_base.BareDriver)
        self.power = self.driver.power
        self.task = mock.Mock(spec_set=['context', 'driver', 'node',
                                        'upgrade_lock', 'shared'])
        self.node = self._create_node(maintenance=True,
                                      fault='power failure',
                                      maintenance_reason='Unreachable BMC')
        self.task.node = self.node
        self.task.driver = self.driver
        self.filters = {'maintenance': True,
                        'fault': 'power failure'}
        self.columns = ['uuid', 'driver', 'conductor_group', 'id']

    def test_node_not_mapped(self, get_nodeinfo_mock,
                             mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = False
        self.service._power_failure_recovery(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        self.assertFalse(acquire_mock.called)
        self.assertFalse(self.power.validate.called)

    def _power_failure_recovery(self, node_dict, get_nodeinfo_mock,
                                mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        task = self._create_task(node_attrs=node_dict)
        acquire_mock.side_effect = self._get_acquire_side_effect(task)
        self.service._power_failure_recovery(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(self.power.validate.called)

    def test_node_locked_on_acquire(self, get_nodeinfo_mock,
                                    mapped_mock, acquire_mock):
        node_dict = dict(reservation='host1', uuid=self.node.uuid)
        self._power_failure_recovery(node_dict, get_nodeinfo_mock,
                                     mapped_mock, acquire_mock)

    def test_node_in_enroll_on_acquire(self, get_nodeinfo_mock,
                                       mapped_mock, acquire_mock):
        node_dict = dict(provision_state=states.ENROLL,
                         target_provision_state=states.NOSTATE,
                         maintenance=True, uuid=self.node.uuid)
        self._power_failure_recovery(node_dict, get_nodeinfo_mock,
                                     mapped_mock, acquire_mock)

    def test_node_in_power_transition_on_acquire(self, get_nodeinfo_mock,
                                                 mapped_mock, acquire_mock):
        node_dict = dict(target_power_state=states.POWER_ON,
                         maintenance=True, uuid=self.node.uuid)
        self._power_failure_recovery(node_dict, get_nodeinfo_mock,
                                     mapped_mock, acquire_mock)

    def test_node_not_in_maintenance_on_acquire(self, get_nodeinfo_mock,
                                                mapped_mock, acquire_mock):
        node_dict = dict(maintenance=False, uuid=self.node.uuid)
        self._power_failure_recovery(node_dict, get_nodeinfo_mock,
                                     mapped_mock, acquire_mock)

    def test_node_disappears_on_acquire(self, get_nodeinfo_mock,
                                        mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = exception.NodeNotFound(node=self.node.uuid,
                                                          host='fake')
        self.service._power_failure_recovery(self.context)
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.assertFalse(self.power.validate.called)

    @mock.patch.object(notification_utils,
                       'emit_power_state_corrected_notification')
    @mock.patch.object(nova, 'power_update', autospec=True)
    def test_node_recovery_success(self, mock_power_update, notify_mock,
                                   get_nodeinfo_mock, mapped_mock,
                                   acquire_mock):
        self.node.power_state = states.POWER_ON
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(self.task)
        self.power.get_power_state.return_value = states.POWER_OFF

        self.service._power_failure_recovery(self.context)

        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.power.validate.assert_called_once_with(self.task)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.task.upgrade_lock.assert_called_once_with()
        self.assertFalse(self.node.maintenance)
        self.assertIsNone(self.node.fault)
        self.assertIsNone(self.node.maintenance_reason)
        self.assertEqual(states.POWER_OFF, self.node.power_state)
        notify_mock.assert_called_once_with(self.task, states.POWER_ON)
        mock_power_update.assert_called_once_with(
            self.task.context, self.node.instance_uuid, states.POWER_OFF)

    def test_node_recovery_failed(self, get_nodeinfo_mock,
                                  mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(self.task)
        self.power.get_power_state.return_value = states.ERROR

        self.service._power_failure_recovery(self.context)

        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY,
                                             shared=True)
        self.power.validate.assert_called_once_with(self.task)
        self.power.get_power_state.assert_called_once_with(self.task)
        self.assertFalse(self.task.upgrade_lock.called)
        self.assertTrue(self.node.maintenance)
        self.assertEqual('power failure', self.node.fault)
        self.assertEqual('Unreachable BMC', self.node.maintenance_reason)


@mock.patch.object(task_manager, 'acquire')
@mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
@mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
class ManagerCheckDeployTimeoutsTestCase(mgr_utils.CommonMixIn,
                                         db_base.DbTestCase):
    def setUp(self):
        super(ManagerCheckDeployTimeoutsTestCase, self).setUp()
        self.config(deploy_callback_timeout=300, group='conductor')
        self.service = manager.ConductorManager('hostname', 'test-topic')
        self.service.dbapi = self.dbapi

        self.node = self._create_node(provision_state=states.DEPLOYWAIT,
                                      target_provision_state=states.ACTIVE)
        self.task = self._create_task(node=self.node)

        self.node2 = self._create_node(provision_state=states.DEPLOYWAIT,
                                       target_provision_state=states.ACTIVE)
        self.task2 = self._create_task(node=self.node2)

        self.filters = {'reserved': False, 'maintenance': False,
                        'provisioned_before': 300,
                        'provision_state': states.DEPLOYWAIT}
        self.columns = ['uuid', 'driver', 'conductor_group']

    def _assert_get_nodeinfo_args(self, get_nodeinfo_mock):
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters,
            sort_key='provision_updated_at', sort_dir='asc')

    def test_not_mapped(self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = False
        self.service._check_deploy_timeouts(self.context)
        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        self.assertFalse(acquire_mock.called)

    def test_timeout(self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(self.task)
        self.service._check_deploy_timeouts(self.context)
        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.task.process_event.assert_called_with(
            'fail',
            callback=self.service._spawn_worker,
            call_args=(conductor_utils.cleanup_after_timeout, self.task),
            err_handler=conductor_utils.provisioning_error_handler,
            target_state=None)

    def test_acquire_node_disappears(self, get_nodeinfo_mock, mapped_mock,
                                     acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = exception.NodeNotFound(node='fake')

        # Exception eaten
        self.service._check_deploy_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(
            self.node.uuid, self.node.driver, self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.assertFalse(self.task.spawn_after.called)

    def test_acquire_node_locked(self, get_nodeinfo_mock, mapped_mock,
                                 acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = exception.NodeLocked(node='fake',
                                                        host='fake')

        # Exception eaten
        self.service._check_deploy_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(
            self.node.uuid, self.node.driver, self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.assertFalse(self.task.spawn_after.called)

    def test_no_deploywait_after_lock(self, get_nodeinfo_mock, mapped_mock,
                                      acquire_mock):
        task = self._create_task(
            node_attrs=dict(provision_state=states.AVAILABLE,
                            uuid=self.node.uuid))
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(task)

        self.service._check_deploy_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(
            self.node.uuid, self.node.driver, self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.assertFalse(task.spawn_after.called)

    def test_maintenance_after_lock(self, get_nodeinfo_mock, mapped_mock,
                                    acquire_mock):
        task = self._create_task(
            node_attrs=dict(provision_state=states.DEPLOYWAIT,
                            target_provision_state=states.ACTIVE,
                            maintenance=True,
                            uuid=self.node.uuid))
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([task.node, self.node2]))
        mapped_mock.return_value = True
        acquire_mock.side_effect = (
            self._get_acquire_side_effect([task, self.task2]))

        self.service._check_deploy_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        self.assertEqual([mock.call(self.node.uuid, task.node.driver,
                                    task.node.conductor_group),
                          mock.call(self.node2.uuid, self.node2.driver,
                                    self.node2.conductor_group)],
                         mapped_mock.call_args_list)
        self.assertEqual([mock.call(self.context, self.node.uuid,
                                    purpose=mock.ANY),
                          mock.call(self.context, self.node2.uuid,
                                    purpose=mock.ANY)],
                         acquire_mock.call_args_list)
        # First node skipped
        self.assertFalse(task.spawn_after.called)
        # Second node spawned
        self.task2.process_event.assert_called_with(
            'fail',
            callback=self.service._spawn_worker,
            call_args=(conductor_utils.cleanup_after_timeout, self.task2),
            err_handler=conductor_utils.provisioning_error_handler,
            target_state=None)

    def test_exiting_no_worker_avail(self, get_nodeinfo_mock, mapped_mock,
                                     acquire_mock):
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node, self.node2]))
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(
            [(self.task, exception.NoFreeConductorWorker()), self.task2])

        # Exception should be nuked
        self.service._check_deploy_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        # mapped should be only called for the first node as we should
        # have exited the loop early due to NoFreeConductorWorker
        mapped_mock.assert_called_once_with(
            self.node.uuid, self.node.driver, self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.task.process_event.assert_called_with(
            'fail',
            callback=self.service._spawn_worker,
            call_args=(conductor_utils.cleanup_after_timeout, self.task),
            err_handler=conductor_utils.provisioning_error_handler,
            target_state=None)

    def test_exiting_with_other_exception(self, get_nodeinfo_mock,
                                          mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node, self.node2]))
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(
            [(self.task, exception.IronicException('foo')), self.task2])

        # Should re-raise
        self.assertRaises(exception.IronicException,
                          self.service._check_deploy_timeouts,
                          self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        # mapped should be only called for the first node as we should
        # have exited the loop early due to unknown exception
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.task.process_event.assert_called_with(
            'fail',
            callback=self.service._spawn_worker,
            call_args=(conductor_utils.cleanup_after_timeout, self.task),
            err_handler=conductor_utils.provisioning_error_handler,
            target_state=None)

    def test_worker_limit(self, get_nodeinfo_mock, mapped_mock,
                          acquire_mock):
        self.config(periodic_max_workers=2, group='conductor')

        # Use the same nodes/tasks to make life easier in the tests
        # here
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node] * 3))
        mapped_mock.return_value = True
        acquire_mock.side_effect = (
            self._get_acquire_side_effect([self.task] * 3))

        self.service._check_deploy_timeouts(self.context)

        # Should only have run 2.
        self.assertEqual([mock.call(self.node.uuid, self.node.driver,
                                    self.node.conductor_group)] * 2,
                         mapped_mock.call_args_list)
        self.assertEqual([mock.call(self.context, self.node.uuid,
                                    purpose=mock.ANY)] * 2,
                         acquire_mock.call_args_list)
        process_event_call = mock.call(
            'fail',
            callback=self.service._spawn_worker,
            call_args=(conductor_utils.cleanup_after_timeout, self.task),
            err_handler=conductor_utils.provisioning_error_handler,
            target_state=None)
        self.assertEqual([process_event_call] * 2,
                         self.task.process_event.call_args_list)


@mgr_utils.mock_record_keepalive
class ManagerTestProperties(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):

    def setUp(self):
        super(ManagerTestProperties, self).setUp()
        self.service = manager.ConductorManager('test-host', 'test-topic')

    def _check_driver_properties(self, hw_type, expected):
        self._start_service()
        properties = self.service.get_driver_properties(self.context,
                                                        hw_type)
        self.assertEqual(sorted(expected), sorted(properties))

    def test_driver_properties_fake(self):
        expected = ['B1', 'B2']
        self._check_driver_properties("fake-hardware", expected)

    def test_driver_properties_ipmi(self):
        self.config(enabled_hardware_types='ipmi',
                    enabled_power_interfaces=['ipmitool'],
                    enabled_management_interfaces=['ipmitool'],
                    enabled_console_interfaces=['ipmitool-socat'])
        expected = ['ipmi_address', 'ipmi_terminal_port',
                    'ipmi_password', 'ipmi_port', 'ipmi_priv_level',
                    'ipmi_username', 'ipmi_bridging',
                    'ipmi_transit_channel', 'ipmi_transit_address',
                    'ipmi_target_channel', 'ipmi_target_address',
                    'ipmi_local_address', 'deploy_kernel',
                    'deploy_ramdisk', 'force_persistent_boot_device',
                    'ipmi_protocol_version', 'ipmi_force_boot_device',
                    'deploy_forces_oob_reboot', 'rescue_kernel',
                    'rescue_ramdisk', 'ipmi_disable_boot_timeout',
                    'ipmi_hex_kg_key']
        self._check_driver_properties("ipmi", expected)

    def test_driver_properties_snmp(self):
        self.config(enabled_hardware_types='snmp',
                    enabled_power_interfaces=['snmp'])
        expected = ['deploy_kernel', 'deploy_ramdisk',
                    'force_persistent_boot_device',
                    'rescue_kernel', 'rescue_ramdisk',
                    'snmp_driver', 'snmp_address', 'snmp_port',
                    'snmp_version', 'snmp_community',
                    'snmp_community_read', 'snmp_community_write',
                    'snmp_security', 'snmp_outlet', 'snmp_user',
                    'snmp_context_engine_id', 'snmp_context_name',
                    'snmp_auth_key', 'snmp_auth_protocol',
                    'snmp_priv_key', 'snmp_priv_protocol',
                    'deploy_forces_oob_reboot']
        self._check_driver_properties("snmp", expected)

    def test_driver_properties_ilo(self):
        self.config(enabled_hardware_types='ilo',
                    enabled_power_interfaces=['ilo'],
                    enabled_management_interfaces=['ilo'],
                    enabled_boot_interfaces=['ilo-virtual-media'],
                    enabled_inspect_interfaces=['ilo'],
                    enabled_console_interfaces=['ilo'])
        expected = ['ilo_address', 'ilo_username', 'ilo_password',
                    'client_port', 'client_timeout', 'ilo_deploy_iso',
                    'console_port', 'ilo_change_password',
                    'ca_file', 'snmp_auth_user', 'snmp_auth_prot_password',
                    'snmp_auth_priv_password', 'snmp_auth_protocol',
                    'snmp_auth_priv_protocol', 'deploy_forces_oob_reboot']
        self._check_driver_properties("ilo", expected)

    def test_driver_properties_fail(self):
        self.service.init_host()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.get_driver_properties,
                                self.context, "bad-driver")
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.DriverNotFound, exc.exc_info[0])


@mgr_utils.mock_record_keepalive
class ManagerTestHardwareTypeProperties(mgr_utils.ServiceSetUpMixin,
                                        db_base.DbTestCase):

    def _check_hardware_type_properties(self, hardware_type, expected):
        self.config(enabled_hardware_types=[hardware_type])
        self.hardware_type = driver_factory.get_hardware_type(hardware_type)
        self._start_service()
        properties = self.service.get_driver_properties(self.context,
                                                        hardware_type)
        self.assertEqual(sorted(expected), sorted(properties))

    def test_hardware_type_properties_manual_management(self):
        expected = ['deploy_kernel', 'deploy_ramdisk',
                    'force_persistent_boot_device',
                    'deploy_forces_oob_reboot', 'rescue_kernel',
                    'rescue_ramdisk']
        self._check_hardware_type_properties('manual-management', expected)


@mock.patch.object(waiters, 'wait_for_all')
@mock.patch.object(manager.ConductorManager, '_spawn_worker')
@mock.patch.object(manager.ConductorManager, '_sync_power_state_nodes_task')
class ParallelPowerSyncTestCase(mgr_utils.CommonMixIn, db_base.DbTestCase):

    def setUp(self):
        super(ParallelPowerSyncTestCase, self).setUp()
        self.service = manager.ConductorManager('hostname', 'test-topic')

    def test__sync_power_states_9_nodes_8_workers(
            self, sync_mock, spawn_mock, waiter_mock):
        CONF.set_override('sync_power_state_workers', 8, group='conductor')
        with mock.patch.object(self.service, 'iter_nodes',
                               new=mock.MagicMock(return_value=[[0]] * 9)):
            self.service._sync_power_states(self.context)
        self.assertEqual(7, spawn_mock.call_count)
        self.assertEqual(1, sync_mock.call_count)
        self.assertEqual(1, waiter_mock.call_count)

    def test__sync_power_states_6_nodes_8_workers(
            self, sync_mock, spawn_mock, waiter_mock):
        CONF.set_override('sync_power_state_workers', 8, group='conductor')
        with mock.patch.object(self.service, 'iter_nodes',
                               new=mock.MagicMock(return_value=[[0]] * 6)):
            self.service._sync_power_states(self.context)
        self.assertEqual(5, spawn_mock.call_count)
        self.assertEqual(1, sync_mock.call_count)
        self.assertEqual(1, waiter_mock.call_count)

    def test__sync_power_states_1_nodes_8_workers(
            self, sync_mock, spawn_mock, waiter_mock):
        CONF.set_override('sync_power_state_workers', 8, group='conductor')
        with mock.patch.object(self.service, 'iter_nodes',
                               new=mock.MagicMock(return_value=[[0]])):
            self.service._sync_power_states(self.context)
        self.assertEqual(0, spawn_mock.call_count)
        self.assertEqual(1, sync_mock.call_count)
        self.assertEqual(1, waiter_mock.call_count)

    def test__sync_power_states_9_nodes_1_worker(
            self, sync_mock, spawn_mock, waiter_mock):
        CONF.set_override('sync_power_state_workers', 1, group='conductor')
        with mock.patch.object(self.service, 'iter_nodes',
                               new=mock.MagicMock(return_value=[[0]] * 9)):
            self.service._sync_power_states(self.context)
        self.assertEqual(0, spawn_mock.call_count)
        self.assertEqual(1, sync_mock.call_count)
        self.assertEqual(1, waiter_mock.call_count)

    @mock.patch.object(queue, 'Queue', autospec=True)
    def test__sync_power_states_node_prioritization(
            self, queue_mock, sync_mock, spawn_mock, waiter_mock):
        CONF.set_override('sync_power_state_workers', 1, group='conductor')
        with mock.patch.object(
                self.service, 'iter_nodes',
                new=mock.MagicMock(return_value=[[0], [1], [2]])
        ), mock.patch.dict(
                self.service.power_state_sync_count,
                {0: 1, 1: 0, 2: 2}, clear=True):
            queue_mock.return_value.qsize.return_value = 0
            self.service._sync_power_states(self.context)
            expected_calls = [mock.call([2]), mock.call([0]), mock.call([1])]
            queue_mock.return_value.put.assert_has_calls(expected_calls)


@mock.patch.object(task_manager, 'acquire')
@mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
@mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
class ManagerSyncLocalStateTestCase(mgr_utils.CommonMixIn,
                                    db_base.DbTestCase):

    def setUp(self):
        super(ManagerSyncLocalStateTestCase, self).setUp()

        self.service = manager.ConductorManager('hostname', 'test-topic')
        self.service.conductor = mock.Mock()
        self.service.dbapi = self.dbapi
        self.service.ring_manager = mock.Mock()

        self.node = self._create_node(provision_state=states.ACTIVE,
                                      target_provision_state=states.NOSTATE)
        self.task = self._create_task(node=self.node)

        self.filters = {'reserved': False,
                        'maintenance': False,
                        'provision_state': states.ACTIVE}
        self.columns = ['uuid', 'driver', 'conductor_group', 'id',
                        'conductor_affinity']

    def _assert_get_nodeinfo_args(self, get_nodeinfo_mock):
        get_nodeinfo_mock.assert_called_once_with(
            columns=self.columns, filters=self.filters)

    def test_not_mapped(self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = False
self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver, self.node.conductor_group) self.assertFalse(acquire_mock.called) def test_already_mapped(self, get_nodeinfo_mock, mapped_mock, acquire_mock): # Node is already mapped to the conductor running the periodic task self.node.conductor_affinity = 123 self.service.conductor.id = 123 get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver, self.node.conductor_group) self.assertFalse(acquire_mock.called) def test_good(self, get_nodeinfo_mock, mapped_mock, acquire_mock): get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response() mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect(self.task) self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver, self.node.conductor_group) acquire_mock.assert_called_once_with(self.context, self.node.uuid, purpose=mock.ANY) # assert spawn_after has been called self.task.spawn_after.assert_called_once_with( self.service._spawn_worker, self.service._do_takeover, self.task) def test_no_free_worker(self, get_nodeinfo_mock, mapped_mock, acquire_mock): mapped_mock.return_value = True acquire_mock.side_effect = ( self._get_acquire_side_effect([self.task] * 3)) self.task.spawn_after.side_effect = [ None, exception.NoFreeConductorWorker('error') ] # 3 nodes to be checked get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node] * 3)) self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) # assert _mapped_to_this_conductor() gets called 2 times only # instead of 3. 
When NoFreeConductorWorker is raised the loop # should be broken expected = [mock.call(self.node.uuid, self.node.driver, self.node.conductor_group)] * 2 self.assertEqual(expected, mapped_mock.call_args_list) # assert acquire() gets called 2 times only instead of 3. When # NoFreeConductorWorker is raised the loop should be broken expected = [mock.call(self.context, self.node.uuid, purpose=mock.ANY)] * 2 self.assertEqual(expected, acquire_mock.call_args_list) # assert spawn_after has been called twice expected = [mock.call(self.service._spawn_worker, self.service._do_takeover, self.task)] * 2 self.assertEqual(expected, self.task.spawn_after.call_args_list) def test_node_locked(self, get_nodeinfo_mock, mapped_mock, acquire_mock,): mapped_mock.return_value = True acquire_mock.side_effect = self._get_acquire_side_effect( [self.task, exception.NodeLocked('error'), self.task]) self.task.spawn_after.side_effect = [None, None] # 3 nodes to be checked get_nodeinfo_mock.return_value = ( self._get_nodeinfo_list_response([self.node] * 3)) self.service._sync_local_state(self.context) self._assert_get_nodeinfo_args(get_nodeinfo_mock) # assert _mapped_to_this_conductor() gets called 3 times expected = [mock.call(self.node.uuid, self.node.driver, self.node.conductor_group)] * 3 self.assertEqual(expected, mapped_mock.call_args_list) # assert acquire() gets called 3 times expected = [mock.call(self.context, self.node.uuid, purpose=mock.ANY)] * 3 self.assertEqual(expected, acquire_mock.call_args_list) # assert spawn_after has been called only 2 times expected = [mock.call(self.service._spawn_worker, self.service._do_takeover, self.task)] * 2 self.assertEqual(expected, self.task.spawn_after.call_args_list) def test_worker_limit(self, get_nodeinfo_mock, mapped_mock, acquire_mock): # Limit to only 1 worker self.config(periodic_max_workers=1, group='conductor') mapped_mock.return_value = True acquire_mock.side_effect = ( self._get_acquire_side_effect([self.task] * 3)) 
        self.task.spawn_after.side_effect = [None] * 3

        # 3 nodes to be checked
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node] * 3))

        self.service._sync_local_state(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)

        # assert _mapped_to_this_conductor() gets called only once
        # because of the worker limit
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)

        # assert acquire() gets called only once because of the worker limit
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)

        # assert spawn_after has been called
        self.task.spawn_after.assert_called_once_with(
            self.service._spawn_worker,
            self.service._do_takeover, self.task)


@mgr_utils.mock_record_keepalive
class NodeInspectHardware(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):

    @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware')
    def test_inspect_hardware_ok(self, mock_inspect):
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.INSPECTING,
            driver_internal_info={'agent_url': 'url'})
        task = task_manager.TaskManager(self.context, node.uuid)
        mock_inspect.return_value = states.MANAGEABLE
        manager._do_inspect_hardware(task)
        node.refresh()
        self.assertEqual(states.MANAGEABLE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_inspect.assert_called_once_with(mock.ANY)
        task.node.refresh()
        self.assertNotIn('agent_url', task.node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware')
    def test_inspect_hardware_return_inspecting(self, mock_inspect):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTING)
        task = task_manager.TaskManager(self.context, node.uuid)
        mock_inspect.return_value = states.INSPECTING
        self.assertRaises(exception.HardwareInspectionFailure,
                          manager._do_inspect_hardware, task)
        node.refresh()
        self.assertIn('driver returned unexpected state', node.last_error)
        self.assertEqual(states.INSPECTFAIL, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        mock_inspect.assert_called_once_with(mock.ANY)

    @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware')
    def test_inspect_hardware_return_inspect_wait(self, mock_inspect):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTING)
        task = task_manager.TaskManager(self.context, node.uuid)
        mock_inspect.return_value = states.INSPECTWAIT
        manager._do_inspect_hardware(task)
        node.refresh()
        self.assertEqual(states.INSPECTWAIT, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_inspect.assert_called_once_with(mock.ANY)

    @mock.patch.object(manager, 'LOG')
    @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware')
    def test_inspect_hardware_return_other_state(self, mock_inspect,
                                                 log_mock):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTING)
        task = task_manager.TaskManager(self.context, node.uuid)
        mock_inspect.return_value = None
        self.assertRaises(exception.HardwareInspectionFailure,
                          manager._do_inspect_hardware, task)
        node.refresh()
        self.assertEqual(states.INSPECTFAIL, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        mock_inspect.assert_called_once_with(mock.ANY)
        self.assertTrue(log_mock.error.called)

    def test__check_inspect_wait_timeouts(self):
        self._start_service()
        CONF.set_override('inspect_wait_timeout', 1, group='conductor')
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.INSPECTWAIT,
            target_provision_state=states.MANAGEABLE,
            provision_updated_at=datetime.datetime(2000, 1, 1, 0, 0),
            inspection_started_at=datetime.datetime(2000, 1, 1, 0, 0))

        self.service._check_inspect_wait_timeouts(self.context)

        self._stop_service()
        node.refresh()
        self.assertEqual(states.INSPECTFAIL, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertIsNotNone(node.last_error)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_inspect_hardware_worker_pool_full(self, mock_spawn):
        prv_state = states.MANAGEABLE
        tgt_prv_state = states.NOSTATE
        node = obj_utils.create_test_node(self.context,
                                          provision_state=prv_state,
                                          target_provision_state=tgt_prv_state,
                                          last_error=None,
                                          driver='fake-hardware')
        self._start_service()

        mock_spawn.side_effect = exception.NoFreeConductorWorker()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.inspect_hardware,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NoFreeConductorWorker, exc.exc_info[0])
        self._stop_service()
        node.refresh()
        # Make sure things were rolled back
        self.assertEqual(prv_state, node.provision_state)
        self.assertEqual(tgt_prv_state, node.target_provision_state)
        self.assertIsNotNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    def _test_inspect_hardware_validate_fail(self, mock_validate):
        mock_validate.side_effect = exception.InvalidParameterValue(
            'Fake error message')
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.inspect_hardware,
                                self.context, node.uuid)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

        mock_validate.side_effect = exception.MissingParameterValue(
            'Fake error message')
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.inspect_hardware,
                                self.context, node.uuid)
        self.assertEqual(exception.MissingParameterValue, exc.exc_info[0])

        # This is a sync operation, so last_error should be None.
        self.assertIsNone(node.last_error)
        # Verify reservation has been cleared.
        self.assertIsNone(node.reservation)

    @mock.patch('ironic.drivers.modules.fake.FakeInspect.validate')
    def test_inspect_hardware_validate_fail(self, mock_validate):
        self._test_inspect_hardware_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    def test_inspect_hardware_power_validate_fail(self, mock_validate):
        self._test_inspect_hardware_validate_fail(mock_validate)

    @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware')
    def test_inspect_hardware_raises_error(self, mock_inspect):
        self._start_service()
        mock_inspect.side_effect = exception.HardwareInspectionFailure('test')
        state = states.MANAGEABLE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTING,
                                          target_provision_state=state)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.assertRaisesRegex(exception.HardwareInspectionFailure, '^test$',
                               manager._do_inspect_hardware, task)
        node.refresh()
        self.assertEqual(states.INSPECTFAIL, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertEqual('test', node.last_error)
        self.assertTrue(mock_inspect.called)

    @mock.patch('ironic.drivers.modules.fake.FakeInspect.inspect_hardware')
    def test_inspect_hardware_unexpected_error(self, mock_inspect):
        self._start_service()
        mock_inspect.side_effect = RuntimeError('x')
        state = states.MANAGEABLE
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTING,
                                          target_provision_state=state)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.assertRaisesRegex(exception.HardwareInspectionFailure,
                               'Unexpected exception of type RuntimeError: x',
                               manager._do_inspect_hardware, task)
        node.refresh()
        self.assertEqual(states.INSPECTFAIL, node.provision_state)
        self.assertEqual(states.MANAGEABLE, node.target_provision_state)
        self.assertEqual('Unexpected exception of type RuntimeError: x',
                         node.last_error)
        self.assertTrue(mock_inspect.called)


@mock.patch.object(task_manager, 'acquire')
@mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor')
@mock.patch.object(dbapi.IMPL, 'get_nodeinfo_list')
class ManagerCheckInspectWaitTimeoutsTestCase(mgr_utils.CommonMixIn,
                                              db_base.DbTestCase):

    def setUp(self):
        super(ManagerCheckInspectWaitTimeoutsTestCase, self).setUp()
        self.config(inspect_wait_timeout=300, group='conductor')
        self.service = manager.ConductorManager('hostname', 'test-topic')
        self.service.dbapi = self.dbapi

        self.node = self._create_node(
            provision_state=states.INSPECTWAIT,
            target_provision_state=states.MANAGEABLE)
        self.task = self._create_task(node=self.node)

        self.node2 = self._create_node(
            provision_state=states.INSPECTWAIT,
            target_provision_state=states.MANAGEABLE)
        self.task2 = self._create_task(node=self.node2)

        self.filters = {'reserved': False,
                        'maintenance': False,
                        'inspection_started_before': 300,
                        'provision_state': states.INSPECTWAIT}
        self.columns = ['uuid', 'driver', 'conductor_group']

    def _assert_get_nodeinfo_args(self, get_nodeinfo_mock):
        get_nodeinfo_mock.assert_called_once_with(
            sort_dir='asc', columns=self.columns, filters=self.filters,
            sort_key='inspection_started_at')

    def test__check_inspect_timeouts_not_mapped(self, get_nodeinfo_mock,
                                                mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = False

        self.service._check_inspect_wait_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        self.assertFalse(acquire_mock.called)

    def test__check_inspect_timeout(self, get_nodeinfo_mock,
                                    mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(self.task)

        self.service._check_inspect_wait_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.task.process_event.assert_called_with('fail', target_state=None)

    def test__check_inspect_timeouts_acquire_node_disappears(
            self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = exception.NodeNotFound(node='fake')

        # Exception eaten
        self.service._check_inspect_wait_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.assertFalse(self.task.process_event.called)

    def test__check_inspect_timeouts_acquire_node_locked(
            self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = exception.NodeLocked(node='fake',
                                                        host='fake')

        # Exception eaten
        self.service._check_inspect_wait_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(self.node.uuid, self.node.driver,
                                            self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.assertFalse(self.task.process_event.called)

    def test__check_inspect_timeouts_no_acquire_after_lock(
            self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        task = self._create_task(
            node_attrs=dict(provision_state=states.AVAILABLE,
                            uuid=self.node.uuid))
        get_nodeinfo_mock.return_value = self._get_nodeinfo_list_response()
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(task)

        self.service._check_inspect_wait_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        mapped_mock.assert_called_once_with(
            self.node.uuid, self.node.driver, self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.assertFalse(task.process_event.called)

    def test__check_inspect_timeouts_to_maintenance_after_lock(
            self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        task = self._create_task(
            node_attrs=dict(provision_state=states.INSPECTWAIT,
                            target_provision_state=states.MANAGEABLE,
                            maintenance=True,
                            uuid=self.node.uuid))
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([task.node, self.node2]))
        mapped_mock.return_value = True
        acquire_mock.side_effect = (
            self._get_acquire_side_effect([task, self.task2]))

        self.service._check_inspect_wait_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        self.assertEqual([mock.call(self.node.uuid, task.node.driver,
                                    task.node.conductor_group),
                          mock.call(self.node2.uuid, self.node2.driver,
                                    self.node2.conductor_group)],
                         mapped_mock.call_args_list)
        self.assertEqual([mock.call(self.context, self.node.uuid,
                                    purpose=mock.ANY),
                          mock.call(self.context, self.node2.uuid,
                                    purpose=mock.ANY)],
                         acquire_mock.call_args_list)
        # First node skipped
        self.assertFalse(task.process_event.called)
        # Second node spawned
        self.task2.process_event.assert_called_with('fail', target_state=None)

    def test__check_inspect_timeouts_exiting_no_worker_avail(
            self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node, self.node2]))
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(
            [(self.task, exception.NoFreeConductorWorker()), self.task2])

        # Exception should be nuked
        self.service._check_inspect_wait_timeouts(self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        # mapped should be only called for the first node as we should
        # have exited the loop early due to NoFreeConductorWorker
        mapped_mock.assert_called_once_with(
            self.node.uuid, self.node.driver, self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.task.process_event.assert_called_with('fail', target_state=None)

    def test__check_inspect_timeouts_exit_with_other_exception(
            self, get_nodeinfo_mock, mapped_mock, acquire_mock):
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node, self.node2]))
        mapped_mock.return_value = True
        acquire_mock.side_effect = self._get_acquire_side_effect(
            [(self.task, exception.IronicException('foo')), self.task2])

        # Should re-raise
        self.assertRaises(exception.IronicException,
                          self.service._check_inspect_wait_timeouts,
                          self.context)

        self._assert_get_nodeinfo_args(get_nodeinfo_mock)
        # mapped should be only called for the first node as we should
        # have exited the loop early due to unknown exception
        mapped_mock.assert_called_once_with(
            self.node.uuid, self.node.driver, self.node.conductor_group)
        acquire_mock.assert_called_once_with(self.context, self.node.uuid,
                                             purpose=mock.ANY)
        self.task.process_event.assert_called_with('fail',
                                                   target_state=None)

    def test__check_inspect_timeouts_worker_limit(self, get_nodeinfo_mock,
                                                  mapped_mock, acquire_mock):
        self.config(periodic_max_workers=2, group='conductor')

        # Use the same nodes/tasks to make life easier in the tests
        # here
        get_nodeinfo_mock.return_value = (
            self._get_nodeinfo_list_response([self.node] * 3))
        mapped_mock.return_value = True
        acquire_mock.side_effect = (
            self._get_acquire_side_effect([self.task] * 3))

        self.service._check_inspect_wait_timeouts(self.context)

        # Should only have run 2.
        self.assertEqual([mock.call(self.node.uuid, self.node.driver,
                                    self.node.conductor_group)] * 2,
                         mapped_mock.call_args_list)
        self.assertEqual([mock.call(self.context, self.node.uuid,
                                    purpose=mock.ANY)] * 2,
                         acquire_mock.call_args_list)
        process_event_call = mock.call('fail', target_state=None)
        self.assertEqual([process_event_call] * 2,
                         self.task.process_event.call_args_list)


@mgr_utils.mock_record_keepalive
class DestroyPortTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):

    def test_destroy_port(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        port = obj_utils.create_test_port(self.context, node_id=node.id)
        self.service.destroy_port(self.context, port)
        self.assertRaises(exception.PortNotFound, port.refresh)

    def test_destroy_port_node_locked(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          reservation='fake-reserv')
        port = obj_utils.create_test_port(self.context, node_id=node.id)
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_port,
                                self.context, port)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    def test_destroy_port_node_active_state(self):
        instance_uuid = uuidutils.generate_uuid()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          instance_uuid=instance_uuid,
                                          provision_state='active')
        port = obj_utils.create_test_port(
            self.context,
            node_id=node.id,
            internal_info={'tenant_vif_port_id': 'foo'})
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_port,
                                self.context, port)
        self.assertEqual(exception.InvalidState, exc.exc_info[0])

    def test_destroy_port_node_active_and_maintenance(self):
        instance_uuid = uuidutils.generate_uuid()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          instance_uuid=instance_uuid,
                                          provision_state='active',
                                          maintenance=True)
        port = obj_utils.create_test_port(self.context,
                                          node_id=node.id,
                                          extra={'vif_port_id': 'fake-id'})
        self.service.destroy_port(self.context, port)
        self.assertRaises(exception.PortNotFound,
                          self.dbapi.get_port_by_uuid,
                          port.uuid)

    def test_destroy_port_with_instance_not_in_active_port_unbound(self):
        instance_uuid = uuidutils.generate_uuid()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          instance_uuid=instance_uuid,
                                          provision_state='deploy failed')
        port = obj_utils.create_test_port(self.context, node_id=node.id)
        self.service.destroy_port(self.context, port)
        self.assertRaises(exception.PortNotFound,
                          self.dbapi.get_port_by_uuid,
                          port.uuid)

    def test_destroy_port_with_instance_not_in_active_port_bound(self):
        instance_uuid = uuidutils.generate_uuid()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          instance_uuid=instance_uuid,
                                          provision_state='deploy failed')
        port = obj_utils.create_test_port(
            self.context,
            node_id=node.id,
            internal_info={'tenant_vif_port_id': 'foo'})
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_port,
                                self.context, port)
        self.assertEqual(exception.InvalidState, exc.exc_info[0])

    def test_destroy_port_node_active_port_unbound(self):
        instance_uuid = uuidutils.generate_uuid()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          instance_uuid=instance_uuid,
                                          provision_state='active')
        port = obj_utils.create_test_port(self.context, node_id=node.id)
        self.service.destroy_port(self.context, port)
self.assertRaises(exception.PortNotFound, self.dbapi.get_port_by_uuid, port.uuid) @mgr_utils.mock_record_keepalive class DestroyPortgroupTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def test_destroy_portgroup(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id) self.service.destroy_portgroup(self.context, portgroup) self.assertRaises(exception.PortgroupNotFound, portgroup.refresh) def test_destroy_portgroup_node_locked(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', reservation='fake-reserv') portgroup = obj_utils.create_test_portgroup(self.context, node_id=node.id) exc = self.assertRaises(messaging.rpc.ExpectedException, self.service.destroy_portgroup, self.context, portgroup) # Compare true exception hidden by @messaging.expected_exceptions self.assertEqual(exception.NodeLocked, exc.exc_info[0]) @mgr_utils.mock_record_keepalive @mock.patch.object(manager.ConductorManager, '_fail_if_in_state') @mock.patch.object(manager.ConductorManager, '_mapped_to_this_conductor') @mock.patch.object(dbapi.IMPL, 'get_offline_conductors') class ManagerCheckOrphanNodesTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def setUp(self): super(ManagerCheckOrphanNodesTestCase, self).setUp() self._start_service() self.node = obj_utils.create_test_node( self.context, id=1, uuid=uuidutils.generate_uuid(), driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE, target_power_state=states.POWER_ON, reservation='fake-conductor') # create a second node in a different state to test the # filtering nodes in DEPLOYING state obj_utils.create_test_node( self.context, id=10, uuid=uuidutils.generate_uuid(), driver='fake-hardware', provision_state=states.AVAILABLE, target_provision_state=states.NOSTATE) def test__check_orphan_nodes(self, mock_off_cond, mock_mapped, mock_fail_if): mock_off_cond.return_value = 
['fake-conductor'] self.service._check_orphan_nodes(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() mock_mapped.assert_called_once_with(self.node.uuid, 'fake-hardware', '') mock_fail_if.assert_called_once_with( mock.ANY, {'uuid': self.node.uuid}, {states.DEPLOYING, states.CLEANING}, 'provision_updated_at', callback_method=conductor_utils.abort_on_conductor_take_over, err_handler=conductor_utils.provisioning_error_handler) # assert node was released self.assertIsNone(self.node.reservation) self.assertIsNone(self.node.target_power_state) self.assertIsNotNone(self.node.last_error) def test__check_orphan_nodes_cleaning(self, mock_off_cond, mock_mapped, mock_fail_if): self.node.provision_state = states.CLEANING self.node.save() mock_off_cond.return_value = ['fake-conductor'] self.service._check_orphan_nodes(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() mock_mapped.assert_called_once_with(self.node.uuid, 'fake-hardware', '') mock_fail_if.assert_called_once_with( mock.ANY, {'uuid': self.node.uuid}, {states.DEPLOYING, states.CLEANING}, 'provision_updated_at', callback_method=conductor_utils.abort_on_conductor_take_over, err_handler=conductor_utils.provisioning_error_handler) # assert node was released self.assertIsNone(self.node.reservation) self.assertIsNone(self.node.target_power_state) self.assertIsNotNone(self.node.last_error) def test__check_orphan_nodes_alive(self, mock_off_cond, mock_mapped, mock_fail_if): mock_off_cond.return_value = [] self.service._check_orphan_nodes(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() self.assertFalse(mock_mapped.called) self.assertFalse(mock_fail_if.called) # assert node still locked self.assertIsNotNone(self.node.reservation) @mock.patch.object(objects.Node, 'release') def test__check_orphan_nodes_release_exceptions_skipping( self, mock_release, mock_off_cond, mock_mapped, mock_fail_if): mock_off_cond.return_value = ['fake-conductor'] # Add another node 
so we can check both exceptions node2 = obj_utils.create_test_node( self.context, id=2, uuid=uuidutils.generate_uuid(), driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.DEPLOYDONE, reservation='fake-conductor') mock_mapped.return_value = True mock_release.side_effect = [exception.NodeNotFound('not found'), exception.NodeLocked('locked')] self.service._check_orphan_nodes(self.context) self.node.refresh() mock_off_cond.assert_called_once_with() expected_calls = [mock.call(self.node.uuid, 'fake-hardware', ''), mock.call(node2.uuid, 'fake-hardware', '')] mock_mapped.assert_has_calls(expected_calls) # Assert we skipped and didn't try to call _fail_if_in_state self.assertFalse(mock_fail_if.called) def test__check_orphan_nodes_release_node_not_locked( self, mock_off_cond, mock_mapped, mock_fail_if): # this simulates releasing the node elsewhere count = [0] def _fake_release(*args, **kwargs): self.node.reservation = None self.node.save() # raise an exception only the first time release is called count[0] += 1 if count[0] == 1: raise exception.NodeNotLocked('not locked') mock_off_cond.return_value = ['fake-conductor'] mock_mapped.return_value = True with mock.patch.object(objects.Node, 'release', side_effect=_fake_release) as mock_release: self.service._check_orphan_nodes(self.context) mock_release.assert_called_with(self.context, mock.ANY, self.node.id) mock_off_cond.assert_called_once_with() mock_mapped.assert_called_once_with(self.node.uuid, 'fake-hardware', '') mock_fail_if.assert_called_once_with( mock.ANY, {'uuid': self.node.uuid}, {states.DEPLOYING, states.CLEANING}, 'provision_updated_at', callback_method=conductor_utils.abort_on_conductor_take_over, err_handler=conductor_utils.provisioning_error_handler) def test__check_orphan_nodes_maintenance(self, mock_off_cond, mock_mapped, mock_fail_if): self.node.maintenance = True self.node.save() mock_off_cond.return_value = ['fake-conductor'] 
        self.service._check_orphan_nodes(self.context)
        self.node.refresh()
        mock_off_cond.assert_called_once_with()
        mock_mapped.assert_called_once_with(self.node.uuid, 'fake-hardware',
                                            '')
        # assert node was released
        self.assertIsNone(self.node.reservation)
        # not changing states in maintenance
        self.assertFalse(mock_fail_if.called)
        self.assertIsNotNone(self.node.target_power_state)


class TestIndirectionApiConductor(db_base.DbTestCase):

    def setUp(self):
        super(TestIndirectionApiConductor, self).setUp()
        self.conductor = manager.ConductorManager('test-host', 'test-topic')

    def _test_object_action(self, is_classmethod, raise_exception,
                            return_object=False):
        @obj_base.IronicObjectRegistry.register
        class TestObject(obj_base.IronicObject):
            context = self.context

            def foo(self, context, raise_exception=False,
                    return_object=False):
                if raise_exception:
                    raise Exception('test')
                elif return_object:
                    return obj
                else:
                    return 'test'

            @classmethod
            def bar(cls, context, raise_exception=False, return_object=False):
                if raise_exception:
                    raise Exception('test')
                elif return_object:
                    return obj
                else:
                    return 'test'

        obj = TestObject(self.context)
        if is_classmethod:
            versions = ovo_base.obj_tree_get_versions(TestObject.obj_name())
            result = self.conductor.object_class_action_versions(
                self.context, TestObject.obj_name(), 'bar', versions,
                tuple(), {'raise_exception': raise_exception,
                          'return_object': return_object})
        else:
            updates, result = self.conductor.object_action(
                self.context, obj, 'foo', tuple(),
                {'raise_exception': raise_exception,
                 'return_object': return_object})
        if return_object:
            self.assertEqual(obj, result)
        else:
            self.assertEqual('test', result)

    def test_object_action(self):
        self._test_object_action(False, False)

    def test_object_action_on_raise(self):
        self.assertRaises(messaging.ExpectedException,
                          self._test_object_action, False, True)

    def test_object_action_on_object(self):
        self._test_object_action(False, False, True)

    def test_object_class_action(self):
        self._test_object_action(True, False)

    def test_object_class_action_on_raise(self):
        self.assertRaises(messaging.ExpectedException,
                          self._test_object_action, True, True)

    def test_object_class_action_on_object(self):
        self._test_object_action(True, False, False)

    def test_object_action_copies_object(self):
        @obj_base.IronicObjectRegistry.register
        class TestObject(obj_base.IronicObject):
            fields = {'dict': fields.DictOfStringsField()}

            def touch_dict(self, context):
                self.dict['foo'] = 'bar'
                self.obj_reset_changes()

        obj = TestObject(self.context)
        obj.dict = {}
        obj.obj_reset_changes()
        updates, result = self.conductor.object_action(
            self.context, obj, 'touch_dict', tuple(), {})
        # NOTE(danms): If conductor did not properly copy the object, then
        # the new and reference copies of the nested dict object will be
        # the same, and thus 'dict' will not be reported as changed
        self.assertIn('dict', updates)
        self.assertEqual({'foo': 'bar'}, updates['dict'])

    def test_object_backport_versions(self):
        fake_backported_obj = 'fake-backported-obj'
        obj_name = 'fake-obj'
        test_obj = mock.Mock()
        test_obj.obj_name.return_value = obj_name
        test_obj.obj_to_primitive.return_value = fake_backported_obj
        fake_version_manifest = {obj_name: '1.0'}

        result = self.conductor.object_backport_versions(
            self.context, test_obj, fake_version_manifest)

        self.assertEqual(result, fake_backported_obj)
        test_obj.obj_to_primitive.assert_called_once_with(
            target_version='1.0', version_manifest=fake_version_manifest)


@mgr_utils.mock_record_keepalive
class DoNodeTakeOverTestCase(mgr_utils.ServiceSetUpMixin,
                             db_base.DbTestCase):
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_takeover(self, mock_prepare, mock_take_over,
                          mock_start_console):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        task = task_manager.TaskManager(self.context, node.uuid)
        self.service._do_takeover(task)
        node.refresh()
        self.assertIsNone(node.last_error)
        self.assertFalse(node.console_enabled)
        mock_prepare.assert_called_once_with(mock.ANY)
        mock_take_over.assert_called_once_with(mock.ANY)
        self.assertFalse(mock_start_console.called)

    @mock.patch.object(notification_utils, 'emit_console_notification')
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_takeover_with_console_enabled(self, mock_prepare,
                                               mock_take_over,
                                               mock_start_console,
                                               mock_notify):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          console_enabled=True)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.service._do_takeover(task)
        node.refresh()
        self.assertIsNone(node.last_error)
        self.assertTrue(node.console_enabled)
        mock_prepare.assert_called_once_with(mock.ANY)
        mock_take_over.assert_called_once_with(mock.ANY)
        mock_start_console.assert_called_once_with(mock.ANY)
        mock_notify.assert_has_calls(
            [mock.call(task, 'console_restore',
                       obj_fields.NotificationStatus.START),
             mock.call(task, 'console_restore',
                       obj_fields.NotificationStatus.END)])

    @mock.patch.object(notification_utils, 'emit_console_notification')
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_takeover_with_console_exception(self, mock_prepare,
                                                 mock_take_over,
                                                 mock_start_console,
                                                 mock_notify):
        self._start_service()
        mock_start_console.side_effect = Exception()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          console_enabled=True)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.service._do_takeover(task)
        node.refresh()
        self.assertIsNotNone(node.last_error)
        self.assertFalse(node.console_enabled)
        mock_prepare.assert_called_once_with(mock.ANY)
        mock_take_over.assert_called_once_with(mock.ANY)
        mock_start_console.assert_called_once_with(mock.ANY)
        mock_notify.assert_has_calls(
            [mock.call(task, 'console_restore',
                       obj_fields.NotificationStatus.START),
             mock.call(task, 'console_restore',
                       obj_fields.NotificationStatus.ERROR)])

    @mock.patch.object(notification_utils, 'emit_console_notification')
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_takeover_with_console_port_cleaned(self, mock_prepare,
                                                    mock_take_over,
                                                    mock_start_console,
                                                    mock_notify):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          console_enabled=True)
        di_info = node.driver_internal_info
        di_info['allocated_ipmi_terminal_port'] = 12345
        node.driver_internal_info = di_info
        node.save()
        task = task_manager.TaskManager(self.context, node.uuid)

        self.service._do_takeover(task)
        node.refresh()
        self.assertIsNone(node.last_error)
        self.assertTrue(node.console_enabled)
        self.assertIsNone(
            node.driver_internal_info.get('allocated_ipmi_terminal_port',
                                          None))
        mock_prepare.assert_called_once_with(mock.ANY)
        mock_take_over.assert_called_once_with(mock.ANY)
        mock_start_console.assert_called_once_with(mock.ANY)
        mock_notify.assert_has_calls(
            [mock.call(task, 'console_restore',
                       obj_fields.NotificationStatus.START),
             mock.call(task, 'console_restore',
                       obj_fields.NotificationStatus.END)])


@mgr_utils.mock_record_keepalive
class DoNodeAdoptionTestCase(mgr_utils.ServiceSetUpMixin,
                             db_base.DbTestCase):
    def _fake_spawn(self, conductor_obj, func, *args, **kwargs):
        func(*args, **kwargs)
        return mock.MagicMock()

    @mock.patch('ironic.drivers.modules.fake.FakePower.validate')
    @mock.patch('ironic.drivers.modules.fake.FakeBoot.validate')
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_adoption_with_takeover(self,
                                        mock_prepare,
                                        mock_take_over,
                                        mock_start_console,
                                        mock_boot_validate,
                                        mock_power_validate):
        """Test a successful node adoption"""
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ADOPTING)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.service._do_adoption(task)
        node.refresh()

        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertIsNone(node.last_error)
        self.assertFalse(node.console_enabled)
        mock_prepare.assert_called_once_with(mock.ANY)
        mock_take_over.assert_called_once_with(mock.ANY)
        self.assertFalse(mock_start_console.called)
        self.assertTrue(mock_boot_validate.called)
        self.assertIn('is_whole_disk_image', task.node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakeBoot.validate')
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_adoption_take_over_failure(self,
                                            mock_prepare,
                                            mock_take_over,
                                            mock_start_console,
                                            mock_boot_validate):
        """Test that adoption failed if an exception is raised"""
        # Note(TheJulia): Use of an actual possible exception that
        # can be raised due to a misconfiguration.
        mock_take_over.side_effect = exception.IPMIFailure(
            "something went wrong")
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ADOPTING)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.service._do_adoption(task)
        node.refresh()

        self.assertEqual(states.ADOPTFAIL, node.provision_state)
        self.assertIsNotNone(node.last_error)
        self.assertFalse(node.console_enabled)
        mock_prepare.assert_called_once_with(mock.ANY)
        mock_take_over.assert_called_once_with(mock.ANY)
        self.assertFalse(mock_start_console.called)
        self.assertTrue(mock_boot_validate.called)
        self.assertIn('is_whole_disk_image', task.node.driver_internal_info)

    @mock.patch('ironic.drivers.modules.fake.FakeBoot.validate')
    @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.take_over')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_adoption_boot_validate_failure(self,
                                                mock_prepare,
                                                mock_take_over,
                                                mock_start_console,
                                                mock_boot_validate):
        """Test that adoption fails if the boot validation fails"""
        # Note(TheJulia): Use of an actual possible exception that
        # can be raised due to a misconfiguration.
        mock_boot_validate.side_effect = exception.MissingParameterValue(
            "something is missing")
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ADOPTING)
        task = task_manager.TaskManager(self.context, node.uuid)

        self.service._do_adoption(task)
        node.refresh()

        self.assertEqual(states.ADOPTFAIL, node.provision_state)
        self.assertIsNotNone(node.last_error)
        self.assertFalse(node.console_enabled)
        self.assertFalse(mock_prepare.called)
        self.assertFalse(mock_take_over.called)
        self.assertFalse(mock_start_console.called)
        self.assertTrue(mock_boot_validate.called)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_do_provisioning_action_adopt_node(self, mock_spawn):
        """Test an adoption request results in the node in ADOPTING"""
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.MANAGEABLE,
            target_provision_state=states.NOSTATE)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'adopt')
        node.refresh()
        self.assertEqual(states.ADOPTING, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(self.service,
                                      self.service._do_adoption, mock.ANY)

    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_do_provisioning_action_adopt_node_retry(self, mock_spawn):
        """Test a retried adoption from ADOPTFAIL results in ADOPTING state"""
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ADOPTFAIL,
            target_provision_state=states.ACTIVE)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'adopt')
        node.refresh()
        self.assertEqual(states.ADOPTING, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertIsNone(node.last_error)
        mock_spawn.assert_called_with(self.service,
                                      self.service._do_adoption, mock.ANY)

    def test_do_provisioning_action_manage_of_failed_adoption(self):
        """Test a node in ADOPTFAIL can be taken to MANAGEABLE"""
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.ADOPTFAIL,
            target_provision_state=states.ACTIVE)

        self._start_service()
        self.service.do_provisioning_action(self.context, node.uuid, 'manage')
        node.refresh()

        self.assertEqual(states.MANAGEABLE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertIsNone(node.last_error)

    # TODO(TheJulia): We should double check if these heartbeat tests need
    # to move. I have this strange feeling we were lacking rpc testing of
    # heartbeat until we did adoption testing....
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_without_version(self, mock_spawn, mock_heartbeat):
        """Test heartbeating."""
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.service.heartbeat(self.context, node.uuid, 'http://callback')
        mock_heartbeat.assert_called_with(mock.ANY, mock.ANY,
                                          'http://callback', '3.0.0')

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_with_agent_version(self, mock_spawn, mock_heartbeat):
        """Test heartbeating."""
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.service.heartbeat(
            self.context, node.uuid, 'http://callback', '1.4.1')
        mock_heartbeat.assert_called_with(mock.ANY, mock.ANY,
                                          'http://callback', '1.4.1')

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_with_agent_pregenerated_token(
            self, mock_spawn, mock_heartbeat):
        """Test heartbeating."""
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE,
            driver_internal_info={'agent_secret_token': 'a secret'})

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.service.heartbeat(
            self.context, node.uuid, 'http://callback', '6.0.1',
            agent_token=None)
        mock_heartbeat.assert_called_with(mock.ANY, mock.ANY,
                                          'http://callback', '6.0.1')

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_with_no_required_agent_token(self, mock_spawn,
                                                    mock_heartbeat):
        """Tests that we kill the heartbeat attempt very early on."""
        self.config(require_agent_token=True)
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.assertRaises(
            exception.InvalidParameterValue, self.service.heartbeat,
            self.context, node.uuid, 'http://callback', agent_token=None)
        self.assertFalse(mock_heartbeat.called)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_with_required_agent_token(self, mock_spawn,
                                                 mock_heartbeat):
        """Test heartbeat works when token matches."""
        self.config(require_agent_token=True)
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE,
            driver_internal_info={'agent_secret_token': 'a secret'})

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.service.heartbeat(self.context, node.uuid, 'http://callback',
                               agent_token='a secret')
        mock_heartbeat.assert_called_with(mock.ANY, mock.ANY,
                                          'http://callback', '3.0.0')

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_with_agent_token(self, mock_spawn, mock_heartbeat):
        """Test heartbeat works when token matches."""
        self.config(require_agent_token=False)
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE,
            driver_internal_info={'agent_secret_token': 'a secret'})

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.service.heartbeat(self.context, node.uuid, 'http://callback',
                               agent_token='a secret')
        mock_heartbeat.assert_called_with(mock.ANY, mock.ANY,
                                          'http://callback', '3.0.0')

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_invalid_agent_token(self, mock_spawn, mock_heartbeat):
        """Heartbeat fails when it does not match."""
        self.config(require_agent_token=False)
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE,
            driver_internal_info={'agent_secret_token': 'a secret'})

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.assertRaises(exception.InvalidParameterValue,
                          self.service.heartbeat, self.context,
                          node.uuid, 'http://callback',
                          agent_token='evil', agent_version='5.0.0b23')
        self.assertFalse(mock_heartbeat.called)
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_invalid_agent_token_older_version(
            self, mock_spawn, mock_heartbeat):
        """Heartbeat is rejected if token is received that is invalid."""
        self.config(require_agent_token=False)
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE,
            driver_internal_info={'agent_secret_token': 'a secret'})

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        # Intentionally sending an older client in case something fishy
        # occurs.
        self.assertRaises(exception.InvalidParameterValue,
                          self.service.heartbeat, self.context,
                          node.uuid, 'http://callback',
                          agent_token='evil', agent_version='4.0.0')
        self.assertFalse(mock_heartbeat.called)

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.heartbeat',
                autospec=True)
    @mock.patch('ironic.conductor.manager.ConductorManager._spawn_worker',
                autospec=True)
    def test_heartbeat_invalid_newer_version(
            self, mock_spawn, mock_heartbeat):
        """Heartbeat rejected if client should be sending a token."""
        self.config(require_agent_token=False)
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=states.ACTIVE)

        self._start_service()

        mock_spawn.reset_mock()
        mock_spawn.side_effect = self._fake_spawn
        self.assertRaises(exception.InvalidParameterValue,
                          self.service.heartbeat, self.context,
                          node.uuid, 'http://callback',
                          agent_token=None, agent_version='6.1.5')
        self.assertFalse(mock_heartbeat.called)


@mgr_utils.mock_record_keepalive
class DestroyVolumeConnectorTestCase(mgr_utils.ServiceSetUpMixin,
                                     db_base.DbTestCase):
    def test_destroy_volume_connector(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)

        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id)
        self.service.destroy_volume_connector(self.context, volume_connector)
        self.assertRaises(exception.VolumeConnectorNotFound,
                          volume_connector.refresh)
        self.assertRaises(exception.VolumeConnectorNotFound,
                          self.dbapi.get_volume_connector_by_uuid,
                          volume_connector.uuid)

    def test_destroy_volume_connector_node_locked(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          reservation='fake-reserv')
        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_volume_connector,
                                self.context, volume_connector)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    def test_destroy_volume_connector_node_power_on(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_volume_connector,
                                self.context, volume_connector)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])


@mgr_utils.mock_record_keepalive
class UpdateVolumeConnectorTestCase(mgr_utils.ServiceSetUpMixin,
                                    db_base.DbTestCase):
    def test_update_volume_connector(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)

        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id, extra={'foo': 'bar'})
        new_extra = {'foo': 'baz'}
        volume_connector.extra = new_extra
        res = self.service.update_volume_connector(self.context,
                                                   volume_connector)
        self.assertEqual(new_extra, res.extra)

    def test_update_volume_connector_node_locked(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          reservation='fake-reserv')
        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id)
        volume_connector.extra = {'foo': 'baz'}

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_connector,
                                self.context, volume_connector)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    def test_update_volume_connector_type(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id, extra={'vol_id': 'fake-id'})
        new_type = 'wwnn'
        volume_connector.type = new_type
        res = self.service.update_volume_connector(self.context,
                                                   volume_connector)
        self.assertEqual(new_type, res.type)

    def test_update_volume_connector_uuid(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id)
        volume_connector.uuid = uuidutils.generate_uuid()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_connector,
                                self.context, volume_connector)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

    def test_update_volume_connector_duplicate(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_connector1 = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id)
        volume_connector2 = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id, uuid=uuidutils.generate_uuid(),
            type='diff_type')
        volume_connector2.type = volume_connector1.type

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_connector,
                                self.context, volume_connector2)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.VolumeConnectorTypeAndIdAlreadyExists,
                         exc.exc_info[0])

    def test_update_volume_connector_node_power_on(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        volume_connector = obj_utils.create_test_volume_connector(
            self.context, node_id=node.id)
        volume_connector.extra = {'foo': 'baz'}

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_connector,
                                self.context, volume_connector)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])


@mgr_utils.mock_record_keepalive
class DestroyVolumeTargetTestCase(mgr_utils.ServiceSetUpMixin,
                                  db_base.DbTestCase):
    def test_destroy_volume_target(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)

        volume_target = obj_utils.create_test_volume_target(self.context,
                                                            node_id=node.id)
        self.service.destroy_volume_target(self.context, volume_target)
        self.assertRaises(exception.VolumeTargetNotFound,
                          volume_target.refresh)
        self.assertRaises(exception.VolumeTargetNotFound,
                          self.dbapi.get_volume_target_by_uuid,
                          volume_target.uuid)

    def test_destroy_volume_target_node_locked(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          reservation='fake-reserv')
        volume_target = obj_utils.create_test_volume_target(self.context,
                                                            node_id=node.id)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_volume_target,
                                self.context, volume_target)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    def test_destroy_volume_target_node_gone(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware')
        volume_target = obj_utils.create_test_volume_target(self.context,
                                                            node_id=node.id)
        self.service.destroy_node(self.context, node.id)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_volume_target,
                                self.context, volume_target)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeNotFound, exc.exc_info[0])

    def test_destroy_volume_target_already_destroyed(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_target = obj_utils.create_test_volume_target(self.context,
                                                            node_id=node.id)
        self.service.destroy_volume_target(self.context, volume_target)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_volume_target,
                                self.context, volume_target)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.VolumeTargetNotFound, exc.exc_info[0])

    def test_destroy_volume_target_node_power_on(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        volume_target = obj_utils.create_test_volume_target(self.context,
                                                            node_id=node.id)

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.destroy_volume_target,
                                self.context, volume_target)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])


@mgr_utils.mock_record_keepalive
class UpdateVolumeTargetTestCase(mgr_utils.ServiceSetUpMixin,
                                 db_base.DbTestCase):
    def test_update_volume_target(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_target = obj_utils.create_test_volume_target(
            self.context, node_id=node.id, extra={'foo': 'bar'})
        new_extra = {'foo': 'baz'}
        volume_target.extra = new_extra
        res = self.service.update_volume_target(self.context, volume_target)
        self.assertEqual(new_extra, res.extra)

    def test_update_volume_target_node_locked(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          reservation='fake-reserv')
        volume_target = obj_utils.create_test_volume_target(self.context,
                                                            node_id=node.id)
        volume_target.extra = {'foo': 'baz'}

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_target,
                                self.context, volume_target)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.NodeLocked, exc.exc_info[0])

    def test_update_volume_target_volume_type(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_target = obj_utils.create_test_volume_target(
            self.context, node_id=node.id, extra={'vol_id': 'fake-id'})
        new_volume_type = 'fibre_channel'
        volume_target.volume_type = new_volume_type
        res = self.service.update_volume_target(self.context, volume_target)
        self.assertEqual(new_volume_type, res.volume_type)

    def test_update_volume_target_uuid(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_target = obj_utils.create_test_volume_target(
            self.context, node_id=node.id)
        volume_target.uuid = uuidutils.generate_uuid()

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_target,
                                self.context, volume_target)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidParameterValue, exc.exc_info[0])

    def test_update_volume_target_duplicate(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_target1 = obj_utils.create_test_volume_target(
            self.context, node_id=node.id)
        volume_target2 = obj_utils.create_test_volume_target(
            self.context, node_id=node.id, uuid=uuidutils.generate_uuid(),
            boot_index=volume_target1.boot_index + 1)
        volume_target2.boot_index = volume_target1.boot_index

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_target,
                                self.context, volume_target2)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.VolumeTargetBootIndexAlreadyExists,
                         exc.exc_info[0])

    def _test_update_volume_target_exception(self, expected_exc):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_OFF)
        volume_target = obj_utils.create_test_volume_target(
            self.context, node_id=node.id, extra={'vol_id': 'fake-id'})
        new_volume_type = 'fibre_channel'
        volume_target.volume_type = new_volume_type
        with mock.patch.object(objects.VolumeTarget, 'save') as mock_save:
            mock_save.side_effect = expected_exc('Boo')
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.update_volume_target,
                                    self.context, volume_target)
            # Compare true exception hidden by
            # @messaging.expected_exceptions
            self.assertEqual(expected_exc, exc.exc_info[0])

    def test_update_volume_target_node_not_found(self):
        self._test_update_volume_target_exception(exception.NodeNotFound)

    def test_update_volume_target_not_found(self):
        self._test_update_volume_target_exception(
            exception.VolumeTargetNotFound)

    def test_update_volume_target_node_power_on(self):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          power_state=states.POWER_ON)
        volume_target = obj_utils.create_test_volume_target(self.context,
                                                            node_id=node.id)
        volume_target.extra = {'foo': 'baz'}

        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.update_volume_target,
                                self.context, volume_target)
        # Compare true exception hidden by @messaging.expected_exceptions
        self.assertEqual(exception.InvalidStateRequested, exc.exc_info[0])


@mgr_utils.mock_record_keepalive
class NodeTraitsTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):
    def setUp(self):
        super(NodeTraitsTestCase, self).setUp()
        self.traits = ['trait1', 'trait2']
        self.node = obj_utils.create_test_node(self.context,
                                               driver='fake-hardware')

    def test_add_node_traits(self):
        self.service.add_node_traits(self.context, self.node.id,
                                     self.traits[:1])
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual(self.traits[:1], [trait.trait for trait in traits])
        self.service.add_node_traits(self.context, self.node.id,
                                     self.traits[1:])
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual(self.traits, [trait.trait for trait in traits])

    def test_add_node_traits_replace(self):
        self.service.add_node_traits(self.context, self.node.id,
                                     self.traits[:1], replace=True)
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual(self.traits[:1], [trait.trait for trait in traits])
        self.service.add_node_traits(self.context, self.node.id,
                                     self.traits[1:], replace=True)
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual(self.traits[1:], [trait.trait for trait in traits])

    def _test_add_node_traits_exception(self, expected_exc):
        with mock.patch.object(objects.Trait, 'create') as mock_create:
            mock_create.side_effect = expected_exc('Boo')
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.add_node_traits,
                                    self.context, self.node.id, self.traits)
            # Compare true exception hidden by
            # @messaging.expected_exceptions
            self.assertEqual(expected_exc, exc.exc_info[0])
            traits = objects.TraitList.get_by_node_id(self.context,
                                                      self.node.id)
            self.assertEqual([], traits.objects)

    def test_add_node_traits_invalid_parameter_value(self):
        self._test_add_node_traits_exception(exception.InvalidParameterValue)

    def test_add_node_traits_node_locked(self):
        self._test_add_node_traits_exception(exception.NodeLocked)

    def test_add_node_traits_node_not_found(self):
        self._test_add_node_traits_exception(exception.NodeNotFound)

    def test_remove_node_traits(self):
        objects.TraitList.create(self.context, self.node.id, self.traits)
        self.service.remove_node_traits(self.context, self.node.id,
                                        self.traits[:1])
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual(self.traits[1:], [trait.trait for trait in traits])
        self.service.remove_node_traits(self.context, self.node.id,
                                        self.traits[1:])
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual([], traits.objects)

    def test_remove_node_traits_all(self):
        objects.TraitList.create(self.context, self.node.id, self.traits)
        self.service.remove_node_traits(self.context, self.node.id, None)
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual([], traits.objects)

    def test_remove_node_traits_empty(self):
        objects.TraitList.create(self.context, self.node.id, self.traits)
        self.service.remove_node_traits(self.context, self.node.id, [])
        traits = objects.TraitList.get_by_node_id(self.context, self.node.id)
        self.assertEqual(self.traits, [trait.trait for trait in traits])

    def _test_remove_node_traits_exception(self, expected_exc):
        objects.TraitList.create(self.context, self.node.id, self.traits)
        with mock.patch.object(objects.Trait, 'destroy') as mock_destroy:
            mock_destroy.side_effect = expected_exc('Boo')
            exc = self.assertRaises(messaging.rpc.ExpectedException,
                                    self.service.remove_node_traits,
                                    self.context, self.node.id, self.traits)
            # Compare true exception hidden by
            # @messaging.expected_exceptions
            self.assertEqual(expected_exc, exc.exc_info[0])
            traits = objects.TraitList.get_by_node_id(self.context,
                                                      self.node.id)
            self.assertEqual(self.traits, [trait.trait for trait in traits])

    def test_remove_node_traits_node_locked(self):
        self._test_remove_node_traits_exception(exception.NodeLocked)

    def test_remove_node_traits_node_not_found(self):
        self._test_remove_node_traits_exception(exception.NodeNotFound)

    def test_remove_node_traits_node_trait_not_found(self):
        self._test_remove_node_traits_exception(exception.NodeTraitNotFound)


@mgr_utils.mock_record_keepalive
class DoNodeInspectAbortTestCase(mgr_utils.CommonMixIn,
                                 mgr_utils.ServiceSetUpMixin,
                                 db_base.DbTestCase):
    @mock.patch.object(manager, 'LOG')
    @mock.patch('ironic.drivers.modules.fake.FakeInspect.abort')
    @mock.patch('ironic.conductor.task_manager.acquire', autospec=True)
    def test_do_inspect_abort_interface_not_support(self,
                                                    mock_acquire,
                                                    mock_abort,
                                                    mock_log):
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTWAIT)
        task = task_manager.TaskManager(self.context, node.uuid)
        mock_acquire.side_effect = self._get_acquire_side_effect(task)
        mock_abort.side_effect = exception.UnsupportedDriverExtension(
            driver='fake-hardware', extension='inspect')
        self._start_service()
        exc = self.assertRaises(messaging.rpc.ExpectedException,
                                self.service.do_provisioning_action,
                                self.context, task.node.uuid, "abort")
        self.assertEqual(exception.UnsupportedDriverExtension,
                         exc.exc_info[0])
        self.assertTrue(mock_log.error.called)

    @mock.patch.object(manager, 'LOG')
    @mock.patch('ironic.drivers.modules.fake.FakeInspect.abort')
    @mock.patch('ironic.conductor.task_manager.acquire', autospec=True)
    def test_do_inspect_abort_interface_return_failed(self,
                                                      mock_acquire,
                                                      mock_abort,
                                                      mock_log):
        mock_abort.side_effect = exception.IronicException('Oops')
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTWAIT)
        task = task_manager.TaskManager(self.context, node.uuid)
        mock_acquire.side_effect = self._get_acquire_side_effect(task)
        self.assertRaises(exception.IronicException,
                          self.service.do_provisioning_action,
                          self.context, task.node.uuid, "abort")
        node.refresh()
        self.assertTrue(mock_log.exception.called)
        self.assertIn('Failed to abort inspection.', node.last_error)

    @mock.patch('ironic.drivers.modules.fake.FakeInspect.abort')
    @mock.patch('ironic.conductor.task_manager.acquire', autospec=True)
    def test_do_inspect_abort_succeeded(self, mock_acquire, mock_abort):
        self._start_service()
        node = obj_utils.create_test_node(self.context,
                                          driver='fake-hardware',
                                          provision_state=states.INSPECTWAIT)
        task = task_manager.TaskManager(self.context, node.uuid)
        mock_acquire.side_effect = self._get_acquire_side_effect(task)
        self.service.do_provisioning_action(self.context,
                                            task.node.uuid, "abort")
        node.refresh()
        self.assertEqual('inspect failed', node.provision_state)
        self.assertIn('Inspection was aborted', node.last_error)

ironic-15.0.0/ironic/tests/unit/conductor/test_deployments.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for deployment aspects of the conductor."""

import mock
from oslo_config import cfg
from oslo_db import exception as db_exception
from oslo_utils import uuidutils

from ironic.common import exception
from ironic.common import states
from ironic.common import swift
from ironic.conductor import deployments
from ironic.conductor import steps as conductor_steps
from ironic.conductor import task_manager
from ironic.conductor import utils as conductor_utils
from ironic.db import api as dbapi
from ironic.drivers.modules import fake
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as obj_utils

CONF = cfg.CONF


@mgr_utils.mock_record_keepalive
class DoNodeDeployTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase):

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy')
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare')
    def test__do_node_deploy_driver_raises_prepare_error(self, mock_prepare,
                                                         mock_deploy):
        self._start_service()
        # test when driver.deploy.prepare raises an ironic error
        mock_prepare.side_effect =
exception.InstanceDeployFailure('test') node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) self.assertRaises(exception.InstanceDeployFailure, deployments.do_node_deploy, task, self.service.conductor.id) node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) # NOTE(tenbrae): failing a deploy does not clear the target state # any longer. Instead, it is cleared when the instance # is deleted. self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertIsNotNone(node.last_error) self.assertTrue(mock_prepare.called) self.assertFalse(mock_deploy.called) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare') def test__do_node_deploy_unexpected_prepare_error(self, mock_prepare, mock_deploy): self._start_service() # test when driver.deploy.prepare raises an exception mock_prepare.side_effect = RuntimeError('test') node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) self.assertRaises(RuntimeError, deployments.do_node_deploy, task, self.service.conductor.id) node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) # NOTE(tenbrae): failing a deploy does not clear the target state # any longer. Instead, it is cleared when the instance # is deleted. 
self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertIsNotNone(node.last_error) self.assertTrue(mock_prepare.called) self.assertFalse(mock_deploy.called) def _test__do_node_deploy_driver_exception(self, exc, unexpected=False): self._start_service() with mock.patch.object(fake.FakeDeploy, 'deploy', autospec=True) as mock_deploy: # test when driver.deploy.deploy() raises an exception mock_deploy.side_effect = exc node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) deployments.do_node_deploy(task, self.service.conductor.id) node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) # NOTE(tenbrae): failing a deploy does not clear the target state # any longer. Instead, it is cleared when the instance # is deleted. self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertIsNotNone(node.last_error) if unexpected: self.assertIn('Exception', node.last_error) else: self.assertNotIn('Exception', node.last_error) mock_deploy.assert_called_once_with(mock.ANY, task) def test__do_node_deploy_driver_ironic_exception(self): self._test__do_node_deploy_driver_exception( exception.InstanceDeployFailure('test')) def test__do_node_deploy_driver_unexpected_exception(self): self._test__do_node_deploy_driver_exception(RuntimeError('test'), unexpected=True) @mock.patch.object(deployments, '_store_configdrive', autospec=True) def _test__do_node_deploy_ok(self, mock_store, configdrive=None, expected_configdrive=None): expected_configdrive = expected_configdrive or configdrive self._start_service() with mock.patch.object(fake.FakeDeploy, 'deploy', autospec=True) as mock_deploy: mock_deploy.return_value = None self.node = obj_utils.create_test_node( self.context, driver='fake-hardware', name=None, provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = 
task_manager.TaskManager(self.context, self.node.uuid) deployments.do_node_deploy(task, self.service.conductor.id, configdrive=configdrive) self.node.refresh() self.assertEqual(states.ACTIVE, self.node.provision_state) self.assertEqual(states.NOSTATE, self.node.target_provision_state) self.assertIsNone(self.node.last_error) mock_deploy.assert_called_once_with(mock.ANY, mock.ANY) if configdrive: mock_store.assert_called_once_with(task.node, expected_configdrive) else: self.assertFalse(mock_store.called) def test__do_node_deploy_ok(self): self._test__do_node_deploy_ok() def test__do_node_deploy_ok_configdrive(self): configdrive = 'foo' self._test__do_node_deploy_ok(configdrive=configdrive) @mock.patch('openstack.baremetal.configdrive.build') def test__do_node_deploy_configdrive_as_dict(self, mock_cd): mock_cd.return_value = 'foo' configdrive = {'user_data': 'abcd'} self._test__do_node_deploy_ok(configdrive=configdrive, expected_configdrive='foo') mock_cd.assert_called_once_with({'uuid': self.node.uuid}, network_data=None, user_data=b'abcd', vendor_data=None) @mock.patch('openstack.baremetal.configdrive.build') def test__do_node_deploy_configdrive_as_dict_with_meta_data(self, mock_cd): mock_cd.return_value = 'foo' configdrive = {'meta_data': {'uuid': uuidutils.generate_uuid(), 'name': 'new-name', 'hostname': 'example.com'}} self._test__do_node_deploy_ok(configdrive=configdrive, expected_configdrive='foo') mock_cd.assert_called_once_with(configdrive['meta_data'], network_data=None, user_data=None, vendor_data=None) @mock.patch('openstack.baremetal.configdrive.build') def test__do_node_deploy_configdrive_with_network_data(self, mock_cd): mock_cd.return_value = 'foo' configdrive = {'network_data': {'links': []}} self._test__do_node_deploy_ok(configdrive=configdrive, expected_configdrive='foo') mock_cd.assert_called_once_with({'uuid': self.node.uuid}, network_data={'links': []}, user_data=None, vendor_data=None) @mock.patch('openstack.baremetal.configdrive.build') def 
test__do_node_deploy_configdrive_and_user_data_as_dict(self, mock_cd): mock_cd.return_value = 'foo' configdrive = {'user_data': {'user': 'data'}} self._test__do_node_deploy_ok(configdrive=configdrive, expected_configdrive='foo') mock_cd.assert_called_once_with({'uuid': self.node.uuid}, network_data=None, user_data=b'{"user": "data"}', vendor_data=None) @mock.patch('openstack.baremetal.configdrive.build') def test__do_node_deploy_configdrive_with_vendor_data(self, mock_cd): mock_cd.return_value = 'foo' configdrive = {'vendor_data': {'foo': 'bar'}} self._test__do_node_deploy_ok(configdrive=configdrive, expected_configdrive='foo') mock_cd.assert_called_once_with({'uuid': self.node.uuid}, network_data=None, user_data=None, vendor_data={'foo': 'bar'}) @mock.patch.object(swift, 'SwiftAPI') @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare') def test__do_node_deploy_configdrive_swift_error(self, mock_prepare, mock_swift): CONF.set_override('configdrive_use_object_store', True, group='deploy') self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) mock_swift.side_effect = exception.SwiftOperationError('error') self.assertRaises(exception.SwiftOperationError, deployments.do_node_deploy, task, self.service.conductor.id, configdrive=b'fake config drive') node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertIsNotNone(node.last_error) self.assertFalse(mock_prepare.called) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare') def test__do_node_deploy_configdrive_db_error(self, mock_prepare): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, 
node.uuid) task.node.save() expected_instance_info = dict(node.instance_info) with mock.patch.object(dbapi.IMPL, 'update_node') as mock_db: db_node = self.dbapi.get_node_by_uuid(node.uuid) mock_db.side_effect = [db_exception.DBDataError('DB error'), db_node, db_node, db_node] self.assertRaises(db_exception.DBDataError, deployments.do_node_deploy, task, self.service.conductor.id, configdrive=b'fake config drive') expected_instance_info.update(configdrive=b'fake config drive') expected_calls = [ mock.call(node.uuid, {'version': mock.ANY, 'instance_info': expected_instance_info, 'driver_internal_info': mock.ANY}), mock.call(node.uuid, {'version': mock.ANY, 'last_error': mock.ANY}), mock.call(node.uuid, {'version': mock.ANY, 'deploy_step': {}, 'driver_internal_info': mock.ANY}), mock.call(node.uuid, {'version': mock.ANY, 'provision_state': states.DEPLOYFAIL, 'target_provision_state': states.ACTIVE}), ] self.assertEqual(expected_calls, mock_db.mock_calls) self.assertFalse(mock_prepare.called) @mock.patch.object(deployments, '_store_configdrive', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.prepare') def test__do_node_deploy_configdrive_unexpected_error(self, mock_prepare, mock_store): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) mock_store.side_effect = RuntimeError('unexpected') self.assertRaises(RuntimeError, deployments.do_node_deploy, task, self.service.conductor.id, configdrive=b'fake config drive') node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertIsNotNone(node.last_error) self.assertFalse(mock_prepare.called) def test__do_node_deploy_ok_2(self): # NOTE(rloo): a different way of testing for the same thing as in # test__do_node_deploy_ok(). 
Instead of specifying the provision & # target_provision_states when creating the node, we call # task.process_event() to "set the stage" (err "states"). self._start_service() with mock.patch.object(fake.FakeDeploy, 'deploy', autospec=True) as mock_deploy: # test when driver.deploy.deploy() returns None mock_deploy.return_value = None node = obj_utils.create_test_node(self.context, driver='fake-hardware') task = task_manager.TaskManager(self.context, node.uuid) task.process_event('deploy') deployments.do_node_deploy(task, self.service.conductor.id) node.refresh() self.assertEqual(states.ACTIVE, node.provision_state) self.assertEqual(states.NOSTATE, node.target_provision_state) self.assertIsNone(node.last_error) mock_deploy.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(deployments, 'do_next_deploy_step', autospec=True) @mock.patch.object(conductor_steps, 'set_node_deployment_steps', autospec=True) def test_do_node_deploy_steps(self, mock_set_steps, mock_deploy_step): # these are not real steps... fake_deploy_steps = ['step1', 'step2'] def add_steps(task, **kwargs): info = task.node.driver_internal_info info['deploy_steps'] = fake_deploy_steps task.node.driver_internal_info = info task.node.save() mock_set_steps.side_effect = add_steps self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware') task = task_manager.TaskManager(self.context, node.uuid) task.process_event('deploy') deployments.do_node_deploy(task, self.service.conductor.id) mock_set_steps.assert_called_once_with(task, skip_missing=True) self.assertEqual(fake_deploy_steps, task.node.driver_internal_info['deploy_steps']) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.deploy', autospec=True) def test__do_node_deploy_driver_raises_error_old(self, mock_deploy): # Mocking FakeDeploy.deploy before starting the service, causes # it not to be a deploy_step. 
self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware', provision_state=states.DEPLOYING, target_provision_state=states.ACTIVE) task = task_manager.TaskManager(self.context, node.uuid) self.assertRaises(exception.InstanceDeployFailure, deployments.do_node_deploy, task, self.service.conductor.id) node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertIsNotNone(node.last_error) self.assertFalse(mock_deploy.called) @mgr_utils.mock_record_keepalive class DoNextDeployStepTestCase(mgr_utils.ServiceSetUpMixin, db_base.DbTestCase): def setUp(self): super(DoNextDeployStepTestCase, self).setUp() self.deploy_start = { 'step': 'deploy_start', 'priority': 50, 'interface': 'deploy'} self.deploy_end = { 'step': 'deploy_end', 'priority': 20, 'interface': 'deploy'} self.deploy_steps = [self.deploy_start, self.deploy_end] @mock.patch.object(deployments, 'LOG', autospec=True) def test__do_next_deploy_step_none(self, mock_log): self._start_service() node = obj_utils.create_test_node(self.context, driver='fake-hardware') task = task_manager.TaskManager(self.context, node.uuid) task.process_event('deploy') deployments.do_next_deploy_step(task, None, self.service.conductor.id) node.refresh() self.assertEqual(states.ACTIVE, node.provision_state) self.assertEqual(2, mock_log.info.call_count) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step', autospec=True) def test__do_next_deploy_step_async(self, mock_execute): driver_internal_info = {'deploy_step_index': None, 'deploy_steps': self.deploy_steps} self._start_service() node = obj_utils.create_test_node( self.context, driver='fake-hardware', driver_internal_info=driver_internal_info, deploy_step={}) mock_execute.return_value = states.DEPLOYWAIT expected_first_step = node.driver_internal_info['deploy_steps'][0] task = task_manager.TaskManager(self.context, node.uuid) 
task.process_event('deploy') deployments.do_next_deploy_step(task, 0, self.service.conductor.id) node.refresh() self.assertEqual(states.DEPLOYWAIT, node.provision_state) self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertEqual(expected_first_step, node.deploy_step) self.assertEqual(0, node.driver_internal_info['deploy_step_index']) self.assertEqual(self.service.conductor.id, node.conductor_affinity) mock_execute.assert_called_once_with(mock.ANY, task, self.deploy_steps[0]) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step', autospec=True) def test__do_next_deploy_step_continue_from_last_step(self, mock_execute): # Resume an in-progress deploy after the first async step driver_internal_info = {'deploy_step_index': 0, 'deploy_steps': self.deploy_steps} self._start_service() node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.DEPLOYWAIT, target_provision_state=states.ACTIVE, driver_internal_info=driver_internal_info, deploy_step=self.deploy_steps[0]) mock_execute.return_value = states.DEPLOYWAIT task = task_manager.TaskManager(self.context, node.uuid) task.process_event('resume') deployments.do_next_deploy_step(task, 1, self.service.conductor.id) node.refresh() self.assertEqual(states.DEPLOYWAIT, node.provision_state) self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertEqual(self.deploy_steps[1], node.deploy_step) self.assertEqual(1, node.driver_internal_info['deploy_step_index']) mock_execute.assert_called_once_with(mock.ANY, task, self.deploy_steps[1]) @mock.patch('ironic.drivers.modules.fake.FakeConsole.start_console', autospec=True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step', autospec=True) def _test__do_next_deploy_step_last_step_done(self, mock_execute, mock_console, console_enabled=False, console_error=False): # Resume where last_step is the last deploy step that was executed driver_internal_info = {'deploy_step_index': 1, 
'deploy_steps': self.deploy_steps} self._start_service() node = obj_utils.create_test_node( self.context, driver='fake-hardware', provision_state=states.DEPLOYWAIT, target_provision_state=states.ACTIVE, driver_internal_info=driver_internal_info, deploy_step=self.deploy_steps[1], console_enabled=console_enabled) mock_execute.return_value = None if console_error: mock_console.side_effect = exception.ConsoleError() task = task_manager.TaskManager(self.context, node.uuid) task.process_event('resume') deployments.do_next_deploy_step(task, None, self.service.conductor.id) node.refresh() # Deploying should be complete without calling additional steps self.assertEqual(states.ACTIVE, node.provision_state) self.assertEqual(states.NOSTATE, node.target_provision_state) self.assertEqual({}, node.deploy_step) self.assertNotIn('deploy_step_index', node.driver_internal_info) self.assertIsNone(node.driver_internal_info['deploy_steps']) self.assertFalse(mock_execute.called) if console_enabled: mock_console.assert_called_once_with(mock.ANY, task) else: self.assertFalse(mock_console.called) def test__do_next_deploy_step_last_step_done(self): self._test__do_next_deploy_step_last_step_done() def test__do_next_deploy_step_last_step_done_with_console(self): self._test__do_next_deploy_step_last_step_done(console_enabled=True) def test__do_next_deploy_step_last_step_done_with_console_error(self): self._test__do_next_deploy_step_last_step_done(console_enabled=True, console_error=True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step', autospec=True) def test__do_next_deploy_step_all(self, mock_execute): # Run all steps from start to finish (all synchronous) driver_internal_info = {'deploy_step_index': None, 'deploy_steps': self.deploy_steps, 'agent_url': 'url'} self._start_service() node = obj_utils.create_test_node( self.context, driver='fake-hardware', driver_internal_info=driver_internal_info, deploy_step={}) mock_execute.return_value = None task = 
            task_manager.TaskManager(self.context, node.uuid)
        task.process_event('deploy')

        deployments.do_next_deploy_step(task, 1, self.service.conductor.id)

        # Deploying should be complete
        node.refresh()
        self.assertEqual(states.ACTIVE, node.provision_state)
        self.assertEqual(states.NOSTATE, node.target_provision_state)
        self.assertEqual({}, node.deploy_step)
        self.assertNotIn('deploy_step_index', node.driver_internal_info)
        self.assertIsNone(node.driver_internal_info['deploy_steps'])
        # assert_has_calls must be *called*; include mock.ANY for the
        # autospec'd interface instance argument.
        mock_execute.assert_has_calls(
            [mock.call(mock.ANY, task, self.deploy_steps[1])])
        self.assertNotIn('agent_url', node.driver_internal_info)

    @mock.patch.object(conductor_utils, 'LOG', autospec=True)
    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step',
                autospec=True)
    def _do_next_deploy_step_execute_fail(self, exc, traceback, mock_execute,
                                          mock_log):
        # When a deploy step fails, go to DEPLOYFAIL
        driver_internal_info = {'deploy_step_index': None,
                                'deploy_steps': self.deploy_steps}
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            driver_internal_info=driver_internal_info,
            deploy_step={})
        mock_execute.side_effect = exc

        task = task_manager.TaskManager(self.context, node.uuid)
        task.process_event('deploy')

        deployments.do_next_deploy_step(task, 0, self.service.conductor.id)

        # Make sure we go to DEPLOYFAIL, clear deploy_steps
        node.refresh()
        self.assertEqual(states.DEPLOYFAIL, node.provision_state)
        self.assertEqual(states.ACTIVE, node.target_provision_state)
        self.assertEqual({}, node.deploy_step)
        self.assertNotIn('deploy_step_index', node.driver_internal_info)
        self.assertIsNotNone(node.last_error)
        self.assertFalse(node.maintenance)
        mock_execute.assert_called_once_with(mock.ANY, mock.ANY,
                                             self.deploy_steps[0])
        mock_log.error.assert_called_once_with(mock.ANY, exc_info=traceback)

    def test_do_next_deploy_step_execute_ironic_exception(self):
        self._do_next_deploy_step_execute_fail(
            exception.IronicException('foo'), False)

    def
test_do_next_deploy_step_execute_exception(self): self._do_next_deploy_step_execute_fail(Exception('foo'), True) @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step', autospec=True) def test_do_next_deploy_step_no_steps(self, mock_execute): self._start_service() for info in ({'deploy_steps': None, 'deploy_step_index': None}, {'deploy_steps': None}): # Resume where there are no steps, should be a noop node = obj_utils.create_test_node( self.context, driver='fake-hardware', uuid=uuidutils.generate_uuid(), last_error=None, driver_internal_info=info, deploy_step={}) task = task_manager.TaskManager(self.context, node.uuid) task.process_event('deploy') deployments.do_next_deploy_step(task, None, self.service.conductor.id) # Deploying should be complete without calling additional steps node.refresh() self.assertEqual(states.ACTIVE, node.provision_state) self.assertEqual(states.NOSTATE, node.target_provision_state) self.assertEqual({}, node.deploy_step) self.assertNotIn('deploy_step_index', node.driver_internal_info) self.assertFalse(mock_execute.called) mock_execute.reset_mock() @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step', autospec=True) def test_do_next_deploy_step_bad_step_return_value(self, mock_execute): # When a deploy step fails, go to DEPLOYFAIL self._start_service() node = obj_utils.create_test_node( self.context, driver='fake-hardware', driver_internal_info={'deploy_steps': self.deploy_steps, 'deploy_step_index': None}, deploy_step={}) mock_execute.return_value = "foo" task = task_manager.TaskManager(self.context, node.uuid) task.process_event('deploy') deployments.do_next_deploy_step(task, 0, self.service.conductor.id) # Make sure we go to DEPLOYFAIL, clear deploy_steps node.refresh() self.assertEqual(states.DEPLOYFAIL, node.provision_state) self.assertEqual(states.ACTIVE, node.target_provision_state) self.assertEqual({}, node.deploy_step) self.assertNotIn('deploy_step_index', node.driver_internal_info) 
        self.assertIsNotNone(node.last_error)
        self.assertFalse(node.maintenance)
        mock_execute.assert_called_once_with(mock.ANY, mock.ANY,
                                             self.deploy_steps[0])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step',
                autospec=True)
    def test_do_next_deploy_step_oob_reboot(self, mock_execute):
        # When a deploy step fails, go to DEPLOYWAIT
        tgt_prov_state = states.ACTIVE
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'deploy_steps': self.deploy_steps,
                                  'deploy_step_index': None,
                                  'deployment_reboot': True},
            clean_step={})
        mock_execute.side_effect = exception.AgentConnectionFailed(
            reason='failed')

        with task_manager.acquire(
                self.context, node.uuid, shared=False) as task:
            deployments.do_next_deploy_step(task, 0, mock.ANY)

        self._stop_service()
        node.refresh()
        # Make sure we go to DEPLOYWAIT
        self.assertEqual(states.DEPLOYWAIT, node.provision_state)
        self.assertEqual(tgt_prov_state, node.target_provision_state)
        self.assertEqual(self.deploy_steps[0], node.deploy_step)
        self.assertEqual(0, node.driver_internal_info['deploy_step_index'])
        self.assertFalse(node.driver_internal_info['skip_current_deploy_step'])
        mock_execute.assert_called_once_with(
            mock.ANY, mock.ANY, self.deploy_steps[0])

    @mock.patch('ironic.drivers.modules.fake.FakeDeploy.execute_deploy_step',
                autospec=True)
    def test_do_next_deploy_step_oob_reboot_fail(self, mock_execute):
        # When a deploy step fails with no reboot requested go to DEPLOYFAIL
        tgt_prov_state = states.ACTIVE
        self._start_service()
        node = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            provision_state=states.DEPLOYING,
            target_provision_state=tgt_prov_state,
            last_error=None,
            driver_internal_info={'deploy_steps': self.deploy_steps,
                                  'deploy_step_index': None},
            deploy_step={})
        mock_execute.side_effect = exception.AgentConnectionFailed(
            reason='failed')

        with
task_manager.acquire( self.context, node.uuid, shared=False) as task: deployments.do_next_deploy_step(task, 0, mock.ANY) self._stop_service() node.refresh() # Make sure we go to DEPLOYFAIL, clear deploy_steps self.assertEqual(states.DEPLOYFAIL, node.provision_state) self.assertEqual(tgt_prov_state, node.target_provision_state) self.assertEqual({}, node.deploy_step) self.assertNotIn('deploy_step_index', node.driver_internal_info) self.assertNotIn('skip_current_deploy_step', node.driver_internal_info) self.assertIsNotNone(node.last_error) mock_execute.assert_called_once_with( mock.ANY, mock.ANY, self.deploy_steps[0]) @mock.patch.object(swift, 'SwiftAPI') class StoreConfigDriveTestCase(db_base.DbTestCase): def setUp(self): super(StoreConfigDriveTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='fake-hardware', instance_info=None) def test_store_configdrive(self, mock_swift): deployments._store_configdrive(self.node, 'foo') expected_instance_info = {'configdrive': 'foo'} self.node.refresh() self.assertEqual(expected_instance_info, self.node.instance_info) self.assertFalse(mock_swift.called) def test_store_configdrive_swift(self, mock_swift): container_name = 'foo_container' timeout = 123 expected_obj_name = 'configdrive-%s' % self.node.uuid expected_obj_header = {'X-Delete-After': str(timeout)} expected_instance_info = {'configdrive': 'http://1.2.3.4'} # set configs and mocks CONF.set_override('configdrive_use_object_store', True, group='deploy') CONF.set_override('configdrive_swift_container', container_name, group='conductor') CONF.set_override('deploy_callback_timeout', timeout, group='conductor') mock_swift.return_value.get_temp_url.return_value = 'http://1.2.3.4' deployments._store_configdrive(self.node, b'foo') mock_swift.assert_called_once_with() mock_swift.return_value.create_object.assert_called_once_with( container_name, expected_obj_name, mock.ANY, object_headers=expected_obj_header) 
mock_swift.return_value.get_temp_url.assert_called_once_with( container_name, expected_obj_name, timeout) self.node.refresh() self.assertEqual(expected_instance_info, self.node.instance_info) def test_store_configdrive_swift_no_deploy_timeout(self, mock_swift): container_name = 'foo_container' expected_obj_name = 'configdrive-%s' % self.node.uuid expected_obj_header = {'X-Delete-After': '1200'} expected_instance_info = {'configdrive': 'http://1.2.3.4'} # set configs and mocks CONF.set_override('configdrive_use_object_store', True, group='deploy') CONF.set_override('configdrive_swift_container', container_name, group='conductor') CONF.set_override('configdrive_swift_temp_url_duration', 1200, group='conductor') CONF.set_override('deploy_callback_timeout', 0, group='conductor') mock_swift.return_value.get_temp_url.return_value = 'http://1.2.3.4' deployments._store_configdrive(self.node, b'foo') mock_swift.assert_called_once_with() mock_swift.return_value.create_object.assert_called_once_with( container_name, expected_obj_name, mock.ANY, object_headers=expected_obj_header) mock_swift.return_value.get_temp_url.assert_called_once_with( container_name, expected_obj_name, 1200) self.node.refresh() self.assertEqual(expected_instance_info, self.node.instance_info) def test_store_configdrive_swift_no_deploy_timeout_fallback(self, mock_swift): container_name = 'foo_container' expected_obj_name = 'configdrive-%s' % self.node.uuid expected_obj_header = {'X-Delete-After': '1800'} expected_instance_info = {'configdrive': 'http://1.2.3.4'} # set configs and mocks CONF.set_override('configdrive_use_object_store', True, group='deploy') CONF.set_override('configdrive_swift_container', container_name, group='conductor') CONF.set_override('deploy_callback_timeout', 0, group='conductor') mock_swift.return_value.get_temp_url.return_value = 'http://1.2.3.4' deployments._store_configdrive(self.node, b'foo') mock_swift.assert_called_once_with() 
        mock_swift.return_value.create_object.assert_called_once_with(
            container_name, expected_obj_name, mock.ANY,
            object_headers=expected_obj_header)
        mock_swift.return_value.get_temp_url.assert_called_once_with(
            container_name, expected_obj_name, 1800)
        self.node.refresh()
        self.assertEqual(expected_instance_info, self.node.instance_info)

ironic-15.0.0/ironic/tests/unit/conductor/__init__.py
ironic-15.0.0/ironic/tests/unit/api/test_ospmiddleware.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_config import cfg
from osprofiler import web

from ironic.tests.unit.api import base

CONF = cfg.CONF


class TestOsprofilerWsgiMiddleware(base.BaseApiTest):
    """Provide a basic test for OSProfiler wsgi middleware.

    The tests below provide minimal confirmation that the OSProfiler wsgi
    middleware is called.
    """

    def setUp(self):
        super(TestOsprofilerWsgiMiddleware, self).setUp()

    @mock.patch.object(web, 'WsgiMiddleware')
    def test_enable_osp_wsgi_request(self, mock_ospmiddleware):
        CONF.profiler.enabled = True
        self._make_app()
        mock_ospmiddleware.assert_called_once_with(mock.ANY)

    @mock.patch.object(web, 'WsgiMiddleware')
    def test_disable_osp_wsgi_request(self, mock_ospmiddleware):
        CONF.profiler.enabled = False
        self._make_app()
        self.assertFalse(mock_ospmiddleware.called)

ironic-15.0.0/ironic/tests/unit/api/test_root.py

# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
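# The two OSProfiler cases above reduce to a conditional-wrapping pattern:
# the WSGI app is wrapped with profiling middleware only when the profiler
# is enabled. A minimal sketch of that pattern follows; `make_app` and
# `wrap` are illustrative names, not ironic's real app factory, and `wrap`
# stands in for osprofiler.web.WsgiMiddleware.

```python
def make_app(base_app, profiler_enabled, wrap):
    """Return the app, wrapped with profiling middleware if enabled."""
    if profiler_enabled:
        return wrap(base_app)
    return base_app
```

# The tests mock the middleware class and only assert whether it was
# called, which is exactly the branch this sketch captures.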
from http import client as http_client

from ironic.api.controllers.v1 import versions
from ironic.tests.unit.api import base


class TestRoot(base.BaseApiTest):

    def test_get_root(self):
        response = self.get_json('/', path_prefix='')
        # Check fields are not empty
        [self.assertNotIn(f, ['', []]) for f in response]

        self.assertEqual('OpenStack Ironic API', response['name'])
        self.assertTrue(response['description'])
        self.assertEqual([response['default_version']], response['versions'])

        version1 = response['default_version']
        self.assertEqual('v1', version1['id'])
        self.assertEqual('CURRENT', version1['status'])
        self.assertEqual(versions.min_version_string(),
                         version1['min_version'])
        self.assertEqual(versions.max_version_string(), version1['version'])

    def test_no_html_errors(self):
        response = self.get_json('/foo', expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertIn('Not Found', response.json['error_message'])
        self.assertNotIn('<html', response.json['error_message'])

        response = self.get_json(
            '/portgroups/%s/ports/%s' % (pg.uuid, uuidutils.generate_uuid()),
            headers={api_base.Version.string: str(api_v1.max_version())},
            expect_errors=True)
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    def test_ports_subresource_no_portgroups_allowed(self):
        pg = obj_utils.create_test_portgroup(self.context,
                                             uuid=uuidutils.generate_uuid(),
                                             node_id=self.node.id)
        for id_ in range(2):
            obj_utils.create_test_port(self.context, node_id=self.node.id,
                                       uuid=uuidutils.generate_uuid(),
                                       portgroup_id=pg.id,
                                       address='52:54:00:cf:2d:3%s' % id_)
        response = self.get_json('/portgroups/%s/ports' % pg.uuid,
                                 expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual('application/json', response.content_type)

    def test_get_all_ports_by_portgroup_uuid(self):
        pg = obj_utils.create_test_portgroup(self.context,
                                             node_id=self.node.id)
        port = obj_utils.create_test_port(self.context, node_id=self.node.id,
                                          portgroup_id=pg.id)
        data = self.get_json('/portgroups/%s/ports'
% pg.uuid, headers={api_base.Version.string: '1.24'}) self.assertEqual(port.uuid, data['ports'][0]['uuid']) def test_ports_subresource_not_allowed(self): pg = obj_utils.create_test_portgroup(self.context, node_id=self.node.id) response = self.get_json('/portgroups/%s/ports' % pg.uuid, expect_errors=True, headers={api_base.Version.string: '1.23'}) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertIn('Not Found', response.json['error_message']) def test_ports_subresource_portgroup_not_found(self): non_existent_uuid = 'eeeeeeee-cccc-aaaa-bbbb-cccccccccccc' response = self.get_json('/portgroups/%s/ports' % non_existent_uuid, expect_errors=True, headers=self.headers) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertIn('Portgroup %s could not be found.' % non_existent_uuid, response.json['error_message']) def test_portgroup_by_address(self): address_template = "aa:bb:cc:dd:ee:f%d" for id_ in range(3): obj_utils.create_test_portgroup( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), name='portgroup%s' % id_, address=address_template % id_) target_address = address_template % 1 data = self.get_json('/portgroups?address=%s' % target_address, headers=self.headers) self.assertThat(data['portgroups'], HasLength(1)) self.assertEqual(target_address, data['portgroups'][0]['address']) def test_portgroup_get_all_invalid_api_version(self): obj_utils.create_test_portgroup( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), name='portgroup_1') response = self.get_json('/portgroups', headers={api_base.Version.string: '1.14'}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_portgroup_by_address_non_existent_address(self): # non-existent address data = self.get_json('/portgroups?address=%s' % 'aa:bb:cc:dd:ee:ff', headers=self.headers) self.assertThat(data['portgroups'], HasLength(0)) def test_portgroup_by_address_invalid_address_format(self): 
obj_utils.create_test_portgroup(self.context, node_id=self.node.id) invalid_address = 'invalid-mac-format' response = self.get_json('/portgroups?address=%s' % invalid_address, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(invalid_address, response.json['error_message']) def test_sort_key(self): portgroups = [] for id_ in range(3): portgroup = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), name='portgroup%s' % id_, address='52:54:00:cf:2d:3%s' % id_) portgroups.append(portgroup.uuid) data = self.get_json('/portgroups?sort_key=uuid', headers=self.headers) uuids = [n['uuid'] for n in data['portgroups']] self.assertEqual(sorted(portgroups), uuids) def test_sort_key_invalid(self): invalid_keys_list = ['foo', 'extra', 'internal_info', 'properties'] for invalid_key in invalid_keys_list: response = self.get_json('/portgroups?sort_key=%s' % invalid_key, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(invalid_key, response.json['error_message']) def _test_sort_key_allowed(self, detail=False): portgroup_uuids = [] for id_ in range(3, 0, -1): portgroup = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), name='portgroup%s' % id_, address='52:54:00:cf:2d:3%s' % id_, mode='mode_%s' % id_) portgroup_uuids.append(portgroup.uuid) portgroup_uuids.reverse() detail_str = '/detail' if detail else '' data = self.get_json('/portgroups%s?sort_key=mode' % detail_str, headers=self.headers) data_uuids = [p['uuid'] for p in data['portgroups']] self.assertEqual(portgroup_uuids, data_uuids) def test_sort_key_allowed(self): self._test_sort_key_allowed() def test_detail_sort_key_allowed(self): self._test_sort_key_allowed(detail=True) 
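# The sort-key tests above assert a simple validation rule: complex fields
# ('extra', 'internal_info', 'properties') and unknown keys are rejected
# with 400 Bad Request, while plain columns such as 'uuid' or 'mode' are
# allowed. An illustrative sketch of that rule (not ironic's actual
# implementation; names are hypothetical):

```python
COMPLEX_FIELDS = {'extra', 'internal_info', 'properties'}

def check_sort_key(sort_key, sortable_fields):
    """Raise ValueError (mapped to HTTP 400) for a disallowed sort key."""
    if sort_key in COMPLEX_FIELDS or sort_key not in sortable_fields:
        raise ValueError('The sort_key value "%s" is an invalid field for '
                         'sorting' % sort_key)
```

# The error message deliberately echoes the offending key, which is what
# `self.assertIn(invalid_key, response.json['error_message'])` checks for.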
def _test_sort_key_not_allowed(self, detail=False): headers = {api_base.Version.string: '1.25'} detail_str = '/detail' if detail else '' response = self.get_json('/portgroups%s?sort_key=mode' % detail_str, headers=headers, expect_errors=True) self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int) self.assertEqual('application/json', response.content_type) def test_sort_key_not_allowed(self): self._test_sort_key_not_allowed() def test_detail_sort_key_not_allowed(self): self._test_sort_key_not_allowed(detail=True) @mock.patch.object(api_utils, 'get_rpc_node') def test_get_all_by_node_name_ok(self, mock_get_rpc_node): # GET /v1/portgroups specifying node_name - success mock_get_rpc_node.return_value = self.node for i in range(5): if i < 3: node_id = self.node.id else: node_id = 100000 + i obj_utils.create_test_portgroup( self.context, node_id=node_id, uuid=uuidutils.generate_uuid(), name='portgroup%s' % i, address='52:54:00:cf:2d:3%s' % i) data = self.get_json("/portgroups?node=%s" % 'test-node', headers=self.headers) self.assertEqual(3, len(data['portgroups'])) @mock.patch.object(api_utils, 'get_rpc_node') def test_get_all_by_node_uuid_ok(self, mock_get_rpc_node): mock_get_rpc_node.return_value = self.node obj_utils.create_test_portgroup(self.context, node_id=self.node.id) data = self.get_json('/portgroups/detail?node=%s' % (self.node.uuid), headers=self.headers) mock_get_rpc_node.assert_called_once_with(self.node.uuid) self.assertEqual(1, len(data['portgroups'])) @mock.patch.object(api_utils, 'get_rpc_node') def test_detail_by_node_name_ok(self, mock_get_rpc_node): # GET /v1/portgroups/detail specifying node_name - success mock_get_rpc_node.return_value = self.node portgroup = obj_utils.create_test_portgroup(self.context, node_id=self.node.id) data = self.get_json('/portgroups/detail?node=%s' % 'test-node', headers=self.headers) self.assertEqual(portgroup.uuid, data['portgroups'][0]['uuid']) self.assertEqual(self.node.uuid, 
data['portgroups'][0]['node_uuid']) @mock.patch.object(rpcapi.ConductorAPI, 'update_portgroup') class TestPatch(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} def setUp(self): super(TestPatch, self).setUp() self.node = obj_utils.create_test_node(self.context) self.portgroup = obj_utils.create_test_portgroup(self.context, name='pg.1', node_id=self.node.id) p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = p.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(p.stop) @mock.patch.object(notification_utils, '_emit_api_notification') def test_update_byid(self, mock_notify, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = self.portgroup mock_upd.return_value.extra = extra response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END, node_uuid=self.node.uuid)]) def test_update_byname(self, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = self.portgroup mock_upd.return_value.extra = extra response = self.patch_json('/portgroups/%s' % self.portgroup.name, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) def test_update_byname_with_json(self, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = 
self.portgroup mock_upd.return_value.extra = extra response = self.patch_json('/portgroups/%s.json' % self.portgroup.name, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) def test_update_invalid_name(self, mock_upd): mock_upd.return_value = self.portgroup response = self.patch_json('/portgroups/%s' % self.portgroup.name, [{'path': '/name', 'value': 'aa:bb_cc', 'op': 'replace'}], headers=self.headers, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_code) def test_update_byid_invalid_api_version(self, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = self.portgroup mock_upd.return_value.extra = extra headers = {api_base.Version.string: '1.14'} response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], headers=headers, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_update_byaddress_not_allowed(self, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = self.portgroup mock_upd.return_value.extra = extra response = self.patch_json('/portgroups/%s' % self.portgroup.address, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertIn(self.portgroup.address, response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_not_found(self, mock_upd): uuid = uuidutils.generate_uuid() response = self.patch_json('/portgroups/%s' % uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.NOT_FOUND, response.status_int) 
self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_replace_singular(self, mock_upd): address = 'aa:bb:cc:dd:ee:ff' mock_upd.return_value = self.portgroup mock_upd.return_value.address = address response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'value': address, 'op': 'replace'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(address, response.json['address']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(address, kargs.address) @mock.patch.object(notification_utils, '_emit_api_notification') def test_replace_address_already_exist(self, mock_notify, mock_upd): address = 'aa:aa:aa:aa:aa:aa' mock_upd.side_effect = exception.MACAlreadyExists(mac=address) response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'value': address, 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertTrue(response.json['error_message']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(address, kargs.address) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR, node_uuid=self.node.uuid)]) def test_replace_node_uuid(self, mock_upd): mock_upd.return_value = self.portgroup response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/node_uuid', 'value': self.node.uuid, 'op': 'replace'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def 
test_add_node_uuid(self, mock_upd): mock_upd.return_value = self.portgroup response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/node_uuid', 'value': self.node.uuid, 'op': 'add'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) def test_add_node_id(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/node_id', 'value': '1', 'op': 'add'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertFalse(mock_upd.called) def test_replace_node_id(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/node_id', 'value': '1', 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertFalse(mock_upd.called) def test_remove_node_id(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/node_id', 'op': 'remove'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertFalse(mock_upd.called) def test_replace_non_existent_node_uuid(self, mock_upd): node_uuid = '12506333-a81c-4d59-9987-889ed5f8687b' response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/node_uuid', 'value': node_uuid, 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertIn(node_uuid, response.json['error_message']) self.assertFalse(mock_upd.called) def test_replace_multi(self, mock_upd): extra = {"foo1": "bar1", "foo2": "bar2", "foo3": 
"bar3"} self.portgroup.extra = extra self.portgroup.save() # mutate extra so we replace all of them extra = dict((k, extra[k] + 'x') for k in extra) patch = [] for k in extra: patch.append({'path': '/extra/%s' % k, 'value': extra[k], 'op': 'replace'}) mock_upd.return_value = self.portgroup mock_upd.return_value.extra = extra response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, patch, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) def test_remove_multi(self, mock_upd): extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} self.portgroup.extra = extra self.portgroup.save() # Removing one item from the collection extra.pop('foo1') mock_upd.return_value = self.portgroup mock_upd.return_value.extra = extra response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/extra/foo1', 'op': 'remove'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) # Removing the collection extra = {} mock_upd.return_value.extra = extra response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/extra', 'op': 'remove'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual({}, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) # Assert nothing else was changed self.assertEqual(self.portgroup.uuid, response.json['uuid']) self.assertEqual(self.portgroup.address, response.json['address']) def test_remove_non_existent_property_fail(self, mock_upd): response = self.patch_json('/portgroups/%s' % 
self.portgroup.uuid, [{'path': '/extra/non-existent', 'op': 'remove'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_remove_address(self, mock_upd): mock_upd.return_value = self.portgroup mock_upd.return_value.address = None response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'op': 'remove'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertIsNone(response.json['address']) self.assertTrue(mock_upd.called) def test_add_root(self, mock_upd): address = 'aa:bb:cc:dd:ee:ff' mock_upd.return_value = self.portgroup mock_upd.return_value.address = address response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'value': address, 'op': 'add'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(address, response.json['address']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(address, kargs.address) def test_add_root_non_existent(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_add_multi(self, mock_upd): extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} patch = [] for k in extra: patch.append({'path': '/extra/%s' % k, 'value': extra[k], 'op': 'add'}) mock_upd.return_value = self.portgroup mock_upd.return_value.extra = extra 
response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, patch, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) def test_remove_uuid(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/uuid', 'op': 'remove'}], expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_address_invalid_format(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'value': 'invalid-format', 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_portgroup_address_normalized(self, mock_upd): address = 'AA:BB:CC:DD:EE:FF' mock_upd.return_value = self.portgroup mock_upd.return_value.address = address.lower() response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'value': address, 'op': 'replace'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(address.lower(), response.json['address']) kargs = mock_upd.call_args[0][1] self.assertEqual(address.lower(), kargs.address) def test_update_portgroup_standalone_ports_supported(self, mock_upd): mock_upd.return_value = self.portgroup mock_upd.return_value.standalone_ports_supported = False response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': 
'/standalone_ports_supported', 'value': False, 'op': 'replace'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertIs(False, response.json['standalone_ports_supported']) def test_update_portgroup_standalone_ports_supported_bad_api_version( self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/standalone_ports_supported', 'value': False, 'op': 'replace'}], expect_errors=True, headers={api_base.Version.string: str(api_v1.min_version())}) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_portgroup_internal_info_not_allowed(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/internal_info', 'value': False, 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_portgroup_mode_properties(self, mock_upd): mock_upd.return_value = self.portgroup mock_upd.return_value.mode = '802.3ad' mock_upd.return_value.properties = {'bond_param': '100'} response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/mode', 'value': '802.3ad', 'op': 'add'}, {'path': '/properties/bond_param', 'value': '100', 'op': 'add'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual('802.3ad', response.json['mode']) self.assertEqual({'bond_param': '100'}, response.json['properties']) def _test_update_portgroup_mode_properties_bad_api_version(self, patch, mock_upd): response = self.patch_json('/portgroups/%s' % 
self.portgroup.uuid, patch, expect_errors=True, headers={api_base.Version.string: '1.25'}) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_portgroup_mode_properties_bad_api_version(self, mock_upd): self._test_update_portgroup_mode_properties_bad_api_version( [{'path': '/mode', 'op': 'add', 'value': '802.3ad'}], mock_upd) self._test_update_portgroup_mode_properties_bad_api_version( [{'path': '/properties/abc', 'op': 'add', 'value': 123}], mock_upd) def test_remove_mode_not_allowed(self, mock_upd): response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/mode', 'op': 'remove'}], expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_update_in_inspecting_not_allowed(self, mock_upd): self.node.provision_state = states.INSPECTING self.node.save() address = 'AA:BB:CC:DD:EE:FF' response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'value': address, 'op': 'replace'}], expect_errors=True, headers={api_base.Version.string: "1.39"}) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertFalse(mock_upd.called) def test_update_in_inspecting_allowed(self, mock_upd): self.node.provision_state = states.INSPECTING self.node.save() address = 'AA:BB:CC:DD:EE:FF' mock_upd.return_value = self.portgroup mock_upd.return_value.address = address.lower() response = self.patch_json('/portgroups/%s' % self.portgroup.uuid, [{'path': '/address', 'value': address, 'op': 'replace'}], expect_errors=True, headers={api_base.Version.string: "1.38"}) self.assertEqual(http_client.OK, response.status_int) self.assertEqual(address.lower(), 
response.json['address']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(address.lower(), kargs.address) @mock.patch.object(rpcapi.ConductorAPI, 'update_portgroup', autospec=True, side_effect=_rpcapi_update_portgroup) class TestPatchExtraVifPortId(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} def setUp(self): super(TestPatchExtraVifPortId, self).setUp() self.node = obj_utils.create_test_node(self.context) self.portgroup = obj_utils.create_test_portgroup(self.context, node_id=self.node.id) p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = p.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(p.stop) def _test_add_extra_vif_port_id(self, headers, mock_warn, mock_upd): extra = {'vif_port_id': 'bar'} response = self.patch_json( '/portgroups/%s' % self.portgroup.uuid, [{'path': '/extra/vif_port_id', 'value': 'foo', 'op': 'add'}, {'path': '/extra/vif_port_id', 'value': 'bar', 'op': 'add'}], headers=headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) return response @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_add_extra_vif_port_id(self, mock_warn, mock_upd): expected_intern_info = self.portgroup.internal_info expected_intern_info.update({'tenant_vif_port_id': 'bar'}) headers = {api_base.Version.string: '1.27'} response = self._test_add_extra_vif_port_id(headers, mock_warn, mock_upd) self.assertEqual(expected_intern_info, response.json['internal_info']) self.assertFalse(mock_warn.called) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_add_extra_vif_port_id_deprecated(self, mock_warn, mock_upd): expected_intern_info = self.portgroup.internal_info expected_intern_info.update({'tenant_vif_port_id': 'bar'}) response = 
self._test_add_extra_vif_port_id(self.headers, mock_warn, mock_upd) self.assertEqual(expected_intern_info, response.json['internal_info']) self.assertTrue(mock_warn.called) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_replace_extra_vif_port_id(self, mock_warn, mock_upd): self.portgroup.extra = {'vif_port_id': 'original'} self.portgroup.internal_info = {'tenant_vif_port_id': 'original'} self.portgroup.save() expected_intern_info = self.portgroup.internal_info expected_intern_info.update({'tenant_vif_port_id': 'bar'}) headers = {api_base.Version.string: '1.27'} response = self._test_add_extra_vif_port_id(headers, mock_warn, mock_upd) self.assertEqual(expected_intern_info, response.json['internal_info']) self.assertFalse(mock_warn.called) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_add_extra_vif_port_id_diff_internal(self, mock_warn, mock_upd): internal_info = {'tenant_vif_port_id': 'original'} self.portgroup.internal_info = internal_info self.portgroup.save() headers = {api_base.Version.string: '1.27'} response = self._test_add_extra_vif_port_id(headers, mock_warn, mock_upd) # not changed self.assertEqual(internal_info, response.json['internal_info']) self.assertFalse(mock_warn.called) def _test_remove_extra_vif_port_id(self, headers, mock_warn, mock_upd): response = self.patch_json( '/portgroups/%s' % self.portgroup.uuid, [{'path': '/extra/vif_port_id', 'op': 'remove'}], headers=headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual({}, response.json['extra']) self.assertTrue(mock_upd.called) return response @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_remove_extra_vif_port_id(self, mock_warn, mock_upd): self.portgroup.extra = {'vif_port_id': 'bar'} orig_info = self.portgroup.internal_info.copy() intern_info = 
self.portgroup.internal_info intern_info.update({'tenant_vif_port_id': 'bar'}) self.portgroup.internal_info = intern_info self.portgroup.save() headers = {api_base.Version.string: '1.27'} response = self._test_remove_extra_vif_port_id(headers, mock_warn, mock_upd) self.assertEqual(orig_info, response.json['internal_info']) self.assertFalse(mock_warn.called) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_remove_extra_vif_port_id_not_same(self, mock_warn, mock_upd): # .internal_info['tenant_vif_port_id'] != .extra['vif_port_id'] self.portgroup.extra = {'vif_port_id': 'foo'} intern_info = self.portgroup.internal_info intern_info.update({'tenant_vif_port_id': 'bar'}) self.portgroup.internal_info = intern_info self.portgroup.save() headers = {api_base.Version.string: '1.28'} response = self._test_remove_extra_vif_port_id(headers, mock_warn, mock_upd) self.assertEqual(intern_info, response.json['internal_info']) self.assertTrue(mock_warn.called) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_remove_extra_vif_port_id_not_internal(self, mock_warn, mock_upd): # no portgroup.internal_info['tenant_vif_port_id'] self.portgroup.extra = {'vif_port_id': 'foo'} self.portgroup.save() intern_info = self.portgroup.internal_info headers = {api_base.Version.string: '1.28'} response = self._test_remove_extra_vif_port_id(headers, mock_warn, mock_upd) self.assertEqual(intern_info, response.json['internal_info']) self.assertTrue(mock_warn.called) class TestPost(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} def setUp(self): super(TestPost, self).setUp() self.node = obj_utils.create_test_node(self.context) @mock.patch.object(notification_utils, '_emit_api_notification') @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) @mock.patch.object(timeutils, 'utcnow', autospec=True) def 
test_create_portgroup(self, mock_utcnow, mock_warn, mock_notify): pdict = apiutils.post_get_test_portgroup() test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time response = self.post_json('/portgroups', pdict, headers=self.headers) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/portgroups/%s' % pdict['uuid'], headers=self.headers) self.assertEqual(pdict['uuid'], result['uuid']) self.assertFalse(result['updated_at']) return_created_at = timeutils.parse_isotime( result['created_at']).replace(tzinfo=None) self.assertEqual(test_time, return_created_at) # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/portgroups/%s' % pdict['uuid'] self.assertEqual(urlparse.urlparse(response.location).path, expected_location) self.assertEqual(0, mock_warn.call_count) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END, node_uuid=self.node.uuid)]) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_create_portgroup_v123(self, mock_utcnow): pdict = apiutils.post_get_test_portgroup() test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time headers = {api_base.Version.string: "1.23"} response = self.post_json('/portgroups', pdict, headers=headers) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/portgroups/%s' % pdict['uuid'], headers=headers) self.assertEqual(pdict['uuid'], result['uuid']) self.assertEqual(pdict['node_uuid'], result['node_uuid']) self.assertFalse(result['updated_at']) return_created_at = timeutils.parse_isotime( result['created_at']).replace(tzinfo=None) self.assertEqual(test_time, return_created_at) # Check location header self.assertIsNotNone(response.location) 
expected_location = '/v1/portgroups/%s' % pdict['uuid'] self.assertEqual(urlparse.urlparse(response.location).path, expected_location) def test_create_portgroup_invalid_api_version(self): pdict = apiutils.post_get_test_portgroup() response = self.post_json( '/portgroups', pdict, headers={api_base.Version.string: '1.14'}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_create_portgroup_doesnt_contain_id(self): with mock.patch.object(self.dbapi, 'create_portgroup', wraps=self.dbapi.create_portgroup) as cp_mock: pdict = apiutils.post_get_test_portgroup(extra={'foo': 123}) self.post_json('/portgroups', pdict, headers=self.headers) result = self.get_json('/portgroups/%s' % pdict['uuid'], headers=self.headers) self.assertEqual(pdict['extra'], result['extra']) cp_mock.assert_called_once_with(mock.ANY) # Check that 'id' is not in first arg of positional args self.assertNotIn('id', cp_mock.call_args[0][0]) @mock.patch.object(notification_utils.LOG, 'exception', autospec=True) @mock.patch.object(notification_utils.LOG, 'warning', autospec=True) def test_create_portgroup_generate_uuid(self, mock_warn, mock_except): pdict = apiutils.post_get_test_portgroup() del pdict['uuid'] response = self.post_json('/portgroups', pdict, headers=self.headers) result = self.get_json('/portgroups/%s' % response.json['uuid'], headers=self.headers) self.assertEqual(pdict['address'], result['address']) self.assertTrue(uuidutils.is_uuid_like(result['uuid'])) self.assertFalse(mock_warn.called) self.assertFalse(mock_except.called) @mock.patch.object(notification_utils, '_emit_api_notification') @mock.patch.object(objects.Portgroup, 'create') def test_create_portgroup_error(self, mock_create, mock_notify): mock_create.side_effect = Exception() pdict = apiutils.post_get_test_portgroup() self.post_json('/portgroups', pdict, headers=self.headers, expect_errors=True) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create', 
obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR, node_uuid=self.node.uuid)]) def test_create_portgroup_valid_extra(self): pdict = apiutils.post_get_test_portgroup( extra={'str': 'foo', 'int': 123, 'float': 0.1, 'bool': True, 'list': [1, 2], 'none': None, 'dict': {'cat': 'meow'}}) self.post_json('/portgroups', pdict, headers=self.headers) result = self.get_json('/portgroups/%s' % pdict['uuid'], headers=self.headers) self.assertEqual(pdict['extra'], result['extra']) def _test_create_portgroup_with_extra_vif_port_id(self, headers, mock_warn): pgdict = apiutils.post_get_test_portgroup(extra={'vif_port_id': 'foo'}) response = self.post_json('/portgroups', pgdict, headers=headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CREATED, response.status_int) self.assertEqual({'vif_port_id': 'foo'}, response.json['extra']) self.assertEqual({'tenant_vif_port_id': 'foo'}, response.json['internal_info']) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_create_portgroup_with_extra_vif_port_id(self, mock_warn): headers = {api_base.Version.string: '1.27'} self._test_create_portgroup_with_extra_vif_port_id(headers, mock_warn) self.assertFalse(mock_warn.called) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_create_portgroup_with_extra_vif_port_id_deprecated( self, mock_warn): self._test_create_portgroup_with_extra_vif_port_id( self.headers, mock_warn) self.assertTrue(mock_warn.called) @mock.patch.object(common_utils, 'warn_about_deprecated_extra_vif_port_id', autospec=True) def test_create_portgroup_with_no_extra(self, mock_warn): pgdict = apiutils.post_get_test_portgroup() del pgdict['extra'] response = self.post_json('/portgroups', pgdict, headers=self.headers) 
self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CREATED, response.status_int) self.assertEqual(0, mock_warn.call_count) def test_create_portgroup_no_address(self): pdict = apiutils.post_get_test_portgroup() del pdict['address'] self.post_json('/portgroups', pdict, headers=self.headers) result = self.get_json('/portgroups/%s' % pdict['uuid'], headers=self.headers) self.assertIsNone(result['address']) def test_create_portgroup_no_mandatory_field_node_uuid(self): pdict = apiutils.post_get_test_portgroup() del pdict['node_uuid'] response = self.post_json('/portgroups', pdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_portgroup_invalid_addr_format(self): pdict = apiutils.post_get_test_portgroup(address='invalid-format') response = self.post_json('/portgroups', pdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_portgroup_address_normalized(self): address = 'AA:BB:CC:DD:EE:FF' pdict = apiutils.post_get_test_portgroup(address=address) self.post_json('/portgroups', pdict, headers=self.headers) result = self.get_json('/portgroups/%s' % pdict['uuid'], headers=self.headers) self.assertEqual(address.lower(), result['address']) def test_create_portgroup_with_hyphens_delimiter(self): pdict = apiutils.post_get_test_portgroup() colonsMAC = pdict['address'] hyphensMAC = colonsMAC.replace(':', '-') pdict['address'] = hyphensMAC response = self.post_json('/portgroups', pdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) 
        self.assertTrue(response.json['error_message'])

    def test_create_portgroup_invalid_node_uuid_format(self):
        pdict = apiutils.post_get_test_portgroup(node_uuid='invalid-format')
        response = self.post_json('/portgroups', pdict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_node_uuid_to_node_id_mapping(self):
        pdict = apiutils.post_get_test_portgroup(node_uuid=self.node['uuid'])
        self.post_json('/portgroups', pdict, headers=self.headers)
        # GET doesn't return the node_id; it's an internal value
        portgroup = self.dbapi.get_portgroup_by_uuid(pdict['uuid'])
        self.assertEqual(self.node['id'], portgroup.node_id)

    def test_create_portgroup_node_uuid_not_found(self):
        pdict = apiutils.post_get_test_portgroup(
            node_uuid='1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e')
        response = self.post_json('/portgroups', pdict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_create_portgroup_address_already_exist(self):
        address = 'AA:AA:AA:11:22:33'
        pdict = apiutils.post_get_test_portgroup(address=address)
        self.post_json('/portgroups', pdict, headers=self.headers)
        pdict['uuid'] = uuidutils.generate_uuid()
        pdict['name'] = uuidutils.generate_uuid()
        response = self.post_json('/portgroups', pdict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual(http_client.CONFLICT, response.status_int)
        self.assertEqual('application/json', response.content_type)
        error_msg = response.json['error_message']
        self.assertTrue(error_msg)
        self.assertIn(address, error_msg.upper())

    def test_create_portgroup_name_ok(self):
        address = 'AA:AA:AA:11:22:33'
        name = 'foo'
        pdict = apiutils.post_get_test_portgroup(address=address, name=name)
        self.post_json('/portgroups', pdict,
headers=self.headers) result = self.get_json('/portgroups/%s' % pdict['uuid'], headers=self.headers) self.assertEqual(name, result['name']) def test_create_portgroup_name_invalid(self): address = 'AA:AA:AA:11:22:33' name = 'aa:bb_cc' pdict = apiutils.post_get_test_portgroup(address=address, name=name) response = self.post_json('/portgroups', pdict, headers=self.headers, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) def test_create_portgroup_internal_info_not_allowed(self): pdict = apiutils.post_get_test_portgroup() pdict['internal_info'] = 'info' response = self.post_json('/portgroups', pdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_portgroup_mode_old_api_version(self): for kwarg in [{'mode': '802.3ad'}, {'properties': {'bond_prop': 123}}]: pdict = apiutils.post_get_test_portgroup(**kwarg) response = self.post_json( '/portgroups', pdict, expect_errors=True, headers={api_base.Version.string: '1.25'}) self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_portgroup_mode_properties(self): mode = '802.3ad' props = {'bond_prop': 123} pdict = apiutils.post_get_test_portgroup(mode=mode, properties=props) self.post_json('/portgroups', pdict, headers={api_base.Version.string: '1.26'}) portgroup = self.dbapi.get_portgroup_by_uuid(pdict['uuid']) self.assertEqual((mode, props), (portgroup.mode, portgroup.properties)) def test_create_portgroup_default_mode(self): pdict = apiutils.post_get_test_portgroup() self.post_json('/portgroups', pdict, headers={api_base.Version.string: '1.26'}) portgroup = self.dbapi.get_portgroup_by_uuid(pdict['uuid']) self.assertEqual('active-backup', portgroup.mode) 
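The address-handling cases in TestPost above expect colon-delimited MAC addresses to be accepted and normalized to lowercase, while hyphen-delimited ones are rejected with 400 BAD_REQUEST. A minimal stand-in for that behavior can be sketched as follows; `normalize_mac` and `_MAC_RE` are illustrative names assumed here, not Ironic's actual validation code:

```python
# Illustrative sketch only -- not Ironic's actual validation code.
# Mirrors the behavior the TestPost address cases above exercise:
# colon-delimited MACs are accepted and lowercased, anything else
# (e.g. hyphen-delimited) is rejected.
import re

_MAC_RE = re.compile(r'^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$')


def normalize_mac(address):
    """Return the lowercased MAC, or raise ValueError for bad formats."""
    if not _MAC_RE.match(address):
        raise ValueError('invalid MAC address: %s' % address)
    return address.lower()
```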
@mock.patch.object(rpcapi.ConductorAPI, 'destroy_portgroup') class TestDelete(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} def setUp(self): super(TestDelete, self).setUp() self.node = obj_utils.create_test_node(self.context) self.portgroup = obj_utils.create_test_portgroup(self.context, name='pg.1', node_id=self.node.id) gtf = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = gtf.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(gtf.stop) def test_delete_portgroup_byaddress(self, mock_dpt): response = self.delete('/portgroups/%s' % self.portgroup.address, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(self.portgroup.address, response.json['error_message']) @mock.patch.object(notification_utils, '_emit_api_notification') def test_delete_portgroup_byid(self, mock_notify, mock_dpt): self.delete('/portgroups/%s' % self.portgroup.uuid, headers=self.headers) self.assertTrue(mock_dpt.called) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END, node_uuid=self.node.uuid)]) @mock.patch.object(notification_utils, '_emit_api_notification') def test_delete_portgroup_node_locked(self, mock_notify, mock_dpt): self.node.reserve(self.context, 'fake', self.node.uuid) mock_dpt.side_effect = exception.NodeLocked(node='fake-node', host='fake-host') ret = self.delete('/portgroups/%s' % self.portgroup.uuid, expect_errors=True, headers=self.headers) self.assertEqual(http_client.CONFLICT, ret.status_code) self.assertTrue(ret.json['error_message']) self.assertTrue(mock_dpt.called) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete', 
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'delete',
                                      obj_fields.NotificationLevel.ERROR,
                                      obj_fields.NotificationStatus.ERROR,
                                      node_uuid=self.node.uuid)])

    def test_delete_portgroup_invalid_api_version(self, mock_dpt):
        response = self.delete('/portgroups/%s' % self.portgroup.uuid,
                               expect_errors=True,
                               headers={api_base.Version.string: '1.14'})
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_delete_portgroup_byname(self, mock_dpt):
        self.delete('/portgroups/%s' % self.portgroup.name,
                    headers=self.headers)
        self.assertTrue(mock_dpt.called)

    def test_delete_portgroup_byname_with_json(self, mock_dpt):
        self.delete('/portgroups/%s.json' % self.portgroup.name,
                    headers=self.headers)
        self.assertTrue(mock_dpt.called)

    def test_delete_portgroup_byname_not_existed(self, mock_dpt):
        res = self.delete('/portgroups/%s' % 'blah', expect_errors=True,
                          headers=self.headers)
        self.assertEqual(http_client.NOT_FOUND, res.status_code)

ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_volume_target.py

# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for the API /volume targets/ methods.
""" import datetime from http import client as http_client from urllib import parse as urlparse import mock from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils from ironic.api.controllers import base as api_base from ironic.api.controllers import v1 as api_v1 from ironic.api.controllers.v1 import notification_utils from ironic.api.controllers.v1 import utils as api_utils from ironic.api.controllers.v1 import volume_target as api_volume_target from ironic.api import types as atypes from ironic.common import exception from ironic.conductor import rpcapi from ironic import objects from ironic.objects import fields as obj_fields from ironic.tests import base from ironic.tests.unit.api import base as test_api_base from ironic.tests.unit.api import utils as apiutils from ironic.tests.unit.db import utils as dbutils from ironic.tests.unit.objects import utils as obj_utils def post_get_test_volume_target(**kw): target = apiutils.volume_target_post_data(**kw) node = dbutils.get_test_node() target['node_uuid'] = kw.get('node_uuid', node['uuid']) return target class TestVolumeTargetObject(base.TestCase): def test_volume_target_init(self): target_dict = apiutils.volume_target_post_data(node_id=None) del target_dict['extra'] target = api_volume_target.VolumeTarget(**target_dict) self.assertEqual(atypes.Unset, target.extra) class TestListVolumeTargets(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} def setUp(self): super(TestListVolumeTargets, self).setUp() self.node = obj_utils.create_test_node(self.context) def test_empty(self): data = self.get_json('/volume/targets', headers=self.headers) self.assertEqual([], data['targets']) def test_one(self): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id) data = self.get_json('/volume/targets', headers=self.headers) self.assertEqual(target.uuid, data['targets'][0]["uuid"]) self.assertNotIn('extra', data['targets'][0]) # 
never expose the node_id self.assertNotIn('node_id', data['targets'][0]) def test_one_invalid_api_version(self): obj_utils.create_test_volume_target( self.context, node_id=self.node.id) response = self.get_json( '/volume/targets', headers={api_base.Version.string: str(api_v1.min_version())}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_get_one(self): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id) data = self.get_json('/volume/targets/%s' % target.uuid, headers=self.headers) self.assertEqual(target.uuid, data['uuid']) self.assertIn('extra', data) self.assertIn('node_uuid', data) # never expose the node_id self.assertNotIn('node_id', data) def test_get_one_invalid_api_version(self): target = obj_utils.create_test_volume_target(self.context, node_id=self.node.id) response = self.get_json( '/volume/targets/%s' % target.uuid, headers={api_base.Version.string: str(api_v1.min_version())}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_get_one_custom_fields(self): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id) fields = 'boot_index,extra' data = self.get_json( '/volume/targets/%s?fields=%s' % (target.uuid, fields), headers=self.headers) # We always append "links" self.assertItemsEqual(['boot_index', 'extra', 'links'], data) def test_get_collection_custom_fields(self): fields = 'uuid,extra' for i in range(3): obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=i) data = self.get_json( '/volume/targets?fields=%s' % fields, headers=self.headers) self.assertEqual(3, len(data['targets'])) for target in data['targets']: # We always append "links" self.assertItemsEqual(['uuid', 'extra', 'links'], target) def test_get_custom_fields_invalid_fields(self): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id) fields = 'uuid,spongebob' 
        response = self.get_json(
            '/volume/targets/%s?fields=%s' % (target.uuid, fields),
            headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertIn('spongebob', response.json['error_message'])

    def test_detail(self):
        target = obj_utils.create_test_volume_target(
            self.context, node_id=self.node.id)
        data = self.get_json('/volume/targets?detail=True',
                             headers=self.headers)
        self.assertEqual(target.uuid, data['targets'][0]["uuid"])
        self.assertIn('extra', data['targets'][0])
        self.assertIn('node_uuid', data['targets'][0])
        # never expose the node_id
        self.assertNotIn('node_id', data['targets'][0])

    def test_detail_false(self):
        target = obj_utils.create_test_volume_target(
            self.context, node_id=self.node.id)
        data = self.get_json('/volume/targets?detail=False',
                             headers=self.headers)
        self.assertEqual(target.uuid, data['targets'][0]["uuid"])
        self.assertNotIn('extra', data['targets'][0])
        # never expose the node_id
        self.assertNotIn('node_id', data['targets'][0])

    def test_detail_invalid_api_version(self):
        obj_utils.create_test_volume_target(self.context,
                                            node_id=self.node.id)
        response = self.get_json(
            '/volume/targets?detail=True',
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_detail_specified_by_path(self):
        obj_utils.create_test_volume_target(self.context,
                                            node_id=self.node.id)
        response = self.get_json(
            '/volume/targets/detail', headers=self.headers,
            expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_detail_against_single(self):
        target = obj_utils.create_test_volume_target(
            self.context, node_id=self.node.id)
        response = self.get_json('/volume/targets/%s?detail=True'
                                 % target.uuid,
                                 headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_detail_and_fields(self):
        target =
obj_utils.create_test_volume_target( self.context, node_id=self.node.id) fields = 'boot_index,extra' response = self.get_json('/volume/targets/%s?detail=True&fields=%s' % (target.uuid, fields), headers=self.headers, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) def test_many(self): targets = [] for id_ in range(5): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=id_) targets.append(target.uuid) data = self.get_json('/volume/targets', headers=self.headers) self.assertEqual(len(targets), len(data['targets'])) uuids = [n['uuid'] for n in data['targets']] self.assertCountEqual(targets, uuids) def test_links(self): uuid = uuidutils.generate_uuid() obj_utils.create_test_volume_target(self.context, uuid=uuid, node_id=self.node.id) data = self.get_json('/volume/targets/%s' % uuid, headers=self.headers) self.assertIn('links', data) self.assertEqual(2, len(data['links'])) self.assertIn(uuid, data['links'][0]['href']) for l in data['links']: bookmark = l['rel'] == 'bookmark' self.assertTrue(self.validate_link(l['href'], bookmark=bookmark, headers=self.headers)) def test_collection_links(self): targets = [] for id_ in range(5): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=id_) targets.append(target.uuid) data = self.get_json('/volume/targets/?limit=3', headers=self.headers) self.assertEqual(3, len(data['targets'])) next_marker = data['targets'][-1]['uuid'] self.assertIn(next_marker, data['next']) self.assertIn('volume/targets', data['next']) def test_collection_links_default_limit(self): cfg.CONF.set_override('max_limit', 3, 'api') targets = [] for id_ in range(5): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=id_) targets.append(target.uuid) data = self.get_json('/volume/targets', headers=self.headers) 
self.assertEqual(3, len(data['targets'])) next_marker = data['targets'][-1]['uuid'] self.assertIn(next_marker, data['next']) self.assertIn('volume/targets', data['next']) def test_collection_links_custom_fields(self): fields = 'uuid,extra' cfg.CONF.set_override('max_limit', 3, 'api') targets = [] for id_ in range(5): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=id_) targets.append(target.uuid) data = self.get_json('/volume/targets?fields=%s' % fields, headers=self.headers) self.assertEqual(3, len(data['targets'])) next_marker = data['targets'][-1]['uuid'] self.assertIn(next_marker, data['next']) self.assertIn('volume/targets', data['next']) self.assertIn('fields', data['next']) def test_get_collection_pagination_no_uuid(self): fields = 'boot_index' limit = 2 targets = [] for id_ in range(3): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=id_) targets.append(target) data = self.get_json( '/volume/targets?fields=%s&limit=%s' % (fields, limit), headers=self.headers) self.assertEqual(limit, len(data['targets'])) self.assertIn('marker=%s' % targets[limit - 1].uuid, data['next']) def test_collection_links_detail(self): targets = [] for id_ in range(5): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=id_) targets.append(target.uuid) data = self.get_json('/volume/targets?detail=True&limit=3', headers=self.headers) self.assertEqual(3, len(data['targets'])) next_marker = data['targets'][-1]['uuid'] self.assertIn(next_marker, data['next']) self.assertIn('volume/targets', data['next']) self.assertIn('detail=True', data['next']) def test_sort_key(self): targets = [] for id_ in range(3): target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), boot_index=id_) targets.append(target.uuid) data 
= self.get_json('/volume/targets?sort_key=uuid', headers=self.headers) uuids = [n['uuid'] for n in data['targets']] self.assertEqual(sorted(targets), uuids) def test_sort_key_invalid(self): invalid_keys_list = ['foo', 'extra', 'properties'] for invalid_key in invalid_keys_list: response = self.get_json('/volume/targets?sort_key=%s' % invalid_key, headers=self.headers, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(invalid_key, response.json['error_message']) @mock.patch.object(api_utils, 'get_rpc_node') def test_get_all_by_node_name_ok(self, mock_get_rpc_node): # GET /v1/volume/targets specifying node_name - success mock_get_rpc_node.return_value = self.node for i in range(5): if i < 3: node_id = self.node.id else: node_id = 100000 + i obj_utils.create_test_volume_target( self.context, node_id=node_id, uuid=uuidutils.generate_uuid(), boot_index=i) data = self.get_json("/volume/targets?node=%s" % 'test-node', headers=self.headers) self.assertEqual(3, len(data['targets'])) @mock.patch.object(api_utils, 'get_rpc_node') def test_detail_by_node_name_ok(self, mock_get_rpc_node): # GET /v1/volume/targets/?detail=True specifying node_name - success mock_get_rpc_node.return_value = self.node target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id) data = self.get_json('/volume/targets?detail=True&node=%s' % 'test-node', headers=self.headers) self.assertEqual(target.uuid, data['targets'][0]['uuid']) self.assertEqual(self.node.uuid, data['targets'][0]['node_uuid']) @mock.patch.object(rpcapi.ConductorAPI, 'update_volume_target') class TestPatch(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} def setUp(self): super(TestPatch, self).setUp() self.node = obj_utils.create_test_node(self.context) self.target = obj_utils.create_test_volume_target( self.context, node_id=self.node.id) p = 
mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = p.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(p.stop) @mock.patch.object(notification_utils, '_emit_api_notification') def test_update_byid(self, mock_notify, mock_upd): extra = {'foo': 'bar'} mock_upd.return_value = self.target mock_upd.return_value.extra = extra response = self.patch_json('/volume/targets/%s' % self.target.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(extra, response.json['extra']) kargs = mock_upd.call_args[0][1] self.assertEqual(extra, kargs.extra) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END, node_uuid=self.node.uuid)]) def test_update_byid_invalid_api_version(self, mock_upd): headers = {api_base.Version.string: str(api_v1.min_version())} response = self.patch_json('/volume/targets/%s' % self.target.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], headers=headers, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_update_not_found(self, mock_upd): uuid = uuidutils.generate_uuid() response = self.patch_json('/volume/targets/%s' % uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], headers=self.headers, expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_upd.called) def test_replace_singular(self, mock_upd): boot_index = 100 mock_upd.return_value = self.target mock_upd.return_value.boot_index = boot_index response = 
self.patch_json('/volume/targets/%s' % self.target.uuid, [{'path': '/boot_index', 'value': boot_index, 'op': 'replace'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(boot_index, response.json['boot_index']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(boot_index, kargs.boot_index) @mock.patch.object(notification_utils, '_emit_api_notification') def test_replace_boot_index_already_exist(self, mock_notify, mock_upd): boot_index = 100 mock_upd.side_effect = \ exception.VolumeTargetBootIndexAlreadyExists(boot_index=boot_index) response = self.patch_json('/volume/targets/%s' % self.target.uuid, [{'path': '/boot_index', 'value': boot_index, 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertTrue(response.json['error_message']) self.assertTrue(mock_upd.called) kargs = mock_upd.call_args[0][1] self.assertEqual(boot_index, kargs.boot_index) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR, node_uuid=self.node.uuid)]) def test_replace_invalid_power_state(self, mock_upd): mock_upd.side_effect = \ exception.InvalidStateRequested( action='volume target update', node=self.node.uuid, state='power on') response = self.patch_json('/volume/targets/%s' % self.target.uuid, [{'path': '/boot_index', 'value': 0, 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) self.assertTrue(mock_upd.called) 
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(0, kargs.boot_index)

    def test_replace_node_uuid(self, mock_upd):
        mock_upd.return_value = self.target
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_uuid',
                                     'value': self.node.uuid,
                                     'op': 'replace'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)

    def test_replace_node_uuid_invalid_type(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_uuid',
                                     'value': 123,
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertIn(b'Expected a UUID for node_uuid, but received 123.',
                      response.body)
        self.assertFalse(mock_upd.called)

    def test_add_node_uuid(self, mock_upd):
        mock_upd.return_value = self.target
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_uuid',
                                     'value': self.node.uuid,
                                     'op': 'add'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)

    def test_add_node_uuid_invalid_type(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_uuid',
                                     'value': 123,
                                     'op': 'add'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertIn(b'Expected a UUID for node_uuid, but received 123.',
                      response.body)
        self.assertFalse(mock_upd.called)

    def test_add_node_id(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_id',
                                     'value': '1',
                                     'op': 'add'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertFalse(mock_upd.called)

    def test_replace_node_id(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_id',
                                     'value': '1',
                                     'op': 'replace'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertFalse(mock_upd.called)

    def test_remove_node_id(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_id',
                                     'op': 'remove'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertFalse(mock_upd.called)

    def test_replace_non_existent_node_uuid(self, mock_upd):
        node_uuid = '12506333-a81c-4d59-9987-889ed5f8687b'
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/node_uuid',
                                     'value': node_uuid,
                                     'op': 'replace'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertIn(node_uuid, response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_replace_multi(self, mock_upd):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        self.target.extra = extra
        self.target.save()
        # mutate extra so we replace all of them
        extra = dict((k, extra[k] + 'x') for k in extra)
        patch = []
        for k in extra:
            patch.append({'path': '/extra/%s' % k,
                          'value': extra[k],
                          'op': 'replace'})
        mock_upd.return_value = self.target
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   patch, headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

    def test_remove_multi(self, mock_upd):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        self.target.extra = extra
        self.target.save()

        # Remove one item from the collection.
        extra.pop('foo1')
        mock_upd.return_value = self.target
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/extra/foo1',
                                     'op': 'remove'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

        # Remove the collection.
        extra = {}
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/extra', 'op': 'remove'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual({}, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

        # Assert nothing else was changed.
        self.assertEqual(self.target.uuid, response.json['uuid'])
        self.assertEqual(self.target.volume_type,
                         response.json['volume_type'])
        self.assertEqual(self.target.boot_index, response.json['boot_index'])

    def test_remove_non_existent_property_fail(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/extra/non-existent',
                                     'op': 'remove'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_remove_mandatory_field(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/boot_index',
                                     'op': 'remove'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_add_root(self, mock_upd):
        boot_index = 100
        mock_upd.return_value = self.target
        mock_upd.return_value.boot_index = boot_index
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/boot_index',
                                     'value': boot_index,
                                     'op': 'add'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(boot_index, response.json['boot_index'])
        self.assertTrue(mock_upd.called)
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(boot_index, kargs.boot_index)

    def test_add_root_non_existent(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_add_multi(self, mock_upd):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        patch = []
        for k in extra:
            patch.append({'path': '/extra/%s' % k,
                          'value': extra[k],
                          'op': 'add'})
        mock_upd.return_value = self.target
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   patch, headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

    def test_remove_uuid(self, mock_upd):
        response = self.patch_json('/volume/targets/%s' % self.target.uuid,
                                   [{'path': '/uuid', 'op': 'remove'}],
                                   headers=self.headers,
                                   expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)


class TestPost(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestPost, self).setUp()
        self.node = obj_utils.create_test_node(self.context)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    @mock.patch.object(timeutils, 'utcnow')
    def test_create_volume_target(self, mock_utcnow, mock_notify):
        pdict = post_get_test_volume_target()
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = test_time
        response = self.post_json('/volume/targets', pdict,
                                  headers=self.headers)
        self.assertEqual(http_client.CREATED, response.status_int)
        result = self.get_json('/volume/targets/%s' % pdict['uuid'],
                               headers=self.headers)
        self.assertEqual(pdict['uuid'], result['uuid'])
        self.assertFalse(result['updated_at'])
        return_created_at = timeutils.parse_isotime(
            result['created_at']).replace(tzinfo=None)
        self.assertEqual(test_time, return_created_at)
        # Check location header.
        self.assertIsNotNone(response.location)
        expected_location = '/v1/volume/targets/%s' % pdict['uuid']
        self.assertEqual(urlparse.urlparse(response.location).path,
                         expected_location)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'create',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.END,
                                      node_uuid=self.node.uuid)])

    def test_create_volume_target_invalid_api_version(self):
        pdict = post_get_test_volume_target()
        response = self.post_json(
            '/volume/targets', pdict,
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_create_volume_target_doesnt_contain_id(self):
        with mock.patch.object(
                self.dbapi, 'create_volume_target',
                wraps=self.dbapi.create_volume_target) as cp_mock:
            pdict = post_get_test_volume_target(extra={'foo': 123})
            self.post_json('/volume/targets', pdict, headers=self.headers)
            result = self.get_json('/volume/targets/%s' % pdict['uuid'],
                                   headers=self.headers)
            self.assertEqual(pdict['extra'], result['extra'])
            cp_mock.assert_called_once_with(mock.ANY)
            # Check that 'id' is not in first arg of positional args.
            self.assertNotIn('id', cp_mock.call_args[0][0])

    @mock.patch.object(notification_utils.LOG, 'exception', autospec=True)
    @mock.patch.object(notification_utils.LOG, 'warning', autospec=True)
    def test_create_volume_target_generate_uuid(self, mock_warning,
                                                mock_exception):
        pdict = post_get_test_volume_target()
        del pdict['uuid']
        response = self.post_json('/volume/targets', pdict,
                                  headers=self.headers)
        result = self.get_json('/volume/targets/%s' % response.json['uuid'],
                               headers=self.headers)
        self.assertEqual(pdict['boot_index'], result['boot_index'])
        self.assertTrue(uuidutils.is_uuid_like(result['uuid']))
        self.assertFalse(mock_warning.called)
        self.assertFalse(mock_exception.called)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    @mock.patch.object(objects.VolumeTarget, 'create')
    def test_create_volume_target_error(self, mock_create, mock_notify):
        mock_create.side_effect = Exception()
        tdict = post_get_test_volume_target()
        self.post_json('/volume/targets', tdict, headers=self.headers,
                       expect_errors=True)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'create',
                                      obj_fields.NotificationLevel.ERROR,
                                      obj_fields.NotificationStatus.ERROR,
                                      node_uuid=self.node.uuid)])

    def test_create_volume_target_valid_extra(self):
        pdict = post_get_test_volume_target(
            extra={'str': 'foo', 'int': 123, 'float': 0.1, 'bool': True,
                   'list': [1, 2], 'none': None, 'dict': {'cat': 'meow'}})
        self.post_json('/volume/targets', pdict, headers=self.headers)
        result = self.get_json('/volume/targets/%s' % pdict['uuid'],
                               headers=self.headers)
        self.assertEqual(pdict['extra'], result['extra'])

    def test_create_volume_target_no_mandatory_field_type(self):
        pdict = post_get_test_volume_target()
        del pdict['volume_type']
        response = self.post_json('/volume/targets', pdict,
                                  headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_volume_target_no_mandatory_field_value(self):
        pdict = post_get_test_volume_target()
        del pdict['boot_index']
        response = self.post_json('/volume/targets', pdict,
                                  headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_volume_target_no_mandatory_field_node_uuid(self):
        pdict = post_get_test_volume_target()
        del pdict['node_uuid']
        response = self.post_json('/volume/targets', pdict,
                                  headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_volume_target_invalid_node_uuid_format(self):
        pdict = post_get_test_volume_target(node_uuid=123)
        response = self.post_json('/volume/targets', pdict,
                                  headers=self.headers, expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])
        self.assertIn(b'Expected a UUID but received 123.', response.body)

    def test_node_uuid_to_node_id_mapping(self):
        pdict = post_get_test_volume_target(node_uuid=self.node['uuid'])
        self.post_json('/volume/targets', pdict, headers=self.headers)
        # GET doesn't return the node_id; it's an internal value
        target = self.dbapi.get_volume_target_by_uuid(pdict['uuid'])
        self.assertEqual(self.node['id'], target.node_id)

    def test_create_volume_target_node_uuid_not_found(self):
        pdict = post_get_test_volume_target(
            node_uuid='1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e')
        response = self.post_json('/volume/targets', pdict,
                                  headers=self.headers, expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])


@mock.patch.object(rpcapi.ConductorAPI, 'destroy_volume_target')
class TestDelete(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestDelete, self).setUp()
        self.node = obj_utils.create_test_node(self.context)
        self.target = obj_utils.create_test_volume_target(
            self.context, node_id=self.node.id)
        gtf = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for')
        self.mock_gtf = gtf.start()
        self.mock_gtf.return_value = 'test-topic'
        self.addCleanup(gtf.stop)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    def test_delete_volume_target_byid(self, mock_notify, mock_dvc):
        self.delete('/volume/targets/%s' % self.target.uuid,
                    headers=self.headers, expect_errors=True)
        self.assertTrue(mock_dvc.called)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'delete',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.END,
                                      node_uuid=self.node.uuid)])

    def test_delete_volume_target_byid_invalid_api_version(self, mock_dvc):
        headers = {api_base.Version.string: str(api_v1.min_version())}
        response = self.delete('/volume/targets/%s' % self.target.uuid,
                               headers=headers, expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    def test_delete_volume_target_node_locked(self, mock_notify, mock_dvc):
        self.node.reserve(self.context, 'fake', self.node.uuid)
        mock_dvc.side_effect = exception.NodeLocked(node='fake-node',
                                                    host='fake-host')
        ret = self.delete('/volume/targets/%s' % self.target.uuid,
                          headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.CONFLICT, ret.status_code)
        self.assertTrue(ret.json['error_message'])
        self.assertTrue(mock_dvc.called)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'delete',
                                      obj_fields.NotificationLevel.ERROR,
                                      obj_fields.NotificationStatus.ERROR,
                                      node_uuid=self.node.uuid)])

    def test_delete_volume_target_invalid_power_state(self, mock_dvc):
        mock_dvc.side_effect = exception.InvalidStateRequested(
            action='volume target deletion', node=self.node.uuid,
            state='power on')
        ret = self.delete('/volume/targets/%s' % self.target.uuid,
                          headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, ret.status_code)
        self.assertTrue(ret.json['error_message'])
        self.assertTrue(mock_dvc.called)

ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_notification_utils.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for ironic-api notification utilities.""" import mock from oslo_utils import uuidutils from ironic.api.controllers.v1 import notification_utils as notif_utils from ironic.api import types as atypes from ironic.objects import fields from ironic.objects import notification from ironic.tests import base as tests_base from ironic.tests.unit.objects import utils as obj_utils class APINotifyTestCase(tests_base.TestCase): def setUp(self): super(APINotifyTestCase, self).setUp() self.node_notify_mock = mock.Mock() self.port_notify_mock = mock.Mock() self.chassis_notify_mock = mock.Mock() self.portgroup_notify_mock = mock.Mock() self.node_notify_mock.__name__ = 'NodeCRUDNotification' self.port_notify_mock.__name__ = 'PortCRUDNotification' self.chassis_notify_mock.__name__ = 'ChassisCRUDNotification' self.portgroup_notify_mock.__name__ = 'PortgroupCRUDNotification' _notification_mocks = { 'chassis': (self.chassis_notify_mock, notif_utils.CRUD_NOTIFY_OBJ['chassis'][1]), 'node': (self.node_notify_mock, notif_utils.CRUD_NOTIFY_OBJ['node'][1]), 'port': (self.port_notify_mock, notif_utils.CRUD_NOTIFY_OBJ['port'][1]), 'portgroup': (self.portgroup_notify_mock, notif_utils.CRUD_NOTIFY_OBJ['portgroup'][1]) } self.addCleanup(self._restore, notif_utils.CRUD_NOTIFY_OBJ.copy()) notif_utils.CRUD_NOTIFY_OBJ = _notification_mocks def _restore(self, value): notif_utils.CRUD_NOTIFY_OBJ = value def test_common_params(self): self.config(host='fake-host') node = obj_utils.get_test_node(self.context) test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.SUCCESS notif_utils._emit_api_notification(self.context, node, 'create', test_level, test_status, chassis_uuid=None) init_kwargs = self.node_notify_mock.call_args[1] publisher = init_kwargs['publisher'] event_type = init_kwargs['event_type'] level = init_kwargs['level'] self.assertEqual('fake-host', publisher.host) self.assertEqual('ironic-api', publisher.service) self.assertEqual('create', 
event_type.action) self.assertEqual(test_status, event_type.status) self.assertEqual(test_level, level) def test_node_notification(self): chassis_uuid = uuidutils.generate_uuid() node = obj_utils.get_test_node(self.context, instance_info={'foo': 'baz'}, driver_info={'param': 104}) test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.SUCCESS notif_utils._emit_api_notification(self.context, node, 'create', test_level, test_status, chassis_uuid=chassis_uuid) init_kwargs = self.node_notify_mock.call_args[1] payload = init_kwargs['payload'] event_type = init_kwargs['event_type'] self.assertEqual('node', event_type.object) self.assertEqual(node.uuid, payload.uuid) self.assertEqual({'foo': 'baz'}, payload.instance_info) self.assertEqual({'param': 104}, payload.driver_info) self.assertEqual(chassis_uuid, payload.chassis_uuid) def test_node_notification_mask_secrets(self): test_info = {'password': 'secret123', 'some_value': 'fake-value'} node = obj_utils.get_test_node(self.context, driver_info=test_info) notification.mask_secrets(node) self.assertEqual('******', node.driver_info['password']) self.assertEqual('fake-value', node.driver_info['some_value']) def test_notification_uuid_unset(self): node = obj_utils.get_test_node(self.context) test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.SUCCESS notif_utils._emit_api_notification(self.context, node, 'create', test_level, test_status, chassis_uuid=atypes.Unset) init_kwargs = self.node_notify_mock.call_args[1] payload = init_kwargs['payload'] self.assertIsNone(payload.chassis_uuid) def test_chassis_notification(self): chassis = obj_utils.get_test_chassis(self.context, extra={'foo': 'boo'}, description='bare01') test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.SUCCESS notif_utils._emit_api_notification(self.context, chassis, 'create', test_level, test_status) init_kwargs = self.chassis_notify_mock.call_args[1] payload = 
init_kwargs['payload'] event_type = init_kwargs['event_type'] self.assertEqual('chassis', event_type.object) self.assertEqual(chassis.uuid, payload.uuid) self.assertEqual({'foo': 'boo'}, payload.extra) self.assertEqual('bare01', payload.description) def test_port_notification(self): node_uuid = uuidutils.generate_uuid() portgroup_uuid = uuidutils.generate_uuid() port = obj_utils.get_test_port(self.context, address='11:22:33:77:88:99', local_link_connection={'a': 25}, extra={'as': 34}, pxe_enabled=False) test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.SUCCESS notif_utils._emit_api_notification(self.context, port, 'create', test_level, test_status, node_uuid=node_uuid, portgroup_uuid=portgroup_uuid) init_kwargs = self.port_notify_mock.call_args[1] payload = init_kwargs['payload'] event_type = init_kwargs['event_type'] self.assertEqual('port', event_type.object) self.assertEqual(port.uuid, payload.uuid) self.assertEqual(node_uuid, payload.node_uuid) self.assertEqual(portgroup_uuid, payload.portgroup_uuid) self.assertEqual('11:22:33:77:88:99', payload.address) self.assertEqual({'a': 25}, payload.local_link_connection) self.assertEqual({'as': 34}, payload.extra) self.assertIs(False, payload.pxe_enabled) def test_portgroup_notification(self): node_uuid = uuidutils.generate_uuid() portgroup = obj_utils.get_test_portgroup(self.context, address='22:55:88:AA:BB:99', name='new01', mode='mode2', extra={'bs': 11}) test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.SUCCESS notif_utils._emit_api_notification(self.context, portgroup, 'create', test_level, test_status, node_uuid=node_uuid) init_kwargs = self.portgroup_notify_mock.call_args[1] payload = init_kwargs['payload'] event_type = init_kwargs['event_type'] self.assertEqual('portgroup', event_type.object) self.assertEqual(portgroup.uuid, payload.uuid) self.assertEqual(node_uuid, payload.node_uuid) self.assertEqual(portgroup.address, payload.address) 
self.assertEqual(portgroup.name, payload.name) self.assertEqual(portgroup.mode, payload.mode) self.assertEqual(portgroup.extra, payload.extra) self.assertEqual(portgroup.standalone_ports_supported, payload.standalone_ports_supported) @mock.patch('ironic.objects.node.NodeMaintenanceNotification') def test_node_maintenance_notification(self, maintenance_mock): maintenance_mock.__name__ = 'NodeMaintenanceNotification' node = obj_utils.get_test_node(self.context, maintenance=True, maintenance_reason='test reason') test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.START notif_utils._emit_api_notification(self.context, node, 'maintenance_set', test_level, test_status) init_kwargs = maintenance_mock.call_args[1] payload = init_kwargs['payload'] event_type = init_kwargs['event_type'] self.assertEqual('node', event_type.object) self.assertEqual(node.uuid, payload.uuid) self.assertEqual(True, payload.maintenance) self.assertEqual('test reason', payload.maintenance_reason) @mock.patch.object(notification.NotificationBase, 'emit') def test_emit_maintenance_notification(self, emit_mock): node = obj_utils.get_test_node(self.context) test_level = fields.NotificationLevel.INFO test_status = fields.NotificationStatus.START notif_utils._emit_api_notification(self.context, node, 'maintenance_set', test_level, test_status) emit_mock.assert_called_once_with(self.context) ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_conductor.py0000664000175000017500000002475313652514273025607 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Tests for the API /conductors/ methods. """ import datetime from http import client as http_client import mock from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils from ironic.api.controllers import base as api_base from ironic.api.controllers import v1 as api_v1 from ironic.tests.unit.api import base as test_api_base from ironic.tests.unit.objects import utils as obj_utils class TestListConductors(test_api_base.BaseApiTest): def test_empty(self): data = self.get_json( '/conductors', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual([], data['conductors']) def test_list(self): obj_utils.create_test_conductor(self.context, hostname='why care') obj_utils.create_test_conductor(self.context, hostname='why not') data = self.get_json( '/conductors', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(2, len(data['conductors'])) for c in data['conductors']: self.assertIn('hostname', c) self.assertIn('conductor_group', c) self.assertIn('alive', c) self.assertNotIn('drivers', c) self.assertEqual(data['conductors'][0]['hostname'], 'why care') self.assertEqual(data['conductors'][1]['hostname'], 'why not') def test_list_with_detail(self): obj_utils.create_test_conductor(self.context, hostname='why care') obj_utils.create_test_conductor(self.context, hostname='why not') data = self.get_json( '/conductors?detail=true', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(2, len(data['conductors'])) for c in data['conductors']: self.assertIn('hostname', c) self.assertIn('drivers', c) self.assertIn('conductor_group', c) self.assertIn('alive', c) self.assertIn('drivers', c) self.assertEqual(data['conductors'][0]['hostname'], 'why care') self.assertEqual(data['conductors'][1]['hostname'], 'why not') def test_list_with_invalid_api(self): response = 
self.get_json( '/conductors', headers={api_base.Version.string: '1.48'}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_get_one(self): obj_utils.create_test_conductor(self.context, hostname='rocky.rocks') data = self.get_json( '/conductors/rocky.rocks', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertIn('hostname', data) self.assertIn('drivers', data) self.assertIn('conductor_group', data) self.assertIn('alive', data) self.assertIn('drivers', data) self.assertEqual(data['hostname'], 'rocky.rocks') self.assertTrue(data['alive']) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_one_conductor_offline(self, mock_utcnow): self.config(heartbeat_timeout=10, group='conductor') _time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = _time obj_utils.create_test_conductor(self.context, hostname='rocky.rocks') mock_utcnow.return_value = _time + datetime.timedelta(seconds=30) data = self.get_json( '/conductors/rocky.rocks', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertIn('hostname', data) self.assertIn('drivers', data) self.assertIn('conductor_group', data) self.assertIn('alive', data) self.assertIn('drivers', data) self.assertEqual(data['hostname'], 'rocky.rocks') self.assertFalse(data['alive']) def test_get_one_with_invalid_api(self): response = self.get_json( '/conductors/rocky.rocks', headers={api_base.Version.string: '1.48'}, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_get_one_custom_fields(self): obj_utils.create_test_conductor(self.context, hostname='rocky.rocks') fields = 'hostname,alive' data = self.get_json( '/conductors/rocky.rocks?fields=%s' % fields, headers={api_base.Version.string: str(api_v1.max_version())}) self.assertItemsEqual(['hostname', 'alive', 'links'], data) def test_get_collection_custom_fields(self): obj_utils.create_test_conductor(self.context, hostname='rocky.rocks') 
obj_utils.create_test_conductor(self.context, hostname='stein.rocks') fields = 'hostname,alive' data = self.get_json( '/conductors?fields=%s' % fields, headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(2, len(data['conductors'])) for c in data['conductors']: self.assertItemsEqual(['hostname', 'alive', 'links'], c) def test_get_custom_fields_invalid_fields(self): obj_utils.create_test_conductor(self.context, hostname='rocky.rocks') fields = 'hostname,spongebob' response = self.get_json( '/conductors/rocky.rocks?fields=%s' % fields, headers={api_base.Version.string: str(api_v1.max_version())}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn('spongebob', response.json['error_message']) def _test_links(self, public_url=None): cfg.CONF.set_override('public_endpoint', public_url, 'api') obj_utils.create_test_conductor(self.context, hostname='rocky.rocks') headers = {api_base.Version.string: str(api_v1.max_version())} data = self.get_json( '/conductors/rocky.rocks', headers=headers) self.assertIn('links', data) self.assertEqual(2, len(data['links'])) self.assertIn('rocky.rocks', data['links'][0]['href']) for l in data['links']: bookmark = l['rel'] == 'bookmark' self.assertTrue(self.validate_link(l['href'], bookmark=bookmark, headers=headers)) if public_url is not None: expected = [{'href': '%s/v1/conductors/rocky.rocks' % public_url, 'rel': 'self'}, {'href': '%s/conductors/rocky.rocks' % public_url, 'rel': 'bookmark'}] for i in expected: self.assertIn(i, data['links']) def test_links(self): self._test_links() def test_links_public_url(self): self._test_links(public_url='http://foo') def test_collection_links(self): conductors = [] for id in range(5): hostname = uuidutils.generate_uuid() conductor = obj_utils.create_test_conductor(self.context, hostname=hostname) conductors.append(conductor.hostname) data = self.get_json( 
            '/conductors/?limit=3',
            headers={api_base.Version.string: str(api_v1.max_version())})
        self.assertEqual(3, len(data['conductors']))
        next_marker = data['conductors'][-1]['hostname']
        self.assertIn(next_marker, data['next'])

    def test_collection_links_default_limit(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        conductors = []
        for id in range(5):
            hostname = uuidutils.generate_uuid()
            conductor = obj_utils.create_test_conductor(self.context,
                                                        hostname=hostname)
            conductors.append(conductor.hostname)
        data = self.get_json(
            '/conductors',
            headers={api_base.Version.string: str(api_v1.max_version())})
        self.assertEqual(3, len(data['conductors']))
        next_marker = data['conductors'][-1]['hostname']
        self.assertIn(next_marker, data['next'])

    def test_collection_links_custom_fields(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        conductors = []
        fields = 'hostname,alive'
        for id in range(5):
            hostname = uuidutils.generate_uuid()
            conductor = obj_utils.create_test_conductor(self.context,
                                                        hostname=hostname)
            conductors.append(conductor.hostname)
        data = self.get_json(
            '/conductors?fields=%s' % fields,
            headers={api_base.Version.string: str(api_v1.max_version())})
        self.assertEqual(3, len(data['conductors']))
        next_marker = data['conductors'][-1]['hostname']
        self.assertIn(next_marker, data['next'])
        self.assertIn('fields', data['next'])

    def test_sort_key(self):
        conductors = []
        for id in range(5):
            hostname = uuidutils.generate_uuid()
            conductor = obj_utils.create_test_conductor(self.context,
                                                        hostname=hostname)
            conductors.append(conductor.hostname)
        data = self.get_json(
            '/conductors?sort_key=hostname',
            headers={api_base.Version.string: str(api_v1.max_version())})
        hosts = [n['hostname'] for n in data['conductors']]
        self.assertEqual(sorted(conductors), hosts)

    def test_sort_key_invalid(self):
        invalid_keys_list = ['alive', 'drivers']
        headers = {api_base.Version.string: str(api_v1.max_version())}
        for invalid_key in invalid_keys_list:
            response = self.get_json('/conductors?sort_key=%s' % invalid_key,
                                     headers=headers, expect_errors=True)
            self.assertEqual(http_client.BAD_REQUEST, response.status_int)
            self.assertEqual('application/json', response.content_type)
            self.assertIn(invalid_key, response.json['error_message'])
ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_allocation.py0000664000175000017500000017247413652514273025740 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests for the API /allocations/ methods.
"""

import datetime
from http import client as http_client
from urllib import parse as urlparse

import fixtures
import mock
from oslo_config import cfg
from oslo_utils import timeutils
from oslo_utils import uuidutils

from ironic.api.controllers import base as api_base
from ironic.api.controllers import v1 as api_v1
from ironic.api.controllers.v1 import allocation as api_allocation
from ironic.api.controllers.v1 import notification_utils
from ironic.api import types as atypes
from ironic.common import exception
from ironic.common import policy
from ironic.conductor import rpcapi
from ironic import objects
from ironic.objects import fields as obj_fields
from ironic.tests import base
from ironic.tests.unit.api import base as test_api_base
from ironic.tests.unit.api import utils as apiutils
from ironic.tests.unit.objects import utils as obj_utils


class TestAllocationObject(base.TestCase):

    def test_allocation_init(self):
        allocation_dict = apiutils.allocation_post_data(node_id=None)
        del allocation_dict['extra']
        allocation = api_allocation.Allocation(**allocation_dict)
        self.assertEqual(atypes.Unset, allocation.extra)


class TestListAllocations(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestListAllocations, self).setUp()
        self.node = obj_utils.create_test_node(self.context, name='node-1')

    def test_empty(self):
        data = self.get_json('/allocations', headers=self.headers)
        self.assertEqual([], data['allocations'])

    def test_one(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        data = self.get_json('/allocations', headers=self.headers)
        self.assertEqual(allocation.uuid, data['allocations'][0]["uuid"])
        self.assertEqual(allocation.name, data['allocations'][0]['name'])
        self.assertEqual({}, data['allocations'][0]["extra"])
        self.assertEqual(self.node.uuid, data['allocations'][0]["node_uuid"])
        self.assertEqual(allocation.owner, data['allocations'][0]["owner"])
        # never expose the node_id
        self.assertNotIn('node_id', data['allocations'][0])

    def test_get_one(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        data = self.get_json('/allocations/%s' % allocation.uuid,
                             headers=self.headers)
        self.assertEqual(allocation.uuid, data['uuid'])
        self.assertEqual({}, data["extra"])
        self.assertEqual(self.node.uuid, data["node_uuid"])
        self.assertEqual(allocation.owner, data["owner"])
        # never expose the node_id
        self.assertNotIn('node_id', data)

    def test_get_one_with_json(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        data = self.get_json('/allocations/%s.json' % allocation.uuid,
                             headers=self.headers)
        self.assertEqual(allocation.uuid, data['uuid'])

    def test_get_one_with_json_in_name(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      name='pg.json',
                                                      node_id=self.node.id)
        data = self.get_json('/allocations/%s' % allocation.uuid,
                             headers=self.headers)
        self.assertEqual(allocation.uuid, data['uuid'])

    def test_get_one_with_suffix(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      name='pg.1',
                                                      node_id=self.node.id)
        data = self.get_json('/allocations/%s' % allocation.uuid,
                             headers=self.headers)
        self.assertEqual(allocation.uuid, data['uuid'])

    def test_get_one_custom_fields(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        fields = 'resource_class,extra'
        data = self.get_json(
            '/allocations/%s?fields=%s' % (allocation.uuid, fields),
            headers=self.headers)
        # We always append "links"
        self.assertItemsEqual(['resource_class', 'extra', 'links'], data)

    def test_get_collection_custom_fields(self):
        fields = 'uuid,extra'
        for i in range(3):
            obj_utils.create_test_allocation(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % i)

        data = self.get_json(
            '/allocations?fields=%s' % fields,
            headers=self.headers)

        self.assertEqual(3, len(data['allocations']))
        for allocation in data['allocations']:
            # We always append "links"
            self.assertItemsEqual(['uuid', 'extra', 'links'], allocation)

    def test_get_custom_fields_invalid_fields(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        fields = 'uuid,spongebob'
        response = self.get_json(
            '/allocations/%s?fields=%s' % (allocation.uuid, fields),
            headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertIn('spongebob', response.json['error_message'])

    def test_get_one_invalid_api_version(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        response = self.get_json(
            '/allocations/%s' % (allocation.uuid),
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_get_one_invalid_api_version_without_check(self):
        # Invalid name, but the check happens after the microversion check.
        response = self.get_json(
            '/allocations/ba!na!na!',
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_many(self):
        allocations = []
        for id_ in range(5):
            allocation = obj_utils.create_test_allocation(
                self.context, node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % id_)
            allocations.append(allocation.uuid)
        data = self.get_json('/allocations', headers=self.headers)
        self.assertEqual(len(allocations), len(data['allocations']))
        uuids = [n['uuid'] for n in data['allocations']]
        self.assertCountEqual(allocations, uuids)

    def test_links(self):
        uuid = uuidutils.generate_uuid()
        obj_utils.create_test_allocation(self.context,
                                         uuid=uuid,
                                         node_id=self.node.id)
        data = self.get_json('/allocations/%s' % uuid, headers=self.headers)
        self.assertIn('links', data)
        self.assertEqual(2, len(data['links']))
        self.assertIn(uuid, data['links'][0]['href'])
        for l in data['links']:
            bookmark = l['rel'] == 'bookmark'
            self.assertTrue(self.validate_link(l['href'],
                                               bookmark=bookmark,
                                               headers=self.headers))

    def test_collection_links(self):
        allocations = []
        for id_ in range(5):
            allocation = obj_utils.create_test_allocation(
                self.context,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % id_)
            allocations.append(allocation.uuid)
        data = self.get_json('/allocations/?limit=3', headers=self.headers)
        self.assertEqual(3, len(data['allocations']))

        next_marker = data['allocations'][-1]['uuid']
        self.assertIn(next_marker, data['next'])

    def test_collection_links_default_limit(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        allocations = []
        for id_ in range(5):
            allocation = obj_utils.create_test_allocation(
                self.context,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % id_)
            allocations.append(allocation.uuid)
        data = self.get_json('/allocations', headers=self.headers)
        self.assertEqual(3, len(data['allocations']))

        next_marker = data['allocations'][-1]['uuid']
        self.assertIn(next_marker, data['next'])

    def test_collection_links_custom_fields(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        fields = 'uuid,extra'
        allocations = []
        for i in range(5):
            allocation = obj_utils.create_test_allocation(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % i)
            allocations.append(allocation.uuid)

        data = self.get_json(
            '/allocations?fields=%s' % fields,
            headers=self.headers)

        self.assertEqual(3, len(data['allocations']))
        next_marker = data['allocations'][-1]['uuid']
        self.assertIn(next_marker, data['next'])
        self.assertIn('fields', data['next'])

    def test_get_collection_pagination_no_uuid(self):
        fields = 'node_uuid'
        limit = 2
        allocations = []
        for id_ in range(3):
            allocation = obj_utils.create_test_allocation(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % id_)
            allocations.append(allocation)

        data = self.get_json(
            '/allocations?fields=%s&limit=%s' % (fields, limit),
            headers=self.headers)

        self.assertEqual(limit, len(data['allocations']))
        self.assertIn('marker=%s' % allocations[limit - 1].uuid, data['next'])

    def test_allocation_get_all_invalid_api_version(self):
        obj_utils.create_test_allocation(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            name='allocation_1')
        response = self.get_json('/allocations',
                                 headers={api_base.Version.string: '1.14'},
                                 expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    @mock.patch.object(policy, 'authorize', spec=True)
    def test_allocation_get_all_forbidden(self, mock_authorize):
        def mock_authorize_function(rule, target, creds):
            raise exception.HTTPForbidden(resource='fake')
        mock_authorize.side_effect = mock_authorize_function

        response = self.get_json('/allocations', expect_errors=True,
                                 headers={
                                     api_base.Version.string: '1.60',
                                     'X-Project-Id': '12345'
                                 })
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    @mock.patch.object(policy, 'authorize', spec=True)
    def test_allocation_get_all_forbidden_no_project(self, mock_authorize):
        def mock_authorize_function(rule, target, creds):
            if rule == 'baremetal:allocation:list_all':
                raise exception.HTTPForbidden(resource='fake')
            return True
        mock_authorize.side_effect = mock_authorize_function

        response = self.get_json('/allocations', expect_errors=True,
                                 headers={
                                     api_base.Version.string: '1.59',
                                 })
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    @mock.patch.object(policy, 'authorize', spec=True)
    def test_allocation_get_all_forbid_owner_proj_mismatch(
            self, mock_authorize):
        def mock_authorize_function(rule, target, creds):
            if rule == 'baremetal:allocation:list_all':
                raise exception.HTTPForbidden(resource='fake')
            return True
        mock_authorize.side_effect = mock_authorize_function

        response = self.get_json('/allocations?owner=54321',
                                 expect_errors=True,
                                 headers={
                                     api_base.Version.string: '1.60',
                                     'X-Project-Id': '12345'
                                 })
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    @mock.patch.object(policy, 'authorize', spec=True)
    def test_allocation_get_all_non_admin(self, mock_authorize):
        def mock_authorize_function(rule, target, creds):
            if rule == 'baremetal:allocation:list_all':
                raise exception.HTTPForbidden(resource='fake')
            return True
        mock_authorize.side_effect = mock_authorize_function

        allocations = []
        for id in range(5):
            allocation = obj_utils.create_test_allocation(
                self.context,
                uuid=uuidutils.generate_uuid(),
                owner='12345')
            allocations.append(allocation.uuid)

        for id in range(2):
            allocation = obj_utils.create_test_allocation(
                self.context,
                uuid=uuidutils.generate_uuid())

        data = self.get_json('/allocations', headers={
            api_base.Version.string: '1.60',
            'X-Project-Id': '12345'})
        self.assertEqual(len(allocations), len(data['allocations']))
        uuids = [n['uuid'] for n in data['allocations']]
        self.assertEqual(sorted(allocations), sorted(uuids))

    def test_sort_key(self):
        allocations = []
        for id_ in range(3):
            allocation = obj_utils.create_test_allocation(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % id_)
            allocations.append(allocation.uuid)
        data = self.get_json('/allocations?sort_key=uuid',
                             headers=self.headers)
        uuids = [n['uuid'] for n in data['allocations']]
        self.assertEqual(sorted(allocations), uuids)

    def test_sort_key_invalid(self):
        invalid_keys_list = ['foo', 'extra', 'internal_info', 'properties']
        for invalid_key in invalid_keys_list:
            response = self.get_json('/allocations?sort_key=%s' % invalid_key,
                                     expect_errors=True, headers=self.headers)
            self.assertEqual(http_client.BAD_REQUEST, response.status_int)
            self.assertEqual('application/json', response.content_type)
            self.assertIn(invalid_key, response.json['error_message'])

    def test_sort_key_allowed(self):
        allocation_uuids = []
        for id_ in range(3, 0, -1):
            allocation = obj_utils.create_test_allocation(
                self.context,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % id_)
            allocation_uuids.append(allocation.uuid)
        allocation_uuids.reverse()
        data = self.get_json('/allocations?sort_key=name',
                             headers=self.headers)
        data_uuids = [p['uuid'] for p in data['allocations']]
        self.assertEqual(allocation_uuids, data_uuids)

    def test_get_all_by_state(self):
        for i in range(5):
            if i < 3:
                state = 'allocating'
            else:
                state = 'active'
            obj_utils.create_test_allocation(
                self.context,
                state=state,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % i)
        data = self.get_json("/allocations?state=allocating",
                             headers=self.headers)
        self.assertEqual(3, len(data['allocations']))

    def test_get_all_by_owner(self):
        for i in range(5):
            if i < 3:
                owner = '12345'
            else:
                owner = '54321'
            obj_utils.create_test_allocation(
                self.context,
                owner=owner,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % i)
        data = self.get_json("/allocations?owner=12345",
                             headers=self.headers)
        self.assertEqual(3, len(data['allocations']))

    def test_get_all_by_owner_not_allowed(self):
        response = self.get_json("/allocations?owner=12345",
                                 headers={api_base.Version.string: '1.59'},
                                 expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_code)
        self.assertTrue(response.json['error_message'])

    def test_get_all_by_node_name(self):
        for i in range(5):
            if i < 3:
                node_id = self.node.id
            else:
                node_id = 100000 + i
            obj_utils.create_test_allocation(
                self.context,
                node_id=node_id,
                uuid=uuidutils.generate_uuid(),
                name='allocation%s' % i)
        data = self.get_json("/allocations?node=%s" % self.node.name,
                             headers=self.headers)
        self.assertEqual(3, len(data['allocations']))

    def test_get_all_by_node_uuid(self):
        obj_utils.create_test_allocation(self.context, node_id=self.node.id)
        data = self.get_json('/allocations?node=%s' % (self.node.uuid),
                             headers=self.headers)
        self.assertEqual(1, len(data['allocations']))

    def test_get_all_by_non_existing_node(self):
        obj_utils.create_test_allocation(self.context, node_id=self.node.id)
        response = self.get_json('/allocations?node=banana',
                                 headers=self.headers, expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_get_by_node_resource(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        data = self.get_json('/nodes/%s/allocation' % self.node.uuid,
                             headers=self.headers)
        self.assertEqual(allocation.uuid, data['uuid'])
        self.assertEqual({}, data["extra"])
        self.assertEqual(self.node.uuid, data["node_uuid"])

    def test_get_by_node_resource_invalid_api_version(self):
        obj_utils.create_test_allocation(self.context, node_id=self.node.id)
        response = self.get_json(
            '/nodes/%s/allocation' % self.node.uuid,
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_get_by_node_resource_with_fields(self):
        obj_utils.create_test_allocation(self.context, node_id=self.node.id)
        data = self.get_json('/nodes/%s/allocation?fields=name,extra' %
                             self.node.uuid,
                             headers=self.headers)
        self.assertNotIn('uuid', data)
        self.assertIn('name', data)
        self.assertEqual({}, data["extra"])

    def test_get_by_node_resource_and_id(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        response = self.get_json('/nodes/%s/allocation/%s'
                                 % (self.node.uuid, allocation.uuid),
                                 headers=self.headers, expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int)

    def test_by_node_resource_not_existed(self):
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid())
        res = self.get_json('/node/%s/allocation' % node.uuid,
                            expect_errors=True, headers=self.headers)
        self.assertEqual(http_client.NOT_FOUND, res.status_code)

    def test_by_node_invalid_node(self):
        res = self.get_json('/node/%s/allocation' % uuidutils.generate_uuid(),
                            expect_errors=True, headers=self.headers)
        self.assertEqual(http_client.NOT_FOUND, res.status_code)

    def test_allocation_owner_hidden_in_lower_version(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id)
        data = self.get_json(
            '/allocations/%s' % allocation.uuid,
            headers={api_base.Version.string: '1.59'})
        self.assertNotIn('owner', data)
        data = self.get_json(
            '/allocations/%s' % allocation.uuid,
            headers=self.headers)
        self.assertIn('owner', data)

    def test_allocation_owner_null_field(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id,
                                                      owner=None)
        data = self.get_json('/allocations/%s' % allocation.uuid,
                             headers=self.headers)
        self.assertIsNone(data['owner'])

    def test_allocation_owner_present(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id,
                                                      owner='12345')
        data = self.get_json('/allocations/%s' % allocation.uuid,
                             headers=self.headers)
        self.assertEqual(data['owner'], '12345')

    def test_get_owner_field(self):
        allocation = obj_utils.create_test_allocation(self.context,
                                                      node_id=self.node.id,
                                                      owner='12345')
        fields = 'owner'
        response = self.get_json(
            '/allocations/%s?fields=%s' % (allocation.uuid, fields),
            headers=self.headers)
        self.assertIn('owner', response)


class TestPatch(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestPatch, self).setUp()
        self.allocation = obj_utils.create_test_allocation(self.context)

    def test_update_not_allowed(self):
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/extra/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   expect_errors=True,
                                   headers={api_base.Version.string: '1.56'})
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int)

    def test_update_not_found(self):
        uuid = uuidutils.generate_uuid()
        response = self.patch_json('/allocations/%s' % uuid,
                                   [{'path': '/name',
                                     'value': 'b',
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_add(self):
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/extra/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_int)

    def test_add_non_existent(self):
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_add_multi(self):
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/extra/foo1',
                                     'value': 'bar1',
                                     'op': 'add'},
                                    {'path': '/extra/foo2',
                                     'value': 'bar2',
                                     'op': 'add'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/allocations/%s' % self.allocation.uuid,
                               headers=self.headers)
        expected = {"foo1": "bar1", "foo2": "bar2"}
        self.assertEqual(expected, result['extra'])

    def test_replace_invalid_name(self):
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/name',
                                     'value': '[test]',
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    @mock.patch.object(notification_utils, '_emit_api_notification')
    @mock.patch.object(timeutils, 'utcnow')
    def test_replace_singular(self, mock_utcnow, mock_notify):
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = test_time
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/name',
                                     'value': 'test', 'op': 'replace'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/allocations/%s' % self.allocation.uuid,
                               headers=self.headers)
        self.assertEqual('test', result['name'])
        return_updated_at = timeutils.parse_isotime(
            result['updated_at']).replace(tzinfo=None)
        self.assertEqual(test_time, return_updated_at)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START),
                                      mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.END)])

    def test_replace_name_with_none(self):
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/name',
                                     'value': None, 'op': 'replace'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/allocations/%s' % self.allocation.uuid,
                               headers=self.headers)
        self.assertIsNone(result['name'])

    @mock.patch.object(notification_utils, '_emit_api_notification')
    @mock.patch.object(objects.Allocation, 'save')
    def test_update_error(self, mock_save, mock_notify):
        mock_save.side_effect = Exception()
        allocation = obj_utils.create_test_allocation(self.context)
        self.patch_json('/allocations/%s' % allocation.uuid,
                        [{'path': '/name', 'value': 'new', 'op': 'replace'}],
                        expect_errors=True, headers=self.headers)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START),
                                      mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.ERROR,
                                      obj_fields.NotificationStatus.ERROR)])

    def test_replace_multi(self):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        allocation = obj_utils.create_test_allocation(
            self.context, extra=extra,
            uuid=uuidutils.generate_uuid())
        new_value = 'new value'
        response = self.patch_json('/allocations/%s' % allocation.uuid,
                                   [{'path': '/extra/foo2',
                                     'value': new_value, 'op': 'replace'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/allocations/%s' % allocation.uuid,
                               headers=self.headers)
        extra["foo2"] = new_value
        self.assertEqual(extra, result['extra'])

    def test_remove_uuid(self):
        response = self.patch_json('/allocations/%s' % self.allocation.uuid,
                                   [{'path': '/uuid', 'op': 'remove'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_remove_singular(self):
        allocation = obj_utils.create_test_allocation(
            self.context, extra={'a': 'b'},
            uuid=uuidutils.generate_uuid())
        response = self.patch_json('/allocations/%s' % allocation.uuid,
                                   [{'path': '/extra/a', 'op': 'remove'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/allocations/%s' % allocation.uuid,
                               headers=self.headers)
        self.assertEqual(result['extra'], {})

        # Assert nothing else was changed
        self.assertEqual(allocation.uuid, result['uuid'])

    def test_remove_multi(self):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        allocation = obj_utils.create_test_allocation(
            self.context, extra=extra,
            uuid=uuidutils.generate_uuid())

        # Removing one item from the collection
        response = self.patch_json('/allocations/%s' % allocation.uuid,
                                   [{'path': '/extra/foo2', 'op': 'remove'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/allocations/%s' % allocation.uuid,
                               headers=self.headers)
        extra.pop("foo2")
        self.assertEqual(extra, result['extra'])

        # Removing the collection
        response = self.patch_json('/allocations/%s' % allocation.uuid,
                                   [{'path': '/extra', 'op': 'remove'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        result = self.get_json('/allocations/%s' % allocation.uuid,
                               headers=self.headers)
        self.assertEqual({}, result['extra'])

        # Assert nothing else was changed
        self.assertEqual(allocation.uuid, result['uuid'])

    def test_remove_non_existent_property_fail(self):
        response = self.patch_json(
            '/allocations/%s' % self.allocation.uuid,
            [{'path': '/extra/non-existent', 'op': 'remove'}],
            expect_errors=True, headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])

    def test_update_owner_not_acceptable(self):
        allocation = obj_utils.create_test_allocation(
            self.context, owner='12345',
            uuid=uuidutils.generate_uuid())
        new_owner = '54321'
        response = self.patch_json('/allocations/%s' % allocation.uuid,
                                   [{'path': '/owner',
                                     'value': new_owner, 'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)


def _create_locally(_api, _ctx, allocation, topic):
    if 'node_id' in allocation and allocation.node_id:
        assert topic == 'node-topic', topic
    else:
        assert topic == 'some-topic', topic
    allocation.create()
    return allocation


@mock.patch.object(rpcapi.ConductorAPI, 'create_allocation', _create_locally)
class TestPost(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestPost, self).setUp()
        self.mock_get_topic = self.useFixture(
            fixtures.MockPatchObject(rpcapi.ConductorAPI, 'get_random_topic')
        ).mock
        self.mock_get_topic.return_value = 'some-topic'
        self.mock_get_topic_for_node = self.useFixture(
            fixtures.MockPatchObject(rpcapi.ConductorAPI, 'get_topic_for')
        ).mock
        self.mock_get_topic_for_node.return_value = 'node-topic'

    @mock.patch.object(notification_utils, '_emit_api_notification')
    @mock.patch.object(timeutils, 'utcnow', autospec=True)
    def test_create_allocation(self, mock_utcnow, mock_notify):
        adict = apiutils.allocation_post_data()
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = test_time
        response = self.post_json('/allocations', adict,
                                  headers=self.headers)
        self.assertEqual(http_client.CREATED, response.status_int)
        self.assertEqual(adict['uuid'], response.json['uuid'])
        self.assertEqual('allocating', response.json['state'])
        self.assertIsNone(response.json['node_uuid'])
        self.assertEqual([], response.json['candidate_nodes'])
        self.assertEqual([], response.json['traits'])
        self.assertNotIn('node', response.json)
        result = self.get_json('/allocations/%s' % adict['uuid'],
                               headers=self.headers)
        self.assertEqual(adict['uuid'], result['uuid'])
        self.assertFalse(result['updated_at'])
        self.assertIsNone(result['node_uuid'])
        self.assertEqual([], result['candidate_nodes'])
        self.assertEqual([], result['traits'])
        self.assertIsNone(result['owner'])
        self.assertNotIn('node', result)
        return_created_at = timeutils.parse_isotime(
            result['created_at']).replace(tzinfo=None)
        self.assertEqual(test_time, return_created_at)
        # Check location header
        self.assertIsNotNone(response.location)
        expected_location = '/v1/allocations/%s' % adict['uuid']
        self.assertEqual(urlparse.urlparse(response.location).path,
                         expected_location)
        mock_notify.assert_has_calls([
            mock.call(mock.ANY, mock.ANY, 'create',
                      obj_fields.NotificationLevel.INFO,
                      obj_fields.NotificationStatus.START),
            mock.call(mock.ANY, mock.ANY, 'create',
                      obj_fields.NotificationLevel.INFO,
                      obj_fields.NotificationStatus.END),
        ])

    def test_create_allocation_invalid_api_version(self):
        adict = apiutils.allocation_post_data()
        response = self.post_json(
            '/allocations', adict,
            headers={api_base.Version.string: '1.50'},
            expect_errors=True)
        self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int)

    def test_create_allocation_doesnt_contain_id(self):
        with mock.patch.object(self.dbapi, 'create_allocation',
                               wraps=self.dbapi.create_allocation) as cp_mock:
            adict = apiutils.allocation_post_data(extra={'foo': 123})
            self.post_json('/allocations', adict, headers=self.headers)
            result = self.get_json('/allocations/%s' % adict['uuid'],
                                   headers=self.headers)
            self.assertEqual(adict['extra'], result['extra'])
            cp_mock.assert_called_once_with(mock.ANY)
            # Check that 'id' is not in first arg of positional args
            self.assertNotIn('id', cp_mock.call_args[0][0])

    @mock.patch.object(notification_utils.LOG, 'exception', autospec=True)
    @mock.patch.object(notification_utils.LOG, 'warning', autospec=True)
    def test_create_allocation_generate_uuid(self, mock_warn, mock_except):
        adict = apiutils.allocation_post_data()
        del adict['uuid']
        response = self.post_json('/allocations', adict,
                                  headers=self.headers)
        result = self.get_json('/allocations/%s' % response.json['uuid'],
                               headers=self.headers)
        self.assertTrue(uuidutils.is_uuid_like(result['uuid']))
        self.assertFalse(mock_warn.called)
        self.assertFalse(mock_except.called)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    @mock.patch.object(objects.Allocation, 'create')
    def test_create_allocation_error(self, mock_create, mock_notify):
        mock_create.side_effect = Exception()
        adict = apiutils.allocation_post_data()
        self.post_json('/allocations', adict, headers=self.headers,
                       expect_errors=True)
        mock_notify.assert_has_calls([
            mock.call(mock.ANY, mock.ANY, 'create',
                      obj_fields.NotificationLevel.INFO,
                      obj_fields.NotificationStatus.START),
            mock.call(mock.ANY, mock.ANY, 'create',
                      obj_fields.NotificationLevel.ERROR,
                      obj_fields.NotificationStatus.ERROR),
        ])

    def test_create_allocation_with_candidate_nodes(self):
        node1 = obj_utils.create_test_node(self.context, name='node-1')
        node2 = obj_utils.create_test_node(self.context,
                                           uuid=uuidutils.generate_uuid())
        adict = apiutils.allocation_post_data(
            candidate_nodes=[node1.name, node2.uuid])
        response = self.post_json('/allocations', adict,
                                  headers=self.headers)
        self.assertEqual(http_client.CREATED, response.status_int)
        result = self.get_json('/allocations/%s' % adict['uuid'],
                               headers=self.headers)
        self.assertEqual(adict['uuid'], result['uuid'])
        self.assertEqual([node1.uuid, node2.uuid], result['candidate_nodes'])

    def test_create_allocation_valid_extra(self):
        adict = apiutils.allocation_post_data(
            extra={'str': 'foo', 'int': 123, 'float': 0.1, 'bool': True,
                   'list': [1, 2], 'none': None, 'dict': {'cat': 'meow'}})
        self.post_json('/allocations', adict, headers=self.headers)
        result = self.get_json('/allocations/%s' % adict['uuid'],
                               headers=self.headers)
        self.assertEqual(adict['extra'], result['extra'])

    def test_create_allocation_with_no_extra(self):
        adict = apiutils.allocation_post_data()
        del adict['extra']
        response = self.post_json('/allocations', adict,
                                  headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.CREATED, response.status_int)

    def test_create_allocation_no_mandatory_field_resource_class(self):
        adict = apiutils.allocation_post_data()
        del adict['resource_class']
        response = self.post_json('/allocations', adict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertIn('resource_class', response.json['error_message'])

    def test_create_allocation_resource_class_too_long(self):
        adict = apiutils.allocation_post_data()
        adict['resource_class'] = 'f' * 81
        response = self.post_json('/allocations', adict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_allocation_with_traits(self):
        adict = apiutils.allocation_post_data()
        adict['traits'] = ['CUSTOM_GPU', 'CUSTOM_FOO_BAR']
        response = self.post_json('/allocations', adict,
                                  headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.CREATED, response.status_int)
        self.assertEqual(['CUSTOM_GPU', 'CUSTOM_FOO_BAR'],
                         response.json['traits'])
        result = self.get_json('/allocations/%s' % adict['uuid'],
                               headers=self.headers)
        self.assertEqual(adict['uuid'], result['uuid'])
        self.assertEqual(['CUSTOM_GPU', 'CUSTOM_FOO_BAR'], result['traits'])

    def test_create_allocation_invalid_trait(self):
        adict = apiutils.allocation_post_data()
        adict['traits'] = ['CUSTOM_GPU', 'FOO_BAR']
        response = self.post_json('/allocations', adict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_allocation_invalid_candidate_node_format(self):
        adict = apiutils.allocation_post_data(
            candidate_nodes=['invalid-format'])
        response = self.post_json('/allocations', adict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_create_allocation_candidate_node_not_found(self):
        adict = apiutils.allocation_post_data(
            candidate_nodes=['1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e'])
        response = self.post_json('/allocations', adict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_create_allocation_candidate_node_invalid(self):
        adict = apiutils.allocation_post_data(
            candidate_nodes=['this/is/not a/node/name'])
        response = self.post_json('/allocations', adict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])

    def test_create_allocation_name_ok(self):
        name = 'foo'
        adict = apiutils.allocation_post_data(name=name)
        self.post_json('/allocations', adict, headers=self.headers)
        result = self.get_json('/allocations/%s' % adict['uuid'],
                               headers=self.headers)
        self.assertEqual(name, result['name'])

    def test_create_allocation_name_invalid(self):
        name = 'aa:bb_cc'
        adict = apiutils.allocation_post_data(name=name)
        response = self.post_json('/allocations', adict,
                                  headers=self.headers,
                                  expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_create_by_node_not_allowed(self):
        node = obj_utils.create_test_node(self.context)
        adict = apiutils.allocation_post_data()
        response = self.post_json('/nodes/%s/allocation' % node.uuid,
                                  adict, headers=self.headers,
                                  expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int)

    def test_create_node_uuid_not_allowed(self):
        node = obj_utils.create_test_node(self.context)
        adict = apiutils.allocation_post_data()
        adict['node_uuid'] = node.uuid
        response = self.post_json('/allocations', adict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_create_allocation_owner(self):
        owner = '12345'
        adict = apiutils.allocation_post_data(owner=owner)
        self.post_json('/allocations', adict, headers=self.headers)
        result = self.get_json('/allocations/%s' % adict['uuid'],
                               headers=self.headers)
        self.assertEqual(owner, result['owner'])

    def test_create_allocation_owner_not_allowed(self):
        owner = '12345'
        adict = apiutils.allocation_post_data(owner=owner)
        response = self.post_json('/allocations', adict,
                                  headers={api_base.Version.string: '1.59'},
                                  expect_errors=True)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int)

    def test_backfill(self):
        node = obj_utils.create_test_node(self.context)
        adict = apiutils.allocation_post_data(node=node.uuid)
        response = self.post_json('/allocations', adict,
                                  headers=self.headers)
        self.assertEqual(http_client.CREATED, response.status_int)
        self.assertNotIn('node', response.json)
        result = self.get_json('/allocations/%s' % adict['uuid'],
                               headers=self.headers)
        self.assertEqual(adict['uuid'], result['uuid'])
        self.assertEqual(node.uuid, result['node_uuid'])
        self.assertNotIn('node', result)

    def test_backfill_with_name(self):
        node = obj_utils.create_test_node(self.context, name='backfill-me')
        adict = apiutils.allocation_post_data(node=node.name)
        response = self.post_json('/allocations', adict,
                                  headers=self.headers)
        self.assertEqual(http_client.CREATED, response.status_int)
        self.assertNotIn('node', response.json)
result = self.get_json('/allocations/%s' % adict['uuid'], headers=self.headers) self.assertEqual(adict['uuid'], result['uuid']) self.assertEqual(node.uuid, result['node_uuid']) self.assertNotIn('node', result) def test_backfill_without_resource_class(self): node = obj_utils.create_test_node(self.context, resource_class='bm-super') adict = {'node': node.uuid} response = self.post_json('/allocations', adict, headers=self.headers) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/allocations/%s' % response.json['uuid'], headers=self.headers) self.assertEqual(node.uuid, result['node_uuid']) self.assertEqual('bm-super', result['resource_class']) def test_backfill_copy_instance_uuid(self): uuid = uuidutils.generate_uuid() node = obj_utils.create_test_node(self.context, instance_uuid=uuid, resource_class='bm-super') adict = {'node': node.uuid} response = self.post_json('/allocations', adict, headers=self.headers) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/allocations/%s' % response.json['uuid'], headers=self.headers) self.assertEqual(uuid, result['uuid']) self.assertEqual(node.uuid, result['node_uuid']) self.assertEqual('bm-super', result['resource_class']) def test_backfill_node_not_found(self): adict = apiutils.allocation_post_data(node=uuidutils.generate_uuid()) response = self.post_json('/allocations', adict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_backfill_not_allowed(self): node = obj_utils.create_test_node(self.context) headers = {api_base.Version.string: '1.57'} adict = {'node': node.uuid} response = self.post_json('/allocations', adict, expect_errors=True, headers=headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) 
self.assertTrue(response.json['error_message']) @mock.patch.object(policy, 'authorize', autospec=True) def test_create_restricted_allocation(self, mock_authorize): def mock_authorize_function(rule, target, creds): if rule == 'baremetal:allocation:create': raise exception.HTTPForbidden(resource='fake') return True mock_authorize.side_effect = mock_authorize_function owner = '12345' adict = apiutils.allocation_post_data() headers = {api_base.Version.string: '1.60', 'X-Project-Id': owner} response = self.post_json('/allocations', adict, headers=headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CREATED, response.status_int) self.assertEqual(owner, response.json['owner']) result = self.get_json('/allocations/%s' % adict['uuid'], headers=headers) self.assertEqual(adict['uuid'], result['uuid']) self.assertEqual(owner, result['owner']) @mock.patch.object(policy, 'authorize', autospec=True) def test_create_restricted_allocation_older_version(self, mock_authorize): def mock_authorize_function(rule, target, creds): if rule == 'baremetal:allocation:create': raise exception.HTTPForbidden(resource='fake') return True mock_authorize.side_effect = mock_authorize_function owner = '12345' adict = apiutils.allocation_post_data() del adict['owner'] headers = {api_base.Version.string: '1.59', 'X-Project-Id': owner} response = self.post_json('/allocations', adict, headers=headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/allocations/%s' % adict['uuid'], headers=headers) self.assertEqual(adict['uuid'], result['uuid']) @mock.patch.object(policy, 'authorize', autospec=True) def test_create_restricted_allocation_forbidden(self, mock_authorize): def mock_authorize_function(rule, target, creds): raise exception.HTTPForbidden(resource='fake') mock_authorize.side_effect = mock_authorize_function owner = '12345' adict = 
apiutils.allocation_post_data() headers = {api_base.Version.string: '1.60', 'X-Project-Id': owner} response = self.post_json('/allocations', adict, expect_errors=True, headers=headers) self.assertEqual(http_client.FORBIDDEN, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) @mock.patch.object(policy, 'authorize', autospec=True) def test_create_restricted_allocation_with_owner(self, mock_authorize): def mock_authorize_function(rule, target, creds): if rule == 'baremetal:allocation:create': raise exception.HTTPForbidden(resource='fake') return True mock_authorize.side_effect = mock_authorize_function owner = '12345' adict = apiutils.allocation_post_data(owner=owner) adict['owner'] = owner headers = {api_base.Version.string: '1.60', 'X-Project-Id': owner} response = self.post_json('/allocations', adict, headers=headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CREATED, response.status_int) self.assertEqual(owner, response.json['owner']) result = self.get_json('/allocations/%s' % adict['uuid'], headers=headers) self.assertEqual(adict['uuid'], result['uuid']) self.assertEqual(owner, result['owner']) @mock.patch.object(policy, 'authorize', autospec=True) def test_create_restricted_allocation_with_mismatch_owner( self, mock_authorize): def mock_authorize_function(rule, target, creds): if rule == 'baremetal:allocation:create': raise exception.HTTPForbidden(resource='fake') return True mock_authorize.side_effect = mock_authorize_function owner = '12345' adict = apiutils.allocation_post_data(owner=owner) adict['owner'] = '54321' headers = {api_base.Version.string: '1.60', 'X-Project-Id': owner} response = self.post_json('/allocations', adict, expect_errors=True, headers=headers) self.assertEqual(http_client.FORBIDDEN, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) 
@mock.patch.object(rpcapi.ConductorAPI, 'destroy_allocation')
class TestDelete(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestDelete, self).setUp()
        self.node = obj_utils.create_test_node(self.context)
        self.allocation = obj_utils.create_test_allocation(
            self.context, node_id=self.node.id, name='alloc1')
        self.mock_get_topic = self.useFixture(
            fixtures.MockPatchObject(rpcapi.ConductorAPI, 'get_random_topic')
        ).mock

    @mock.patch.object(notification_utils, '_emit_api_notification')
    def test_delete_allocation_by_id(self, mock_notify, mock_destroy):
        self.delete('/allocations/%s' % self.allocation.uuid,
                    headers=self.headers)
        self.assertTrue(mock_destroy.called)
        mock_notify.assert_has_calls([
            mock.call(mock.ANY, mock.ANY, 'delete',
                      obj_fields.NotificationLevel.INFO,
                      obj_fields.NotificationStatus.START,
                      node_uuid=self.node.uuid),
            mock.call(mock.ANY, mock.ANY, 'delete',
                      obj_fields.NotificationLevel.INFO,
                      obj_fields.NotificationStatus.END,
                      node_uuid=self.node.uuid),
        ])

    @mock.patch.object(notification_utils, '_emit_api_notification')
    def test_delete_allocation_node_locked(self, mock_notify, mock_destroy):
        self.node.reserve(self.context, 'fake', self.node.uuid)
        mock_destroy.side_effect = exception.NodeLocked(node='fake-node',
                                                        host='fake-host')
        ret = self.delete('/allocations/%s' % self.allocation.uuid,
                          expect_errors=True, headers=self.headers)
        self.assertEqual(http_client.CONFLICT, ret.status_code)
        self.assertTrue(ret.json['error_message'])
        self.assertTrue(mock_destroy.called)
        mock_notify.assert_has_calls([
            mock.call(mock.ANY, mock.ANY, 'delete',
                      obj_fields.NotificationLevel.INFO,
                      obj_fields.NotificationStatus.START,
                      node_uuid=self.node.uuid),
            mock.call(mock.ANY, mock.ANY, 'delete',
                      obj_fields.NotificationLevel.ERROR,
                      obj_fields.NotificationStatus.ERROR,
                      node_uuid=self.node.uuid),
        ])

    def test_delete_allocation_invalid_api_version(self, mock_destroy):
        response = self.delete('/allocations/%s' %
self.allocation.uuid, expect_errors=True, headers={api_base.Version.string: '1.14'}) self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int) def test_delete_allocation_invalid_api_version_without_check(self, mock_destroy): # Invalid name, but the check happens after the microversion check. response = self.delete('/allocations/ba!na!na1', expect_errors=True, headers={api_base.Version.string: '1.14'}) self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int) def test_delete_allocation_by_name(self, mock_destroy): self.delete('/allocations/%s' % self.allocation.name, headers=self.headers) self.assertTrue(mock_destroy.called) def test_delete_allocation_by_name_with_json(self, mock_destroy): self.delete('/allocations/%s.json' % self.allocation.name, headers=self.headers) self.assertTrue(mock_destroy.called) def test_delete_allocation_by_name_not_existed(self, mock_destroy): res = self.delete('/allocations/%s' % 'blah', expect_errors=True, headers=self.headers) self.assertEqual(http_client.NOT_FOUND, res.status_code) @mock.patch.object(notification_utils, '_emit_api_notification') def test_delete_allocation_by_node(self, mock_notify, mock_destroy): self.delete('/nodes/%s/allocation' % self.node.uuid, headers=self.headers) self.assertTrue(mock_destroy.called) mock_notify.assert_has_calls([ mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END, node_uuid=self.node.uuid), ]) def test_delete_allocation_by_node_not_existed(self, mock_destroy): node = obj_utils.create_test_node(self.context, uuid=uuidutils.generate_uuid()) res = self.delete('/nodes/%s/allocation' % node.uuid, expect_errors=True, headers=self.headers) self.assertEqual(http_client.NOT_FOUND, res.status_code) def test_delete_allocation_invalid_node(self, mock_destroy): res = 
self.delete('/nodes/%s/allocation' % uuidutils.generate_uuid(),
                          expect_errors=True, headers=self.headers)
        self.assertEqual(http_client.NOT_FOUND, res.status_code)

    def test_delete_allocation_by_node_invalid_api_version(self,
                                                           mock_destroy):
        obj_utils.create_test_allocation(self.context, node_id=self.node.id)
        response = self.delete(
            '/nodes/%s/allocation' % self.node.uuid,
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertFalse(mock_destroy.called)

ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_event.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for the API /events methods.
""" from http import client as http_client import mock from ironic.api.controllers import base as api_base from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import versions from ironic.tests.unit.api import base as test_api_base from ironic.tests.unit.api.utils import fake_event_validator def get_fake_port_event(): return {'event': 'network.bind_port', 'port_id': '11111111-aaaa-bbbb-cccc-555555555555', 'mac_address': 'de:ad:ca:fe:ba:be', 'status': 'ACTIVE', 'device_id': '22222222-aaaa-bbbb-cccc-555555555555', 'binding:host_id': '22222222-aaaa-bbbb-cccc-555555555555', 'binding:vnic_type': 'baremetal'} class TestPost(test_api_base.BaseApiTest): def setUp(self): super(TestPost, self).setUp() self.headers = {api_base.Version.string: str( versions.max_version_string())} @mock.patch.object(types.EventType, 'event_validators', {'valid.event': fake_event_validator}) @mock.patch.object(types.EventType, 'valid_events', {'valid.event'}) def test_events(self): events_dict = {'events': [{'event': 'valid.event'}]} response = self.post_json('/events', events_dict, headers=self.headers) self.assertEqual(http_client.NO_CONTENT, response.status_int) @mock.patch.object(types.EventType, 'event_validators', {'valid.event1': fake_event_validator, 'valid.event2': fake_event_validator, 'valid.event3': fake_event_validator}) @mock.patch.object(types.EventType, 'valid_events', {'valid.event1', 'valid.event2', 'valid.event3'}) def test_multiple_events(self): events_dict = {'events': [{'event': 'valid.event1'}, {'event': 'valid.event2'}, {'event': 'valid.event3'}]} response = self.post_json('/events', events_dict, headers=self.headers) self.assertEqual(http_client.NO_CONTENT, response.status_int) def test_events_does_not_contain_event(self): events_dict = {'events': [{'INVALID': 'fake.event'}]} response = self.post_json('/events', events_dict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) 
self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) @mock.patch.object(types.EventType, 'event_validators', {'valid.event': fake_event_validator}) def test_events_invalid_event(self): events_dict = {'events': [{'event': 'invalid.event'}]} response = self.post_json('/events', events_dict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_network_unknown_event_property(self): events_dict = {'events': [{'event': 'network.unbind_port', 'UNKNOWN': 'EVENT_PROPERTY'}]} response = self.post_json('/events', events_dict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_network_bind_port_events(self): events_dict = {'events': [get_fake_port_event()]} response = self.post_json('/events', events_dict, headers=self.headers) self.assertEqual(http_client.NO_CONTENT, response.status_int) def test_network_unbind_port_events(self): events_dict = {'events': [get_fake_port_event()]} events_dict['events'][0].update({'event': 'network.unbind_port'}) response = self.post_json('/events', events_dict, headers=self.headers) self.assertEqual(http_client.NO_CONTENT, response.status_int) def test_network_delete_port_events(self): events_dict = {'events': [get_fake_port_event()]} events_dict['events'][0].update({'event': 'network.delete_port'}) response = self.post_json('/events', events_dict, headers=self.headers) self.assertEqual(http_client.NO_CONTENT, response.status_int) def test_network_port_event_invalid_mac_address(self): port_evt = get_fake_port_event() port_evt.update({'mac_address': 'INVALID_MAC_ADDRESS'}) events_dict = {'events': [port_evt]} response = self.post_json('/events', events_dict, 
expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_network_port_event_invalid_device_id(self): port_evt = get_fake_port_event() port_evt.update({'device_id': 'DEVICE_ID_SHOULD_BE_UUID'}) events_dict = {'events': [port_evt]} response = self.post_json('/events', events_dict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_network_port_event_invalid_port_id(self): port_evt = get_fake_port_event() port_evt.update({'port_id': 'PORT_ID_SHOULD_BE_UUID'}) events_dict = {'events': [port_evt]} response = self.post_json('/events', events_dict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_network_port_event_invalid_status(self): port_evt = get_fake_port_event() port_evt.update({'status': ['status', 'SHOULD', 'BE', 'TEXT']}) events_dict = {'events': [port_evt]} response = self.post_json('/events', events_dict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_network_port_event_invalid_binding_vnic_type(self): port_evt = get_fake_port_event() port_evt.update({'binding:vnic_type': ['binding:vnic_type', 'SHOULD', 'BE', 'TEXT']}) events_dict = {'events': [port_evt]} response = self.post_json('/events', events_dict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) 
self.assertTrue(response.json['error_message'])

    def test_network_port_event_invalid_binding_host_id(self):
        port_evt = get_fake_port_event()
        port_evt.update({'binding:host_id': ['binding:host_id', 'IS',
                                             'NODE_UUID', 'IN', 'IRONIC']})
        events_dict = {'events': [port_evt]}
        response = self.post_json('/events', events_dict, expect_errors=True,
                                  headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    @mock.patch.object(types.EventType, 'event_validators',
                       {'valid.event': fake_event_validator})
    @mock.patch.object(types.EventType, 'valid_events', {'valid.event'})
    def test_events_unsupported_api_version(self):
        headers = {api_base.Version.string: '1.50'}
        events_dict = {'events': [{'event': 'valid.event'}]}
        response = self.post_json('/events', events_dict, expect_errors=True,
                                  headers=headers)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_types.py

# coding: utf-8
#
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from http import client as http_client
import platform

import mock
from pecan import rest
import wsme

from ironic.api.controllers.v1 import types
from ironic.api import expose
from ironic.api import types as atypes
from ironic.common import exception
from ironic.common import utils
from ironic.tests import base
from ironic.tests.unit.api import base as api_base
from ironic.tests.unit.api.utils import fake_event_validator


class TestMacAddressType(base.TestCase):

    def test_valid_mac_addr(self):
        test_mac = 'aa:bb:cc:11:22:33'
        with mock.patch.object(utils, 'validate_and_normalize_mac') as m_mock:
            types.MacAddressType.validate(test_mac)
            m_mock.assert_called_once_with(test_mac)

    def test_invalid_mac_addr(self):
        self.assertRaises(exception.InvalidMAC,
                          types.MacAddressType.validate, 'invalid-mac')


class TestUuidType(base.TestCase):

    def test_valid_uuid(self):
        test_uuid = '1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e'
        self.assertEqual(test_uuid, types.UuidType.validate(test_uuid))

    def test_invalid_uuid(self):
        self.assertRaises(exception.InvalidUUID, types.UuidType.validate,
                          'invalid-uuid')


@mock.patch("ironic.api.request")
class TestNameType(base.TestCase):

    def test_valid_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        test_name = 'hal-9000'
        self.assertEqual(test_name, types.NameType.validate(test_name))

    def test_invalid_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        self.assertRaises(exception.InvalidName, types.NameType.validate,
                          '-this is not valid-')


@mock.patch("ironic.api.request")
class TestUuidOrNameType(base.TestCase):

    def test_valid_uuid(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        test_uuid = '1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e'
        self.assertTrue(types.UuidOrNameType.validate(test_uuid))

    def test_valid_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
        test_name = 'dc16-database5'
        self.assertTrue(types.UuidOrNameType.validate(test_name))

    def test_invalid_uuid_or_name(self, mock_pecan_req):
        mock_pecan_req.version.minor = 10
self.assertRaises(exception.InvalidUuidOrName, types.UuidOrNameType.validate, 'inval#uuid%or*name') class MyBaseType(object): """Helper class, patched by objects of type MyPatchType""" mandatory = atypes.wsattr(str, mandatory=True) class MyPatchType(types.JsonPatchType): """Helper class for TestJsonPatchType tests.""" _api_base = MyBaseType _extra_non_removable_attrs = {'/non_removable'} @staticmethod def internal_attrs(): return ['/internal'] class MyTest(rest.RestController): """Helper class for TestJsonPatchType tests.""" @wsme.validate([MyPatchType]) @expose.expose([str], body=[MyPatchType]) def patch(self, patch): return patch class MyRoot(rest.RestController): test = MyTest() class TestJsonPatchType(api_base.BaseApiTest): root_controller = ('ironic.tests.unit.api.controllers.v1.' 'test_types.MyRoot') @mock.patch.object(platform, '_syscmd_uname', lambda *x: '') def setUp(self): super(TestJsonPatchType, self).setUp() def _patch_json(self, params, expect_errors=False): return self.app.patch_json('/test', params=params, headers={'Accept': 'application/json'}, expect_errors=expect_errors) def test_valid_patches(self): valid_patches = [{'path': '/extra/foo', 'op': 'remove'}, {'path': '/extra/foo', 'op': 'add', 'value': 'bar'}, {'path': '/str', 'op': 'replace', 'value': 'bar'}, {'path': '/bool', 'op': 'add', 'value': True}, {'path': '/int', 'op': 'add', 'value': 1}, {'path': '/float', 'op': 'add', 'value': 0.123}, {'path': '/list', 'op': 'add', 'value': [1, 2]}, {'path': '/none', 'op': 'add', 'value': None}, {'path': '/empty_dict', 'op': 'add', 'value': {}}, {'path': '/empty_list', 'op': 'add', 'value': []}, {'path': '/dict', 'op': 'add', 'value': {'cat': 'meow'}}] ret = self._patch_json(valid_patches, False) self.assertEqual(http_client.OK, ret.status_int) self.assertItemsEqual(valid_patches, ret.json) def test_cannot_update_internal_attr(self): patch = [{'path': '/internal', 'op': 'replace', 'value': 'foo'}] ret = self._patch_json(patch, True) 
self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_cannot_update_internal_dict_attr(self): patch = [{'path': '/internal/test', 'op': 'replace', 'value': 'foo'}] ret = self._patch_json(patch, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_mandatory_attr(self): patch = [{'op': 'replace', 'path': '/mandatory', 'value': 'foo'}] ret = self._patch_json(patch, False) self.assertEqual(http_client.OK, ret.status_int) self.assertEqual(patch, ret.json) def test_cannot_remove_mandatory_attr(self): patch = [{'op': 'remove', 'path': '/mandatory'}] ret = self._patch_json(patch, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_cannot_remove_extra_non_removable_attr(self): patch = [{'op': 'remove', 'path': '/non_removable'}] ret = self._patch_json(patch, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_missing_required_fields_path(self): missing_path = [{'op': 'remove'}] ret = self._patch_json(missing_path, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_missing_required_fields_op(self): missing_op = [{'path': '/foo'}] ret = self._patch_json(missing_op, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_invalid_op(self): patch = [{'path': '/foo', 'op': 'invalid'}] ret = self._patch_json(patch, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_invalid_path(self): patch = [{'path': 'invalid-path', 'op': 'remove'}] ret = self._patch_json(patch, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_cannot_add_with_no_value(self): patch = [{'path': '/extra/foo', 'op': 
'add'}] ret = self._patch_json(patch, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) def test_cannot_replace_with_no_value(self): patch = [{'path': '/foo', 'op': 'replace'}] ret = self._patch_json(patch, True) self.assertEqual(http_client.BAD_REQUEST, ret.status_int) self.assertTrue(ret.json['error_message']) class TestBooleanType(base.TestCase): def test_valid_true_values(self): v = types.BooleanType() self.assertTrue(v.validate("true")) self.assertTrue(v.validate("TRUE")) self.assertTrue(v.validate("True")) self.assertTrue(v.validate("t")) self.assertTrue(v.validate("1")) self.assertTrue(v.validate("y")) self.assertTrue(v.validate("yes")) self.assertTrue(v.validate("on")) def test_valid_false_values(self): v = types.BooleanType() self.assertFalse(v.validate("false")) self.assertFalse(v.validate("FALSE")) self.assertFalse(v.validate("False")) self.assertFalse(v.validate("f")) self.assertFalse(v.validate("0")) self.assertFalse(v.validate("n")) self.assertFalse(v.validate("no")) self.assertFalse(v.validate("off")) def test_invalid_value(self): v = types.BooleanType() self.assertRaises(exception.Invalid, v.validate, "invalid-value") self.assertRaises(exception.Invalid, v.validate, "01") class TestJsonType(base.TestCase): def test_valid_values(self): vt = types.jsontype value = vt.validate("hello") self.assertEqual("hello", value) value = vt.validate(10) self.assertEqual(10, value) value = vt.validate(0.123) self.assertEqual(0.123, value) value = vt.validate(True) self.assertTrue(value) value = vt.validate([1, 2, 3]) self.assertEqual([1, 2, 3], value) value = vt.validate({'foo': 'bar'}) self.assertEqual({'foo': 'bar'}, value) value = vt.validate(None) self.assertIsNone(value) def test_invalid_values(self): vt = types.jsontype self.assertRaises(exception.Invalid, vt.validate, object()) def test_apimultitype_tostring(self): vts = str(types.jsontype) self.assertIn(str(str), vts) self.assertIn(str(int), vts) 
        self.assertIn(str(float), vts)
        self.assertIn(str(types.BooleanType), vts)
        self.assertIn(str(list), vts)
        self.assertIn(str(dict), vts)
        self.assertIn(str(None), vts)


class TestListType(base.TestCase):

    def test_list_type(self):
        v = types.ListType()
        self.assertEqual(['foo', 'bar'], v.validate('foo,bar'))
        self.assertNotEqual(['bar', 'foo'], v.validate('foo,bar'))
        self.assertEqual(['cat', 'meow'], v.validate("cat , meow"))
        self.assertEqual(['spongebob', 'squarepants'],
                         v.validate("SpongeBob,SquarePants"))
        self.assertEqual(['foo', 'bar'], v.validate("foo, ,,bar"))
        self.assertEqual(['foo', 'bar'], v.validate("foo,foo,foo,bar"))
        self.assertIsInstance(v.validate('foo,bar'), list)


class TestLocalLinkConnectionType(base.TestCase):

    def test_local_link_connection_type(self):
        v = types.locallinkconnectiontype
        value = {'switch_id': '0a:1b:2c:3d:4e:5f', 'port_id': 'value2',
                 'switch_info': 'value3'}
        self.assertItemsEqual(value, v.validate(value))

    def test_local_link_connection_type_datapath_id(self):
        v = types.locallinkconnectiontype
        value = {'switch_id': '0000000000000000', 'port_id': 'value2',
                 'switch_info': 'value3'}
        self.assertItemsEqual(value, v.validate(value))

    def test_local_link_connection_type_not_mac_or_datapath_id(self):
        v = types.locallinkconnectiontype
        value = {'switch_id': 'badid', 'port_id': 'value2',
                 'switch_info': 'value3'}
        self.assertRaises(exception.InvalidSwitchID, v.validate, value)

    def test_local_link_connection_type_invalid_key(self):
        v = types.locallinkconnectiontype
        value = {'switch_id': '0a:1b:2c:3d:4e:5f', 'port_id': 'value2',
                 'switch_info': 'value3', 'invalid_key': 'value'}
        self.assertRaisesRegex(exception.Invalid, 'are invalid keys',
                               v.validate, value)

    def test_local_link_connection_type_missing_local_link_mandatory_key(self):
        v = types.locallinkconnectiontype
        value = {'switch_id': '0a:1b:2c:3d:4e:5f',
                 'switch_info': 'value3'}
        self.assertRaisesRegex(exception.Invalid, 'Missing mandatory',
                               v.validate, value)

    def test_local_link_connection_type_local_link_keys_mandatory(self):
        v = types.locallinkconnectiontype
        value = {'switch_id': '0a:1b:2c:3d:4e:5f',
                 'port_id': 'value2'}
        self.assertItemsEqual(value, v.validate(value))

    def test_local_link_connection_type_empty_value(self):
        v = types.locallinkconnectiontype
        value = {}
        self.assertItemsEqual(value, v.validate(value))

    def test_local_link_connection_type_smart_nic_keys_mandatory(self):
        v = types.locallinkconnectiontype
        value = {'port_id': 'rep0-0',
                 'hostname': 'hostname'}
        self.assertTrue(v.validate_for_smart_nic(value))
        self.assertTrue(v.validate(value))

    def test_local_link_connection_type_smart_nic_keys_with_optional(self):
        v = types.locallinkconnectiontype
        value = {'port_id': 'rep0-0',
                 'hostname': 'hostname',
                 'switch_id': '0a:1b:2c:3d:4e:5f',
                 'switch_info': 'sw_info'}
        self.assertTrue(v.validate_for_smart_nic(value))
        self.assertTrue(v.validate(value))

    def test_local_link_connection_type_smart_nic_keys_hostname_missing(self):
        v = types.locallinkconnectiontype
        value = {'port_id': 'rep0-0'}
        self.assertFalse(v.validate_for_smart_nic(value))
        self.assertRaises(exception.Invalid, v.validate, value)

    def test_local_link_connection_type_smart_nic_keys_port_id_missing(self):
        v = types.locallinkconnectiontype
        value = {'hostname': 'hostname'}
        self.assertFalse(v.validate_for_smart_nic(value))
        self.assertRaises(exception.Invalid, v.validate, value)

    def test_local_link_connection_net_type_unmanaged(self):
        v = types.locallinkconnectiontype
        value = {'network_type': 'unmanaged'}
        self.assertItemsEqual(value, v.validate(value))

    def test_local_link_connection_net_type_unmanaged_combine_ok(self):
        v = types.locallinkconnectiontype
        value = {'network_type': 'unmanaged',
                 'switch_id': '0a:1b:2c:3d:4e:5f',
                 'port_id': 'rep0-0'}
        self.assertItemsEqual(value, v.validate(value))

    def test_local_link_connection_net_type_invalid(self):
        v = types.locallinkconnectiontype
        value = {'network_type': 'invalid'}
        self.assertRaises(exception.Invalid, v.validate, value)
@mock.patch("ironic.api.request", mock.Mock(version=mock.Mock(minor=10)))
class TestVifType(base.TestCase):

    def test_vif_type(self):
        v = types.viftype
        value = {'id': 'foo'}
        self.assertItemsEqual(value, v.validate(value))

    def test_vif_type_missing_mandatory_key(self):
        v = types.viftype
        value = {'foo': 'bar'}
        self.assertRaisesRegex(exception.Invalid, 'Missing mandatory',
                               v.validate, value)

    def test_vif_type_optional_key(self):
        v = types.viftype
        value = {'id': 'foo', 'misc': 'something'}
        self.assertItemsEqual(value, v.frombasetype(value))

    def test_vif_type_bad_id(self):
        v = types.viftype
        self.assertRaises(exception.InvalidUuidOrName,
                          v.frombasetype, {'id': 5678})


class TestEventType(base.TestCase):

    def setUp(self):
        super(TestEventType, self).setUp()
        self.v = types.eventtype

    @mock.patch.object(types.EventType, 'event_validators',
                       {'valid.event': fake_event_validator})
    @mock.patch.object(types.EventType, 'valid_events', set(['valid.event']))
    def test_simple_event_type(self):
        value = {'event': 'valid.event'}
        self.assertItemsEqual(value, self.v.validate(value))

    @mock.patch.object(types.EventType, 'valid_events', set(['valid.event']))
    def test_invalid_event_type(self):
        value = {'event': 'invalid.event'}
        self.assertRaisesRegex(exception.Invalid,
                               'invalid.event is not one of valid events:',
                               self.v.validate, value)

    def test_event_missing_mandatory_field(self):
        value = {'invalid': 'invalid'}
        self.assertRaisesRegex(exception.Invalid, 'Missing mandatory keys:',
                               self.v.validate, value)

    def test_network_port_event(self):
        value = {'event': 'network.bind_port',
                 'port_id': '11111111-aaaa-bbbb-cccc-555555555555',
                 'mac_address': 'de:ad:ca:fe:ba:be',
                 'status': 'ACTIVE',
                 'device_id': '22222222-aaaa-bbbb-cccc-555555555555',
                 'binding:host_id': '22222222-aaaa-bbbb-cccc-555555555555',
                 'binding:vnic_type': 'baremetal'
                 }
        self.assertItemsEqual(value, self.v.validate(value))

    def test_invalid_mac_network_port_event(self):
        value = {'event': 'network.bind_port',
                 'port_id': '11111111-aaaa-bbbb-cccc-555555555555',
                 'mac_address': 'INVALID_MAC_ADDRESS',
                 'status': 'ACTIVE',
                 'device_id': '22222222-aaaa-bbbb-cccc-555555555555',
                 'binding:host_id': '22222222-aaaa-bbbb-cccc-555555555555',
                 'binding:vnic_type': 'baremetal'
                 }
        self.assertRaisesRegex(exception.Invalid,
                               'Event validation failure for mac_address.',
                               self.v.validate, value)

    def test_missing_mandatory_fields_network_port_event(self):
        value = {'event': 'network.bind_port',
                 'device_id': '22222222-aaaa-bbbb-cccc-555555555555',
                 'binding:host_id': '22222222-aaaa-bbbb-cccc-555555555555',
                 'binding:vnic_type': 'baremetal'
                 }
        self.assertRaisesRegex(exception.Invalid, 'Missing mandatory keys:',
                               self.v.validate, value)
ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_volume_connector.py
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for the API /volume connectors/ methods.
"""

import datetime
from http import client as http_client
from urllib import parse as urlparse

import mock
from oslo_config import cfg
from oslo_utils import timeutils
from oslo_utils import uuidutils

from ironic.api.controllers import base as api_base
from ironic.api.controllers import v1 as api_v1
from ironic.api.controllers.v1 import notification_utils
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api.controllers.v1 import volume_connector as api_volume_connector
from ironic.api import types as atypes
from ironic.common import exception
from ironic.conductor import rpcapi
from ironic import objects
from ironic.objects import fields as obj_fields
from ironic.tests import base
from ironic.tests.unit.api import base as test_api_base
from ironic.tests.unit.api import utils as apiutils
from ironic.tests.unit.db import utils as dbutils
from ironic.tests.unit.objects import utils as obj_utils


def post_get_test_volume_connector(**kw):
    connector = apiutils.volume_connector_post_data(**kw)
    node = dbutils.get_test_node()
    connector['node_uuid'] = kw.get('node_uuid', node['uuid'])
    return connector


class TestVolumeConnectorObject(base.TestCase):

    def test_volume_connector_init(self):
        connector_dict = apiutils.volume_connector_post_data(node_id=None)
        del connector_dict['extra']
        connector = api_volume_connector.VolumeConnector(**connector_dict)
        self.assertEqual(atypes.Unset, connector.extra)


class TestListVolumeConnectors(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestListVolumeConnectors, self).setUp()
        self.node = obj_utils.create_test_node(self.context)

    def test_empty(self):
        data = self.get_json('/volume/connectors', headers=self.headers)
        self.assertEqual([], data['connectors'])

    def test_one(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        data = self.get_json('/volume/connectors', headers=self.headers)
        self.assertEqual(connector.uuid,
                         data['connectors'][0]["uuid"])
        self.assertNotIn('extra', data['connectors'][0])
        # never expose the node_id
        self.assertNotIn('node_id', data['connectors'][0])

    def test_one_invalid_api_version(self):
        obj_utils.create_test_volume_connector(self.context,
                                               node_id=self.node.id)
        response = self.get_json(
            '/volume/connectors',
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_get_one(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        data = self.get_json('/volume/connectors/%s' % connector.uuid,
                             headers=self.headers)
        self.assertEqual(connector.uuid, data['uuid'])
        self.assertIn('extra', data)
        self.assertIn('node_uuid', data)
        # never expose the node_id
        self.assertNotIn('node_id', data)

    def test_get_one_invalid_api_version(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        response = self.get_json(
            '/volume/connectors/%s' % connector.uuid,
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_get_one_custom_fields(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        fields = 'connector_id,extra'
        data = self.get_json(
            '/volume/connectors/%s?fields=%s' % (connector.uuid, fields),
            headers=self.headers)
        # We always append "links"
        self.assertItemsEqual(['connector_id', 'extra', 'links'], data)

    def test_get_collection_custom_fields(self):
        fields = 'uuid,extra'
        for i in range(3):
            obj_utils.create_test_volume_connector(
                self.context, node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-connector_id-%s' % i)

        data = self.get_json(
            '/volume/connectors?fields=%s' % fields,
            headers=self.headers)

        self.assertEqual(3, len(data['connectors']))
        for connector in data['connectors']:
            # We always append "links"
            self.assertItemsEqual(['uuid', 'extra',
                                   'links'], connector)

    def test_get_custom_fields_invalid_fields(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        fields = 'uuid,spongebob'
        response = self.get_json(
            '/volume/connectors/%s?fields=%s' % (connector.uuid, fields),
            headers=self.headers, expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertIn('spongebob', response.json['error_message'])

    def test_get_custom_fields_invalid_api_version(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        fields = 'uuid,extra'
        response = self.get_json(
            '/volume/connectors/%s?fields=%s' % (connector.uuid, fields),
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_detail(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        data = self.get_json('/volume/connectors?detail=True',
                             headers=self.headers)
        self.assertEqual(connector.uuid, data['connectors'][0]["uuid"])
        self.assertIn('extra', data['connectors'][0])
        self.assertIn('node_uuid', data['connectors'][0])
        # never expose the node_id
        self.assertNotIn('node_id', data['connectors'][0])

    def test_detail_false(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        data = self.get_json('/volume/connectors?detail=False',
                             headers=self.headers)
        self.assertEqual(connector.uuid, data['connectors'][0]["uuid"])
        self.assertNotIn('extra', data['connectors'][0])
        # never expose the node_id
        self.assertNotIn('node_id', data['connectors'][0])

    def test_detail_invalid_api_version(self):
        obj_utils.create_test_volume_connector(self.context,
                                               node_id=self.node.id)
        response = self.get_json(
            '/volume/connectors?detail=True',
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_detail_specified_by_path(self):
        obj_utils.create_test_volume_connector(self.context,
                                               node_id=self.node.id)
        response = self.get_json(
            '/volume/connectors/detail', headers=self.headers,
            expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_detail_against_single(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        response = self.get_json('/volume/connectors/%s?detail=True'
                                 % connector.uuid,
                                 expect_errors=True, headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_detail_and_fields(self):
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        fields = 'connector_id,extra'
        response = self.get_json('/volume/connectors/%s?detail=True&fields=%s'
                                 % (connector.uuid, fields),
                                 expect_errors=True, headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)

    def test_many(self):
        connectors = []
        for id_ in range(5):
            connector = obj_utils.create_test_volume_connector(
                self.context, node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-connector_id-%s' % id_)
            connectors.append(connector.uuid)
        data = self.get_json('/volume/connectors', headers=self.headers)
        self.assertEqual(len(connectors), len(data['connectors']))
        uuids = [n['uuid'] for n in data['connectors']]
        self.assertCountEqual(connectors, uuids)

    def test_links(self):
        uuid = uuidutils.generate_uuid()
        obj_utils.create_test_volume_connector(self.context,
                                               uuid=uuid,
                                               node_id=self.node.id)
        data = self.get_json('/volume/connectors/%s' % uuid,
                             headers=self.headers)
        self.assertIn('links', data)
        self.assertEqual(2, len(data['links']))
        self.assertIn(uuid, data['links'][0]['href'])
        for l in data['links']:
            bookmark = l['rel'] == 'bookmark'
            self.assertTrue(self.validate_link(l['href'],
                                               bookmark=bookmark,
                                               headers=self.headers))

    def test_collection_links(self):
        connectors = []
        for id_ in range(5):
            connector = obj_utils.create_test_volume_connector(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-connector_id-%s' % id_)
            connectors.append(connector.uuid)
        data = self.get_json('/volume/connectors/?limit=3',
                             headers=self.headers)
        self.assertEqual(3, len(data['connectors']))

        next_marker = data['connectors'][-1]['uuid']
        self.assertIn(next_marker, data['next'])
        self.assertIn('volume/connectors', data['next'])

    def test_collection_links_default_limit(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        connectors = []
        for id_ in range(5):
            connector = obj_utils.create_test_volume_connector(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-connector_id-%s' % id_)
            connectors.append(connector.uuid)
        data = self.get_json('/volume/connectors', headers=self.headers)
        self.assertEqual(3, len(data['connectors']))

        self.assertIn('volume/connectors', data['next'])
        next_marker = data['connectors'][-1]['uuid']
        self.assertIn(next_marker, data['next'])

    def test_collection_links_custom_fields(self):
        cfg.CONF.set_override('max_limit', 3, 'api')
        connectors = []
        fields = 'uuid,extra'
        for i in range(5):
            connector = obj_utils.create_test_volume_connector(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-connector_id-%s' % i)
            connectors.append(connector.uuid)
        data = self.get_json(
            '/volume/connectors?fields=%s' % fields,
            headers=self.headers)
        self.assertEqual(3, len(data['connectors']))

        self.assertIn('volume/connectors', data['next'])
        next_marker = data['connectors'][-1]['uuid']
        self.assertIn(next_marker, data['next'])
        self.assertIn('fields', data['next'])

    def test_get_collection_pagination_no_uuid(self):
        fields = 'connector_id'
        limit = 2
        connectors = []
        for id_ in range(3):
            volume_connector = obj_utils.create_test_volume_connector(
                self.context,
                node_id=self.node.id,
                connector_id='test-connector_id-%s' % id_,
                uuid=uuidutils.generate_uuid())
            connectors.append(volume_connector)

        data = self.get_json(
            '/volume/connectors?fields=%s&limit=%s' % (fields, limit),
            headers=self.headers)

        self.assertEqual(limit, len(data['connectors']))
        self.assertIn('marker=%s' % connectors[limit - 1].uuid, data['next'])

    def test_collection_links_detail(self):
        connectors = []
        for id_ in range(5):
            connector = obj_utils.create_test_volume_connector(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-connector_id-%s' % id_)
            connectors.append(connector.uuid)
        data = self.get_json('/volume/connectors?detail=True&limit=3',
                             headers=self.headers)
        self.assertEqual(3, len(data['connectors']))

        next_marker = data['connectors'][-1]['uuid']
        self.assertIn(next_marker, data['next'])
        self.assertIn('volume/connectors', data['next'])
        self.assertIn('detail=True', data['next'])

    def test_sort_key(self):
        connectors = []
        for id_ in range(3):
            connector = obj_utils.create_test_volume_connector(
                self.context,
                node_id=self.node.id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-connector_id-%s' % id_)
            connectors.append(connector.uuid)
        data = self.get_json('/volume/connectors?sort_key=uuid',
                             headers=self.headers)
        uuids = [n['uuid'] for n in data['connectors']]
        self.assertEqual(sorted(connectors), uuids)

    def test_sort_key_invalid(self):
        invalid_keys_list = ['foo', 'extra']
        for invalid_key in invalid_keys_list:
            response = self.get_json('/volume/connectors?sort_key=%s'
                                     % invalid_key,
                                     expect_errors=True, headers=self.headers)
            self.assertEqual(http_client.BAD_REQUEST, response.status_int)
            self.assertEqual('application/json', response.content_type)
            self.assertIn(invalid_key, response.json['error_message'])

    @mock.patch.object(api_utils, 'get_rpc_node')
    def test_get_all_by_node_name_ok(self, mock_get_rpc_node):
        # GET /v1/volume/connectors specifying node_name - success
        mock_get_rpc_node.return_value = self.node
        for i in range(5):
            if i < 3:
                node_id = self.node.id
            else:
                node_id = 100000 + i
            obj_utils.create_test_volume_connector(
                self.context,
                node_id=node_id,
                uuid=uuidutils.generate_uuid(),
                connector_id='test-value-%s' % i)
        data = self.get_json("/volume/connectors?node=%s" % 'test-node',
                             headers=self.headers)
        self.assertEqual(3, len(data['connectors']))

    @mock.patch.object(api_utils, 'get_rpc_node')
    def test_detail_by_node_name_ok(self, mock_get_rpc_node):
        # GET /v1/volume/connectors?detail=True specifying node_name - success
        mock_get_rpc_node.return_value = self.node
        connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)
        data = self.get_json('/volume/connectors?detail=True&node=%s'
                             % 'test-node',
                             headers=self.headers)
        self.assertEqual(connector.uuid, data['connectors'][0]['uuid'])
        self.assertEqual(self.node.uuid, data['connectors'][0]['node_uuid'])


@mock.patch.object(rpcapi.ConductorAPI, 'update_volume_connector')
class TestPatch(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestPatch, self).setUp()
        self.node = obj_utils.create_test_node(self.context)
        self.connector = obj_utils.create_test_volume_connector(
            self.context, node_id=self.node.id)

        p = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for')
        self.mock_gtf = p.start()
        self.mock_gtf.return_value = 'test-topic'
        self.addCleanup(p.stop)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    def test_update_byid(self, mock_notify, mock_upd):
        extra = {'foo': 'bar'}
        mock_upd.return_value = self.connector
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/extra/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.END,
                                      node_uuid=self.node.uuid)])

    def test_update_invalid_api_version(self, mock_upd):
        headers = {api_base.Version.string: str(api_v1.min_version())}
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/extra/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   headers=headers,
                                   expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_update_not_found(self, mock_upd):
        uuid = uuidutils.generate_uuid()
        response = self.patch_json('/volume/connectors/%s' % uuid,
                                   [{'path': '/extra/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_replace_singular(self, mock_upd):
        connector_id = 'test-connector-id-999'
        mock_upd.return_value = self.connector
        mock_upd.return_value.connector_id = connector_id
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/connector_id',
                                     'value': connector_id,
                                     'op': 'replace'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(connector_id, response.json['connector_id'])
        self.assertTrue(mock_upd.called)
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(connector_id, kargs.connector_id)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    def test_replace_connector_id_already_exist(self, mock_notify, mock_upd):
        connector_id = 'test-connector-id-123'
        mock_upd.side_effect = \
            exception.VolumeConnectorTypeAndIdAlreadyExists(
                type=None, connector_id=connector_id)
        response = self.patch_json('/volume/connectors/%s' %
                                   self.connector.uuid,
                                   [{'path': '/connector_id',
                                     'value': connector_id,
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.CONFLICT, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertTrue(mock_upd.called)
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(connector_id, kargs.connector_id)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'update',
                                      obj_fields.NotificationLevel.ERROR,
                                      obj_fields.NotificationStatus.ERROR,
                                      node_uuid=self.node.uuid)])

    def test_replace_invalid_power_state(self, mock_upd):
        connector_id = 'test-connector-id-123'
        mock_upd.side_effect = \
            exception.InvalidStateRequested(
                action='volume connector update',
                node=self.node.uuid,
                state='power on')
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/connector_id',
                                     'value': connector_id,
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertTrue(mock_upd.called)
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(connector_id, kargs.connector_id)

    def test_replace_node_uuid(self, mock_upd):
        mock_upd.return_value = self.connector
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_uuid',
                                     'value': self.node.uuid,
                                     'op': 'replace'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)

    def test_replace_node_uuid_invalid_type(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_uuid',
                                     'value': 123,
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertIn(b'Expected a UUID for node_uuid, but received 123.',
                      response.body)
        self.assertFalse(mock_upd.called)

    def test_add_node_uuid(self, mock_upd):
        mock_upd.return_value = self.connector
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_uuid',
                                     'value': self.node.uuid,
                                     'op': 'add'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)

    def test_add_node_uuid_invalid_type(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_uuid',
                                     'value': 123,
                                     'op': 'add'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertIn(b'Expected a UUID for node_uuid, but received 123.',
                      response.body)
        self.assertFalse(mock_upd.called)

    def test_add_node_id(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_id',
                                     'value': '1',
                                     'op': 'add'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertFalse(mock_upd.called)

    def test_replace_node_id(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_id',
                                     'value': '1',
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertFalse(mock_upd.called)

    def test_remove_node_id(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_id',
                                     'op': 'remove'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertFalse(mock_upd.called)

    def test_replace_non_existent_node_uuid(self, mock_upd):
        node_uuid = '12506333-a81c-4d59-9987-889ed5f8687b'
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/node_uuid',
                                     'value': node_uuid,
                                     'op': 'replace'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertIn(node_uuid, response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_replace_multi(self, mock_upd):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        self.connector.extra = extra
        self.connector.save()

        # mutate extra so we replace all of them
        extra = dict((k, extra[k] + 'x') for k in extra)

        patch = []
        for k in extra:
            patch.append({'path': '/extra/%s' % k,
                          'value': extra[k],
                          'op': 'replace'})
        mock_upd.return_value = self.connector
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   patch, headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

    def test_remove_multi(self, mock_upd):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        self.connector.extra = extra
        self.connector.save()

        # Remove one item from the collection.
        extra.pop('foo1')
        mock_upd.return_value = self.connector
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/extra/foo1',
                                     'op': 'remove'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

        # Remove the collection.
        extra = {}
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/extra', 'op': 'remove'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual({}, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

        # Assert nothing else was changed.
        self.assertEqual(self.connector.uuid, response.json['uuid'])
        self.assertEqual(self.connector.type, response.json['type'])
        self.assertEqual(self.connector.connector_id,
                         response.json['connector_id'])

    def test_remove_non_existent_property_fail(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/extra/non-existent',
                                     'op': 'remove'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_remove_mandatory_field(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/value', 'op': 'remove'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_code)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_add_root(self, mock_upd):
        connector_id = 'test-connector-id-123'
        mock_upd.return_value = self.connector
        mock_upd.return_value.connector_id = connector_id
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/connector_id',
                                     'value': connector_id,
                                     'op': 'add'}],
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(connector_id, response.json['connector_id'])
        self.assertTrue(mock_upd.called)
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(connector_id, kargs.connector_id)

    def test_add_root_non_existent(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/foo',
                                     'value': 'bar',
                                     'op': 'add'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)

    def test_add_multi(self, mock_upd):
        extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"}
        patch = []
        for k in extra:
            patch.append({'path': '/extra/%s' % k,
                          'value': extra[k],
                          'op': 'add'})
        mock_upd.return_value = self.connector
        mock_upd.return_value.extra = extra
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   patch, headers=self.headers)
        self.assertEqual('application/json', response.content_type)
        self.assertEqual(http_client.OK, response.status_code)
        self.assertEqual(extra, response.json['extra'])
        kargs = mock_upd.call_args[0][1]
        self.assertEqual(extra, kargs.extra)

    def test_remove_uuid(self, mock_upd):
        response = self.patch_json('/volume/connectors/%s'
                                   % self.connector.uuid,
                                   [{'path': '/uuid', 'op': 'remove'}],
                                   expect_errors=True,
                                   headers=self.headers)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])
        self.assertFalse(mock_upd.called)


class TestPost(test_api_base.BaseApiTest):
    headers = {api_base.Version.string: str(api_v1.max_version())}

    def setUp(self):
        super(TestPost, self).setUp()
        self.node = obj_utils.create_test_node(self.context)

    @mock.patch.object(notification_utils, '_emit_api_notification')
    @mock.patch.object(timeutils, 'utcnow')
    def test_create_volume_connector(self, mock_utcnow, mock_notify):
        pdict = post_get_test_volume_connector()
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        mock_utcnow.return_value = test_time
        response = self.post_json('/volume/connectors', pdict,
                                  headers=self.headers)
        self.assertEqual(http_client.CREATED, response.status_int)
        result = self.get_json('/volume/connectors/%s' % pdict['uuid'],
                               headers=self.headers)
        self.assertEqual(pdict['uuid'], result['uuid'])
        self.assertFalse(result['updated_at'])
        return_created_at = timeutils.parse_isotime(
            result['created_at']).replace(tzinfo=None)
        self.assertEqual(test_time, return_created_at)
        # Check location header.
        self.assertIsNotNone(response.location)
        expected_location = '/v1/volume/connectors/%s' % pdict['uuid']
        self.assertEqual(urlparse.urlparse(response.location).path,
                         expected_location)
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.START,
                                      node_uuid=self.node.uuid),
                                      mock.call(mock.ANY, mock.ANY, 'create',
                                      obj_fields.NotificationLevel.INFO,
                                      obj_fields.NotificationStatus.END,
                                      node_uuid=self.node.uuid)])

    def test_create_volume_connector_invalid_api_version(self):
        pdict = post_get_test_volume_connector()
        response = self.post_json(
            '/volume/connectors', pdict,
            headers={api_base.Version.string: str(api_v1.min_version())},
            expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)

    def test_create_volume_connector_doesnt_contain_id(self):
        with mock.patch.object(
                self.dbapi, 'create_volume_connector',
                wraps=self.dbapi.create_volume_connector) as cp_mock:
            pdict = post_get_test_volume_connector(extra={'foo': 123})
            self.post_json('/volume/connectors', pdict, headers=self.headers)
            result = self.get_json('/volume/connectors/%s' % pdict['uuid'],
                                   headers=self.headers)
            self.assertEqual(pdict['extra'], result['extra'])
            cp_mock.assert_called_once_with(mock.ANY)
            # Check that 'id' is not in first arg of positional args.
self.assertNotIn('id', cp_mock.call_args[0][0]) @mock.patch.object(notification_utils.LOG, 'exception', autospec=True) @mock.patch.object(notification_utils.LOG, 'warning', autospec=True) def test_create_volume_connector_generate_uuid(self, mock_warning, mock_exception): pdict = post_get_test_volume_connector() del pdict['uuid'] response = self.post_json('/volume/connectors', pdict, headers=self.headers) result = self.get_json('/volume/connectors/%s' % response.json['uuid'], headers=self.headers) self.assertEqual(pdict['connector_id'], result['connector_id']) self.assertTrue(uuidutils.is_uuid_like(result['uuid'])) self.assertFalse(mock_warning.called) self.assertFalse(mock_exception.called) @mock.patch.object(notification_utils, '_emit_api_notification') @mock.patch.object(objects.VolumeConnector, 'create') def test_create_volume_connector_error(self, mock_create, mock_notify): mock_create.side_effect = Exception() cdict = post_get_test_volume_connector() self.post_json('/volume/connectors', cdict, headers=self.headers, expect_errors=True) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR, node_uuid=self.node.uuid)]) def test_create_volume_connector_valid_extra(self): pdict = post_get_test_volume_connector( extra={'str': 'foo', 'int': 123, 'float': 0.1, 'bool': True, 'list': [1, 2], 'none': None, 'dict': {'cat': 'meow'}}) self.post_json('/volume/connectors', pdict, headers=self.headers) result = self.get_json('/volume/connectors/%s' % pdict['uuid'], headers=self.headers) self.assertEqual(pdict['extra'], result['extra']) def test_create_volume_connector_no_mandatory_field_type(self): pdict = post_get_test_volume_connector() del pdict['type'] response = self.post_json('/volume/connectors', pdict, expect_errors=True, headers=self.headers) 
self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_volume_connector_no_mandatory_field_connector_id(self): pdict = post_get_test_volume_connector() del pdict['connector_id'] response = self.post_json('/volume/connectors', pdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_volume_connector_no_mandatory_field_node_uuid(self): pdict = post_get_test_volume_connector() del pdict['node_uuid'] response = self.post_json('/volume/connectors', pdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) def test_create_volume_connector_invalid_node_uuid_format(self): pdict = post_get_test_volume_connector(node_uuid=123) response = self.post_json('/volume/connectors', pdict, expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) self.assertIn(b'Expected a UUID but received 123.', response.body) def test_node_uuid_to_node_id_mapping(self): pdict = post_get_test_volume_connector(node_uuid=self.node['uuid']) self.post_json('/volume/connectors', pdict, headers=self.headers) # GET doesn't return the node_id; it's an internal value. connector = self.dbapi.get_volume_connector_by_uuid(pdict['uuid']) self.assertEqual(self.node['id'], connector.node_id) def test_create_volume_connector_node_uuid_not_found(self): pdict = post_get_test_volume_connector( node_uuid='1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e') response = self.post_json('/volume/connectors', pdict,
expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) def test_create_volume_connector_type_value_already_exist(self): connector_id = 'test-connector-id-456' pdict = post_get_test_volume_connector(connector_id=connector_id) self.post_json('/volume/connectors', pdict, headers=self.headers) pdict['uuid'] = uuidutils.generate_uuid() response = self.post_json('/volume/connectors', pdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.CONFLICT, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) self.assertIn(connector_id, response.json['error_message']) @mock.patch.object(rpcapi.ConductorAPI, 'destroy_volume_connector') class TestDelete(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} def setUp(self): super(TestDelete, self).setUp() self.node = obj_utils.create_test_node(self.context) self.connector = obj_utils.create_test_volume_connector( self.context, node_id=self.node.id) gtf = mock.patch.object(rpcapi.ConductorAPI, 'get_topic_for') self.mock_gtf = gtf.start() self.mock_gtf.return_value = 'test-topic' self.addCleanup(gtf.stop) @mock.patch.object(notification_utils, '_emit_api_notification') def test_delete_volume_connector_byid(self, mock_notify, mock_dvc): self.delete('/volume/connectors/%s' % self.connector.uuid, expect_errors=True, headers=self.headers) self.assertTrue(mock_dvc.called) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END, node_uuid=self.node.uuid)]) def test_delete_volume_connector_byid_invalid_api_version(self, mock_dvc): 
headers = {api_base.Version.string: str(api_v1.min_version())} response = self.delete('/volume/connectors/%s' % self.connector.uuid, expect_errors=True, headers=headers) self.assertEqual(http_client.NOT_FOUND, response.status_int) @mock.patch.object(notification_utils, '_emit_api_notification') def test_delete_volume_connector_node_locked(self, mock_notify, mock_dvc): self.node.reserve(self.context, 'fake', self.node.uuid) mock_dvc.side_effect = exception.NodeLocked(node='fake-node', host='fake-host') ret = self.delete('/volume/connectors/%s' % self.connector.uuid, expect_errors=True, headers=self.headers) self.assertEqual(http_client.CONFLICT, ret.status_code) self.assertTrue(ret.json['error_message']) self.assertTrue(mock_dvc.called) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START, node_uuid=self.node.uuid), mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR, node_uuid=self.node.uuid)]) def test_delete_volume_connector_invalid_power_state(self, mock_dvc): self.node.reserve(self.context, 'fake', self.node.uuid) mock_dvc.side_effect = exception.InvalidStateRequested( action='volume connector deletion', node=self.node.uuid, state='power on') ret = self.delete('/volume/connectors/%s' % self.connector.uuid, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, ret.status_code) self.assertTrue(ret.json['error_message']) self.assertTrue(mock_dvc.called) ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_deploy_template.py # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for the API /deploy_templates/ methods. """ import datetime from http import client as http_client from urllib import parse as urlparse import mock from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils from ironic.api.controllers import base as api_base from ironic.api.controllers import v1 as api_v1 from ironic.api.controllers.v1 import deploy_template as api_deploy_template from ironic.api.controllers.v1 import notification_utils from ironic.common import exception from ironic import objects from ironic.objects import fields as obj_fields from ironic.tests import base from ironic.tests.unit.api import base as test_api_base from ironic.tests.unit.api import utils as test_api_utils from ironic.tests.unit.objects import utils as obj_utils def _obj_to_api_step(obj_step): """Convert a deploy step in 'object' form to one in 'API' form.""" return { 'interface': obj_step['interface'], 'step': obj_step['step'], 'args': obj_step['args'], 'priority': obj_step['priority'], } class TestDeployTemplateObject(base.TestCase): def test_deploy_template_init(self): template_dict = test_api_utils.deploy_template_post_data() template = api_deploy_template.DeployTemplate(**template_dict) self.assertEqual(template_dict['uuid'], template.uuid) self.assertEqual(template_dict['name'], template.name) self.assertEqual(template_dict['extra'], template.extra) for t_dict_step, t_step in zip(template_dict['steps'], template.steps): self.assertEqual(t_dict_step['interface'], t_step.interface) self.assertEqual(t_dict_step['step'], t_step.step) 
self.assertEqual(t_dict_step['args'], t_step.args) self.assertEqual(t_dict_step['priority'], t_step.priority) def test_deploy_template_sample(self): sample = api_deploy_template.DeployTemplate.sample(expand=False) self.assertEqual('534e73fa-1014-4e58-969a-814cc0cb9d43', sample.uuid) self.assertEqual('CUSTOM_RAID1', sample.name) self.assertEqual({'foo': 'bar'}, sample.extra) class BaseDeployTemplatesAPITest(test_api_base.BaseApiTest): headers = {api_base.Version.string: str(api_v1.max_version())} invalid_version_headers = {api_base.Version.string: '1.54'} class TestListDeployTemplates(BaseDeployTemplatesAPITest): def test_empty(self): data = self.get_json('/deploy_templates', headers=self.headers) self.assertEqual([], data['deploy_templates']) def test_one(self): template = obj_utils.create_test_deploy_template(self.context) data = self.get_json('/deploy_templates', headers=self.headers) self.assertEqual(1, len(data['deploy_templates'])) self.assertEqual(template.uuid, data['deploy_templates'][0]['uuid']) self.assertEqual(template.name, data['deploy_templates'][0]['name']) self.assertNotIn('steps', data['deploy_templates'][0]) self.assertNotIn('extra', data['deploy_templates'][0]) def test_get_one(self): template = obj_utils.create_test_deploy_template(self.context) data = self.get_json('/deploy_templates/%s' % template.uuid, headers=self.headers) self.assertEqual(template.uuid, data['uuid']) self.assertEqual(template.name, data['name']) self.assertEqual(template.extra, data['extra']) for t_dict_step, t_step in zip(data['steps'], template.steps): self.assertEqual(t_dict_step['interface'], t_step['interface']) self.assertEqual(t_dict_step['step'], t_step['step']) self.assertEqual(t_dict_step['args'], t_step['args']) self.assertEqual(t_dict_step['priority'], t_step['priority']) def test_get_one_with_json(self): template = obj_utils.create_test_deploy_template(self.context) data = self.get_json('/deploy_templates/%s.json' % template.uuid, headers=self.headers) 
self.assertEqual(template.uuid, data['uuid']) def test_get_one_with_suffix(self): template = obj_utils.create_test_deploy_template(self.context, name='CUSTOM_DT1') data = self.get_json('/deploy_templates/%s' % template.uuid, headers=self.headers) self.assertEqual(template.uuid, data['uuid']) def test_get_one_custom_fields(self): template = obj_utils.create_test_deploy_template(self.context) fields = 'name,steps' data = self.get_json( '/deploy_templates/%s?fields=%s' % (template.uuid, fields), headers=self.headers) # We always append "links" self.assertCountEqual(['name', 'steps', 'links'], data) def test_get_collection_custom_fields(self): fields = 'uuid,steps' for i in range(3): obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT%s' % i) data = self.get_json( '/deploy_templates?fields=%s' % fields, headers=self.headers) self.assertEqual(3, len(data['deploy_templates'])) for template in data['deploy_templates']: # We always append "links" self.assertCountEqual(['uuid', 'steps', 'links'], template) def test_get_custom_fields_invalid_fields(self): template = obj_utils.create_test_deploy_template(self.context) fields = 'uuid,spongebob' response = self.get_json( '/deploy_templates/%s?fields=%s' % (template.uuid, fields), headers=self.headers, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn('spongebob', response.json['error_message']) def test_get_all_invalid_api_version(self): obj_utils.create_test_deploy_template(self.context) response = self.get_json('/deploy_templates', headers=self.invalid_version_headers, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_get_one_invalid_api_version(self): template = obj_utils.create_test_deploy_template(self.context) response = self.get_json( '/deploy_templates/%s' % (template.uuid), headers=self.invalid_version_headers,
expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_detail_query(self): template = obj_utils.create_test_deploy_template(self.context) data = self.get_json('/deploy_templates?detail=True', headers=self.headers) self.assertEqual(template.uuid, data['deploy_templates'][0]['uuid']) self.assertIn('name', data['deploy_templates'][0]) self.assertIn('steps', data['deploy_templates'][0]) self.assertIn('extra', data['deploy_templates'][0]) def test_detail_query_false(self): obj_utils.create_test_deploy_template(self.context) data1 = self.get_json('/deploy_templates', headers=self.headers) data2 = self.get_json( '/deploy_templates?detail=False', headers=self.headers) self.assertEqual(data1['deploy_templates'], data2['deploy_templates']) def test_detail_using_query_false_and_fields(self): obj_utils.create_test_deploy_template(self.context) data = self.get_json( '/deploy_templates?detail=False&fields=steps', headers=self.headers) self.assertIn('steps', data['deploy_templates'][0]) self.assertNotIn('uuid', data['deploy_templates'][0]) self.assertNotIn('extra', data['deploy_templates'][0]) def test_detail_using_query_and_fields(self): obj_utils.create_test_deploy_template(self.context) response = self.get_json( '/deploy_templates?detail=True&fields=name', headers=self.headers, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) def test_many(self): templates = [] for id_ in range(5): template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT%s' % id_) templates.append(template.uuid) data = self.get_json('/deploy_templates', headers=self.headers) self.assertEqual(len(templates), len(data['deploy_templates'])) uuids = [n['uuid'] for n in data['deploy_templates']] self.assertCountEqual(templates, uuids) def test_links(self): uuid = uuidutils.generate_uuid() obj_utils.create_test_deploy_template(self.context, uuid=uuid) data = self.get_json('/deploy_templates/%s' 
% uuid, headers=self.headers) self.assertIn('links', data) self.assertEqual(2, len(data['links'])) self.assertIn(uuid, data['links'][0]['href']) for link in data['links']: bookmark = link['rel'] == 'bookmark' self.assertTrue(self.validate_link(link['href'], bookmark=bookmark, headers=self.headers)) def test_collection_links(self): templates = [] for id_ in range(5): template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT%s' % id_) templates.append(template.uuid) data = self.get_json('/deploy_templates/?limit=3', headers=self.headers) self.assertEqual(3, len(data['deploy_templates'])) next_marker = data['deploy_templates'][-1]['uuid'] self.assertIn(next_marker, data['next']) def test_collection_links_default_limit(self): cfg.CONF.set_override('max_limit', 3, 'api') templates = [] for id_ in range(5): template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT%s' % id_) templates.append(template.uuid) data = self.get_json('/deploy_templates', headers=self.headers) self.assertEqual(3, len(data['deploy_templates'])) next_marker = data['deploy_templates'][-1]['uuid'] self.assertIn(next_marker, data['next']) def test_collection_links_custom_fields(self): cfg.CONF.set_override('max_limit', 3, 'api') templates = [] fields = 'uuid,steps' for i in range(5): template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT%s' % i) templates.append(template.uuid) data = self.get_json('/deploy_templates?fields=%s' % fields, headers=self.headers) self.assertEqual(3, len(data['deploy_templates'])) next_marker = data['deploy_templates'][-1]['uuid'] self.assertIn(next_marker, data['next']) self.assertIn('fields', data['next']) def test_get_collection_pagination_no_uuid(self): fields = 'name' limit = 2 templates = [] for id_ in range(3): template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(),
name='CUSTOM_DT%s' % id_) templates.append(template) data = self.get_json( '/deploy_templates?fields=%s&limit=%s' % (fields, limit), headers=self.headers) self.assertEqual(limit, len(data['deploy_templates'])) self.assertIn('marker=%s' % templates[limit - 1].uuid, data['next']) def test_sort_key(self): templates = [] for id_ in range(3): template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT%s' % id_) templates.append(template.uuid) data = self.get_json('/deploy_templates?sort_key=uuid', headers=self.headers) uuids = [n['uuid'] for n in data['deploy_templates']] self.assertEqual(sorted(templates), uuids) def test_sort_key_invalid(self): invalid_keys_list = ['extra', 'foo', 'steps'] for invalid_key in invalid_keys_list: path = '/deploy_templates?sort_key=%s' % invalid_key response = self.get_json(path, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(invalid_key, response.json['error_message']) def _test_sort_key_allowed(self, detail=False): template_uuids = [] for id_ in range(3, 0, -1): template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT%s' % id_) template_uuids.append(template.uuid) template_uuids.reverse() url = '/deploy_templates?sort_key=name&detail=%s' % str(detail) data = self.get_json(url, headers=self.headers) data_uuids = [p['uuid'] for p in data['deploy_templates']] self.assertEqual(template_uuids, data_uuids) def test_sort_key_allowed(self): self._test_sort_key_allowed() def test_detail_sort_key_allowed(self): self._test_sort_key_allowed(detail=True) def test_sensitive_data_masked(self): template = obj_utils.get_test_deploy_template(self.context) template.steps[0]['args']['password'] = 'correcthorsebatterystaple' template.create() data = self.get_json('/deploy_templates/%s' % template.uuid, 
headers=self.headers) self.assertEqual("******", data['steps'][0]['args']['password']) @mock.patch.object(objects.DeployTemplate, 'save', autospec=True) class TestPatch(BaseDeployTemplatesAPITest): def setUp(self): super(TestPatch, self).setUp() self.template = obj_utils.create_test_deploy_template( self.context, name='CUSTOM_DT1') def _test_update_ok(self, mock_save, patch): response = self.patch_json('/deploy_templates/%s' % self.template.uuid, patch, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) mock_save.assert_called_once_with(mock.ANY) return response def _test_update_bad_request(self, mock_save, patch, error_msg): response = self.patch_json('/deploy_templates/%s' % self.template.uuid, patch, expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) self.assertRegex(response.json['error_message'], error_msg) self.assertFalse(mock_save.called) return response @mock.patch.object(notification_utils, '_emit_api_notification', autospec=True) def test_update_by_id(self, mock_notify, mock_save): name = 'CUSTOM_DT2' patch = [{'path': '/name', 'value': name, 'op': 'add'}] response = self._test_update_ok(mock_save, patch) self.assertEqual(name, response.json['name']) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END)]) def test_update_by_name(self, mock_save): steps = [{ 'interface': 'bios', 'step': 'apply_configuration', 'args': {'foo': 'bar'}, 'priority': 42 }] patch = [{'path': '/steps', 'value': steps, 'op': 'replace'}] response = self.patch_json('/deploy_templates/%s' % self.template.name, patch, headers=self.headers) 
self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) mock_save.assert_called_once_with(mock.ANY) self.assertEqual(steps, response.json['steps']) def test_update_by_name_with_json(self, mock_save): interface = 'bios' path = '/deploy_templates/%s.json' % self.template.name response = self.patch_json(path, [{'path': '/steps/0/interface', 'value': interface, 'op': 'replace'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(interface, response.json['steps'][0]['interface']) def test_update_name_standard_trait(self, mock_save): name = 'HW_CPU_X86_VMX' patch = [{'path': '/name', 'value': name, 'op': 'replace'}] response = self._test_update_ok(mock_save, patch) self.assertEqual(name, response.json['name']) def test_update_name_custom_trait(self, mock_save): name = 'CUSTOM_DT2' patch = [{'path': '/name', 'value': name, 'op': 'replace'}] response = self._test_update_ok(mock_save, patch) self.assertEqual(name, response.json['name']) def test_update_invalid_name(self, mock_save): self._test_update_bad_request( mock_save, [{'path': '/name', 'value': 'aa:bb_cc', 'op': 'replace'}], 'Deploy template name must be a valid trait') def test_update_by_id_invalid_api_version(self, mock_save): name = 'CUSTOM_DT2' headers = self.invalid_version_headers response = self.patch_json('/deploy_templates/%s' % self.template.uuid, [{'path': '/name', 'value': name, 'op': 'add'}], headers=headers, expect_errors=True) self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int) self.assertFalse(mock_save.called) def test_update_by_name_old_api_version(self, mock_save): name = 'CUSTOM_DT2' response = self.patch_json('/deploy_templates/%s' % self.template.name, [{'path': '/name', 'value': name, 'op': 'add'}], expect_errors=True) self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int) 
self.assertFalse(mock_save.called) def test_update_not_found(self, mock_save): name = 'CUSTOM_DT2' uuid = uuidutils.generate_uuid() response = self.patch_json('/deploy_templates/%s' % uuid, [{'path': '/name', 'value': name, 'op': 'add'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertTrue(response.json['error_message']) self.assertFalse(mock_save.called) @mock.patch.object(notification_utils, '_emit_api_notification', autospec=True) def test_replace_name_already_exist(self, mock_notify, mock_save): name = 'CUSTOM_DT2' obj_utils.create_test_deploy_template(self.context, uuid=uuidutils.generate_uuid(), name=name) mock_save.side_effect = exception.DeployTemplateAlreadyExists( uuid=self.template.uuid) response = self.patch_json('/deploy_templates/%s' % self.template.uuid, [{'path': '/name', 'value': name, 'op': 'replace'}], expect_errors=True, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.CONFLICT, response.status_code) self.assertTrue(response.json['error_message']) mock_save.assert_called_once_with(mock.ANY) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR)]) def test_replace_invalid_name_too_long(self, mock_save): name = 'CUSTOM_' + 'X' * 249 patch = [{'path': '/name', 'op': 'replace', 'value': name}] self._test_update_bad_request( mock_save, patch, 'Deploy template name must be a valid trait') def test_replace_invalid_name_not_a_trait(self, mock_save): name = 'not-a-trait' patch = [{'path': '/name', 'op': 'replace', 'value': name}] self._test_update_bad_request( mock_save, patch, 'Deploy template name must be a valid trait') def test_replace_invalid_name_none(self, 
mock_save): patch = [{'path': '/name', 'op': 'replace', 'value': None}] self._test_update_bad_request( mock_save, patch, "Deploy template name cannot be None") def test_replace_duplicate_step(self, mock_save): # interface & step combination must be unique. steps = [ { 'interface': 'raid', 'step': 'create_configuration', 'args': {'foo': '%d' % i}, 'priority': i, } for i in range(2) ] patch = [{'path': '/steps', 'op': 'replace', 'value': steps}] self._test_update_bad_request( mock_save, patch, "Duplicate deploy steps") def test_replace_invalid_step_interface_fail(self, mock_save): step = { 'interface': 'foo', 'step': 'apply_configuration', 'args': {'foo': 'bar'}, 'priority': 42 } patch = [{'path': '/steps/0', 'op': 'replace', 'value': step}] self._test_update_bad_request( mock_save, patch, "Invalid input for field/attribute interface.") def test_replace_non_existent_step_fail(self, mock_save): step = { 'interface': 'bios', 'step': 'apply_configuration', 'args': {'foo': 'bar'}, 'priority': 42 } patch = [{'path': '/steps/1', 'op': 'replace', 'value': step}] self._test_update_bad_request( mock_save, patch, "list assignment index out of range|" "can't replace outside of list") def test_replace_empty_step_list_fail(self, mock_save): patch = [{'path': '/steps', 'op': 'replace', 'value': []}] self._test_update_bad_request( mock_save, patch, 'No deploy steps specified') def _test_remove_not_allowed(self, mock_save, field, error_msg): patch = [{'path': '/%s' % field, 'op': 'remove'}] self._test_update_bad_request(mock_save, patch, error_msg) def test_remove_uuid(self, mock_save): self._test_remove_not_allowed( mock_save, 'uuid', "'/uuid' is an internal attribute and can not be updated") def test_remove_name(self, mock_save): self._test_remove_not_allowed( mock_save, 'name', "'/name' is a mandatory attribute and can not be removed") def test_remove_steps(self, mock_save): self._test_remove_not_allowed( mock_save, 'steps', "'/steps' is a mandatory attribute and can not be 
removed") def test_remove_foo(self, mock_save): self._test_remove_not_allowed( mock_save, 'foo', "can't remove non-existent object 'foo'") def test_replace_step_invalid_interface(self, mock_save): patch = [{'path': '/steps/0/interface', 'op': 'replace', 'value': 'foo'}] self._test_update_bad_request( mock_save, patch, "Invalid input for field/attribute interface.") def test_replace_multi(self, mock_save): steps = [ { 'interface': 'raid', 'step': 'create_configuration%d' % i, 'args': {}, 'priority': 10, } for i in range(3) ] template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT2', steps=steps) # mutate steps so we replace all of them for step in steps: step['priority'] = step['priority'] + 1 patch = [] for i, step in enumerate(steps): patch.append({'path': '/steps/%s' % i, 'value': step, 'op': 'replace'}) response = self.patch_json('/deploy_templates/%s' % template.uuid, patch, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(steps, response.json['steps']) mock_save.assert_called_once_with(mock.ANY) def test_remove_multi(self, mock_save): steps = [ { 'interface': 'raid', 'step': 'create_configuration%d' % i, 'args': {}, 'priority': 10, } for i in range(3) ] template = obj_utils.create_test_deploy_template( self.context, uuid=uuidutils.generate_uuid(), name='CUSTOM_DT2', steps=steps) # Removing one step from the collection steps.pop(1) response = self.patch_json('/deploy_templates/%s' % template.uuid, [{'path': '/steps/1', 'op': 'remove'}], headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(steps, response.json['steps']) mock_save.assert_called_once_with(mock.ANY) def test_remove_non_existent_property_fail(self, mock_save): patch = [{'path': '/non-existent', 'op': 'remove'}] self._test_update_bad_request( 
mock_save, patch, "can't remove non-existent object 'non-existent'") def test_remove_non_existent_step_fail(self, mock_save): patch = [{'path': '/steps/1', 'op': 'remove'}] self._test_update_bad_request( mock_save, patch, "can't remove non-existent object '1'") def test_remove_only_step_fail(self, mock_save): patch = [{'path': '/steps/0', 'op': 'remove'}] self._test_update_bad_request( mock_save, patch, "No deploy steps specified") def test_remove_non_existent_step_property_fail(self, mock_save): patch = [{'path': '/steps/0/non-existent', 'op': 'remove'}] self._test_update_bad_request( mock_save, patch, "can't remove non-existent object 'non-existent'") def test_add_root_non_existent(self, mock_save): patch = [{'path': '/foo', 'value': 'bar', 'op': 'add'}] self._test_update_bad_request( mock_save, patch, "Adding a new attribute \\(/foo\\)") def test_add_too_high_index_step_fail(self, mock_save): step = { 'interface': 'bios', 'step': 'apply_configuration', 'args': {'foo': 'bar'}, 'priority': 42 } patch = [{'path': '/steps/2', 'op': 'add', 'value': step}] self._test_update_bad_request( mock_save, patch, "can't insert outside of list") def test_add_multi(self, mock_save): steps = [ { 'interface': 'raid', 'step': 'create_configuration%d' % i, 'args': {}, 'priority': 10, } for i in range(3) ] patch = [] for i, step in enumerate(steps): patch.append({'path': '/steps/%d' % i, 'value': step, 'op': 'add'}) response = self.patch_json('/deploy_templates/%s' % self.template.uuid, patch, headers=self.headers) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) self.assertEqual(steps, response.json['steps'][:-1]) self.assertEqual(_obj_to_api_step(self.template.steps[0]), response.json['steps'][-1]) mock_save.assert_called_once_with(mock.ANY) class TestPost(BaseDeployTemplatesAPITest): @mock.patch.object(notification_utils, '_emit_api_notification', autospec=True) @mock.patch.object(timeutils, 'utcnow', autospec=True) 
def test_create(self, mock_utcnow, mock_notify): tdict = test_api_utils.post_get_test_deploy_template() test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time response = self.post_json('/deploy_templates', tdict, headers=self.headers) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/deploy_templates/%s' % tdict['uuid'], headers=self.headers) self.assertEqual(tdict['uuid'], result['uuid']) self.assertFalse(result['updated_at']) return_created_at = timeutils.parse_isotime( result['created_at']).replace(tzinfo=None) self.assertEqual(test_time, return_created_at) # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/deploy_templates/%s' % tdict['uuid'] self.assertEqual(expected_location, urlparse.urlparse(response.location).path) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END)]) def test_create_invalid_api_version(self): tdict = test_api_utils.post_get_test_deploy_template() response = self.post_json( '/deploy_templates', tdict, headers=self.invalid_version_headers, expect_errors=True) self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int) def test_create_doesnt_contain_id(self): with mock.patch.object( self.dbapi, 'create_deploy_template', wraps=self.dbapi.create_deploy_template) as mock_create: tdict = test_api_utils.post_get_test_deploy_template() self.post_json('/deploy_templates', tdict, headers=self.headers) self.get_json('/deploy_templates/%s' % tdict['uuid'], headers=self.headers) mock_create.assert_called_once_with(mock.ANY) # Check that 'id' is not in first arg of positional args self.assertNotIn('id', mock_create.call_args[0][0]) @mock.patch.object(notification_utils.LOG, 'exception', autospec=True) @mock.patch.object(notification_utils.LOG, 
'warning', autospec=True) def test_create_generate_uuid(self, mock_warn, mock_except): tdict = test_api_utils.post_get_test_deploy_template() del tdict['uuid'] response = self.post_json('/deploy_templates', tdict, headers=self.headers) result = self.get_json('/deploy_templates/%s' % response.json['uuid'], headers=self.headers) self.assertTrue(uuidutils.is_uuid_like(result['uuid'])) self.assertFalse(mock_warn.called) self.assertFalse(mock_except.called) @mock.patch.object(notification_utils, '_emit_api_notification', autospec=True) @mock.patch.object(objects.DeployTemplate, 'create', autospec=True) def test_create_error(self, mock_create, mock_notify): mock_create.side_effect = Exception() tdict = test_api_utils.post_get_test_deploy_template() self.post_json('/deploy_templates', tdict, headers=self.headers, expect_errors=True) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR)]) def _test_create_ok(self, tdict): response = self.post_json('/deploy_templates', tdict, headers=self.headers) self.assertEqual(http_client.CREATED, response.status_int) def _test_create_bad_request(self, tdict, error_msg): response = self.post_json('/deploy_templates', tdict, expect_errors=True, headers=self.headers) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) self.assertIn(error_msg, response.json['error_message']) def test_create_long_name(self): name = 'CUSTOM_' + 'X' * 248 tdict = test_api_utils.post_get_test_deploy_template(name=name) self._test_create_ok(tdict) def test_create_standard_trait_name(self): name = 'HW_CPU_X86_VMX' tdict = test_api_utils.post_get_test_deploy_template(name=name) self._test_create_ok(tdict) def 
test_create_name_invalid_too_long(self): name = 'CUSTOM_' + 'X' * 249 tdict = test_api_utils.post_get_test_deploy_template(name=name) self._test_create_bad_request( tdict, 'Deploy template name must be a valid trait') def test_create_name_invalid_not_a_trait(self): name = 'not-a-trait' tdict = test_api_utils.post_get_test_deploy_template(name=name) self._test_create_bad_request( tdict, 'Deploy template name must be a valid trait') def test_create_steps_invalid_duplicate(self): steps = [ { 'interface': 'raid', 'step': 'create_configuration', 'args': {'foo': '%d' % i}, 'priority': i, } for i in range(2) ] tdict = test_api_utils.post_get_test_deploy_template(steps=steps) self._test_create_bad_request(tdict, "Duplicate deploy steps") def _test_create_no_mandatory_field(self, field): tdict = test_api_utils.post_get_test_deploy_template() del tdict[field] self._test_create_bad_request(tdict, "Mandatory field missing") def test_create_no_mandatory_field_name(self): self._test_create_no_mandatory_field('name') def test_create_no_mandatory_field_steps(self): self._test_create_no_mandatory_field('steps') def _test_create_no_mandatory_step_field(self, field): tdict = test_api_utils.post_get_test_deploy_template() del tdict['steps'][0][field] self._test_create_bad_request(tdict, "Mandatory field missing") def test_create_no_mandatory_step_field_interface(self): self._test_create_no_mandatory_step_field('interface') def test_create_no_mandatory_step_field_step(self): self._test_create_no_mandatory_step_field('step') def test_create_no_mandatory_step_field_args(self): self._test_create_no_mandatory_step_field('args') def test_create_no_mandatory_step_field_priority(self): self._test_create_no_mandatory_step_field('priority') def _test_create_invalid_field(self, field, value, error_msg): tdict = test_api_utils.post_get_test_deploy_template() tdict[field] = value self._test_create_bad_request(tdict, error_msg) def test_create_invalid_field_name(self): 
self._test_create_invalid_field( 'name', 42, 'Invalid input for field/attribute name') def test_create_invalid_field_name_none(self): self._test_create_invalid_field( 'name', None, "Deploy template name cannot be None") def test_create_invalid_field_steps(self): self._test_create_invalid_field( 'steps', {}, "Invalid input for field/attribute template") def test_create_invalid_field_empty_steps(self): self._test_create_invalid_field( 'steps', [], "No deploy steps specified") def test_create_invalid_field_extra(self): self._test_create_invalid_field( 'extra', 42, "Invalid input for field/attribute template") def test_create_invalid_field_foo(self): self._test_create_invalid_field( 'foo', 'bar', "Unknown attribute for argument template: foo") def _test_create_invalid_step_field(self, field, value, error_msg=None): tdict = test_api_utils.post_get_test_deploy_template() tdict['steps'][0][field] = value if error_msg is None: error_msg = "Invalid input for field/attribute" self._test_create_bad_request(tdict, error_msg) def test_create_invalid_step_field_interface1(self): self._test_create_invalid_step_field('interface', [3]) def test_create_invalid_step_field_interface2(self): self._test_create_invalid_step_field('interface', 'foo') def test_create_invalid_step_field_step(self): self._test_create_invalid_step_field('step', 42) def test_create_invalid_step_field_args1(self): self._test_create_invalid_step_field('args', 'not a dict') def test_create_invalid_step_field_args2(self): self._test_create_invalid_step_field('args', []) def test_create_invalid_step_field_priority(self): self._test_create_invalid_step_field('priority', 'not a number') def test_create_invalid_step_field_negative_priority(self): self._test_create_invalid_step_field('priority', -1) def test_create_invalid_step_field_foo(self): self._test_create_invalid_step_field( 'foo', 'bar', "Unknown attribute for argument template.steps: foo") def test_create_step_string_priority(self): tdict = 
test_api_utils.post_get_test_deploy_template() tdict['steps'][0]['priority'] = '42' self._test_create_ok(tdict) def test_create_complex_step_args(self): tdict = test_api_utils.post_get_test_deploy_template() tdict['steps'][0]['args'] = {'foo': [{'bar': 'baz'}]} self._test_create_ok(tdict) @mock.patch.object(objects.DeployTemplate, 'destroy', autospec=True) class TestDelete(BaseDeployTemplatesAPITest): def setUp(self): super(TestDelete, self).setUp() self.template = obj_utils.create_test_deploy_template(self.context) @mock.patch.object(notification_utils, '_emit_api_notification', autospec=True) def test_delete_by_uuid(self, mock_notify, mock_destroy): self.delete('/deploy_templates/%s' % self.template.uuid, headers=self.headers) mock_destroy.assert_called_once_with(mock.ANY) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END)]) def test_delete_by_uuid_with_json(self, mock_destroy): self.delete('/deploy_templates/%s.json' % self.template.uuid, headers=self.headers) mock_destroy.assert_called_once_with(mock.ANY) def test_delete_by_name(self, mock_destroy): self.delete('/deploy_templates/%s' % self.template.name, headers=self.headers) mock_destroy.assert_called_once_with(mock.ANY) def test_delete_by_name_with_json(self, mock_destroy): self.delete('/deploy_templates/%s.json' % self.template.name, headers=self.headers) mock_destroy.assert_called_once_with(mock.ANY) def test_delete_invalid_api_version(self, mock_dpt): response = self.delete('/deploy_templates/%s' % self.template.uuid, expect_errors=True, headers=self.invalid_version_headers) self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int) def test_delete_old_api_version(self, mock_dpt): # Names like CUSTOM_1 were not valid in API 1.1, but the check should # go after the microversion check. 
        response = self.delete('/deploy_templates/%s' % self.template.name,
                               expect_errors=True)
        self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status_int)

    def test_delete_by_name_non_existent(self, mock_dpt):
        res = self.delete('/deploy_templates/%s' % 'blah', expect_errors=True,
                          headers=self.headers)
        self.assertEqual(http_client.NOT_FOUND, res.status_code)

ironic-15.0.0/ironic/tests/unit/api/controllers/v1/__init__.py

ironic-15.0.0/ironic/tests/unit/api/controllers/v1/test_chassis.py

# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for the API /chassis/ methods.
""" import datetime from http import client as http_client from urllib import parse as urlparse import mock from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils from ironic.api.controllers import base as api_base from ironic.api.controllers import v1 as api_v1 from ironic.api.controllers.v1 import chassis as api_chassis from ironic.api.controllers.v1 import notification_utils from ironic.api import types as atypes from ironic import objects from ironic.objects import fields as obj_fields from ironic.tests import base from ironic.tests.unit.api import base as test_api_base from ironic.tests.unit.api import utils as apiutils from ironic.tests.unit.objects import utils as obj_utils class TestChassisObject(base.TestCase): def test_chassis_init(self): chassis_dict = apiutils.chassis_post_data() del chassis_dict['description'] chassis = api_chassis.Chassis(**chassis_dict) self.assertEqual(atypes.Unset, chassis.description) def test_chassis_sample(self): expected_description = 'Sample chassis' sample = api_chassis.Chassis.sample(expand=False) self.assertEqual(expected_description, sample.as_dict()['description']) class TestListChassis(test_api_base.BaseApiTest): def test_empty(self): data = self.get_json('/chassis') self.assertEqual([], data['chassis']) def test_one(self): chassis = obj_utils.create_test_chassis(self.context) data = self.get_json('/chassis') self.assertEqual(chassis.uuid, data['chassis'][0]["uuid"]) self.assertNotIn('extra', data['chassis'][0]) self.assertNotIn('nodes', data['chassis'][0]) def test_get_one(self): chassis = obj_utils.create_test_chassis(self.context) data = self.get_json('/chassis/%s' % chassis['uuid']) self.assertEqual(chassis.uuid, data['uuid']) self.assertIn('extra', data) self.assertIn('nodes', data) def test_get_one_custom_fields(self): chassis = obj_utils.create_test_chassis(self.context) fields = 'extra,description' data = self.get_json( '/chassis/%s?fields=%s' % (chassis.uuid, fields), 
headers={api_base.Version.string: str(api_v1.max_version())}) # We always append "links" self.assertItemsEqual(['description', 'extra', 'links'], data) def test_get_collection_custom_fields(self): fields = 'uuid,extra' for i in range(3): obj_utils.create_test_chassis( self.context, uuid=uuidutils.generate_uuid()) data = self.get_json( '/chassis?fields=%s' % fields, headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(3, len(data['chassis'])) for ch in data['chassis']: # We always append "links" self.assertItemsEqual(['uuid', 'extra', 'links'], ch) def test_get_custom_fields_invalid_fields(self): chassis = obj_utils.create_test_chassis(self.context) fields = 'uuid,spongebob' response = self.get_json( '/chassis/%s?fields=%s' % (chassis.uuid, fields), headers={api_base.Version.string: str(api_v1.max_version())}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn('spongebob', response.json['error_message']) def test_get_custom_fields_invalid_api_version(self): chassis = obj_utils.create_test_chassis(self.context) fields = 'uuid,extra' response = self.get_json( '/chassis/%s?fields=%s' % (chassis.uuid, fields), headers={api_base.Version.string: str(api_v1.min_version())}, expect_errors=True) self.assertEqual(http_client.NOT_ACCEPTABLE, response.status_int) def test_detail(self): chassis = obj_utils.create_test_chassis(self.context) data = self.get_json('/chassis/detail') self.assertEqual(chassis.uuid, data['chassis'][0]["uuid"]) self.assertIn('extra', data['chassis'][0]) self.assertIn('nodes', data['chassis'][0]) def test_detail_query(self): chassis = obj_utils.create_test_chassis(self.context) data = self.get_json( '/chassis?detail=True', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(chassis.uuid, data['chassis'][0]["uuid"]) self.assertIn('extra', data['chassis'][0]) self.assertIn('nodes', 
data['chassis'][0]) def test_detail_query_false(self): obj_utils.create_test_chassis(self.context) data1 = self.get_json( '/chassis', headers={api_base.Version.string: str(api_v1.max_version())}) data2 = self.get_json( '/chassis?detail=False', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(data1['chassis'], data2['chassis']) def test_detail_using_query_and_fields(self): obj_utils.create_test_chassis(self.context) response = self.get_json( '/chassis?detail=True&fields=description', headers={api_base.Version.string: str(api_v1.max_version())}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) def test_detail_using_query_false_and_fields(self): obj_utils.create_test_chassis(self.context) data = self.get_json( '/chassis?detail=False&fields=description', headers={api_base.Version.string: str(api_v1.max_version())}) self.assertIn('description', data['chassis'][0]) self.assertNotIn('uuid', data['chassis'][0]) def test_detail_using_query_old_version(self): obj_utils.create_test_chassis(self.context) response = self.get_json( '/chassis?detail=True', headers={api_base.Version.string: str(api_v1.min_version())}, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) def test_detail_against_single(self): chassis = obj_utils.create_test_chassis(self.context) response = self.get_json('/chassis/%s/detail' % chassis['uuid'], expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) def test_many(self): ch_list = [] for id_ in range(5): chassis = obj_utils.create_test_chassis( self.context, uuid=uuidutils.generate_uuid()) ch_list.append(chassis.uuid) data = self.get_json('/chassis') self.assertEqual(len(ch_list), len(data['chassis'])) uuids = [n['uuid'] for n in data['chassis']] self.assertCountEqual(ch_list, uuids) def _test_links(self, public_url=None): cfg.CONF.set_override('public_endpoint', public_url, 'api') uuid = uuidutils.generate_uuid() 
obj_utils.create_test_chassis(self.context, uuid=uuid) data = self.get_json('/chassis/%s' % uuid) self.assertIn('links', data) self.assertEqual(2, len(data['links'])) self.assertIn(uuid, data['links'][0]['href']) for l in data['links']: bookmark = l['rel'] == 'bookmark' self.assertTrue(self.validate_link(l['href'], bookmark=bookmark)) if public_url is not None: expected = [{'href': '%s/v1/chassis/%s' % (public_url, uuid), 'rel': 'self'}, {'href': '%s/chassis/%s' % (public_url, uuid), 'rel': 'bookmark'}] for i in expected: self.assertIn(i, data['links']) def test_links(self): self._test_links() def test_links_public_url(self): self._test_links(public_url='http://foo') def test_collection_links(self): for id in range(5): obj_utils.create_test_chassis(self.context, uuid=uuidutils.generate_uuid()) data = self.get_json('/chassis/?limit=3') self.assertEqual(3, len(data['chassis'])) next_marker = data['chassis'][-1]['uuid'] self.assertIn(next_marker, data['next']) def test_collection_links_default_limit(self): cfg.CONF.set_override('max_limit', 3, 'api') for id_ in range(5): obj_utils.create_test_chassis(self.context, uuid=uuidutils.generate_uuid()) data = self.get_json('/chassis') self.assertEqual(3, len(data['chassis'])) next_marker = data['chassis'][-1]['uuid'] self.assertIn(next_marker, data['next']) def test_collection_links_custom_fields(self): fields = 'extra,uuid' cfg.CONF.set_override('max_limit', 3, 'api') for i in range(5): obj_utils.create_test_chassis( self.context, uuid=uuidutils.generate_uuid()) data = self.get_json( '/chassis?fields=%s' % fields, headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(3, len(data['chassis'])) next_marker = data['chassis'][-1]['uuid'] self.assertIn(next_marker, data['next']) self.assertIn('fields', data['next']) def test_get_collection_pagination_no_uuid(self): fields = 'extra' limit = 2 chassis_list = [] for id_ in range(3): chassis = obj_utils.create_test_chassis( self.context, 
uuid=uuidutils.generate_uuid()) chassis_list.append(chassis) data = self.get_json( '/chassis?fields=%s&limit=%s' % (fields, limit), headers={api_base.Version.string: str(api_v1.max_version())}) self.assertEqual(limit, len(data['chassis'])) self.assertIn('marker=%s' % chassis_list[limit - 1].uuid, data['next']) def test_sort_key(self): ch_list = [] for id_ in range(3): chassis = obj_utils.create_test_chassis( self.context, uuid=uuidutils.generate_uuid()) ch_list.append(chassis.uuid) data = self.get_json('/chassis?sort_key=uuid') uuids = [n['uuid'] for n in data['chassis']] self.assertEqual(sorted(ch_list), uuids) def test_sort_key_invalid(self): invalid_keys_list = ['foo', 'extra'] for invalid_key in invalid_keys_list: response = self.get_json('/chassis?sort_key=%s' % invalid_key, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(invalid_key, response.json['error_message']) def test_nodes_subresource_link(self): chassis = obj_utils.create_test_chassis(self.context) data = self.get_json('/chassis/%s' % chassis.uuid) self.assertIn('nodes', data) def test_nodes_subresource(self): chassis = obj_utils.create_test_chassis(self.context) for id_ in range(2): obj_utils.create_test_node(self.context, chassis_id=chassis.id, uuid=uuidutils.generate_uuid()) data = self.get_json('/chassis/%s/nodes' % chassis.uuid) self.assertEqual(2, len(data['nodes'])) self.assertNotIn('next', data) # Test collection pagination data = self.get_json('/chassis/%s/nodes?limit=1' % chassis.uuid) self.assertEqual(1, len(data['nodes'])) self.assertIn('next', data) def test_nodes_subresource_no_uuid(self): response = self.get_json('/chassis/nodes', expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) def test_nodes_subresource_chassis_not_found(self): non_existent_uuid = 'eeeeeeee-cccc-aaaa-bbbb-cccccccccccc' response = self.get_json('/chassis/%s/nodes' % 
non_existent_uuid, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) class TestPatch(test_api_base.BaseApiTest): def setUp(self): super(TestPatch, self).setUp() obj_utils.create_test_chassis(self.context) def test_update_not_found(self): uuid = uuidutils.generate_uuid() response = self.patch_json('/chassis/%s' % uuid, [{'path': '/extra/a', 'value': 'b', 'op': 'add'}], expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) @mock.patch.object(notification_utils, '_emit_api_notification') @mock.patch.object(timeutils, 'utcnow') def test_replace_singular(self, mock_utcnow, mock_notify): chassis = obj_utils.get_test_chassis(self.context) description = 'chassis-new-description' test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/description', 'value': description, 'op': 'replace'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) result = self.get_json('/chassis/%s' % chassis.uuid) self.assertEqual(description, result['description']) return_updated_at = timeutils.parse_isotime( result['updated_at']).replace(tzinfo=None) self.assertEqual(test_time, return_updated_at) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END)]) @mock.patch.object(notification_utils, '_emit_api_notification') @mock.patch.object(objects.Chassis, 'save') def test_update_error(self, mock_save, mock_notify): mock_save.side_effect = Exception() chassis = obj_utils.get_test_chassis(self.context) self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/description', 'value': 
'new', 'op': 'replace'}], expect_errors=True) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'update', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR)]) def test_replace_multi(self): extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} chassis = obj_utils.create_test_chassis(self.context, extra=extra, uuid=uuidutils.generate_uuid()) new_value = 'new value' response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/extra/foo2', 'value': new_value, 'op': 'replace'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) result = self.get_json('/chassis/%s' % chassis.uuid) extra["foo2"] = new_value self.assertEqual(extra, result['extra']) def test_remove_singular(self): chassis = obj_utils.create_test_chassis(self.context, extra={'a': 'b'}, uuid=uuidutils.generate_uuid()) response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/description', 'op': 'remove'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) result = self.get_json('/chassis/%s' % chassis.uuid) self.assertIsNone(result['description']) # Assert nothing else was changed self.assertEqual(chassis.uuid, result['uuid']) self.assertEqual(chassis.extra, result['extra']) def test_remove_multi(self): extra = {"foo1": "bar1", "foo2": "bar2", "foo3": "bar3"} chassis = obj_utils.create_test_chassis(self.context, extra=extra, description="foobar", uuid=uuidutils.generate_uuid()) # Removing one item from the collection response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/extra/foo2', 'op': 'remove'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) result = self.get_json('/chassis/%s' % chassis.uuid) extra.pop("foo2") 
self.assertEqual(extra, result['extra']) # Removing the collection response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/extra', 'op': 'remove'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) result = self.get_json('/chassis/%s' % chassis.uuid) self.assertEqual({}, result['extra']) # Assert nothing else was changed self.assertEqual(chassis.uuid, result['uuid']) self.assertEqual(chassis.description, result['description']) def test_remove_non_existent_property_fail(self): chassis = obj_utils.get_test_chassis(self.context) response = self.patch_json( '/chassis/%s' % chassis.uuid, [{'path': '/extra/non-existent', 'op': 'remove'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_code) self.assertTrue(response.json['error_message']) def test_add_root(self): chassis = obj_utils.get_test_chassis(self.context) response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/description', 'value': 'test', 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_int) def test_add_root_non_existent(self): chassis = obj_utils.get_test_chassis(self.context) response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertTrue(response.json['error_message']) def test_add_multi(self): chassis = obj_utils.get_test_chassis(self.context) response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/extra/foo1', 'value': 'bar1', 'op': 'add'}, {'path': '/extra/foo2', 'value': 'bar2', 'op': 'add'}]) self.assertEqual('application/json', response.content_type) self.assertEqual(http_client.OK, response.status_code) result = 
self.get_json('/chassis/%s' % chassis.uuid) expected = {"foo1": "bar1", "foo2": "bar2"} self.assertEqual(expected, result['extra']) def test_patch_nodes_subresource(self): chassis = obj_utils.get_test_chassis(self.context) response = self.patch_json('/chassis/%s/nodes' % chassis.uuid, [{'path': '/extra/foo', 'value': 'bar', 'op': 'add'}], expect_errors=True) self.assertEqual(http_client.FORBIDDEN, response.status_int) def test_remove_uuid(self): chassis = obj_utils.get_test_chassis(self.context) response = self.patch_json('/chassis/%s' % chassis.uuid, [{'path': '/uuid', 'op': 'remove'}], expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) class TestPost(test_api_base.BaseApiTest): @mock.patch.object(notification_utils, '_emit_api_notification') @mock.patch.object(timeutils, 'utcnow') def test_create_chassis(self, mock_utcnow, mock_notify): cdict = apiutils.chassis_post_data() test_time = datetime.datetime(2000, 1, 1, 0, 0) mock_utcnow.return_value = test_time response = self.post_json('/chassis', cdict) self.assertEqual(http_client.CREATED, response.status_int) result = self.get_json('/chassis/%s' % cdict['uuid']) self.assertEqual(cdict['uuid'], result['uuid']) self.assertFalse(result['updated_at']) return_created_at = timeutils.parse_isotime( result['created_at']).replace(tzinfo=None) self.assertEqual(test_time, return_created_at) # Check location header self.assertIsNotNone(response.location) expected_location = '/v1/chassis/%s' % cdict['uuid'] self.assertEqual(urlparse.urlparse(response.location).path, expected_location) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.END)]) @mock.patch.object(notification_utils, 
'_emit_api_notification') @mock.patch.object(objects.Chassis, 'create') def test_create_chassis_error(self, mock_save, mock_notify): mock_save.side_effect = Exception() cdict = apiutils.chassis_post_data() self.post_json('/chassis', cdict, expect_errors=True) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), mock.call(mock.ANY, mock.ANY, 'create', obj_fields.NotificationLevel.ERROR, obj_fields.NotificationStatus.ERROR)]) def test_create_chassis_doesnt_contain_id(self): with mock.patch.object(self.dbapi, 'create_chassis', wraps=self.dbapi.create_chassis) as cc_mock: cdict = apiutils.chassis_post_data(extra={'foo': 123}) self.post_json('/chassis', cdict) result = self.get_json('/chassis/%s' % cdict['uuid']) self.assertEqual(cdict['extra'], result['extra']) cc_mock.assert_called_once_with(mock.ANY) # Check that 'id' is not in first arg of positional args self.assertNotIn('id', cc_mock.call_args[0][0]) @mock.patch.object(notification_utils.LOG, 'exception', autospec=True) @mock.patch.object(notification_utils.LOG, 'warning', autospec=True) def test_create_chassis_generate_uuid(self, mock_warning, mock_exception): cdict = apiutils.chassis_post_data() del cdict['uuid'] self.post_json('/chassis', cdict) result = self.get_json('/chassis') self.assertEqual(cdict['description'], result['chassis'][0]['description']) self.assertTrue(uuidutils.is_uuid_like(result['chassis'][0]['uuid'])) self.assertFalse(mock_warning.called) self.assertFalse(mock_exception.called) def test_post_nodes_subresource(self): chassis = obj_utils.create_test_chassis(self.context) ndict = apiutils.node_post_data() ndict['chassis_uuid'] = chassis.uuid response = self.post_json('/chassis/nodes', ndict, expect_errors=True) self.assertEqual(http_client.FORBIDDEN, response.status_int) def test_create_chassis_valid_extra(self): cdict = apiutils.chassis_post_data(extra={'str': 'foo', 'int': 123, 'float': 0.1, 'bool': 
True, 'list': [1, 2], 'none': None, 'dict': {'cat': 'meow'}}) self.post_json('/chassis', cdict) result = self.get_json('/chassis/%s' % cdict['uuid']) self.assertEqual(cdict['extra'], result['extra']) def test_create_chassis_unicode_description(self): descr = u'\u0430\u043c\u043e' cdict = apiutils.chassis_post_data(description=descr) self.post_json('/chassis', cdict) result = self.get_json('/chassis/%s' % cdict['uuid']) self.assertEqual(descr, result['description']) def test_create_chassis_toolong_description(self): descr = 'a' * 256 valid_error_message = ('Value should have a maximum character ' 'requirement of 255') cdict = apiutils.chassis_post_data(description=descr) response = self.post_json('/chassis', cdict, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(valid_error_message, response.json['error_message']) def test_create_chassis_invalid_description(self): descr = 1334 valid_error_message = 'Value should be string' cdict = apiutils.chassis_post_data(description=descr) response = self.post_json('/chassis', cdict, expect_errors=True) self.assertEqual(http_client.BAD_REQUEST, response.status_int) self.assertEqual('application/json', response.content_type) self.assertIn(valid_error_message, response.json['error_message']) class TestDelete(test_api_base.BaseApiTest): @mock.patch.object(notification_utils, '_emit_api_notification') def test_delete_chassis(self, mock_notify): chassis = obj_utils.create_test_chassis(self.context) self.delete('/chassis/%s' % chassis.uuid) response = self.get_json('/chassis/%s' % chassis.uuid, expect_errors=True) self.assertEqual(http_client.NOT_FOUND, response.status_int) self.assertEqual('application/json', response.content_type) self.assertTrue(response.json['error_message']) mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete', obj_fields.NotificationLevel.INFO, obj_fields.NotificationStatus.START), 
                                      mock.call(mock.ANY, mock.ANY, 'delete',
                                                obj_fields.NotificationLevel.INFO,
                                                obj_fields.NotificationStatus.END)])

    @mock.patch.object(notification_utils, '_emit_api_notification')
    def test_delete_chassis_with_node(self, mock_notify):
        chassis = obj_utils.create_test_chassis(self.context)
        obj_utils.create_test_node(self.context, chassis_id=chassis.id)
        response = self.delete('/chassis/%s' % chassis.uuid,
                               expect_errors=True)
        self.assertEqual(http_client.BAD_REQUEST, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])
        self.assertIn(chassis.uuid, response.json['error_message'])
        mock_notify.assert_has_calls([mock.call(mock.ANY, mock.ANY, 'delete',
                                                obj_fields.NotificationLevel.INFO,
                                                obj_fields.NotificationStatus.START),
                                      mock.call(mock.ANY, mock.ANY, 'delete',
                                                obj_fields.NotificationLevel.ERROR,
                                                obj_fields.NotificationStatus.ERROR)])

    def test_delete_chassis_not_found(self):
        uuid = uuidutils.generate_uuid()
        response = self.delete('/chassis/%s' % uuid, expect_errors=True)
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual('application/json', response.content_type)
        self.assertTrue(response.json['error_message'])

    def test_delete_nodes_subresource(self):
        chassis = obj_utils.create_test_chassis(self.context)
        response = self.delete('/chassis/%s/nodes' % chassis.uuid,
                               expect_errors=True)
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

ironic-15.0.0/ironic/tests/unit/api/controllers/test_base.py

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from http import client as http_client

import mock
from webob import exc

from ironic.api.controllers import base as cbase
from ironic.tests.unit.api import base


class TestBase(base.BaseApiTest):

    def test_api_setup(self):
        pass

    def test_bad_uri(self):
        response = self.get_json('/bad/path',
                                 expect_errors=True,
                                 headers={"Accept": "application/json"})
        self.assertEqual(http_client.NOT_FOUND, response.status_int)
        self.assertEqual("application/json", response.content_type)
        self.assertTrue(response.json['error_message'])


class TestVersion(base.BaseApiTest):

    @mock.patch('ironic.api.controllers.base.Version.parse_headers')
    def test_init(self, mock_parse):
        a = mock.Mock()
        b = mock.Mock()
        mock_parse.return_value = (a, b)
        v = cbase.Version('test', 'foo', 'bar')
        mock_parse.assert_called_with('test', 'foo', 'bar')
        self.assertEqual(a, v.major)
        self.assertEqual(b, v.minor)

    @mock.patch('ironic.api.controllers.base.Version.parse_headers')
    def test_repr(self, mock_parse):
        mock_parse.return_value = (123, 456)
        v = cbase.Version('test', mock.ANY, mock.ANY)
        result = "%s" % v
        self.assertEqual('123.456', result)

    @mock.patch('ironic.api.controllers.base.Version.parse_headers')
    def test_repr_with_strings(self, mock_parse):
        mock_parse.return_value = ('abc', 'def')
        v = cbase.Version('test', mock.ANY, mock.ANY)
        result = "%s" % v
        self.assertEqual('abc.def', result)

    def test_parse_headers_ok(self):
        version = cbase.Version.parse_headers(
            {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY)
        self.assertEqual((123, 456), version)

    def test_parse_headers_latest(self):
        for s in ['latest', 'LATEST']:
            version = cbase.Version.parse_headers(
                {cbase.Version.string: s}, mock.ANY, '1.9')
            self.assertEqual((1, 9), version)

    def test_parse_headers_bad_length(self):
        self.assertRaises(
            exc.HTTPNotAcceptable,
            cbase.Version.parse_headers,
            {cbase.Version.string: '1'}, mock.ANY, mock.ANY)
        self.assertRaises(
            exc.HTTPNotAcceptable,
            cbase.Version.parse_headers,
            {cbase.Version.string: '1.2.3'}, mock.ANY, mock.ANY)

    def test_parse_no_header(self):
        # this asserts that the minimum version string of "1.1" is applied
        version = cbase.Version.parse_headers({}, '1.1', '1.5')
        self.assertEqual((1, 1), version)

    def test_equals(self):
        ver_1 = cbase.Version(
            {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY)
        ver_2 = cbase.Version(
            {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY)
        ver_3 = cbase.Version(
            {cbase.Version.string: '654.321'}, mock.ANY, mock.ANY)
        self.assertTrue(hasattr(ver_1, '__eq__'))
        self.assertEqual(ver_1, ver_2)
        # Force __eq__ to be called and return False
        self.assertFalse(ver_1 == ver_3)  # noqa

    def test_not_equals(self):
        ver_1 = cbase.Version(
            {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY)
        ver_2 = cbase.Version(
            {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY)
        ver_3 = cbase.Version(
            {cbase.Version.string: '654.321'}, mock.ANY, mock.ANY)
        self.assertTrue(hasattr(ver_1, '__ne__'))
        self.assertNotEqual(ver_1, ver_3)
        # Force __ne__ to be called and return False
        self.assertFalse(ver_1 != ver_2)  # noqa

    def test_greaterthan(self):
        ver_1 = cbase.Version(
            {cbase.Version.string: '123.457'}, mock.ANY, mock.ANY)
        ver_2 = cbase.Version(
            {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY)
        self.assertTrue(hasattr(ver_1, '__gt__'))
        self.assertGreater(ver_1, ver_2)
        # Force __gt__ to be called and return False
        self.assertFalse(ver_2 > ver_1)  # noqa

    def test_lessthan(self):
        # __lt__ is created by @functools.total_ordering, make sure it exists
        # and works
        ver_1 = cbase.Version(
            {cbase.Version.string: '123.456'}, mock.ANY, mock.ANY)
        ver_2 = cbase.Version(
            {cbase.Version.string: '123.457'}, mock.ANY, mock.ANY)
        self.assertTrue(hasattr(ver_1, '__lt__'))
        self.assertLess(ver_1, ver_2)
        # Force __lt__ to be called and return False
        self.assertFalse(ver_2 < ver_1)  # noqa

ironic-15.0.0/ironic/tests/unit/api/controllers/__init__.py

ironic-15.0.0/ironic/tests/unit/api/test_hooks.py

# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
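The TestVersion cases above exercise a two-part API version type whose remaining comparison operators are derived by `@functools.total_ordering` from `__eq__` and `__lt__`. A minimal standalone sketch of that pattern — not Ironic's actual `Version` class (the real one parses a header dict and raises webob's `HTTPNotAcceptable` on malformed input; plain `ValueError` is substituted here):

```python
import functools


@functools.total_ordering
class Version(object):
    """Illustrative two-part version with total ordering (assumption:
    simplified stand-in for the class under test above)."""

    def __init__(self, major, minor):
        self.major = major
        self.minor = minor

    @classmethod
    def parse(cls, header):
        # The real parse_headers also accepts "latest" and applies a
        # minimum version when the header is absent; this sketch only
        # validates the "X.Y" shape.
        parts = header.split('.')
        if len(parts) != 2:
            raise ValueError('Invalid version string: %s' % header)
        return cls(int(parts[0]), int(parts[1]))

    def __repr__(self):
        return '%s.%s' % (self.major, self.minor)

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        # total_ordering fills in __le__, __gt__ and __ge__ from this.
        return (self.major, self.minor) < (other.major, other.minor)


# Numeric, not lexicographic: 1.10 is newer than 1.9.
assert Version.parse('1.9') < Version.parse('1.10')
```

Comparing `(major, minor)` tuples is what makes microversion ordering numeric rather than string-based, which is exactly the property the `test_greaterthan`/`test_lessthan` cases pin down.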
"""Tests for the Pecan API hooks.""" from http import client as http_client import json import mock from oslo_config import cfg import oslo_messaging as messaging from ironic.api.controllers import root from ironic.api import hooks from ironic.common import context from ironic.common import policy from ironic.tests import base as tests_base from ironic.tests.unit.api import base class FakeRequest(object): def __init__(self, headers, context, environ): self.headers = headers self.context = context self.environ = environ or {} self.version = (1, 0) self.host_url = 'http://127.0.0.1:6385' class FakeRequestState(object): def __init__(self, headers=None, context=None, environ=None): self.request = FakeRequest(headers, context, environ) self.response = FakeRequest(headers, context, environ) def fake_headers(admin=False): headers = { 'X-Auth-Token': '8d9f235ca7464dd7ba46f81515797ea0', 'X-Domain-Id': 'None', 'X-Domain-Name': 'None', 'X-Project-Domain-Id': 'default', 'X-Project-Domain-Name': 'Default', 'X-Project-Id': 'b4efa69d4ffa4973863f2eefc094f7f8', 'X-Project-Name': 'admin', 'X-Role': '_member_,admin', 'X-Roles': '_member_,admin', 'X-Tenant': 'foo', 'X-Tenant-Id': 'b4efa69d4ffa4973863f2eefc094f7f8', 'X-Tenant-Name': 'foo', 'X-User': 'foo', 'X-User-Domain-Id': 'default', 'X-User-Domain-Name': 'Default', 'X-User-Id': '604ab2a197c442c2a84aba66708a9e1e', 'X-User-Name': 'foo', 'X-OpenStack-Ironic-API-Version': '1.0' } if admin: headers.update({ 'X-Project-Name': 'admin', 'X-Role': '_member_,admin', 'X-Roles': '_member_,admin', 'X-Tenant': 'admin', 'X-Tenant-Name': 'admin', }) else: headers.update({ 'X-Project-Name': 'foo', 'X-Role': '_member_', 'X-Roles': '_member_', }) return headers def headers_to_environ(headers, **kwargs): environ = {} for k, v in headers.items(): environ['HTTP_%s' % k.replace('-', '_').upper()] = v environ.update(kwargs) return environ class TestNoExceptionTracebackHook(base.BaseApiTest): TRACE = [u'Traceback (most recent call last):', u' File 
"/opt/stack/ironic/ironic/common/rpc/amqp.py",' ' line 434, in _process_data\\n **args)', u' File "/opt/stack/ironic/ironic/common/rpc/' 'dispatcher.py", line 172, in dispatch\\n result =' ' getattr(proxyobj, method)(ctxt, **kwargs)'] MSG_WITHOUT_TRACE = "Test exception message." MSG_WITH_TRACE = MSG_WITHOUT_TRACE + "\n" + "\n".join(TRACE) def setUp(self): super(TestNoExceptionTracebackHook, self).setUp() p = mock.patch.object(root.Root, 'convert') self.root_convert_mock = p.start() self.addCleanup(p.stop) def test_hook_exception_success(self): self.root_convert_mock.side_effect = Exception(self.MSG_WITH_TRACE) response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads(response.json['error_message'])['faultstring'] self.assertEqual(self.MSG_WITHOUT_TRACE, actual_msg) def test_hook_remote_error_success(self): test_exc_type = 'TestException' self.root_convert_mock.side_effect = messaging.rpc.RemoteError( test_exc_type, self.MSG_WITHOUT_TRACE, self.TRACE) response = self.get_json('/', path_prefix='', expect_errors=True) # NOTE(max_lobur): For RemoteError the client message will still have # some garbage because in RemoteError traceback is serialized as a list # instead of'\n'.join(trace). But since RemoteError is kind of very # rare thing (happens due to wrong deserialization settings etc.) # we don't care about this garbage. 
expected_msg = ("Remote error: %s %s" % (test_exc_type, self.MSG_WITHOUT_TRACE) + "\n['") actual_msg = json.loads(response.json['error_message'])['faultstring'] self.assertEqual(expected_msg, actual_msg) def _test_hook_without_traceback(self): msg = "Error message without traceback \n but \n multiline" self.root_convert_mock.side_effect = Exception(msg) response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads(response.json['error_message'])['faultstring'] self.assertEqual(msg, actual_msg) def test_hook_without_traceback(self): self._test_hook_without_traceback() def test_hook_without_traceback_debug(self): cfg.CONF.set_override('debug', True) self._test_hook_without_traceback() def test_hook_without_traceback_debug_tracebacks(self): cfg.CONF.set_override('debug_tracebacks_in_api', True) self._test_hook_without_traceback() def _test_hook_on_serverfault(self): self.root_convert_mock.side_effect = Exception(self.MSG_WITH_TRACE) response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads( response.json['error_message'])['faultstring'] return actual_msg def test_hook_on_serverfault(self): msg = self._test_hook_on_serverfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_serverfault_debug(self): cfg.CONF.set_override('debug', True) msg = self._test_hook_on_serverfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_serverfault_debug_tracebacks(self): cfg.CONF.set_override('debug_tracebacks_in_api', True) msg = self._test_hook_on_serverfault() self.assertEqual(self.MSG_WITH_TRACE, msg) def _test_hook_on_clientfault(self): client_error = Exception(self.MSG_WITH_TRACE) client_error.code = http_client.BAD_REQUEST self.root_convert_mock.side_effect = client_error response = self.get_json('/', path_prefix='', expect_errors=True) actual_msg = json.loads( response.json['error_message'])['faultstring'] return actual_msg def test_hook_on_clientfault(self): msg = 
self._test_hook_on_clientfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_clientfault_debug(self): cfg.CONF.set_override('debug', True) msg = self._test_hook_on_clientfault() self.assertEqual(self.MSG_WITHOUT_TRACE, msg) def test_hook_on_clientfault_debug_tracebacks(self): cfg.CONF.set_override('debug_tracebacks_in_api', True) msg = self._test_hook_on_clientfault() self.assertEqual(self.MSG_WITH_TRACE, msg) class TestContextHook(base.BaseApiTest): @mock.patch.object(context, 'RequestContext') @mock.patch.object(policy, 'check') def _test_context_hook(self, mock_policy, mock_ctx, is_admin=False, is_public_api=False, auth_strategy='keystone', request_id=None): cfg.CONF.set_override('auth_strategy', auth_strategy) headers = fake_headers(admin=is_admin) environ = headers_to_environ(headers, is_public_api=is_public_api) reqstate = FakeRequestState(headers=headers, environ=environ) context_hook = hooks.ContextHook(None) ctx = mock.Mock() if request_id: ctx.request_id = request_id mock_ctx.from_environ.return_value = ctx policy_dict = {'user_id': 'foo'} # Lots of other values here ctx.to_policy_values.return_value = policy_dict mock_policy.return_value = is_admin context_hook.before(reqstate) creds_dict = {'is_public_api': is_public_api} mock_ctx.from_environ.assert_called_once_with(environ, **creds_dict) mock_policy.assert_called_once_with('is_admin', policy_dict, policy_dict) self.assertIs(is_admin, ctx.is_admin) if auth_strategy == 'noauth': self.assertIsNone(ctx.auth_token) return context_hook, reqstate def test_context_hook_not_admin(self): self._test_context_hook() def test_context_hook_admin(self): self._test_context_hook(is_admin=True) def test_context_hook_public_api(self): self._test_context_hook(is_admin=True, is_public_api=True) def test_context_hook_noauth_token_removed(self): self._test_context_hook(auth_strategy='noauth') def test_context_hook_after_add_request_id(self): context_hook, reqstate = self._test_context_hook(is_admin=True, 
request_id='fake-id') context_hook.after(reqstate) self.assertEqual('fake-id', reqstate.response.headers['Openstack-Request-Id']) def test_context_hook_after_miss_context(self): response = self.get_json('/bad/path', expect_errors=True) self.assertNotIn('Openstack-Request-Id', response.headers) class TestPolicyDeprecation(tests_base.TestCase): @mock.patch.object(hooks, 'CHECKED_DEPRECATED_POLICY_ARGS', False) @mock.patch.object(hooks.LOG, 'warning') @mock.patch.object(policy, 'get_enforcer') def test_policy_deprecation_check(self, enforcer_mock, warning_mock): rules = {'is_member': 'project_name:demo or tenant:baremetal', 'is_default_project_domain': 'project_domain_id:default'} enforcer_mock.return_value = mock.Mock(file_rules=rules, autospec=True) hooks.policy_deprecation_check() self.assertEqual(1, warning_mock.call_count) class TestPublicUrlHook(base.BaseApiTest): def test_before_host_url(self): headers = fake_headers() reqstate = FakeRequestState(headers=headers) trusted_call_hook = hooks.PublicUrlHook() trusted_call_hook.before(reqstate) self.assertEqual(reqstate.request.host_url, reqstate.request.public_url) def test_before_public_endpoint(self): cfg.CONF.set_override('public_endpoint', 'http://foo', 'api') headers = fake_headers() reqstate = FakeRequestState(headers=headers) trusted_call_hook = hooks.PublicUrlHook() trusted_call_hook.before(reqstate) self.assertEqual('http://foo', reqstate.request.public_url) ironic-15.0.0/ironic/tests/unit/api/test_audit.py0000664000175000017500000000370113652514273022007 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests to assert that audit middleware works as expected.
"""

from keystonemiddleware import audit
import mock
from oslo_config import cfg

from ironic.common import exception
from ironic.tests.unit.api import base


CONF = cfg.CONF


class TestAuditMiddleware(base.BaseApiTest):
    """Provide a basic smoke test to ensure audit middleware is active.

    The tests below provide minimal confirmation that the audit middleware
    is called, and may be configured. For comprehensive tests, please
    consult the test suite in keystone audit_middleware.
    """

    @mock.patch.object(audit, 'AuditMiddleware')
    def test_enable_audit_request(self, mock_audit):
        CONF.audit.enabled = True
        self._make_app()
        mock_audit.assert_called_once_with(
            mock.ANY,
            audit_map_file=CONF.audit.audit_map_file,
            ignore_req_list=CONF.audit.ignore_req_list)

    @mock.patch.object(audit, 'AuditMiddleware')
    def test_enable_audit_request_error(self, mock_audit):
        CONF.audit.enabled = True
        mock_audit.side_effect = IOError("file access error")
        self.assertRaises(exception.InputFileError, self._make_app)

    @mock.patch.object(audit, 'AuditMiddleware')
    def test_disable_audit_request(self, mock_audit):
        CONF.audit.enabled = False
        self._make_app()
        self.assertFalse(mock_audit.called)

ironic-15.0.0/ironic/tests/unit/api/test_acl.py

# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests for ACL. Checks whether certain kinds of requests
are blocked or allowed to be processed.
"""

from http import client as http_client

import mock
from oslo_config import cfg

from ironic.tests.unit.api import base
from ironic.tests.unit.api import utils
from ironic.tests.unit.db import utils as db_utils

cfg.CONF.import_opt('cache', 'keystonemiddleware.auth_token',
                    group='keystone_authtoken')


class TestACL(base.BaseApiTest):

    def setUp(self):
        super(TestACL, self).setUp()
        self.environ = {'fake.cache': utils.FakeMemcache()}
        self.fake_db_node = db_utils.get_test_node(chassis_id=None)
        self.node_path = '/nodes/%s' % self.fake_db_node['uuid']

    def get_json(self, path, expect_errors=False, headers=None, q=None,
                 **param):
        q = [] if q is None else q
        return super(TestACL, self).get_json(path,
                                             expect_errors=expect_errors,
                                             headers=headers,
                                             q=q,
                                             extra_environ=self.environ,
                                             **param)

    def _make_app(self):
        cfg.CONF.set_override('cache', 'fake.cache',
                              group='keystone_authtoken')
        cfg.CONF.set_override('auth_strategy', 'keystone')
        return super(TestACL, self)._make_app()

    def test_non_authenticated(self):
        response = self.get_json(self.node_path, expect_errors=True)
        self.assertEqual(http_client.UNAUTHORIZED, response.status_int)

    def test_authenticated(self):
        with mock.patch.object(self.dbapi, 'get_node_by_uuid',
                               autospec=True) as mock_get_node:
            mock_get_node.return_value = self.fake_db_node

            response = self.get_json(
                self.node_path,
                headers={'X-Auth-Token': utils.ADMIN_TOKEN})

            self.assertEqual(self.fake_db_node['uuid'], response['uuid'])
            mock_get_node.assert_called_once_with(self.fake_db_node['uuid'])

    def test_non_admin(self):
        response = self.get_json(self.node_path,
                                 headers={'X-Auth-Token':
                                          utils.MEMBER_TOKEN},
                                 expect_errors=True)
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    def test_non_admin_with_admin_header(self):
        response = self.get_json(self.node_path,
                                 headers={'X-Auth-Token':
                                          utils.MEMBER_TOKEN,
                                          'X-Roles': 'admin'},
                                 expect_errors=True)
        self.assertEqual(http_client.FORBIDDEN, response.status_int)

    def test_public_api(self):
        # expect_errors should be set to True: If expect_errors is set to False
        # the response gets converted to JSON and we cannot read the response
        # code so easy.
        for route in ('/', '/v1'):
            response = self.get_json(route,
                                     path_prefix='', expect_errors=True)
            self.assertEqual(http_client.OK, response.status_int)

    def test_public_api_with_path_extensions(self):
        routes = {'/v1/': http_client.OK,
                  '/v1.json': http_client.OK,
                  '/v1.xml': http_client.NOT_FOUND}
        for url in routes:
            response = self.get_json(url,
                                     path_prefix='', expect_errors=True)
            self.assertEqual(routes[url], response.status_int)

ironic-15.0.0/ironic/tests/unit/api/base.py

# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
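The ACL tests above authenticate against a fake memcache of pre-canned keystone tokens (`utils.FakeMemcache`, defined later in this package). As the note in that module explains, keystonemiddleware >= 2.0.0 looks tokens up under a sha256 hash of the token string, so the fake cache registers each token under both the raw and the hashed key. A small sketch of how those cache keys are derived (the `tokens/...` layout is taken from the test fixtures, not from keystonemiddleware itself):

```python
import hashlib


def token_cache_keys(token):
    # keystonemiddleware >= 2.0.0 hashes the token before building the
    # cache key; older releases used the raw token, so a fake cache that
    # must serve both generations registers both forms.
    hashed = hashlib.sha256(token.encode()).hexdigest()
    return ['tokens/%s' % token, 'tokens/%s' % hashed]


# Mirrors the ADMIN_TOKEN fixture used by TestACL.test_authenticated.
ADMIN_TOKEN = '4562138218392831'
raw_key, hashed_key = token_cache_keys(ADMIN_TOKEN)
```

Registering both keys is why `FakeMemcache._cache` (in `utils.py` below) contains two entries per token.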
"""Base classes for API tests.""" # NOTE: Ported from ceilometer/tests/api.py (subsequently moved to # ceilometer/tests/api/__init__.py). This should be oslo'ified: # https://bugs.launchpad.net/ironic/+bug/1255115. from urllib import parse as urlparse import mock from oslo_config import cfg import pecan import pecan.testing from ironic.tests.unit.db import base as db_base PATH_PREFIX = '/v1' cfg.CONF.import_group('keystone_authtoken', 'keystonemiddleware.auth_token') class BaseApiTest(db_base.DbTestCase): """Pecan controller functional testing class. Used for functional tests of Pecan controllers where you need to test your literal application and its integration with the framework. """ SOURCE_DATA = {'test_source': {'somekey': '666'}} root_controller = 'ironic.api.controllers.root.RootController' def setUp(self): super(BaseApiTest, self).setUp() cfg.CONF.set_override("auth_version", "v3", group='keystone_authtoken') cfg.CONF.set_override("admin_user", "admin", group='keystone_authtoken') cfg.CONF.set_override("auth_strategy", "noauth") self.app = self._make_app() def reset_pecan(): pecan.set_config({}, overwrite=True) self.addCleanup(reset_pecan) p = mock.patch('ironic.api.controllers.v1.Controller._check_version') self._check_version = p.start() self.addCleanup(p.stop) def _make_app(self): # Determine where we are so we can set up paths in the config root_dir = self.path_get() self.app_config = { 'app': { 'root': self.root_controller, 'modules': ['ironic.api'], 'static_root': '%s/public' % root_dir, 'debug': True, 'template_path': '%s/api/templates' % root_dir, 'acl_public_routes': ['/', '/v1'], }, } return pecan.testing.load_test_app(self.app_config) def _request_json(self, path, params, expect_errors=False, headers=None, method="post", extra_environ=None, status=None, path_prefix=PATH_PREFIX): """Sends simulated HTTP request to Pecan test app. 
:param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param method: Request method type. Appropriate method function call should be used rather than passing attribute in. :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response :param path_prefix: prefix of the url path """ full_path = path_prefix + path print('%s: %s %s' % (method.upper(), full_path, params)) response = getattr(self.app, "%s_json" % method)( str(full_path), params=params, headers=headers, status=status, extra_environ=extra_environ, expect_errors=expect_errors ) print('GOT:%s' % response) return response def put_json(self, path, params, expect_errors=False, headers=None, extra_environ=None, status=None): """Sends simulated HTTP PUT request to Pecan test app. :param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response """ return self._request_json(path=path, params=params, expect_errors=expect_errors, headers=headers, extra_environ=extra_environ, status=status, method="put") def post_json(self, path, params, expect_errors=False, headers=None, extra_environ=None, status=None): """Sends simulated HTTP POST request to Pecan test app. 
:param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response """ return self._request_json(path=path, params=params, expect_errors=expect_errors, headers=headers, extra_environ=extra_environ, status=status, method="post") def patch_json(self, path, params, expect_errors=False, headers=None, extra_environ=None, status=None): """Sends simulated HTTP PATCH request to Pecan test app. :param path: url path of target service :param params: content for wsgi.input of request :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response """ return self._request_json(path=path, params=params, expect_errors=expect_errors, headers=headers, extra_environ=extra_environ, status=status, method="patch") def delete(self, path, expect_errors=False, headers=None, extra_environ=None, status=None, path_prefix=PATH_PREFIX): """Sends simulated HTTP DELETE request to Pecan test app. 
:param path: url path of target service :param expect_errors: Boolean value; whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param status: expected status code of response :param path_prefix: prefix of the url path """ full_path = path_prefix + path print('DELETE: %s' % (full_path)) response = self.app.delete(str(full_path), headers=headers, status=status, extra_environ=extra_environ, expect_errors=expect_errors) print('GOT:%s' % response) return response def get_json(self, path, expect_errors=False, headers=None, extra_environ=None, q=None, path_prefix=PATH_PREFIX, **params): """Sends simulated HTTP GET request to Pecan test app. :param path: url path of target service :param expect_errors: Boolean value;whether an error is expected based on request :param headers: a dictionary of headers to send along with the request :param extra_environ: a dictionary of environ variables to send along with the request :param q: list of queries consisting of: field, value, op, and type keys :param path_prefix: prefix of the url path :param params: content for wsgi.input of request """ q = q if q is not None else [] full_path = path_prefix + path query_params = {'q.field': [], 'q.value': [], 'q.op': [], } for query in q: for name in ['field', 'op', 'value']: query_params['q.%s' % name].append(query.get(name, '')) all_params = {} all_params.update(params) if q: all_params.update(query_params) print('GET: %s %r' % (full_path, all_params)) response = self.app.get(full_path, params=all_params, headers=headers, extra_environ=extra_environ, expect_errors=expect_errors) if not expect_errors: response = response.json print('GOT:%s' % response) return response def validate_link(self, link, bookmark=False, headers=None): """Checks if the given link can get correct data.""" # removes the scheme and net location parts of the link 
url_parts = list(urlparse.urlparse(link)) url_parts[0] = url_parts[1] = '' # bookmark link should not have the version in the URL if bookmark and url_parts[2].startswith(PATH_PREFIX): return False full_path = urlparse.urlunparse(url_parts) try: self.get_json(full_path, path_prefix='', headers=headers) return True except Exception: return False ironic-15.0.0/ironic/tests/unit/api/test_proxy_middleware.py0000664000175000017500000000445513652514273024266 0ustar zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests to assert that proxy headers middleware works as expected. 
""" from oslo_config import cfg from ironic.tests.unit.api import base CONF = cfg.CONF class TestProxyHeadersMiddleware(base.BaseApiTest): """Provide a basic smoke test to ensure proxy headers middleware works.""" def setUp(self): CONF.set_override('public_endpoint', 'http://spam.ham/eggs', group='api') self.proxy_headers = {"X-Forwarded-Proto": "https", "X-Forwarded-Host": "mycloud.com", "X-Forwarded-Prefix": "/ironic"} super(TestProxyHeadersMiddleware, self).setUp() def test_proxy_headers_enabled(self): """Test enabled proxy headers middleware overriding public_endpoint""" # NOTE(pas-ha) setting config option and re-creating app # as the middleware registers its config option on instantiation CONF.set_override('enable_proxy_headers_parsing', True, group='oslo_middleware') self.app = self._make_app() response = self.get_json('/', path_prefix="", headers=self.proxy_headers) href = response["default_version"]["links"][0]["href"] self.assertTrue(href.startswith("https://mycloud.com/ironic")) def test_proxy_headers_disabled(self): """Test proxy headers middleware disabled by default""" response = self.get_json('/', path_prefix="", headers=self.proxy_headers) href = response["default_version"]["links"][0]["href"] # check that [api]public_endpoint is used when proxy headers parsing # is disabled self.assertTrue(href.startswith("http://spam.ham/eggs")) ironic-15.0.0/ironic/tests/unit/api/utils.py0000664000175000017500000001751013652514273021005 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

"""Utils for testing the API service."""

import datetime
import hashlib
import json

from ironic.api.controllers.v1 import chassis as chassis_controller
from ironic.api.controllers.v1 import deploy_template as dt_controller
from ironic.api.controllers.v1 import node as node_controller
from ironic.api.controllers.v1 import port as port_controller
from ironic.api.controllers.v1 import portgroup as portgroup_controller
from ironic.api.controllers.v1 import types
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api.controllers.v1 import volume_connector as vc_controller
from ironic.api.controllers.v1 import volume_target as vt_controller
from ironic.tests.unit.db import utils as db_utils

ADMIN_TOKEN = '4562138218392831'
MEMBER_TOKEN = '4562138218392832'

ADMIN_TOKEN_HASH = hashlib.sha256(ADMIN_TOKEN.encode()).hexdigest()
MEMBER_TOKEN_HASH = hashlib.sha256(MEMBER_TOKEN.encode()).hexdigest()

ADMIN_BODY = {
    'access': {
        'token': {'id': ADMIN_TOKEN,
                  'expires': '2100-09-11T00:00:00'},
        'user': {'id': 'user_id1',
                 'name': 'user_name1',
                 'tenantId': '123i2910',
                 'tenantName': 'mytenant',
                 'roles': [{'name': 'admin'}]},
    }
}

MEMBER_BODY = {
    'access': {
        'token': {'id': MEMBER_TOKEN,
                  'expires': '2100-09-11T00:00:00'},
        'user': {'id': 'user_id2',
                 'name': 'user-good',
                 'tenantId': 'project-good',
                 'tenantName': 'goodies',
                 'roles': [{'name': 'Member'}]},
    }
}


class FakeMemcache(object):
    """Fake cache that is used for keystone tokens lookup."""

    # NOTE(lucasagomes): In keystonemiddleware >= 2.0.0 the token cache
    # keys are sha256 hashes of the token key. This was introduced in
    # https://review.opendev.org/#/c/186971
    _cache = {
        'tokens/%s' % ADMIN_TOKEN: ADMIN_BODY,
        'tokens/%s' % ADMIN_TOKEN_HASH: ADMIN_BODY,
        'tokens/%s' % MEMBER_TOKEN: MEMBER_BODY,
        'tokens/%s' % MEMBER_TOKEN_HASH: MEMBER_BODY,
    }

    def __init__(self):
        self.set_key = None
        self.set_value = None
        self.token_expiration = None

    def get(self, key):
        dt = datetime.datetime.utcnow() + datetime.timedelta(minutes=5)
        return json.dumps((self._cache.get(key), dt.isoformat()))

    def set(self, key, value, time=0, min_compress_len=0):
        self.set_value = value
        self.set_key = key


def remove_internal(values, internal):
    # NOTE(yuriyz): internal attributes should not be posted, except uuid
    int_attr = [attr.lstrip('/') for attr in internal if attr != '/uuid']
    return {k: v for (k, v) in values.items() if k not in int_attr}


def node_post_data(**kw):
    node = db_utils.get_test_node(**kw)
    # These values are not part of the API object
    node.pop('version')
    node.pop('conductor_affinity')
    node.pop('chassis_id')
    node.pop('tags')
    node.pop('traits')
    node.pop('allocation_id')

    # NOTE(jroll): pop out fields that were introduced in later API versions,
    # unless explicitly requested. Otherwise, these will cause tests using
    # older API versions to fail.
    for field in api_utils.VERSIONED_FIELDS:
        if field not in kw:
            node.pop(field, None)

    internal = node_controller.NodePatchType.internal_attrs()
    return remove_internal(node, internal)


def port_post_data(**kw):
    port = db_utils.get_test_port(**kw)
    # These values are not part of the API object
    port.pop('version')
    port.pop('node_id')
    port.pop('portgroup_id')
    internal = port_controller.PortPatchType.internal_attrs()
    return remove_internal(port, internal)


def volume_connector_post_data(**kw):
    connector = db_utils.get_test_volume_connector(**kw)
    # These values are not part of the API object
    connector.pop('node_id')
    connector.pop('version')
    internal = vc_controller.VolumeConnectorPatchType.internal_attrs()
    return remove_internal(connector, internal)


def volume_target_post_data(**kw):
    target = db_utils.get_test_volume_target(**kw)
    # These values are not part of the API object
    target.pop('node_id')
    target.pop('version')
    internal = vt_controller.VolumeTargetPatchType.internal_attrs()
    return remove_internal(target, internal)


def chassis_post_data(**kw):
    chassis = db_utils.get_test_chassis(**kw)
    # version is not part of the API object
    chassis.pop('version')
    internal = chassis_controller.ChassisPatchType.internal_attrs()
    return remove_internal(chassis, internal)


def post_get_test_node(**kw):
    # NOTE(lucasagomes): When creating a node via API (POST)
    # we have to use chassis_uuid
    node = node_post_data(**kw)
    chassis = db_utils.get_test_chassis()
    node['chassis_uuid'] = kw.get('chassis_uuid', chassis['uuid'])
    return node


def portgroup_post_data(**kw):
    """Return a Portgroup object without internal attributes."""
    portgroup = db_utils.get_test_portgroup(**kw)

    # These values are not part of the API object
    portgroup.pop('version')
    portgroup.pop('node_id')

    # NOTE(jroll): pop out fields that were introduced in later API versions,
    # unless explicitly requested. Otherwise, these will cause tests using
    # older API versions to fail.
    new_api_ver_arguments = ['mode', 'properties']
    for arg in new_api_ver_arguments:
        if arg not in kw:
            portgroup.pop(arg)

    internal = portgroup_controller.PortgroupPatchType.internal_attrs()
    return remove_internal(portgroup, internal)


def post_get_test_portgroup(**kw):
    """Return a Portgroup object with appropriate attributes."""
    portgroup = portgroup_post_data(**kw)
    node = db_utils.get_test_node()
    portgroup['node_uuid'] = kw.get('node_uuid', node['uuid'])
    return portgroup


_ALLOCATION_POST_FIELDS = {'resource_class', 'uuid', 'traits',
                           'candidate_nodes', 'name', 'extra', 'node',
                           'owner'}


def allocation_post_data(node=None, **kw):
    """Return an Allocation object without internal attributes."""
    allocation = db_utils.get_test_allocation(**kw)
    if node:
        # This is not a database field, so it has to be handled explicitly
        allocation['node'] = node
    return {key: value for key, value in allocation.items()
            if key in _ALLOCATION_POST_FIELDS}


def fake_event_validator(v):
    """A fake event validator."""
    return v


def deploy_template_post_data(**kw):
    """Return a DeployTemplate object without internal attributes."""
    template = db_utils.get_test_deploy_template(**kw)
    # These values are not part of the API object
    template.pop('version')
    # Remove internal attributes from each step.
    step_internal = types.JsonPatchType.internal_attrs()
    step_internal.append('deploy_template_id')
    template['steps'] = [remove_internal(step, step_internal)
                         for step in template['steps']]
    # Remove internal attributes from the template.
    dt_patch = dt_controller.DeployTemplatePatchType
    internal = dt_patch.internal_attrs()
    return remove_internal(template, internal)


def post_get_test_deploy_template(**kw):
    """Return a DeployTemplate object with appropriate attributes."""
    return deploy_template_post_data(**kw)
ironic-15.0.0/ironic/tests/unit/api/__init__.py0000664000175000017500000000000013652514273021366 0ustar zuulzuul00000000000000
ironic-15.0.0/ironic/tests/unit/stubs.py0000664000175000017500000000465313652514273020240 0ustar zuulzuul00000000000000
# Copyright (c) 2011 Citrix Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from glanceclient import exc as glance_exc

NOW_GLANCE_FORMAT = "2010-10-11T10:30:22"


class _GlanceWrapper(object):
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __iter__(self):
        return iter(())


class StubGlanceClient(object):

    fake_wrapped = object()

    def __init__(self, images=None):
        # NOTE: the original code called
        # ``map(lambda image: self.create(**image), images or [])``, but
        # ``map()`` is lazy in Python 3 so the lambda never ran (and no
        # ``create`` method exists on this stub). Store the provided
        # images directly instead.
        self._images = list(images) if images else []

        # NOTE(bcwaldon): HACK to get client.images.* to work
        self.images = lambda: None
        for fn in ('get', 'data'):
            setattr(self.images, fn, getattr(self, fn))

    def get(self, image_id):
        for image in self._images:
            if image.id == str(image_id):
                return image
        raise glance_exc.NotFound(image_id)

    def data(self, image_id):
        self.get(image_id)
        return _GlanceWrapper(self.fake_wrapped)


class FakeImage(dict):
    def __init__(self, metadata):
        IMAGE_ATTRIBUTES = ['size', 'disk_format', 'owner',
                            'container_format', 'checksum', 'id',
                            'name', 'deleted', 'status',
                            'min_disk', 'min_ram', 'tags', 'visibility',
                            'protected', 'file', 'schema', 'os_hash_algo',
                            'os_hash_value']
        raw = dict.fromkeys(IMAGE_ATTRIBUTES)
        raw.update(metadata)
        # raw['created_at'] = NOW_GLANCE_FORMAT
        # raw['updated_at'] = NOW_GLANCE_FORMAT
        super(FakeImage, self).__init__(raw)

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        if key in self:
            self[key] = value
        else:
            raise AttributeError(key)
ironic-15.0.0/ironic/tests/unit/cmd/0000775000175000017500000000000013652514443017260 5ustar zuulzuul00000000000000
ironic-15.0.0/ironic/tests/unit/cmd/test_status.py0000664000175000017500000000264613652514273022225 0ustar zuulzuul00000000000000
# Copyright (c) 2018 NEC, Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_upgradecheck.upgradecheck import Code

from ironic.cmd import dbsync
from ironic.cmd import status
from ironic.tests.unit.db import base as db_base


class TestUpgradeChecks(db_base.DbTestCase):

    def setUp(self):
        super(TestUpgradeChecks, self).setUp()
        self.cmd = status.Checks()

    def test__check_obj_versions(self):
        check_result = self.cmd._check_obj_versions()
        self.assertEqual(Code.SUCCESS, check_result.code)

    @mock.patch.object(dbsync.DBCommand, 'check_obj_versions',
                       autospec=True)
    def test__check_obj_versions_bad(self, mock_check):
        msg = 'This is bad'
        mock_check.return_value = msg
        check_result = self.cmd._check_obj_versions()
        self.assertEqual(Code.FAILURE, check_result.code)
        self.assertEqual(msg, check_result.details)
ironic-15.0.0/ironic/tests/unit/cmd/test_conductor.py0000664000175000017500000000462013652514273022674 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import cfg

from ironic.cmd import conductor
from ironic.tests.unit.db import base as db_base


class ConductorStartTestCase(db_base.DbTestCase):

    @mock.patch.object(conductor, 'LOG', autospec=True)
    def test_warn_about_unsafe_shred_parameters_defaults(self, log_mock):
        conductor.warn_about_unsafe_shred_parameters(cfg.CONF)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(conductor, 'LOG', autospec=True)
    def test_warn_about_unsafe_shred_parameters_zeros(self, log_mock):
        cfg.CONF.set_override('shred_random_overwrite_iterations', 0,
                              'deploy')
        cfg.CONF.set_override('shred_final_overwrite_with_zeros', True,
                              'deploy')
        conductor.warn_about_unsafe_shred_parameters(cfg.CONF)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(conductor, 'LOG', autospec=True)
    def test_warn_about_unsafe_shred_parameters_random_no_zeros(self,
                                                                log_mock):
        cfg.CONF.set_override('shred_random_overwrite_iterations', 1,
                              'deploy')
        cfg.CONF.set_override('shred_final_overwrite_with_zeros', False,
                              'deploy')
        conductor.warn_about_unsafe_shred_parameters(cfg.CONF)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(conductor, 'LOG', autospec=True)
    def test_warn_about_unsafe_shred_parameters_produces_a_warning(
            self, log_mock):
        cfg.CONF.set_override('shred_random_overwrite_iterations', 0,
                              'deploy')
        cfg.CONF.set_override('shred_final_overwrite_with_zeros', False,
                              'deploy')
        conductor.warn_about_unsafe_shred_parameters(cfg.CONF)
        self.assertTrue(log_mock.warning.called)
ironic-15.0.0/ironic/tests/unit/cmd/test_dbsync.py0000664000175000017500000003252013652514273022156 0ustar zuulzuul00000000000000
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from ironic.cmd import dbsync
from ironic.common import context
from ironic.db import migration
from ironic.tests.unit.db import base as db_base


class DbSyncTestCase(db_base.DbTestCase):

    def test_upgrade_and_version(self):
        migration.upgrade('head')
        v = migration.version()
        self.assertTrue(v)


class OnlineMigrationTestCase(db_base.DbTestCase):

    def setUp(self):
        super(OnlineMigrationTestCase, self).setUp()
        self.context = context.get_admin_context()
        self.db_cmds = dbsync.DBCommand()

    def test_check_obj_versions(self):
        with mock.patch.object(self.dbapi, 'check_versions',
                               autospec=True) as mock_check_versions:
            mock_check_versions.return_value = True
            msg = self.db_cmds.check_obj_versions()
            self.assertIsNone(msg)
            mock_check_versions.assert_called_once_with(ignore_models=())

    def test_check_obj_versions_bad(self):
        with mock.patch.object(self.dbapi, 'check_versions',
                               autospec=True) as mock_check_versions:
            mock_check_versions.return_value = False
            msg = self.db_cmds.check_obj_versions()
            self.assertIsNotNone(msg)
            mock_check_versions.assert_called_once_with(ignore_models=())

    def test_check_obj_versions_ignore_models(self):
        with mock.patch.object(self.dbapi, 'check_versions',
                               autospec=True) as mock_check_versions:
            mock_check_versions.return_value = True
            msg = self.db_cmds.check_obj_versions(ignore_missing_tables=True)
            self.assertIsNone(msg)
            mock_check_versions.assert_called_once_with(
                ignore_models=dbsync.NEW_MODELS)

    @mock.patch.object(dbsync.DBCommand, 'check_obj_versions',
                       autospec=True)
    def test_check_versions_bad(self, mock_check_versions):
        mock_check_versions.return_value = 'This is bad'
        exit = self.assertRaises(SystemExit, self.db_cmds._check_versions)
        mock_check_versions.assert_called_once_with(
            mock.ANY, ignore_missing_tables=False)
        self.assertEqual(2, exit.code)

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions(self, mock_migrations):
        mock_migrations.__iter__.return_value = ((self.dbapi, 'foo'),)
        mock_func = mock.MagicMock(side_effect=((15, 15),), __name__='foo')
        with mock.patch.object(self.dbapi, 'foo', mock_func, create=True):
            self.assertTrue(
                self.db_cmds._run_migration_functions(self.context, 50, {}))
        mock_func.assert_called_once_with(self.context, 50)

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions_none(self, mock_migrations):
        # No migration functions to run
        mock_migrations.__iter__.return_value = ()
        self.assertTrue(
            self.db_cmds._run_migration_functions(self.context, 50, {}))

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions_exception(self, mock_migrations):
        mock_migrations.__iter__.return_value = ((self.dbapi, 'foo'),)
        # Migration function raises exception
        mock_func = mock.MagicMock(side_effect=TypeError("bar"),
                                   __name__='foo')
        with mock.patch.object(self.dbapi, 'foo', mock_func, create=True):
            self.assertRaises(
                TypeError, self.db_cmds._run_migration_functions,
                self.context, 50, {})
        mock_func.assert_called_once_with(self.context, 50)

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions_2(self, mock_migrations):
        # 2 migration functions, migration completed
        mock_migrations.__iter__.return_value = ((self.dbapi, 'func1'),
                                                 (self.dbapi, 'func2'))
        mock_func1 = mock.MagicMock(side_effect=((15, 15),),
                                    __name__='func1')
        mock_func2 = mock.MagicMock(side_effect=((20, 20),),
                                    __name__='func2')
        with mock.patch.object(self.dbapi, 'func1', mock_func1, create=True):
            with mock.patch.object(self.dbapi, 'func2', mock_func2,
                                   create=True):
                options = {'func1': {'key': 'value'},
                           'func2': {'x': 1, 'y': 2}}
                self.assertTrue(self.db_cmds._run_migration_functions(
                    self.context, 50, options))
        mock_func1.assert_called_once_with(self.context, 50, key='value')
        mock_func2.assert_called_once_with(self.context, 35, x=1, y=2)

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions_2_notdone(self, mock_migrations):
        # 2 migration functions; only first function was run but not completed
        mock_migrations.__iter__.return_value = ((self.dbapi, 'func1'),
                                                 (self.dbapi, 'func2'))
        mock_func1 = mock.MagicMock(side_effect=((15, 10),),
                                    __name__='func1')
        mock_func2 = mock.MagicMock(side_effect=((20, 0),),
                                    __name__='func2')
        with mock.patch.object(self.dbapi, 'func1', mock_func1, create=True):
            with mock.patch.object(self.dbapi, 'func2', mock_func2,
                                   create=True):
                self.assertFalse(self.db_cmds._run_migration_functions(
                    self.context, 10, {}))
        mock_func1.assert_called_once_with(self.context, 10)
        self.assertFalse(mock_func2.called)

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions_2_onedone(self, mock_migrations):
        # 2 migration functions; only first function was run and completed
        mock_migrations.__iter__.return_value = ((self.dbapi, 'func1'),
                                                 (self.dbapi, 'func2'))
        mock_func1 = mock.MagicMock(side_effect=((10, 10),),
                                    __name__='func1')
        mock_func2 = mock.MagicMock(side_effect=((20, 0),),
                                    __name__='func2')
        with mock.patch.object(self.dbapi, 'func1', mock_func1, create=True):
            with mock.patch.object(self.dbapi, 'func2', mock_func2,
                                   create=True):
                self.assertFalse(self.db_cmds._run_migration_functions(
                    self.context, 10, {}))
        mock_func1.assert_called_once_with(self.context, 10)
        self.assertFalse(mock_func2.called)

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions_2_done(self, mock_migrations):
        # 2 migration functions; migrations completed
        mock_migrations.__iter__.return_value = ((self.dbapi, 'func1'),
                                                 (self.dbapi, 'func2'))
        mock_func1 = mock.MagicMock(side_effect=((10, 10),),
                                    __name__='func1')
        mock_func2 = mock.MagicMock(side_effect=((0, 0),),
                                    __name__='func2')
        with mock.patch.object(self.dbapi, 'func1', mock_func1, create=True):
            with mock.patch.object(self.dbapi, 'func2', mock_func2,
                                   create=True):
                self.assertTrue(self.db_cmds._run_migration_functions(
                    self.context, 15, {}))
        mock_func1.assert_called_once_with(self.context, 15)
        mock_func2.assert_called_once_with(self.context, 5)

    @mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
    def test__run_migration_functions_two_calls_done(self, mock_migrations):
        # 2 migration functions; migrations completed after calling twice
        mock_migrations.__iter__.return_value = ((self.dbapi, 'func1'),
                                                 (self.dbapi, 'func2'))
        mock_func1 = mock.MagicMock(side_effect=((10, 10), (0, 0)),
                                    __name__='func1')
        mock_func2 = mock.MagicMock(side_effect=((0, 0), (0, 0)),
                                    __name__='func2')
        with mock.patch.object(self.dbapi, 'func1', mock_func1, create=True):
            with mock.patch.object(self.dbapi, 'func2', mock_func2,
                                   create=True):
                self.assertFalse(self.db_cmds._run_migration_functions(
                    self.context, 10, {}))
                mock_func1.assert_called_once_with(self.context, 10)
                self.assertFalse(mock_func2.called)
                self.assertTrue(self.db_cmds._run_migration_functions(
                    self.context, 10, {}))
        mock_func1.assert_has_calls((mock.call(self.context, 10),) * 2)
        mock_func2.assert_called_once_with(self.context, 10)

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations(self, mock_functions):
        mock_functions.return_value = True
        exit = self.assertRaises(SystemExit,
                                 self.db_cmds._run_online_data_migrations)
        self.assertEqual(0, exit.code)
        mock_functions.assert_called_once_with(self.db_cmds, mock.ANY, 50,
                                               {})

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations_with_options(self, mock_functions):
        mock_functions.return_value = True
        exit = self.assertRaises(SystemExit,
                                 self.db_cmds._run_online_data_migrations,
                                 options=["m1.key1=value1", "m1.key2=value2",
                                          "m2.key3=value3"])
        self.assertEqual(0, exit.code)
        mock_functions.assert_called_once_with(
            self.db_cmds, mock.ANY, 50,
            {'m1': {'key1': 'value1', 'key2': 'value2'},
             'm2': {'key3': 'value3'}})

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations_invalid_option1(self,
                                                         mock_functions):
        mock_functions.return_value = True
        exit = self.assertRaises(SystemExit,
                                 self.db_cmds._run_online_data_migrations,
                                 options=["m1key1=value1"])
        self.assertEqual(127, exit.code)
        self.assertFalse(mock_functions.called)

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations_invalid_option2(self,
                                                         mock_functions):
        mock_functions.return_value = True
        exit = self.assertRaises(SystemExit,
                                 self.db_cmds._run_online_data_migrations,
                                 options=["m1.key1value1"])
        self.assertEqual(127, exit.code)
        self.assertFalse(mock_functions.called)

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations_batches(self, mock_functions):
        mock_functions.side_effect = (False, True)
        exit = self.assertRaises(SystemExit,
                                 self.db_cmds._run_online_data_migrations)
        self.assertEqual(0, exit.code)
        mock_functions.assert_has_calls(
            (mock.call(self.db_cmds, mock.ANY, 50, {}),) * 2)

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations_notdone(self, mock_functions):
        mock_functions.return_value = False
        exit = self.assertRaises(SystemExit,
                                 self.db_cmds._run_online_data_migrations,
                                 max_count=30)
        self.assertEqual(1, exit.code)
        mock_functions.assert_called_once_with(self.db_cmds, mock.ANY, 30,
                                               {})

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations_max_count_neg(self, mock_functions):
        mock_functions.return_value = False
        exit = self.assertRaises(SystemExit,
                                 self.db_cmds._run_online_data_migrations,
                                 max_count=-4)
        self.assertEqual(127, exit.code)
        self.assertFalse(mock_functions.called)

    @mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
                       autospec=True)
    def test__run_online_data_migrations_exception(self, mock_functions):
        mock_functions.side_effect = TypeError("yuck")
        self.assertRaises(TypeError,
                          self.db_cmds._run_online_data_migrations)
        mock_functions.assert_called_once_with(self.db_cmds, mock.ANY, 50,
                                               {})
ironic-15.0.0/ironic/tests/unit/cmd/__init__.py0000664000175000017500000000000013652514273021360 0ustar zuulzuul00000000000000
ironic-15.0.0/ironic/tests/unit/dhcp/0000775000175000017500000000000013652514443017433 0ustar zuulzuul00000000000000
ironic-15.0.0/ironic/tests/unit/dhcp/test_neutron.py0000664000175000017500000005270113652514273022544 0ustar zuulzuul00000000000000
#
# Copyright 2014 OpenStack Foundation
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from neutronclient.common import exceptions as neutron_client_exc from oslo_utils import uuidutils from ironic.common import dhcp_factory from ironic.common import exception from ironic.common import pxe_utils from ironic.conductor import task_manager from ironic.dhcp import neutron from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as object_utils class TestNeutron(db_base.DbTestCase): def setUp(self): super(TestNeutron, self).setUp() self.config( cleaning_network='00000000-0000-0000-0000-000000000000', group='neutron') self.config(dhcp_provider='neutron', group='dhcp') self.node = object_utils.create_test_node(self.context) self.ports = [ object_utils.create_test_port( self.context, node_id=self.node.id, id=2, uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c782', address='52:54:00:cf:2d:32')] # Very simple neutron port representation self.neutron_port = {'id': '132f871f-eaec-4fed-9475-0d54465e0f00', 'mac_address': '52:54:00:cf:2d:32'} dhcp_factory.DHCPFactory._dhcp_provider = None @mock.patch('ironic.common.neutron.get_client', autospec=True) @mock.patch('ironic.common.neutron.update_neutron_port', autospec=True) def test_update_port_dhcp_opts(self, update_mock, client_mock): opts = [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}, {'opt_name': 'tftp-server', 'opt_value': '1.1.1.1'}, {'opt_name': 'server-ip-address', 'opt_value': '1.1.1.1'}] port_id = 'fake-port-id' expected = {'port': {'extra_dhcp_opts': opts}} port_data = { "id": port_id, "fixed_ips": [ { "ip_address": "192.168.1.3", } ], } client_mock.return_value.show_port.return_value = {'port': port_data} api = dhcp_factory.DHCPFactory() with task_manager.acquire(self.context, self.node.uuid) as task: api.provider.update_port_dhcp_opts(port_id, opts, context=task.context) update_mock.assert_called_once_with( self.context, port_id, expected) @mock.patch('ironic.common.neutron.get_client', autospec=True) 
@mock.patch('ironic.common.neutron.update_neutron_port', autospec=True) def test_update_port_dhcp_opts_v6(self, update_mock, client_mock): opts = [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0', 'ip_version': 4}, {'opt_name': 'tftp-server', 'opt_value': '1.1.1.1', 'ip_version': 4}, {'opt_name': 'server-ip-address', 'opt_value': '1.1.1.1', 'ip_version': 4}, {'opt_name': 'bootfile-url', 'opt_value': 'tftp://::1/file.name', 'ip_version': 6}] port_id = 'fake-port-id' expected = { 'port': { 'extra_dhcp_opts': [{ 'opt_name': 'bootfile-url', 'opt_value': 'tftp://::1/file.name', 'ip_version': 6}] } } port_data = { "id": port_id, "fixed_ips": [ { "ip_address": "2001:db8::201", } ], } client_mock.return_value.show_port.return_value = {'port': port_data} api = dhcp_factory.DHCPFactory() with task_manager.acquire(self.context, self.node.uuid) as task: api.provider.update_port_dhcp_opts(port_id, opts, context=task.context) update_mock.assert_called_once_with( task.context, port_id, expected) @mock.patch('ironic.common.neutron.get_client', autospec=True) @mock.patch('ironic.common.neutron.update_neutron_port', autospec=True) def test_update_port_dhcp_opts_with_exception(self, update_mock, client_mock): opts = [{}] port_id = 'fake-port-id' port_data = { "id": port_id, "fixed_ips": [ { "ip_address": "192.168.1.3", } ], } client_mock.return_value.show_port.return_value = {'port': port_data} update_mock.side_effect = ( neutron_client_exc.NeutronClientException()) api = dhcp_factory.DHCPFactory() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises( exception.FailedToUpdateDHCPOptOnPort, api.provider.update_port_dhcp_opts, port_id, opts, context=task.context) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts', autospec=True) @mock.patch('ironic.common.network.get_node_vif_ids', autospec=True) def test_update_dhcp(self, mock_gnvi, mock_updo): mock_gnvi.return_value = {'ports': {'port-uuid': 'vif-uuid'}, 'portgroups': {}} with 
task_manager.acquire(self.context, self.node.uuid) as task: opts = pxe_utils.dhcp_options_for_instance(task) api = dhcp_factory.DHCPFactory() api.update_dhcp(task, opts) mock_updo.assert_called_once_with(mock.ANY, 'vif-uuid', opts, context=task.context) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts', autospec=True) @mock.patch('ironic.common.network.get_node_vif_ids', autospec=True) def test_update_dhcp_no_vif_data(self, mock_gnvi, mock_updo): mock_gnvi.return_value = {'portgroups': {}, 'ports': {}} with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory() self.assertRaises(exception.FailedToUpdateDHCPOptOnPort, api.update_dhcp, task, self.node) self.assertFalse(mock_updo.called) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts', autospec=True) @mock.patch('ironic.common.network.get_node_vif_ids', autospec=True) def test_update_dhcp_some_failures(self, mock_gnvi, mock_updo): # confirm update is called twice, one fails, but no exception raised mock_gnvi.return_value = {'ports': {'p1': 'v1', 'p2': 'v2'}, 'portgroups': {}} exc = exception.FailedToUpdateDHCPOptOnPort('fake exception') mock_updo.side_effect = [None, exc] with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory() api.update_dhcp(task, self.node) mock_gnvi.assert_called_once_with(task) self.assertEqual(2, mock_updo.call_count) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts', autospec=True) @mock.patch('ironic.common.network.get_node_vif_ids', autospec=True) def test_update_dhcp_fails(self, mock_gnvi, mock_updo): # confirm update is called twice, both fail, and exception is raised mock_gnvi.return_value = {'ports': {'p1': 'v1', 'p2': 'v2'}, 'portgroups': {}} exc = exception.FailedToUpdateDHCPOptOnPort('fake exception') mock_updo.side_effect = [exc, exc] with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory() 
self.assertRaises(exception.FailedToUpdateDHCPOptOnPort, api.update_dhcp, task, self.node) mock_gnvi.assert_called_once_with(task) self.assertEqual(2, mock_updo.call_count) @mock.patch.object(neutron, 'LOG', autospec=True) @mock.patch('time.sleep', autospec=True) @mock.patch('ironic.common.network.get_node_vif_ids', autospec=True) def test_update_dhcp_set_sleep_and_fake(self, mock_gnvi, mock_ts, mock_log): mock_gnvi.return_value = {'ports': {'port-uuid': 'vif-uuid'}, 'portgroups': {}} self.config(port_setup_delay=30, group='neutron') with task_manager.acquire(self.context, self.node.uuid) as task: opts = pxe_utils.dhcp_options_for_instance(task) api = dhcp_factory.DHCPFactory() with mock.patch.object(api.provider, 'update_port_dhcp_opts', autospec=True) as mock_updo: api.update_dhcp(task, opts) mock_log.debug.assert_called_once_with( "Waiting %d seconds for Neutron.", 30) mock_ts.assert_called_with(30) mock_updo.assert_called_once_with('vif-uuid', opts, context=task.context) @mock.patch.object(neutron, 'LOG', autospec=True) @mock.patch('ironic.common.network.get_node_vif_ids', autospec=True) def test_update_dhcp_unset_sleep_and_fake(self, mock_gnvi, mock_log): mock_gnvi.return_value = {'ports': {'port-uuid': 'vif-uuid'}, 'portgroups': {}} with task_manager.acquire(self.context, self.node.uuid) as task: opts = pxe_utils.dhcp_options_for_instance(task) api = dhcp_factory.DHCPFactory() with mock.patch.object(api.provider, 'update_port_dhcp_opts', autospec=True) as mock_updo: api.update_dhcp(task, opts) mock_log.debug.assert_not_called() mock_updo.assert_called_once_with('vif-uuid', opts, context=task.context) def test__get_fixed_ip_address(self): port_id = 'fake-port-id' expected = "192.168.1.3" api = dhcp_factory.DHCPFactory().provider port_data = { "id": port_id, "network_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "admin_state_up": True, "status": "ACTIVE", "mac_address": "fa:16:3e:4c:2c:30", "fixed_ips": [ { "ip_address": "192.168.1.3", "subnet_id": 
"f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "device_id": 'bece68a3-2f8b-4e66-9092-244493d6aba7', } fake_client = mock.Mock() fake_client.show_port.return_value = {'port': port_data} result = api._get_fixed_ip_address(port_id, fake_client) self.assertEqual(expected, result) fake_client.show_port.assert_called_once_with(port_id) def test__get_fixed_ip_address_invalid_ip(self): port_id = 'fake-port-id' api = dhcp_factory.DHCPFactory().provider port_data = { "id": port_id, "network_id": "3cb9bc59-5699-4588-a4b1-b87f96708bc6", "admin_state_up": True, "status": "ACTIVE", "mac_address": "fa:16:3e:4c:2c:30", "fixed_ips": [ { "ip_address": "invalid.ip", "subnet_id": "f8a6e8f8-c2ec-497c-9f23-da9616de54ef" } ], "device_id": 'bece68a3-2f8b-4e66-9092-244493d6aba7', } fake_client = mock.Mock() fake_client.show_port.return_value = {'port': port_data} self.assertRaises(exception.InvalidIPv4Address, api._get_fixed_ip_address, port_id, fake_client) fake_client.show_port.assert_called_once_with(port_id) def test__get_fixed_ip_address_with_exception(self): port_id = 'fake-port-id' api = dhcp_factory.DHCPFactory().provider fake_client = mock.Mock() fake_client.show_port.side_effect = ( neutron_client_exc.NeutronClientException()) self.assertRaises(exception.NetworkError, api._get_fixed_ip_address, port_id, fake_client) fake_client.show_port.assert_called_once_with(port_id) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address', autospec=True) def _test__get_port_ip_address(self, mock_gfia, network): expected = "192.168.1.3" fake_vif = 'test-vif-%s' % network port = object_utils.create_test_port( self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid(), internal_info={ 'cleaning_vif_port_id': (fake_vif if network == 'cleaning' else None), 'provisioning_vif_port_id': (fake_vif if network == 'provisioning' else None), 'tenant_vif_port_id': (fake_vif if network == 'tenant' else None), } ) mock_gfia.return_value = expected with 
task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider result = api._get_port_ip_address(task, port, mock.sentinel.client) self.assertEqual(expected, result) mock_gfia.assert_called_once_with(mock.ANY, fake_vif, mock.sentinel.client) def test__get_port_ip_address_tenant(self): self._test__get_port_ip_address(network='tenant') def test__get_port_ip_address_cleaning(self): self._test__get_port_ip_address(network='cleaning') def test__get_port_ip_address_provisioning(self): self._test__get_port_ip_address(network='provisioning') @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address', autospec=True) def test__get_port_ip_address_for_portgroup(self, mock_gfia): expected = "192.168.1.3" pg = object_utils.create_test_portgroup( self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid(), internal_info={'tenant_vif_port_id': 'test-vif-A'}) mock_gfia.return_value = expected with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider result = api._get_port_ip_address(task, pg, mock.sentinel.client) self.assertEqual(expected, result) mock_gfia.assert_called_once_with(mock.ANY, 'test-vif-A', mock.sentinel.client) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address', autospec=True) def test__get_port_ip_address_with_exception(self, mock_gfia): expected = "192.168.1.3" port = object_utils.create_test_port(self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid()) mock_gfia.return_value = expected with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider self.assertRaises(exception.FailedToGetIPAddressOnPort, api._get_port_ip_address, task, port, mock.sentinel.client) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address', autospec=True) def test__get_port_ip_address_for_portgroup_with_exception( self, mock_gfia): 
expected = "192.168.1.3" pg = object_utils.create_test_portgroup(self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid()) mock_gfia.return_value = expected with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider self.assertRaises(exception.FailedToGetIPAddressOnPort, api._get_port_ip_address, task, pg, mock.sentinel.client) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address', autospec=True) def _test__get_ip_addresses_ports(self, key, mock_gfia): if key == "extra": kwargs1 = {key: {'vif_port_id': 'test-vif-A'}} else: kwargs1 = {key: {'tenant_vif_port_id': 'test-vif-A'}} ip_address = '10.10.0.1' expected = [ip_address] port = object_utils.create_test_port(self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid(), **kwargs1) mock_gfia.return_value = ip_address with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider result = api._get_ip_addresses(task, [port], mock.sentinel.client) self.assertEqual(expected, result) def test__get_ip_addresses_ports_extra(self): self._test__get_ip_addresses_ports('extra') def test__get_ip_addresses_ports_int_info(self): self._test__get_ip_addresses_ports('internal_info') @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_fixed_ip_address', autospec=True) def _test__get_ip_addresses_portgroup(self, key, mock_gfia): if key == "extra": kwargs1 = {key: {'vif_port_id': 'test-vif-A'}} else: kwargs1 = {key: {'tenant_vif_port_id': 'test-vif-A'}} ip_address = '10.10.0.1' expected = [ip_address] pg = object_utils.create_test_portgroup( self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid(), **kwargs1) mock_gfia.return_value = ip_address with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider result = api._get_ip_addresses(task, [pg], mock.sentinel.client) 
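The `_test__get_ip_addresses_*` helpers above parametrize on where the VIF reference is stored: legacy ports used `extra['vif_port_id']`, newer ones use `internal_info['tenant_vif_port_id']`. A minimal sketch of that lookup (the helper name is illustrative; the real resolution happens in Ironic's network interface code, not shown here):

```python
def get_tenant_vif(port):
    """Return the VIF a port is wired to, preferring the modern field.

    Newer ports record the VIF in internal_info['tenant_vif_port_id'];
    older ones used extra['vif_port_id'].  An empty or missing modern
    entry falls back to the legacy one.
    """
    internal_info = port.get('internal_info') or {}
    extra = port.get('extra') or {}
    return (internal_info.get('tenant_vif_port_id')
            or extra.get('vif_port_id'))
```

Either storage location then yields the same VIF id, which is why both parametrizations of the helper expect the identical IP address result.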
self.assertEqual(expected, result) def test__get_ip_addresses_portgroup_extra(self): self._test__get_ip_addresses_portgroup('extra') def test__get_ip_addresses_portgroup_int_info(self): self._test__get_ip_addresses_portgroup('internal_info') @mock.patch('ironic.common.neutron.get_client', autospec=True) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_port_ip_address', autospec=True) def test_get_ip_addresses(self, get_ip_mock, client_mock): ip_address = '10.10.0.1' expected = [ip_address] get_ip_mock.return_value = ip_address with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider result = api.get_ip_addresses(task) get_ip_mock.assert_called_once_with(mock.ANY, task, task.ports[0], client_mock.return_value) self.assertEqual(expected, result) @mock.patch('ironic.common.neutron.get_client', autospec=True) @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi._get_port_ip_address', autospec=True) def test_get_ip_addresses_for_port_and_portgroup(self, get_ip_mock, client_mock): object_utils.create_test_portgroup( self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff', uuid=uuidutils.generate_uuid(), internal_info={'tenant_vif_port_id': 'test-vif-A'}) with task_manager.acquire(self.context, self.node.uuid) as task: api = dhcp_factory.DHCPFactory().provider api.get_ip_addresses(task) get_ip_mock.assert_has_calls( [mock.call(mock.ANY, task, task.ports[0], client_mock.return_value), mock.call(mock.ANY, task, task.portgroups[0], client_mock.return_value)] ) ironic-15.0.0/ironic/tests/unit/dhcp/test_factory.py0000664000175000017500000000764313652514273022526 0ustar zuulzuul00000000000000# Copyright 2014 Rackspace, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import inspect import mock import stevedore from ironic.common import dhcp_factory from ironic.common import exception from ironic.dhcp import base as base_class from ironic.dhcp import neutron from ironic.dhcp import none from ironic.tests import base class TestDHCPFactory(base.TestCase): def setUp(self): super(TestDHCPFactory, self).setUp() self.config(endpoint_override='test-url', timeout=30, group='neutron') dhcp_factory.DHCPFactory._dhcp_provider = None self.addCleanup(setattr, dhcp_factory.DHCPFactory, '_dhcp_provider', None) def test_default_dhcp(self): # dhcp provider should default to neutron api = dhcp_factory.DHCPFactory() self.assertIsInstance(api.provider, neutron.NeutronDHCPApi) def test_set_none_dhcp(self): self.config(dhcp_provider='none', group='dhcp') api = dhcp_factory.DHCPFactory() self.assertIsInstance(api.provider, none.NoneDHCPApi) def test_set_neutron_dhcp(self): self.config(dhcp_provider='neutron', group='dhcp') api = dhcp_factory.DHCPFactory() self.assertIsInstance(api.provider, neutron.NeutronDHCPApi) def test_only_one_dhcp(self): self.config(dhcp_provider='none', group='dhcp') dhcp_factory.DHCPFactory() with mock.patch.object(dhcp_factory.DHCPFactory, '_set_dhcp_provider') as mock_set_dhcp: # There is already a dhcp_provider, so this shouldn't call # _set_dhcp_provider again. 
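`test_only_one_dhcp` above verifies that the factory loads its provider once and caches it at class level, so a second instantiation never calls `_set_dhcp_provider` again. A toy sketch of that caching pattern (class names are stand-ins, not Ironic's actual implementation, which loads the provider via stevedore):

```python
class NoneDHCPApi:
    """Stand-in provider for the sketch."""


class DHCPFactory:
    # Class-level cache: the first instantiation loads the provider and
    # later ones reuse it, so the (potentially expensive) driver load
    # happens only once per process.
    _dhcp_provider = None

    def __init__(self):
        if DHCPFactory._dhcp_provider is None:
            DHCPFactory._dhcp_provider = self._set_dhcp_provider()

    def _set_dhcp_provider(self):
        # The real factory resolves [dhcp]/dhcp_provider through a
        # stevedore DriverManager here.
        return NoneDHCPApi()

    @property
    def provider(self):
        return DHCPFactory._dhcp_provider
```

This is also why the tests reset `DHCPFactory._dhcp_provider = None` in `setUp` and in a cleanup: the cache would otherwise leak between test cases.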
dhcp_factory.DHCPFactory() self.assertEqual(0, mock_set_dhcp.call_count) def test_set_bad_dhcp(self): self.config(dhcp_provider='bad_dhcp', group='dhcp') self.assertRaises(exception.DHCPLoadError, dhcp_factory.DHCPFactory) @mock.patch.object(stevedore.driver, 'DriverManager', autospec=True) def test_dhcp_some_error(self, mock_drv_mgr): mock_drv_mgr.side_effect = Exception('No module mymod found.') self.assertRaises(exception.DHCPLoadError, dhcp_factory.DHCPFactory) class CompareBasetoModules(base.TestCase): def test_drivers_match_dhcp_base(self): signature_method = inspect.signature def _get_public_apis(inst): methods = {} for (name, value) in inspect.getmembers(inst, inspect.ismethod): if name.startswith("_"): continue methods[name] = value return methods def _compare_classes(baseclass, driverclass): basemethods = _get_public_apis(baseclass) implmethods = _get_public_apis(driverclass) for name in basemethods: baseargs = signature_method(basemethods[name]) implargs = signature_method(implmethods[name]) self.assertEqual( baseargs, implargs, "%s args of %s don't match base %s" % ( name, driverclass, baseclass) ) _compare_classes(base_class.BaseDHCP, none.NoneDHCPApi) _compare_classes(base_class.BaseDHCP, neutron.NeutronDHCPApi) ironic-15.0.0/ironic/tests/unit/dhcp/__init__.py0000664000175000017500000000000013652514273021533 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/objects/0000775000175000017500000000000013652514443020146 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/objects/test_port.py0000664000175000017500000003344413652514273022554 0ustar zuulzuul00000000000000# coding=utf-8 # # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import types import mock from oslo_config import cfg from testtools import matchers from ironic.common import exception from ironic import objects from ironic.objects import base as obj_base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF class TestPortObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn): def setUp(self): super(TestPortObject, self).setUp() self.fake_port = db_utils.get_test_port() def test_get_by_id(self): port_id = self.fake_port['id'] with mock.patch.object(self.dbapi, 'get_port_by_id', autospec=True) as mock_get_port: mock_get_port.return_value = self.fake_port port = objects.Port.get(self.context, port_id) mock_get_port.assert_called_once_with(port_id) self.assertEqual(self.context, port._context) def test_get_by_uuid(self): uuid = self.fake_port['uuid'] with mock.patch.object(self.dbapi, 'get_port_by_uuid', autospec=True) as mock_get_port: mock_get_port.return_value = self.fake_port port = objects.Port.get(self.context, uuid) mock_get_port.assert_called_once_with(uuid) self.assertEqual(self.context, port._context) def test_get_by_address(self): address = self.fake_port['address'] with mock.patch.object(self.dbapi, 'get_port_by_address', autospec=True) as mock_get_port: mock_get_port.return_value = self.fake_port port = objects.Port.get(self.context, address) mock_get_port.assert_called_once_with(address, owner=None) self.assertEqual(self.context, port._context) def 
test_get_bad_id_and_uuid_and_address(self): self.assertRaises(exception.InvalidIdentity, objects.Port.get, self.context, 'not-a-uuid') def test_create(self): port = objects.Port(self.context, **self.fake_port) with mock.patch.object(self.dbapi, 'create_port', autospec=True) as mock_create_port: mock_create_port.return_value = db_utils.get_test_port() port.create() args, _kwargs = mock_create_port.call_args self.assertEqual(objects.Port.VERSION, args[0]['version']) def test_save(self): uuid = self.fake_port['uuid'] address = "b2:54:00:cf:2d:40" test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_port_by_uuid', autospec=True) as mock_get_port: mock_get_port.return_value = self.fake_port with mock.patch.object(self.dbapi, 'update_port', autospec=True) as mock_update_port: mock_update_port.return_value = ( db_utils.get_test_port(address=address, updated_at=test_time)) p = objects.Port.get_by_uuid(self.context, uuid) p.address = address p.save() mock_get_port.assert_called_once_with(uuid) mock_update_port.assert_called_once_with( uuid, {'version': objects.Port.VERSION, 'address': "b2:54:00:cf:2d:40"}) self.assertEqual(self.context, p._context) res_updated_at = (p.updated_at).replace(tzinfo=None) self.assertEqual(test_time, res_updated_at) def test_refresh(self): uuid = self.fake_port['uuid'] returns = [self.fake_port, db_utils.get_test_port(address="c3:54:00:cf:2d:40")] expected = [mock.call(uuid), mock.call(uuid)] with mock.patch.object(self.dbapi, 'get_port_by_uuid', side_effect=returns, autospec=True) as mock_get_port: p = objects.Port.get_by_uuid(self.context, uuid) self.assertEqual("52:54:00:cf:2d:31", p.address) p.refresh() self.assertEqual("c3:54:00:cf:2d:40", p.address) self.assertEqual(expected, mock_get_port.call_args_list) self.assertEqual(self.context, p._context) def test_save_after_refresh(self): # Ensure that it's possible to do object.save() after object.refresh() address = "b2:54:00:cf:2d:40" db_node = 
db_utils.create_test_node() db_port = db_utils.create_test_port(node_id=db_node.id) p = objects.Port.get_by_uuid(self.context, db_port.uuid) p_copy = objects.Port.get_by_uuid(self.context, db_port.uuid) p.address = address p.save() p_copy.refresh() p_copy.address = 'aa:bb:cc:dd:ee:ff' # Ensure this passes and an exception is not generated p_copy.save() def test_list(self): with mock.patch.object(self.dbapi, 'get_port_list', autospec=True) as mock_get_list: mock_get_list.return_value = [self.fake_port] ports = objects.Port.list(self.context) self.assertThat(ports, matchers.HasLength(1)) self.assertIsInstance(ports[0], objects.Port) self.assertEqual(self.context, ports[0]._context) @mock.patch.object(obj_base.IronicObject, 'supports_version', spec_set=types.FunctionType) def test_supports_physical_network_supported(self, mock_sv): mock_sv.return_value = True self.assertTrue(objects.Port.supports_physical_network()) mock_sv.assert_called_once_with((1, 7)) @mock.patch.object(obj_base.IronicObject, 'supports_version', spec_set=types.FunctionType) def test_supports_physical_network_unsupported(self, mock_sv): mock_sv.return_value = False self.assertFalse(objects.Port.supports_physical_network()) mock_sv.assert_called_once_with((1, 7)) def test_payload_schemas(self): self._check_payload_schemas(objects.port, objects.Port.fields) @mock.patch.object(obj_base.IronicObject, 'supports_version', spec_set=types.FunctionType) def test_supports_is_smartnic_supported(self, mock_sv): mock_sv.return_value = True self.assertTrue(objects.Port.supports_is_smartnic()) mock_sv.assert_called_once_with((1, 9)) @mock.patch.object(obj_base.IronicObject, 'supports_version', spec_set=types.FunctionType) def test_supports_is_smartnic_unsupported(self, mock_sv): mock_sv.return_value = False self.assertFalse(objects.Port.supports_is_smartnic()) mock_sv.assert_called_once_with((1, 9)) class TestConvertToVersion(db_base.DbTestCase): def setUp(self): super(TestConvertToVersion, self).setUp() 
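The refresh tests above rely on `mock.side_effect` taking a list, which makes the mock return successive values on successive calls. A small self-contained demonstration of that pattern:

```python
from unittest import mock

# Passing a list as side_effect makes the mock return the next element
# on each call -- the pattern test_refresh uses to simulate the DB row
# changing between the initial load and the .refresh() call.
get_port = mock.Mock(side_effect=[
    {'address': '52:54:00:cf:2d:31'},
    {'address': 'c3:54:00:cf:2d:40'},
])

first = get_port('fake-uuid')
second = get_port('fake-uuid')
```

Exhausting the list raises `StopIteration`, so the test implicitly also asserts that the code under test queries the DB exactly twice.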
self.vif_id = 'some_uuid' extra = {'vif_port_id': self.vif_id} self.fake_port = db_utils.get_test_port(extra=extra) def test_physnet_supported_missing(self): # Physical network not set, should be set to default. port = objects.Port(self.context, **self.fake_port) delattr(port, 'physical_network') port.obj_reset_changes() port._convert_to_version("1.7") self.assertIsNone(port.physical_network) self.assertEqual({'physical_network': None}, port.obj_get_changes()) def test_physnet_supported_set(self): # Physical network set, no change required. port = objects.Port(self.context, **self.fake_port) port.physical_network = 'physnet1' port.obj_reset_changes() port._convert_to_version("1.7") self.assertEqual('physnet1', port.physical_network) self.assertEqual({}, port.obj_get_changes()) def test_physnet_unsupported_missing(self): # Physical network not set, no change required. port = objects.Port(self.context, **self.fake_port) delattr(port, 'physical_network') port.obj_reset_changes() port._convert_to_version("1.6") self.assertNotIn('physical_network', port) self.assertEqual({}, port.obj_get_changes()) def test_physnet_unsupported_set_remove(self): # Physical network set, should be removed. port = objects.Port(self.context, **self.fake_port) port.physical_network = 'physnet1' port.obj_reset_changes() port._convert_to_version("1.6") self.assertNotIn('physical_network', port) self.assertEqual({}, port.obj_get_changes()) def test_physnet_unsupported_set_no_remove_non_default(self): # Physical network set, should be set to default. port = objects.Port(self.context, **self.fake_port) port.physical_network = 'physnet1' port.obj_reset_changes() port._convert_to_version("1.6", False) self.assertIsNone(port.physical_network) self.assertEqual({'physical_network': None}, port.obj_get_changes()) def test_physnet_unsupported_set_no_remove_default(self): # Physical network set, no change required. 
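The `test_physnet_*` cases above pin down three conversion behaviours: upgrading to 1.7 fills a missing `physical_network` with the default, downgrading with field removal drops it, and downgrading without removal resets a non-default value. A toy model of that logic (this is not Ironic's implementation — the real `Port._convert_to_version` compares string versions and tracks changes via `obj_reset_changes`; here the target is a tuple and only one field is handled):

```python
class MiniPort:
    """Toy versioned object handling only the physical_network field."""

    def __init__(self, **fields):
        self.__dict__.update(fields)

    def convert_to_version(self, target, remove_unavailable_fields=True):
        has_field = hasattr(self, 'physical_network')
        if target >= (1, 7):
            # Upgrade: the field exists at 1.7+, so fill in the default.
            if not has_field:
                self.physical_network = None
        elif has_field:
            if remove_unavailable_fields:
                # Downgrade for sending to an old service: drop the field.
                delattr(self, 'physical_network')
            elif self.physical_network is not None:
                # Downgrade for persisting: reset to the default value.
                self.physical_network = None
```

The `remove_unavailable_fields` split matters because an object bound for an older RPC peer must not carry unknown fields, whereas one bound for the database only needs unknown fields neutralized to defaults.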
port = objects.Port(self.context, **self.fake_port) port.physical_network = None port.obj_reset_changes() port._convert_to_version("1.6", False) self.assertIsNone(port.physical_network) self.assertEqual({}, port.obj_get_changes()) def test_vif_in_extra_lower_version(self): # no conversion port = objects.Port(self.context, **self.fake_port) port._convert_to_version("1.7", False) self.assertFalse('tenant_vif_port_id' in port.internal_info) def test_vif_in_extra(self): for v in ['1.8', '1.9']: port = objects.Port(self.context, **self.fake_port) port._convert_to_version(v, False) self.assertEqual(self.vif_id, port.internal_info['tenant_vif_port_id']) def test_vif_in_extra_not_in_extra(self): port = objects.Port(self.context, **self.fake_port) port.extra.pop('vif_port_id') port._convert_to_version('1.8', False) self.assertFalse('tenant_vif_port_id' in port.internal_info) def test_vif_in_extra_in_internal_info(self): vif2 = 'another_uuid' port = objects.Port(self.context, **self.fake_port) port.internal_info['tenant_vif_port_id'] = vif2 port._convert_to_version('1.8', False) # no change self.assertEqual(vif2, port.internal_info['tenant_vif_port_id']) def test_is_smartnic_unsupported(self): port = objects.Port(self.context, **self.fake_port) port._convert_to_version("1.8") self.assertNotIn('is_smartnic', port) def test_is_smartnic_supported(self): port = objects.Port(self.context, **self.fake_port) port._convert_to_version("1.9") self.assertIn('is_smartnic', port) def test_is_smartnic_supported_missing(self): # is_smartnic is not set, should be set to default. port = objects.Port(self.context, **self.fake_port) delattr(port, 'is_smartnic') port.obj_reset_changes() port._convert_to_version("1.9") self.assertFalse(port.is_smartnic) self.assertIn('is_smartnic', port.obj_get_changes()) self.assertFalse(port.obj_get_changes()['is_smartnic']) def test_is_smartnic_supported_set(self): # is_smartnic is set, no change required. 
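The `test_vif_in_extra*` cases above exercise the version 1.8 migration of the attached VIF from its legacy home in `extra` to `internal_info`. A minimal sketch of that one-way copy (hypothetical function name; in Ironic this happens inside `Port._convert_to_version`):

```python
def migrate_vif_to_internal_info(extra, internal_info):
    """Copy the legacy VIF reference into its modern location.

    Before object version 1.8 the attached VIF lived in
    extra['vif_port_id']; from 1.8 on it is recorded in
    internal_info['tenant_vif_port_id'].  An already-populated
    internal_info entry wins over the legacy value, and a missing
    legacy value is simply left alone.
    """
    if ('vif_port_id' in extra
            and 'tenant_vif_port_id' not in internal_info):
        internal_info['tenant_vif_port_id'] = extra['vif_port_id']
    return internal_info
```

That precedence rule is exactly what `test_vif_in_extra_in_internal_info` asserts: a pre-existing `tenant_vif_port_id` is not overwritten by the value from `extra`.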
port = objects.Port(self.context, **self.fake_port) port.is_smartnic = True port.obj_reset_changes() port._convert_to_version("1.9") self.assertTrue(port.is_smartnic) self.assertNotIn('is_smartnic', port.obj_get_changes()) def test_is_smartnic_unsupported_missing(self): # is_smartnic is not set, no change required. port = objects.Port(self.context, **self.fake_port) delattr(port, 'is_smartnic') port.obj_reset_changes() port._convert_to_version("1.8") self.assertNotIn('is_smartnic', port) self.assertNotIn('is_smartnic', port.obj_get_changes()) def test_is_smartnic_unsupported_set_remove(self): # is_smartnic is set, should be removed. port = objects.Port(self.context, **self.fake_port) port.is_smartnic = False port.obj_reset_changes() port._convert_to_version("1.8") self.assertNotIn('is_smartnic', port) self.assertNotIn('is_smartnic', port.obj_get_changes()) def test_is_smartnic_unsupported_set_no_remove_non_default(self): # is_smartnic is set, should be set to default. port = objects.Port(self.context, **self.fake_port) port.is_smartnic = True port.obj_reset_changes() port._convert_to_version("1.8", False) self.assertFalse(port.is_smartnic) self.assertIn('is_smartnic', port.obj_get_changes()) self.assertFalse(port.obj_get_changes()['is_smartnic']) def test_is_smartnic_unsupported_set_no_remove_default(self): # is_smartnic is set, no change required. port = objects.Port(self.context, **self.fake_port) port.is_smartnic = False port.obj_reset_changes() port._convert_to_version("1.8", False) self.assertFalse(port.is_smartnic) self.assertNotIn('is_smartnic', port.obj_get_changes()) ironic-15.0.0/ironic/tests/unit/objects/test_trait.py0000664000175000017500000001135613652514273022711 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from ironic.common import context from ironic.db import api as dbapi from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils class TestTraitObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn): def setUp(self): super(TestTraitObject, self).setUp() self.ctxt = context.get_admin_context() self.fake_trait = db_utils.get_test_node_trait() self.node_id = self.fake_trait['node_id'] @mock.patch.object(dbapi.IMPL, 'get_node_traits_by_node_id', autospec=True) def test_get_by_id(self, mock_get_traits): mock_get_traits.return_value = [self.fake_trait] traits = objects.TraitList.get_by_node_id(self.context, self.node_id) mock_get_traits.assert_called_once_with(self.node_id) self.assertEqual(self.context, traits._context) self.assertEqual(1, len(traits)) self.assertEqual(self.fake_trait['trait'], traits[0].trait) self.assertEqual(self.fake_trait['node_id'], traits[0].node_id) @mock.patch.object(dbapi.IMPL, 'set_node_traits', autospec=True) def test_create_list(self, mock_set_traits): fake_trait2 = db_utils.get_test_node_trait(trait='CUSTOM_TRAIT2') traits = [self.fake_trait['trait'], fake_trait2['trait']] mock_set_traits.return_value = [self.fake_trait, fake_trait2] result = objects.TraitList.create(self.context, self.node_id, traits) mock_set_traits.assert_called_once_with(self.node_id, traits, '1.0') self.assertEqual(self.context, result._context) self.assertEqual(2, len(result)) self.assertEqual(self.fake_trait['node_id'], 
result[0].node_id) self.assertEqual(self.fake_trait['trait'], result[0].trait) self.assertEqual(fake_trait2['node_id'], result[1].node_id) self.assertEqual(fake_trait2['trait'], result[1].trait) @mock.patch.object(dbapi.IMPL, 'unset_node_traits', autospec=True) def test_destroy_list(self, mock_unset_traits): objects.TraitList.destroy(self.context, self.node_id) mock_unset_traits.assert_called_once_with(self.node_id) @mock.patch.object(dbapi.IMPL, 'add_node_trait', autospec=True) def test_create(self, mock_add_trait): trait = objects.Trait(context=self.context, node_id=self.node_id, trait="fake") mock_add_trait.return_value = self.fake_trait trait.create() mock_add_trait.assert_called_once_with(self.node_id, 'fake', '1.0') self.assertEqual(self.fake_trait['trait'], trait.trait) self.assertEqual(self.fake_trait['node_id'], trait.node_id) @mock.patch.object(dbapi.IMPL, 'delete_node_trait', autospec=True) def test_destroy(self, mock_delete_trait): objects.Trait.destroy(self.context, self.node_id, "trait") mock_delete_trait.assert_called_once_with(self.node_id, "trait") @mock.patch.object(dbapi.IMPL, 'node_trait_exists', autospec=True) def test_exists(self, mock_trait_exists): mock_trait_exists.return_value = True result = objects.Trait.exists(self.context, self.node_id, "trait") self.assertTrue(result) mock_trait_exists.assert_called_once_with(self.node_id, "trait") def test_get_trait_names(self): trait = objects.Trait(context=self.context, node_id=self.fake_trait['node_id'], trait=self.fake_trait['trait']) trait_list = objects.TraitList(context=self.context, objects=[trait]) result = trait_list.get_trait_names() self.assertEqual([self.fake_trait['trait']], result) def test_as_dict(self): trait = objects.Trait(context=self.context, node_id=self.fake_trait['node_id'], trait=self.fake_trait['trait']) trait_list = objects.TraitList(context=self.context, objects=[trait]) result = trait_list.as_dict() expected = {'objects': [{'node_id': self.fake_trait['node_id'], 'trait': 
self.fake_trait['trait']}]} self.assertEqual(expected, result) ironic-15.0.0/ironic/tests/unit/objects/test_node.py0000664000175000017500000015321013652514273022507 0ustar zuulzuul00000000000000# coding=utf-8 # # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_serialization import jsonutils from oslo_utils import uuidutils from testtools import matchers from ironic.common import context from ironic.common import exception from ironic import objects from ironic.objects import node as node_objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils class TestNodeObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn): def setUp(self): super(TestNodeObject, self).setUp() self.ctxt = context.get_admin_context() self.fake_node = db_utils.get_test_node() self.node = obj_utils.get_test_node(self.ctxt, **self.fake_node) def test_as_dict_insecure(self): self.node.driver_info['ipmi_password'] = 'fake' self.node.instance_info['configdrive'] = 'data' self.node.driver_internal_info['agent_secret_token'] = 'abc' d = self.node.as_dict() self.assertEqual('fake', d['driver_info']['ipmi_password']) self.assertEqual('data', d['instance_info']['configdrive']) self.assertEqual('abc', d['driver_internal_info']['agent_secret_token']) # Ensure the node can be serialised. 
jsonutils.dumps(d) def test_as_dict_secure(self): self.node.driver_info['ipmi_password'] = 'fake' self.node.instance_info['configdrive'] = 'data' self.node.driver_internal_info['agent_secret_token'] = 'abc' d = self.node.as_dict(secure=True) self.assertEqual('******', d['driver_info']['ipmi_password']) self.assertEqual('******', d['instance_info']['configdrive']) self.assertEqual('******', d['driver_internal_info']['agent_secret_token']) # Ensure the node can be serialised. jsonutils.dumps(d) def test_as_dict_with_traits(self): self.fake_node['traits'] = ['CUSTOM_1'] self.node = obj_utils.get_test_node(self.ctxt, **self.fake_node) d = self.node.as_dict() expected_traits = {'objects': [{'trait': 'CUSTOM_1'}]} self.assertEqual(expected_traits, d['traits']) # Ensure the node can be serialised. jsonutils.dumps(d) def test_get_by_id(self): node_id = self.fake_node['id'] with mock.patch.object(self.dbapi, 'get_node_by_id', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node node = objects.Node.get(self.context, node_id) mock_get_node.assert_called_once_with(node_id) self.assertEqual(self.context, node._context) def test_get_by_uuid(self): uuid = self.fake_node['uuid'] with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node node = objects.Node.get(self.context, uuid) mock_get_node.assert_called_once_with(uuid) self.assertEqual(self.context, node._context) def test_get_bad_id_and_uuid(self): self.assertRaises(exception.InvalidIdentity, objects.Node.get, self.context, 'not-a-uuid') def test_get_by_name(self): node_name = 'test' fake_node = db_utils.get_test_node(name=node_name) with mock.patch.object(self.dbapi, 'get_node_by_name', autospec=True) as mock_get_node: mock_get_node.return_value = fake_node node = objects.Node.get_by_name(self.context, node_name) mock_get_node.assert_called_once_with(node_name) self.assertEqual(self.context, node._context) def 
test_get_by_name_node_not_found(self): with mock.patch.object(self.dbapi, 'get_node_by_name', autospec=True) as mock_get_node: node_name = 'non-existent' mock_get_node.side_effect = exception.NodeNotFound(node=node_name) self.assertRaises(exception.NodeNotFound, objects.Node.get_by_name, self.context, node_name) def test_get_by_instance_uuid(self): uuid = self.fake_node['instance_uuid'] with mock.patch.object(self.dbapi, 'get_node_by_instance', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node node = objects.Node.get_by_instance_uuid(self.context, uuid) mock_get_node.assert_called_once_with(uuid) self.assertEqual(self.context, node._context) def test_get_by_instance_not_found(self): with mock.patch.object(self.dbapi, 'get_node_by_instance', autospec=True) as mock_get_node: instance = 'non-existent' mock_get_node.side_effect = \ exception.InstanceNotFound(instance=instance) self.assertRaises(exception.InstanceNotFound, objects.Node.get_by_instance_uuid, self.context, instance) def test_get_by_port_addresses(self): with mock.patch.object(self.dbapi, 'get_node_by_port_addresses', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node node = objects.Node.get_by_port_addresses(self.context, ['aa:bb:cc:dd:ee:ff']) mock_get_node.assert_called_once_with(['aa:bb:cc:dd:ee:ff']) self.assertEqual(self.context, node._context) def test_save(self): uuid = self.fake_node['uuid'] test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with mock.patch.object(self.dbapi, 'update_node', autospec=True) as mock_update_node: mock_update_node.return_value = db_utils.get_test_node( properties={"fake": "property"}, driver='fake-driver', driver_internal_info={}, updated_at=test_time) n = objects.Node.get(self.context, uuid) self.assertEqual({"private_state": "secret value"}, n.driver_internal_info) n.properties = {"fake": 
"property"} n.driver = "fake-driver" n.save() mock_get_node.assert_called_once_with(uuid) mock_update_node.assert_called_once_with( uuid, {'properties': {"fake": "property"}, 'driver': 'fake-driver', 'driver_internal_info': {}, 'version': objects.Node.VERSION}) self.assertEqual(self.context, n._context) res_updated_at = (n.updated_at).replace(tzinfo=None) self.assertEqual(test_time, res_updated_at) self.assertEqual({}, n.driver_internal_info) @mock.patch.object(node_objects, 'LOG', autospec=True) def test_save_truncated(self, log_mock): uuid = self.fake_node['uuid'] test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with mock.patch.object(self.dbapi, 'update_node', autospec=True) as mock_update_node: mock_update_node.return_value = db_utils.get_test_node( properties={'fake': 'property'}, driver='fake-driver', driver_internal_info={}, updated_at=test_time) n = objects.Node.get(self.context, uuid) self.assertEqual({'private_state': 'secret value'}, n.driver_internal_info) n.properties = {'fake': 'property'} n.driver = 'fake-driver' last_error = 'BOOM' * 2000 maintenance_reason = last_error n.last_error = last_error n.maintenance_reason = maintenance_reason n.save() self.assertEqual([ mock.call.info( 'Truncating too long %s to %s characters for node %s', 'last_error', node_objects.CONF.log_in_db_max_size, uuid), mock.call.info( 'Truncating too long %s to %s characters for node %s', 'maintenance_reason', node_objects.CONF.log_in_db_max_size, uuid)], log_mock.mock_calls) mock_get_node.assert_called_once_with(uuid) mock_update_node.assert_called_once_with( uuid, { 'properties': {'fake': 'property'}, 'driver': 'fake-driver', 'driver_internal_info': {}, 'version': objects.Node.VERSION, 'maintenance_reason': maintenance_reason[ 0:node_objects.CONF.log_in_db_max_size], 'last_error': last_error[ 0:node_objects.CONF.log_in_db_max_size] } ) 
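`test_save_truncated` above checks that overlong `last_error` and `maintenance_reason` values are clipped to `CONF.log_in_db_max_size` before the UPDATE is issued. A minimal sketch of that clipping (the 4096 default is an assumption here, as is the helper name):

```python
LOG_IN_DB_MAX_SIZE = 4096  # assumed stand-in for CONF.log_in_db_max_size


def truncate_text_fields(updates,
                         fields=('last_error', 'maintenance_reason'),
                         max_size=LOG_IN_DB_MAX_SIZE):
    """Clip overlong text columns before they reach the database.

    Mirrors what test_save_truncated asserts: values longer than
    max_size are cut down to the limit, shorter ones pass through
    untouched, and absent fields are ignored.
    """
    for field in fields:
        value = updates.get(field)
        if value is not None and len(value) > max_size:
            updates[field] = value[:max_size]
    return updates
```

Truncating at save time keeps a runaway error message from failing the row update outright, at the cost of losing the tail of the message — which is why the test also expects a log line per truncated field.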
self.assertEqual(self.context, n._context) res_updated_at = (n.updated_at).replace(tzinfo=None) self.assertEqual(test_time, res_updated_at) self.assertEqual({}, n.driver_internal_info) def test_save_updated_at_field(self): uuid = self.fake_node['uuid'] extra = {"test": 123} test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with mock.patch.object(self.dbapi, 'update_node', autospec=True) as mock_update_node: mock_update_node.return_value = ( db_utils.get_test_node(extra=extra, updated_at=test_time)) n = objects.Node.get(self.context, uuid) self.assertEqual({"private_state": "secret value"}, n.driver_internal_info) n.properties = {"fake": "property"} n.extra = extra n.driver = "fake-driver" n.driver_internal_info = {} n.save() mock_get_node.assert_called_once_with(uuid) mock_update_node.assert_called_once_with( uuid, {'properties': {"fake": "property"}, 'driver': 'fake-driver', 'driver_internal_info': {}, 'extra': {'test': 123}, 'version': objects.Node.VERSION}) self.assertEqual(self.context, n._context) res_updated_at = n.updated_at.replace(tzinfo=None) self.assertEqual(test_time, res_updated_at) def test_save_with_traits(self): uuid = self.fake_node['uuid'] with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with mock.patch.object(self.dbapi, 'update_node', autospec=True) as mock_update_node: n = objects.Node.get(self.context, uuid) trait = objects.Trait(self.context, node_id=n.id, trait='CUSTOM_1') n.traits = objects.TraitList(self.context, objects=[trait]) self.assertRaises(exception.BadRequest, n.save) self.assertFalse(mock_update_node.called) def test_save_with_conductor_group(self): uuid = self.fake_node['uuid'] with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with 
mock.patch.object(self.dbapi, 'update_node', autospec=True) as mock_update_node: mock_update_node.return_value = ( db_utils.get_test_node(conductor_group='group1')) n = objects.Node.get(self.context, uuid) n.conductor_group = 'group1' n.save() self.assertTrue(mock_update_node.called) mock_update_node.assert_called_once_with( uuid, {'conductor_group': 'group1', 'version': objects.Node.VERSION}) def test_save_with_conductor_group_uppercase(self): uuid = self.fake_node['uuid'] with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with mock.patch.object(self.dbapi, 'update_node', autospec=True) as mock_update_node: mock_update_node.return_value = ( db_utils.get_test_node(conductor_group='group1')) n = objects.Node.get(self.context, uuid) n.conductor_group = 'GROUP1' n.save() mock_update_node.assert_called_once_with( uuid, {'conductor_group': 'group1', 'version': objects.Node.VERSION}) def test_save_with_conductor_group_fail(self): uuid = self.fake_node['uuid'] with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with mock.patch.object(self.dbapi, 'update_node', autospec=True) as mock_update_node: n = objects.Node.get(self.context, uuid) n.conductor_group = 'group:1' self.assertRaises(exception.InvalidConductorGroup, n.save) self.assertFalse(mock_update_node.called) def test_refresh(self): uuid = self.fake_node['uuid'] returns = [dict(self.fake_node, properties={"fake": "first"}), dict(self.fake_node, properties={"fake": "second"})] expected = [mock.call(uuid), mock.call(uuid)] with mock.patch.object(self.dbapi, 'get_node_by_uuid', side_effect=returns, autospec=True) as mock_get_node: n = objects.Node.get(self.context, uuid) self.assertEqual({"fake": "first"}, n.properties) n.refresh() self.assertEqual({"fake": "second"}, n.properties) self.assertEqual(expected, mock_get_node.call_args_list) 
self.assertEqual(self.context, n._context) def test_save_after_refresh(self): # Ensure that it's possible to do object.save() after object.refresh() db_node = db_utils.create_test_node() n = objects.Node.get_by_uuid(self.context, db_node.uuid) n_copy = objects.Node.get_by_uuid(self.context, db_node.uuid) n.name = 'b240' n.save() n_copy.refresh() n_copy.name = 'aaff' # Ensure this passes and an exception is not generated n_copy.save() def test_list(self): with mock.patch.object(self.dbapi, 'get_node_list', autospec=True) as mock_get_list: mock_get_list.return_value = [self.fake_node] nodes = objects.Node.list(self.context) self.assertThat(nodes, matchers.HasLength(1)) self.assertIsInstance(nodes[0], objects.Node) self.assertEqual(self.context, nodes[0]._context) def test_reserve(self): with mock.patch.object(self.dbapi, 'reserve_node', autospec=True) as mock_reserve: mock_reserve.return_value = self.fake_node node_id = self.fake_node['id'] fake_tag = 'fake-tag' node = objects.Node.reserve(self.context, fake_tag, node_id) self.assertIsInstance(node, objects.Node) mock_reserve.assert_called_once_with(fake_tag, node_id) self.assertEqual(self.context, node._context) def test_reserve_node_not_found(self): with mock.patch.object(self.dbapi, 'reserve_node', autospec=True) as mock_reserve: node_id = 'non-existent' mock_reserve.side_effect = exception.NodeNotFound(node=node_id) self.assertRaises(exception.NodeNotFound, objects.Node.reserve, self.context, 'fake-tag', node_id) def test_release(self): with mock.patch.object(self.dbapi, 'release_node', autospec=True) as mock_release: node_id = self.fake_node['id'] fake_tag = 'fake-tag' objects.Node.release(self.context, fake_tag, node_id) mock_release.assert_called_once_with(fake_tag, node_id) def test_release_node_not_found(self): with mock.patch.object(self.dbapi, 'release_node', autospec=True) as mock_release: node_id = 'non-existent' mock_release.side_effect = exception.NodeNotFound(node=node_id) 
self.assertRaises(exception.NodeNotFound, objects.Node.release, self.context, 'fake-tag', node_id) def test_touch_provisioning(self): with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node with mock.patch.object(self.dbapi, 'touch_node_provisioning', autospec=True) as mock_touch: node = objects.Node.get(self.context, self.fake_node['uuid']) node.touch_provisioning() mock_touch.assert_called_once_with(node.id) def test_create(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) with mock.patch.object(self.dbapi, 'create_node', autospec=True) as mock_create_node: mock_create_node.return_value = db_utils.get_test_node() node.create() args, _kwargs = mock_create_node.call_args self.assertEqual(objects.Node.VERSION, args[0]['version']) self.assertEqual(1, mock_create_node.call_count) def test_create_with_invalid_properties(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.properties = {"local_gb": "5G"} self.assertRaises(exception.InvalidParameterValue, node.create) def test_create_with_traits(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) trait = objects.Trait(self.context, node_id=node.id, trait='CUSTOM_1') node.traits = objects.TraitList(self.context, objects=[trait]) self.assertRaises(exception.BadRequest, node.create) def test_update_with_invalid_properties(self): uuid = self.fake_node['uuid'] with mock.patch.object(self.dbapi, 'get_node_by_uuid', autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node node = objects.Node.get(self.context, uuid) node.properties = {"local_gb": "5G", "memory_mb": "5", 'cpus': '-1', 'cpu_arch': 'x86_64'} self.assertRaisesRegex(exception.InvalidParameterValue, ".*local_gb=5G, cpus=-1$", node.save) mock_get_node.assert_called_once_with(uuid) def test__validate_property_values_success(self): uuid = self.fake_node['uuid'] with mock.patch.object(self.dbapi, 'get_node_by_uuid', 
autospec=True) as mock_get_node: mock_get_node.return_value = self.fake_node node = objects.Node.get(self.context, uuid) values = self.fake_node expect = { 'cpu_arch': 'x86_64', "cpus": '8', "local_gb": '10', "memory_mb": '4096', } node._validate_property_values(values['properties']) self.assertEqual(expect, values['properties']) def test_payload_schemas(self): self._check_payload_schemas(objects.node, objects.Node.fields) class TestConvertToVersion(db_base.DbTestCase): def setUp(self): super(TestConvertToVersion, self).setUp() self.ctxt = context.get_admin_context() self.fake_node = db_utils.get_test_node(driver='fake-hardware') def test_rescue_supported_missing(self): # rescue_interface not set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'rescue_interface') node.obj_reset_changes() node._convert_to_version("1.22") self.assertIsNone(node.rescue_interface) self.assertEqual({'rescue_interface': None}, node.obj_get_changes()) def test_rescue_supported_set(self): # rescue_interface set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.rescue_interface = 'fake' node.obj_reset_changes() node._convert_to_version("1.22") self.assertEqual('fake', node.rescue_interface) self.assertEqual({}, node.obj_get_changes()) def test_rescue_unsupported_missing(self): # rescue_interface not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'rescue_interface') node.obj_reset_changes() node._convert_to_version("1.21") self.assertNotIn('rescue_interface', node) self.assertEqual({}, node.obj_get_changes()) def test_rescue_unsupported_set_remove(self): # rescue_interface set, should be removed. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.rescue_interface = 'fake' node.obj_reset_changes() node._convert_to_version("1.21") self.assertNotIn('rescue_interface', node) self.assertEqual({}, node.obj_get_changes()) def test_rescue_unsupported_set_no_remove_non_default(self): # rescue_interface set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.rescue_interface = 'fake' node.obj_reset_changes() node._convert_to_version("1.21", False) self.assertIsNone(node.rescue_interface) self.assertEqual({'rescue_interface': None, 'traits': None}, node.obj_get_changes()) def test_rescue_unsupported_set_no_remove_default(self): # rescue_interface set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.rescue_interface = None node.traits = None node.obj_reset_changes() node._convert_to_version("1.21", False) self.assertIsNone(node.rescue_interface) self.assertEqual({}, node.obj_get_changes()) def test_traits_supported_missing(self): # traits not set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'traits') node.obj_reset_changes() node._convert_to_version("1.23") self.assertIsNone(node.traits) self.assertEqual({'traits': None}, node.obj_get_changes()) def test_traits_supported_set(self): # traits set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) traits = objects.TraitList( objects=[objects.Trait('CUSTOM_TRAIT')]) traits.obj_reset_changes() node.traits = traits node.obj_reset_changes() node._convert_to_version("1.23") self.assertEqual(traits, node.traits) self.assertEqual({}, node.obj_get_changes()) def test_traits_unsupported_missing_remove(self): # traits not set, no change required. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'traits') node.obj_reset_changes() node._convert_to_version("1.22") self.assertNotIn('traits', node) self.assertEqual({}, node.obj_get_changes()) def test_traits_unsupported_missing(self): # traits not set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'traits') node.obj_reset_changes() node._convert_to_version("1.22", False) self.assertNotIn('traits', node) self.assertEqual({}, node.obj_get_changes()) def test_trait_unsupported_set_no_remove_non_default(self): # traits set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.traits = objects.TraitList(self.ctxt) node.traits.obj_reset_changes() node.obj_reset_changes() node._convert_to_version("1.22", False) self.assertIsNone(node.traits) self.assertEqual({'traits': None}, node.obj_get_changes()) def test_trait_unsupported_set_no_remove_default(self): # traits set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.traits = None node.obj_reset_changes() node._convert_to_version("1.22", False) self.assertIsNone(node.traits) self.assertEqual({}, node.obj_get_changes()) def test_bios_supported_missing(self): # bios_interface not set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'bios_interface') node.obj_reset_changes() node._convert_to_version("1.24") self.assertIsNone(node.bios_interface) self.assertEqual({'bios_interface': None}, node.obj_get_changes()) def test_bios_supported_set(self): # bios_interface set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.bios_interface = 'fake' node.obj_reset_changes() node._convert_to_version("1.24") self.assertEqual('fake', node.bios_interface) self.assertEqual({}, node.obj_get_changes()) def test_bios_unsupported_missing(self): # bios_interface not set, no change required. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'bios_interface') node.obj_reset_changes() node._convert_to_version("1.23") self.assertNotIn('bios_interface', node) self.assertEqual({}, node.obj_get_changes()) def test_bios_unsupported_set_remove(self): # bios_interface set, should be removed. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.bios_interface = 'fake' node.obj_reset_changes() node._convert_to_version("1.23") self.assertNotIn('bios_interface', node) self.assertEqual({}, node.obj_get_changes()) def test_bios_unsupported_set_no_remove_non_default(self): # bios_interface set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.bios_interface = 'fake' node.obj_reset_changes() node._convert_to_version("1.23", False) self.assertIsNone(node.bios_interface) self.assertEqual({'bios_interface': None}, node.obj_get_changes()) def test_bios_unsupported_set_no_remove_default(self): # bios_interface set, no change required. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.bios_interface = None node.obj_reset_changes() node._convert_to_version("1.23", False) self.assertIsNone(node.bios_interface) self.assertEqual({}, node.obj_get_changes()) def test_fault_supported_missing(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'fault') node.obj_reset_changes() node._convert_to_version("1.25") self.assertIsNone(node.fault) self.assertEqual({'fault': None}, node.obj_get_changes()) def test_fault_supported_untouched(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.maintenance = True node.fault = 'a fake fault' node.obj_reset_changes() node._convert_to_version("1.25") self.assertEqual('a fake fault', node.fault) self.assertEqual({}, node.obj_get_changes()) def test_fault_unsupported_missing(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'fault') node.obj_reset_changes() node._convert_to_version("1.24") self.assertNotIn('fault', node) self.assertEqual({}, node.obj_get_changes()) def test_fault_unsupported_set_remove(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.maintenance = True node.fault = 'some fake fault' node.obj_reset_changes() node._convert_to_version("1.24") self.assertNotIn('fault', node) self.assertEqual({}, node.obj_get_changes()) def test_fault_unsupported_set_remove_in_maintenance(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.maintenance = True node.fault = 'some fake type' node.obj_reset_changes() node._convert_to_version("1.24", False) self.assertIsNone(node.fault) self.assertEqual({'fault': None}, node.obj_get_changes()) def test_conductor_group_supported_set(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.conductor_group = 'group1' node.obj_reset_changes() node._convert_to_version('1.27') self.assertEqual('group1', node.conductor_group) self.assertEqual({}, node.obj_get_changes()) def 
test_conductor_group_supported_unset(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'conductor_group') node.obj_reset_changes() node._convert_to_version('1.27') self.assertEqual('', node.conductor_group) self.assertEqual({'conductor_group': ''}, node.obj_get_changes()) def test_conductor_group_unsupported_set(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.conductor_group = 'group1' node.obj_reset_changes() node._convert_to_version('1.26') self.assertNotIn('conductor_group', node) self.assertEqual({}, node.obj_get_changes()) def test_conductor_group_unsupported_unset(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'conductor_group') node.obj_reset_changes() node._convert_to_version('1.26') self.assertNotIn('conductor_group', node) self.assertEqual({}, node.obj_get_changes()) def test_conductor_group_unsupported_set_no_remove(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.conductor_group = 'group1' node.obj_reset_changes() node._convert_to_version('1.26', remove_unavailable_fields=False) self.assertEqual('', node.conductor_group) self.assertEqual({'conductor_group': ''}, node.obj_get_changes()) def test_automated_clean_supported_missing(self): # automated_clean_interface not set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'automated_clean') node.obj_reset_changes() node._convert_to_version("1.28") self.assertIsNone(node.automated_clean) self.assertEqual({'automated_clean': None}, node.obj_get_changes()) def test_automated_clean_supported_set(self): # automated_clean set, no change required. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.automated_clean = True node.obj_reset_changes() node._convert_to_version("1.28") self.assertEqual(True, node.automated_clean) self.assertEqual({}, node.obj_get_changes()) def test_automated_clean_unsupported_missing(self): # automated_clean not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'automated_clean') node.obj_reset_changes() node._convert_to_version("1.27") self.assertNotIn('automated_clean', node) self.assertEqual({}, node.obj_get_changes()) def test_automated_clean_unsupported_set_remove(self): # automated_clean set, should be removed. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.automated_clean = True node.obj_reset_changes() node._convert_to_version("1.27") self.assertNotIn('automated_clean', node) self.assertEqual({}, node.obj_get_changes()) def test_automated_clean_unsupported_set_no_remove_non_default(self): # automated_clean set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.automated_clean = True node.obj_reset_changes() node._convert_to_version("1.27", False) self.assertIsNone(node.automated_clean) self.assertEqual({'automated_clean': None}, node.obj_get_changes()) def test_automated_clean_unsupported_set_no_remove_default(self): # automated_clean set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.automated_clean = None node.obj_reset_changes() node._convert_to_version("1.27", False) self.assertIsNone(node.automated_clean) self.assertEqual({}, node.obj_get_changes()) def test_protected_supported_missing(self): # protected_interface not set, should be set to default. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'protected') delattr(node, 'protected_reason') node.obj_reset_changes() node._convert_to_version("1.29") self.assertFalse(node.protected) self.assertIsNone(node.protected_reason) self.assertEqual({'protected': False, 'protected_reason': None}, node.obj_get_changes()) def test_protected_supported_set(self): # protected set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.protected = True node.protected_reason = 'foo' node.obj_reset_changes() node._convert_to_version("1.29") self.assertTrue(node.protected) self.assertEqual('foo', node.protected_reason) self.assertEqual({}, node.obj_get_changes()) def test_protected_unsupported_missing(self): # protected not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'protected') delattr(node, 'protected_reason') node.obj_reset_changes() node._convert_to_version("1.28") self.assertNotIn('protected', node) self.assertNotIn('protected_reason', node) self.assertEqual({}, node.obj_get_changes()) def test_protected_unsupported_set_remove(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.protected = True node.protected_reason = 'foo' node.obj_reset_changes() node._convert_to_version("1.28") self.assertNotIn('protected', node) self.assertNotIn('protected_reason', node) self.assertEqual({}, node.obj_get_changes()) def test_protected_unsupported_set_no_remove_non_default(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.protected = True node.protected_reason = 'foo' node.obj_reset_changes() node._convert_to_version("1.28", False) self.assertFalse(node.protected) self.assertIsNone(node.protected_reason) self.assertEqual({'protected': False, 'protected_reason': None}, node.obj_get_changes()) def test_retired_supported_missing(self): # retired not set, should be set to default. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'retired') delattr(node, 'retired_reason') node.obj_reset_changes() node._convert_to_version("1.33") self.assertFalse(node.retired) self.assertIsNone(node.retired_reason) self.assertEqual({'retired': False, 'retired_reason': None}, node.obj_get_changes()) def test_retired_supported_set(self): # retired set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.retired = True node.retired_reason = 'a reason' node.obj_reset_changes() node._convert_to_version("1.33") self.assertTrue(node.retired) self.assertEqual('a reason', node.retired_reason) self.assertEqual({}, node.obj_get_changes()) def test_retired_unsupported_missing(self): # retired not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'retired') delattr(node, 'retired_reason') node.obj_reset_changes() node._convert_to_version("1.32") self.assertNotIn('retired', node) self.assertNotIn('retired_reason', node) self.assertEqual({}, node.obj_get_changes()) def test_retired_unsupported_set_remove(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.retired = True node.retired_reason = 'another reason' node.obj_reset_changes() node._convert_to_version("1.32") self.assertNotIn('retired', node) self.assertNotIn('retired_reason', node) self.assertEqual({}, node.obj_get_changes()) def test_retired_unsupported_set_no_remove_non_default(self): node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.retired = True node.retired_reason = 'yet another reason' node.obj_reset_changes() node._convert_to_version("1.32", False) self.assertFalse(node.retired) self.assertIsNone(node.retired_reason) self.assertEqual({'retired': False, 'retired_reason': None}, node.obj_get_changes()) def test_owner_supported_missing(self): # owner not set, should be set to default. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'owner') node.obj_reset_changes() node._convert_to_version("1.30") self.assertIsNone(node.owner) self.assertEqual({'owner': None}, node.obj_get_changes()) def test_owner_supported_set(self): # owner set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.owner = "Sure, there is an owner" node.obj_reset_changes() node._convert_to_version("1.30") self.assertEqual("Sure, there is an owner", node.owner) self.assertEqual({}, node.obj_get_changes()) def test_owner_unsupported_missing(self): # owner not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'owner') node.obj_reset_changes() node._convert_to_version("1.29") self.assertNotIn('owner', node) self.assertEqual({}, node.obj_get_changes()) def test_owner_unsupported_set_remove(self): # owner set, should be removed. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.owner = "magic" node.obj_reset_changes() node._convert_to_version("1.29") self.assertNotIn('owner', node) self.assertEqual({}, node.obj_get_changes()) def test_owner_unsupported_set_no_remove_non_default(self): # owner set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.owner = "magic" node.obj_reset_changes() node._convert_to_version("1.29", False) self.assertIsNone(node.owner) self.assertEqual({'owner': None}, node.obj_get_changes()) def test_owner_unsupported_set_no_remove_default(self): # owner set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.owner = None node.obj_reset_changes() node._convert_to_version("1.29", False) self.assertIsNone(node.owner) self.assertEqual({}, node.obj_get_changes()) def test_allocation_id_supported_missing(self): # allocation_id_interface not set, should be set to default. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'allocation_id') node.obj_reset_changes() node._convert_to_version("1.31") self.assertIsNone(node.allocation_id) self.assertEqual({'allocation_id': None}, node.obj_get_changes()) def test_allocation_id_supported_set(self): # allocation_id set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.allocation_id = 42 node.obj_reset_changes() node._convert_to_version("1.31") self.assertEqual(42, node.allocation_id) self.assertEqual({}, node.obj_get_changes()) def test_allocation_id_unsupported_missing(self): # allocation_id not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'allocation_id') node.obj_reset_changes() node._convert_to_version("1.30") self.assertNotIn('allocation_id', node) self.assertEqual({}, node.obj_get_changes()) def test_allocation_id_unsupported_set_remove(self): # allocation_id set, should be removed. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.allocation_id = 42 node.obj_reset_changes() node._convert_to_version("1.30") self.assertNotIn('allocation_id', node) self.assertEqual({}, node.obj_get_changes()) def test_allocation_id_unsupported_set_no_remove_non_default(self): # allocation_id set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.allocation_id = 42 node.obj_reset_changes() node._convert_to_version("1.30", False) self.assertIsNone(node.allocation_id) self.assertEqual({'allocation_id': None}, node.obj_get_changes()) def test_allocation_id_unsupported_set_no_remove_default(self): # allocation_id set, no change required. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.allocation_id = None node.obj_reset_changes() node._convert_to_version("1.30", False) self.assertIsNone(node.allocation_id) self.assertEqual({}, node.obj_get_changes()) def test_description_supported_missing(self): # description not set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'description') node.obj_reset_changes() node._convert_to_version("1.32") self.assertIsNone(node.description) self.assertEqual({'description': None}, node.obj_get_changes()) def test_description_supported_set(self): # description set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.description = "Useful information relates to this node" node.obj_reset_changes() node._convert_to_version("1.32") self.assertEqual("Useful information relates to this node", node.description) self.assertEqual({}, node.obj_get_changes()) def test_description_unsupported_missing(self): # description not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'description') node.obj_reset_changes() node._convert_to_version("1.31") self.assertNotIn('description', node) self.assertEqual({}, node.obj_get_changes()) def test_description_unsupported_set_remove(self): # description set, should be removed. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.description = "Useful piece" node.obj_reset_changes() node._convert_to_version("1.31") self.assertNotIn('description', node) self.assertEqual({}, node.obj_get_changes()) def test_description_unsupported_set_no_remove_non_default(self): # description set, should be set to default. 
node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.description = "Useful piece" node.obj_reset_changes() node._convert_to_version("1.31", False) self.assertIsNone(node.description) self.assertEqual({'description': None}, node.obj_get_changes()) def test_description_unsupported_set_no_remove_default(self): # description set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.description = None node.obj_reset_changes() node._convert_to_version("1.31", False) self.assertIsNone(node.description) self.assertEqual({}, node.obj_get_changes()) def test_lessee_supported_missing(self): # lessee not set, should be set to default. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'lessee') node.obj_reset_changes() node._convert_to_version("1.34") self.assertIsNone(node.lessee) self.assertEqual({'lessee': None}, node.obj_get_changes()) def test_lessee_supported_set(self): # lessee set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.lessee = "some-lucky-project" node.obj_reset_changes() node._convert_to_version("1.34") self.assertEqual("some-lucky-project", node.lessee) self.assertEqual({}, node.obj_get_changes()) def test_lessee_unsupported_missing(self): # lessee not set, no change required. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) delattr(node, 'lessee') node.obj_reset_changes() node._convert_to_version("1.33") self.assertNotIn('lessee', node) self.assertEqual({}, node.obj_get_changes()) def test_lessee_unsupported_set_remove(self): # lessee set, should be removed. node = obj_utils.get_test_node(self.ctxt, **self.fake_node) node.lessee = "some-lucky-project" node.obj_reset_changes() node._convert_to_version("1.33") self.assertNotIn('lessee', node) self.assertEqual({}, node.obj_get_changes()) def test_lessee_unsupported_set_no_remove_non_default(self): # lessee set, should be set to default. 
        node = obj_utils.get_test_node(self.ctxt, **self.fake_node)
        node.lessee = "some-lucky-project"
        node.obj_reset_changes()
        node._convert_to_version("1.33", False)
        self.assertIsNone(node.lessee)
        self.assertEqual({'lessee': None}, node.obj_get_changes())

    def test_lessee_unsupported_set_no_remove_default(self):
        # lessee set, no change required.
        node = obj_utils.get_test_node(self.ctxt, **self.fake_node)
        node.lessee = None
        node.obj_reset_changes()
        node._convert_to_version("1.33", False)
        self.assertIsNone(node.lessee)
        self.assertEqual({}, node.obj_get_changes())


class TestNodePayloads(db_base.DbTestCase):

    def setUp(self):
        super(TestNodePayloads, self).setUp()
        self.ctxt = context.get_admin_context()
        self.fake_node = db_utils.get_test_node()
        self.node = obj_utils.get_test_node(self.ctxt, **self.fake_node)

    def _test_node_payload(self, payload):
        self.assertEqual(self.node.clean_step, payload.clean_step)
        self.assertEqual(self.node.console_enabled, payload.console_enabled)
        self.assertEqual(self.node.created_at, payload.created_at)
        self.assertEqual(self.node.driver, payload.driver)
        self.assertEqual(self.node.extra, payload.extra)
        self.assertEqual(self.node.inspection_finished_at,
                         payload.inspection_finished_at)
        self.assertEqual(self.node.inspection_started_at,
                         payload.inspection_started_at)
        self.assertEqual(self.node.instance_uuid, payload.instance_uuid)
        self.assertEqual(self.node.last_error, payload.last_error)
        self.assertEqual(self.node.maintenance, payload.maintenance)
        self.assertEqual(self.node.maintenance_reason,
                         payload.maintenance_reason)
        self.assertEqual(self.node.fault, payload.fault)
        self.assertEqual(self.node.bios_interface, payload.bios_interface)
        self.assertEqual(self.node.boot_interface, payload.boot_interface)
        self.assertEqual(self.node.console_interface,
                         payload.console_interface)
        self.assertEqual(self.node.deploy_interface, payload.deploy_interface)
        self.assertEqual(self.node.inspect_interface,
                         payload.inspect_interface)
        self.assertEqual(self.node.management_interface,
                         payload.management_interface)
        self.assertEqual(self.node.network_interface,
                         payload.network_interface)
        self.assertEqual(self.node.power_interface, payload.power_interface)
        self.assertEqual(self.node.raid_interface, payload.raid_interface)
        self.assertEqual(self.node.storage_interface,
                         payload.storage_interface)
        self.assertEqual(self.node.vendor_interface, payload.vendor_interface)
        self.assertEqual(self.node.name, payload.name)
        self.assertEqual(self.node.power_state, payload.power_state)
        self.assertEqual(self.node.properties, payload.properties)
        self.assertEqual(self.node.provision_state, payload.provision_state)
        self.assertEqual(self.node.provision_updated_at,
                         payload.provision_updated_at)
        self.assertEqual(self.node.resource_class, payload.resource_class)
        self.assertEqual(self.node.target_power_state,
                         payload.target_power_state)
        self.assertEqual(self.node.target_provision_state,
                         payload.target_provision_state)
        self.assertEqual(self.node.traits.get_trait_names(), payload.traits)
        self.assertEqual(self.node.updated_at, payload.updated_at)
        self.assertEqual(self.node.uuid, payload.uuid)
        self.assertEqual(self.node.owner, payload.owner)

    def test_node_payload(self):
        payload = objects.NodePayload(self.node)
        self._test_node_payload(payload)

    def test_node_payload_no_traits(self):
        delattr(self.node, 'traits')
        payload = objects.NodePayload(self.node)
        self.assertEqual([], payload.traits)

    def test_node_payload_traits_is_none(self):
        self.node.traits = None
        payload = objects.NodePayload(self.node)
        self.assertEqual([], payload.traits)

    def test_node_set_power_state_payload(self):
        payload = objects.NodeSetPowerStatePayload(self.node, 'POWER_ON')
        self._test_node_payload(payload)
        self.assertEqual('POWER_ON', payload.to_power)

    def test_node_corrected_power_state_payload(self):
        payload = objects.NodeCorrectedPowerStatePayload(self.node,
                                                         'POWER_ON')
        self._test_node_payload(payload)
        self.assertEqual('POWER_ON', payload.from_power)

    def test_node_set_provision_state_payload(self):
        payload = objects.NodeSetProvisionStatePayload(self.node, 'AVAILABLE',
                                                       'DEPLOYING', 'DEPLOY')
        self._test_node_payload(payload)
        self.assertEqual(self.node.instance_info, payload.instance_info)
        self.assertEqual('DEPLOY', payload.event)
        self.assertEqual('AVAILABLE', payload.previous_provision_state)
        self.assertEqual('DEPLOYING', payload.previous_target_provision_state)

    def test_node_crud_payload(self):
        chassis_uuid = uuidutils.generate_uuid()
        payload = objects.NodeCRUDPayload(self.node, chassis_uuid)
        self._test_node_payload(payload)
        self.assertEqual(chassis_uuid, payload.chassis_uuid)
        self.assertEqual(self.node.instance_info, payload.instance_info)
        self.assertEqual(self.node.driver_info, payload.driver_info)

ironic-15.0.0/ironic/tests/unit/objects/test_objects.py

# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
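The tests in this file revolve around the oslo.versionedobjects-style version conversion that ironic's `MyObj` fixture implements: downgrading an object either drops fields that are too new for the target version or blanks them, depending on `remove_unavailable_fields`. A minimal self-contained sketch of that pattern (plain Python with a hypothetical `Record` class, not ironic's real `IronicObject` machinery):

```python
# Hypothetical sketch of the downgrade pattern exercised by the tests below.
# 'Record' and its fields are illustrative only, not part of ironic.

class Record:
    """A toy versioned object: field 'missing' only exists as of v1.5."""
    VERSION = '1.5'

    def __init__(self, foo=None, missing=None):
        self.foo = foo
        self.missing = missing

    def convert_to_version(self, target, remove_unavailable_fields=True):
        # Downgrading to 1.4 must not expose a field 1.4 never had.
        if target == '1.4' and self.missing is not None:
            if remove_unavailable_fields:
                del self.missing   # field disappears entirely (API-bound)
            else:
                self.missing = ''  # attr kept but cleared (DB-bound)
        self.VERSION = target


r = Record(foo=123, missing='something')
r.convert_to_version('1.4')
assert not hasattr(r, 'missing')

r2 = Record(foo=123, missing='something')
r2.convert_to_version('1.4', remove_unavailable_fields=False)
assert r2.missing == ''
```

The two branches mirror the two callers seen later in the file: the API-side serializer removes unavailable fields outright, while the conductor keeps the attribute so a default still reaches the database.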
import contextlib
import datetime
import types

import iso8601
import mock
from oslo_utils import timeutils
from oslo_versionedobjects import base as object_base
from oslo_versionedobjects import exception as object_exception
from oslo_versionedobjects import fixture as object_fixture

from ironic.common import context
from ironic.common import release_mappings
from ironic.conf import CONF
from ironic.objects import base
from ironic.objects import fields
from ironic.tests import base as test_base


@base.IronicObjectRegistry.register
class MyObj(base.IronicObject, object_base.VersionedObjectDictCompat):
    VERSION = '1.5'

    fields = {'foo': fields.IntegerField(),
              'bar': fields.StringField(),
              'missing': fields.StringField(),
              # nested object added as string for simplicity
              'nested_object': fields.StringField(),
              }

    def _set_from_db_object(self, context, db_object, fields=None):
        fields = set(fields or self.fields) - {'nested_object'}
        super(MyObj, self)._set_from_db_object(context, db_object, fields)
        # some special manipulation with nested_object here
        self['nested_object'] = db_object.get('nested_object', '') + 'test'

    def obj_load_attr(self, attrname):
        if attrname == 'version':
            setattr(self, attrname, None)
        else:
            setattr(self, attrname, 'loaded!')

    @object_base.remotable_classmethod
    def query(cls, context):
        obj = cls(context)
        obj.foo = 1
        obj.bar = 'bar'
        obj.obj_reset_changes()
        return obj

    @object_base.remotable
    def marco(self, context=None):
        return 'polo'

    @object_base.remotable
    def update_test(self, context=None):
        if context and context.project_id == 'alternate':
            self.bar = 'alternate-context'
        else:
            self.bar = 'updated'

    @object_base.remotable
    def save(self, context=None):
        self.do_version_changes_for_db()
        self.obj_reset_changes()

    @object_base.remotable
    def refresh(self, context=None):
        self.foo = 321
        self.bar = 'refreshed'
        self.obj_reset_changes()

    @object_base.remotable
    def modify_save_modify(self, context=None):
        self.bar = 'meow'
        self.save()
        self.foo = 42

    def _convert_to_version(self, target_version,
                            remove_unavailable_fields=True):
        if target_version == '1.5':
            self.missing = 'foo'
        elif self.missing:
            if remove_unavailable_fields:
                delattr(self, 'missing')
            else:
                self.missing = ''


class MyObj2(object):
    @classmethod
    def obj_name(cls):
        return 'MyObj'

    @object_base.remotable_classmethod
    def get(cls, *args, **kwargs):
        pass


@base.IronicObjectRegistry.register_if(False)
class TestSubclassedObject(MyObj):
    fields = {'new_field': fields.StringField()}


class _LocalTest(test_base.TestCase):
    def setUp(self):
        super(_LocalTest, self).setUp()
        # Just in case
        base.IronicObject.indirection_api = None


@contextlib.contextmanager
def things_temporarily_local():
    # Temporarily go non-remote so the conductor handles
    # this request directly
    _api = base.IronicObject.indirection_api
    base.IronicObject.indirection_api = None
    yield
    base.IronicObject.indirection_api = _api


class _TestObject(object):
    def test_hydration_type_error(self):
        primitive = {'ironic_object.name': 'MyObj',
                     'ironic_object.namespace': 'ironic',
                     'ironic_object.version': '1.5',
                     'ironic_object.data': {'foo': 'a'}}
        self.assertRaises(ValueError, MyObj.obj_from_primitive, primitive)

    def test_hydration(self):
        primitive = {'ironic_object.name': 'MyObj',
                     'ironic_object.namespace': 'ironic',
                     'ironic_object.version': '1.5',
                     'ironic_object.data': {'foo': 1}}
        obj = MyObj.obj_from_primitive(primitive)
        self.assertEqual(1, obj.foo)

    def test_hydration_bad_ns(self):
        primitive = {'ironic_object.name': 'MyObj',
                     'ironic_object.namespace': 'foo',
                     'ironic_object.version': '1.5',
                     'ironic_object.data': {'foo': 1}}
        self.assertRaises(object_exception.UnsupportedObjectError,
                          MyObj.obj_from_primitive, primitive)

    def test_dehydration(self):
        expected = {'ironic_object.name': 'MyObj',
                    'ironic_object.namespace': 'ironic',
                    'ironic_object.version': '1.5',
                    'ironic_object.data': {'foo': 1}}
        obj = MyObj(self.context)
        obj.foo = 1
        obj.obj_reset_changes()
        self.assertEqual(expected, obj.obj_to_primitive())

    def test_get_updates(self):
        obj = MyObj(self.context)
        self.assertEqual({}, obj.obj_get_changes())
        obj.foo = 123
        self.assertEqual({'foo': 123}, obj.obj_get_changes())
        obj.bar = 'test'
        self.assertEqual({'foo': 123, 'bar': 'test'}, obj.obj_get_changes())
        obj.obj_reset_changes()
        self.assertEqual({}, obj.obj_get_changes())

    def test_object_property(self):
        obj = MyObj(self.context, foo=1)
        self.assertEqual(1, obj.foo)

    def test_object_property_type_error(self):
        obj = MyObj(self.context)

        def fail():
            obj.foo = 'a'
        self.assertRaises(ValueError, fail)

    def test_load(self):
        obj = MyObj(self.context)
        self.assertEqual('loaded!', obj.bar)

    def test_load_in_base(self):
        @base.IronicObjectRegistry.register_if(False)
        class Foo(base.IronicObject, object_base.VersionedObjectDictCompat):
            fields = {'foobar': fields.IntegerField()}
        obj = Foo(self.context)
        self.assertRaisesRegex(
            NotImplementedError, "Cannot load 'foobar' in the base class",
            getattr, obj, 'foobar')

    def test_loaded_in_primitive(self):
        obj = MyObj(self.context)
        obj.foo = 1
        obj.obj_reset_changes()
        self.assertEqual('loaded!', obj.bar)
        expected = {'ironic_object.name': 'MyObj',
                    'ironic_object.namespace': 'ironic',
                    'ironic_object.version': '1.5',
                    'ironic_object.changes': ['bar'],
                    'ironic_object.data': {'foo': 1,
                                           'bar': 'loaded!'}}
        self.assertEqual(expected, obj.obj_to_primitive())

    def test_changes_in_primitive(self):
        obj = MyObj(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        primitive = obj.obj_to_primitive()
        self.assertIn('ironic_object.changes', primitive)
        obj2 = MyObj.obj_from_primitive(primitive)
        self.assertEqual(set(['foo']), obj2.obj_what_changed())
        obj2.obj_reset_changes()
        self.assertEqual(set(), obj2.obj_what_changed())

    def test_unknown_objtype(self):
        self.assertRaises(object_exception.UnsupportedObjectError,
                          base.IronicObject.obj_class_from_name, 'foo', '1.0')

    def test_with_alternate_context(self):
        ctxt1 = context.RequestContext(auth_token='foo', project_id='foo')
        ctxt2 = context.RequestContext(auth_token='bar',
                                       project_id='alternate')
        obj = MyObj.query(ctxt1)
        obj.update_test(ctxt2)
        self.assertEqual('alternate-context', obj.bar)

    def test_orphaned_object(self):
        obj = MyObj.query(self.context)
        obj._context = None
        self.assertRaises(object_exception.OrphanedObjectError,
                          obj.update_test)

    def test_changed_1(self):
        obj = MyObj.query(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        obj.update_test(self.context)
        self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed())
        self.assertEqual(123, obj.foo)

    def test_changed_2(self):
        obj = MyObj.query(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        obj.save()
        self.assertEqual(set([]), obj.obj_what_changed())
        self.assertEqual(123, obj.foo)

    def test_changed_3(self):
        obj = MyObj.query(self.context)
        obj.foo = 123
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        obj.refresh()
        self.assertEqual(set([]), obj.obj_what_changed())
        self.assertEqual(321, obj.foo)
        self.assertEqual('refreshed', obj.bar)

    def test_changed_4(self):
        obj = MyObj.query(self.context)
        obj.bar = 'something'
        self.assertEqual(set(['bar']), obj.obj_what_changed())
        obj.modify_save_modify(self.context)
        self.assertEqual(set(['foo']), obj.obj_what_changed())
        self.assertEqual(42, obj.foo)
        self.assertEqual('meow', obj.bar)

    def test_static_result(self):
        obj = MyObj.query(self.context)
        self.assertEqual('bar', obj.bar)
        result = obj.marco()
        self.assertEqual('polo', result)

    def test_updates(self):
        obj = MyObj.query(self.context)
        self.assertEqual(1, obj.foo)
        obj.update_test()
        self.assertEqual('updated', obj.bar)

    def test_base_attributes(self):
        dt = datetime.datetime(1955, 11, 5, 0, 0, tzinfo=iso8601.UTC)
        datatime = fields.DateTimeField()
        obj = MyObj(self.context)
        obj.created_at = dt
        obj.updated_at = dt
        expected = {'ironic_object.name': 'MyObj',
                    'ironic_object.namespace': 'ironic',
                    'ironic_object.version': '1.5',
                    'ironic_object.changes': ['created_at', 'updated_at'],
                    'ironic_object.data':
                        {'created_at': datatime.stringify(dt),
                         'updated_at': datatime.stringify(dt),
                         }
                    }
        actual = obj.obj_to_primitive()
        # ironic_object.changes is built from a set and order is undefined
        self.assertEqual(sorted(expected['ironic_object.changes']),
                         sorted(actual['ironic_object.changes']))
        del expected['ironic_object.changes'], actual['ironic_object.changes']
        self.assertEqual(expected, actual)

    def test_contains(self):
        obj = MyObj(self.context)
        self.assertNotIn('foo', obj)
        obj.foo = 1
        self.assertIn('foo', obj)
        self.assertNotIn('does_not_exist', obj)

    def test_obj_attr_is_set(self):
        obj = MyObj(self.context, foo=1)
        self.assertTrue(obj.obj_attr_is_set('foo'))
        self.assertFalse(obj.obj_attr_is_set('bar'))
        self.assertRaises(AttributeError, obj.obj_attr_is_set, 'bang')

    def test_get(self):
        obj = MyObj(self.context, foo=1)
        # Foo has value, should not get the default
        self.assertEqual(1, obj.get('foo', 2))
        # Foo has value, should return the value without error
        self.assertEqual(1, obj.get('foo'))
        # Bar is not loaded, so we should get the default
        self.assertEqual('not-loaded', obj.get('bar', 'not-loaded'))
        # Bar without a default should lazy-load
        self.assertEqual('loaded!', obj.get('bar'))
        # Bar now has a default, but loaded value should be returned
        self.assertEqual('loaded!', obj.get('bar', 'not-loaded'))
        # Invalid attribute should raise AttributeError
        self.assertRaises(AttributeError, obj.get, 'nothing')
        # ...even with a default
        self.assertRaises(AttributeError, obj.get, 'nothing', 3)

    def test_object_inheritance(self):
        base_fields = list(base.IronicObject.fields)
        myobj_fields = (['foo', 'bar', 'missing', 'nested_object']
                        + base_fields)
        myobj3_fields = ['new_field']
        self.assertTrue(issubclass(TestSubclassedObject, MyObj))
        self.assertEqual(len(myobj_fields), len(MyObj.fields))
        self.assertEqual(set(myobj_fields), set(MyObj.fields))
        self.assertEqual(len(myobj_fields) + len(myobj3_fields),
                         len(TestSubclassedObject.fields))
        self.assertEqual(set(myobj_fields) | set(myobj3_fields),
                         set(TestSubclassedObject.fields))

    def _test_get_changes(self, target_version='1.5'):
        obj = MyObj(self.context)
        self.assertEqual('1.5', obj.VERSION)
        self.assertEqual(target_version, obj.get_target_version())
        self.assertEqual({}, obj.obj_get_changes())
        obj.foo = 123
        self.assertEqual({'foo': 123}, obj.obj_get_changes())
        obj.bar = 'test'
        obj.missing = 'test'  # field which is missing in v1.4
        self.assertEqual({'foo': 123, 'bar': 'test', 'missing': 'test'},
                         obj.obj_get_changes())
        obj.obj_reset_changes()
        self.assertEqual({}, obj.obj_get_changes())

    def test_get_changes(self):
        self._test_get_changes()

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING',
                autospec=True)
    def test_get_changes_pinned(self, mock_release_mapping):
        # obj_get_changes() is not affected by pinning
        CONF.set_override('pin_release_version',
                          release_mappings.RELEASE_VERSIONS[-1])
        mock_release_mapping.__getitem__.return_value = {
            'objects': {
                'MyObj': ['1.4'],
            }
        }
        self._test_get_changes(target_version='1.4')

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING',
                autospec=True)
    def test_get_changes_pinned_2versions(self, mock_release_mapping):
        # obj_get_changes() is not affected by pinning
        CONF.set_override('pin_release_version',
                          release_mappings.RELEASE_VERSIONS[-1])
        mock_release_mapping.__getitem__.return_value = {
            'objects': {
                'MyObj': ['1.3', '1.4'],
            }
        }
        self._test_get_changes(target_version='1.4')

    def test_convert_to_version_same(self):
        # no changes
        obj = MyObj(self.context)
        self.assertEqual('1.5', obj.VERSION)
        obj.convert_to_version('1.5', remove_unavailable_fields=False)
        self.assertEqual('1.5', obj.VERSION)
        self.assertEqual(obj.__class__.VERSION, obj.VERSION)
        self.assertEqual({}, obj.obj_get_changes())

    def test_convert_to_version_new(self):
        obj = MyObj(self.context)
        obj.VERSION = '1.4'
        obj.convert_to_version('1.5', remove_unavailable_fields=False)
        self.assertEqual('1.5', obj.VERSION)
        self.assertEqual(obj.__class__.VERSION, obj.VERSION)
        self.assertEqual({'missing': 'foo'}, obj.obj_get_changes())

    def test_convert_to_version_old(self):
        obj = MyObj(self.context)
        obj.missing = 'something'
        obj.obj_reset_changes()
        obj.convert_to_version('1.4', remove_unavailable_fields=True)
        self.assertEqual('1.4', obj.VERSION)
        self.assertEqual({}, obj.obj_get_changes())

    def test_convert_to_version_old_keep(self):
        obj = MyObj(self.context)
        obj.missing = 'something'
        obj.obj_reset_changes()
        obj.convert_to_version('1.4', remove_unavailable_fields=False)
        self.assertEqual('1.4', obj.VERSION)
        self.assertEqual({'missing': ''}, obj.obj_get_changes())

    @mock.patch.object(MyObj, 'convert_to_version', autospec=True)
    def test_do_version_changes_for_db(self, mock_convert):
        # no object conversion
        obj = MyObj(self.context)
        self.assertEqual('1.5', obj.VERSION)
        self.assertEqual('1.5', obj.get_target_version())
        self.assertEqual({}, obj.obj_get_changes())
        obj.foo = 123
        obj.bar = 'test'
        obj.missing = 'test'  # field which is missing in v1.4
        self.assertEqual({'foo': 123, 'bar': 'test', 'missing': 'test'},
                         obj.obj_get_changes())
        changes = obj.do_version_changes_for_db()
        self.assertEqual({'foo': 123, 'bar': 'test', 'missing': 'test',
                          'version': '1.5'}, changes)
        self.assertEqual('1.5', obj.VERSION)
        self.assertFalse(mock_convert.called)

    @mock.patch.object(MyObj, 'convert_to_version', autospec=True)
    @mock.patch.object(base.IronicObject, 'get_target_version',
                       spec_set=types.FunctionType)
    def test_do_version_changes_for_db_pinned(self, mock_target_version,
                                              mock_convert):
        # obj is same version as pinned, no conversion done
        mock_target_version.return_value = '1.4'
        obj = MyObj(self.context)
        obj.VERSION = '1.4'
        self.assertEqual('1.4', obj.get_target_version())
        obj.foo = 123
        obj.bar = 'test'
        self.assertEqual({'foo': 123, 'bar': 'test'}, obj.obj_get_changes())
        self.assertEqual('1.4', obj.VERSION)
        changes = obj.do_version_changes_for_db()
        self.assertEqual({'foo': 123, 'bar': 'test', 'version': '1.4'},
                         changes)
        self.assertEqual('1.4', obj.VERSION)
        mock_target_version.assert_called_with()
        self.assertFalse(mock_convert.called)

    @mock.patch.object(base.IronicObject, 'get_target_version',
                       spec_set=types.FunctionType)
    def test_do_version_changes_for_db_downgrade(self, mock_target_version):
        # obj is 1.5; convert to 1.4
        mock_target_version.return_value = '1.4'
        obj = MyObj(self.context)
        obj.foo = 123
        obj.bar = 'test'
        obj.missing = 'something'
        self.assertEqual({'foo': 123, 'bar': 'test', 'missing': 'something'},
                         obj.obj_get_changes())
        self.assertEqual('1.5', obj.VERSION)
        changes = obj.do_version_changes_for_db()
        self.assertEqual({'foo': 123, 'bar': 'test', 'missing': '',
                          'version': '1.4'}, changes)
        self.assertEqual('1.4', obj.VERSION)
        mock_target_version.assert_called_with()

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING',
                autospec=True)
    def _test__from_db_object(self, version, mock_release_mapping):
        mock_release_mapping.__getitem__.return_value = {
            'objects': {
                'MyObj': ['1.4'],
            }
        }
        missing = ''
        if version == '1.5':
            missing = 'foo'
        obj = MyObj(self.context)
        dbobj = {'created_at': timeutils.utcnow(),
                 'updated_at': timeutils.utcnow(),
                 'version': version,
                 'foo': 123, 'bar': 'test', 'missing': missing}
        MyObj._from_db_object(self.context, obj, dbobj)
        self.assertEqual(obj.__class__.VERSION, obj.VERSION)
        self.assertEqual(123, obj.foo)
        self.assertEqual('test', obj.bar)
        self.assertEqual('foo', obj.missing)
        self.assertFalse(mock_release_mapping.called)

    def test__from_db_object(self):
        self._test__from_db_object('1.5')

    def test__from_db_object_old(self):
        self._test__from_db_object('1.4')

    def test__from_db_object_map_version_bad(self):
        obj = MyObj(self.context)
        dbobj = {'created_at': timeutils.utcnow(),
                 'updated_at': timeutils.utcnow(),
                 'version': '1.99',
                 'foo': 123, 'bar': 'test', 'missing': ''}
        self.assertRaises(object_exception.IncompatibleObjectVersion,
                          MyObj._from_db_object, self.context, obj, dbobj)

    def test_get_target_version_no_pin(self):
        obj = MyObj(self.context)
        self.assertEqual('1.5', obj.get_target_version())

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING',
                autospec=True)
    def test_get_target_version_pinned(self, mock_release_mapping):
        CONF.set_override('pin_release_version',
                          release_mappings.RELEASE_VERSIONS[-1])
        mock_release_mapping.__getitem__.return_value = {
            'objects': {
                'MyObj': ['1.4'],
            }
        }
        obj = MyObj(self.context)
        self.assertEqual('1.4', obj.get_target_version())

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING',
                autospec=True)
    def test_get_target_version_pinned_no_myobj(self, mock_release_mapping):
        CONF.set_override('pin_release_version',
                          release_mappings.RELEASE_VERSIONS[-1])
        mock_release_mapping.__getitem__.return_value = {
            'objects': {
                'NotMyObj': ['1.4'],
            }
        }
        obj = MyObj(self.context)
        self.assertEqual('1.5', obj.get_target_version())

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING',
                autospec=True)
    def test_get_target_version_pinned_bad(self, mock_release_mapping):
        CONF.set_override('pin_release_version',
                          release_mappings.RELEASE_VERSIONS[-1])
        mock_release_mapping.__getitem__.return_value = {
            'objects': {
                'MyObj': ['1.6'],
            }
        }
        obj = MyObj(self.context)
        self.assertRaises(object_exception.IncompatibleObjectVersion,
                          obj.get_target_version)

    @mock.patch.object(base.IronicObject, 'get_target_version',
                       spec_set=types.FunctionType)
    def test_supports_version(self, mock_target_version):
        mock_target_version.return_value = "1.5"
        obj = MyObj(self.context)
        self.assertTrue(obj.supports_version((1, 5)))
        self.assertFalse(obj.supports_version((1, 6)))

    def test_obj_fields(self):
        @base.IronicObjectRegistry.register_if(False)
        class TestObj(base.IronicObject,
                      object_base.VersionedObjectDictCompat):
            fields = {'foo': fields.IntegerField()}
            obj_extra_fields = ['bar']

            @property
            def bar(self):
                return 'this is bar'

        obj = TestObj(self.context)
        self.assertEqual(set(['created_at', 'updated_at', 'foo', 'bar']),
                         set(obj.obj_fields))

    def test_refresh_object(self):
        @base.IronicObjectRegistry.register_if(False)
        class TestObj(base.IronicObject,
                      object_base.VersionedObjectDictCompat):
            fields = {'foo': fields.IntegerField(),
                      'bar': fields.StringField()}

        obj = TestObj(self.context)
        current_obj = TestObj(self.context)
        obj.foo = 10
        obj.bar = 'obj.bar'
        current_obj.foo = 2
        current_obj.bar = 'current.bar'
        obj.obj_refresh(current_obj)
        self.assertEqual(2, obj.foo)
        self.assertEqual('current.bar', obj.bar)

    def test_obj_constructor(self):
        obj = MyObj(self.context, foo=123, bar='abc')
        self.assertEqual(123, obj.foo)
        self.assertEqual('abc', obj.bar)
        self.assertEqual(set(['foo', 'bar']), obj.obj_what_changed())

    def test_assign_value_without_DictCompat(self):
        class TestObj(base.IronicObject):
            fields = {'foo': fields.IntegerField(),
                      'bar': fields.StringField()}

        obj = TestObj(self.context)
        obj.foo = 10
        err_message = ''
        try:
            obj['bar'] = 'value'
        except TypeError as e:
            err_message = str(e)
        finally:
            self.assertIn("'TestObj' object does not support item assignment",
                          err_message)

    def test_as_dict(self):
        obj = MyObj(self.context)
        obj.foo = 1
        result = obj.as_dict()
        expected = {'foo': 1}
        self.assertEqual(expected, result)

    def test_as_dict_with_nested_object(self):
        @base.IronicObjectRegistry.register_if(False)
        class TestObj(base.IronicObject,
                      object_base.VersionedObjectDictCompat):
            fields = {'my_obj': fields.ObjectField('MyObj')}

        obj1 = MyObj(self.context)
        obj1.foo = 1
        obj2 = TestObj(self.context)
        obj2.my_obj = obj1
        result = obj2.as_dict()
        expected = {'my_obj': {'foo': 1}}
        self.assertEqual(expected, result)

    def test_as_dict_with_nested_object_list(self):
        @base.IronicObjectRegistry.register_if(False)
        class TestObj(base.IronicObjectListBase, base.IronicObject):
            fields = {'objects': fields.ListOfObjectsField('MyObj')}

        obj1 = MyObj(self.context)
        obj1.foo = 1
        obj2 = TestObj(self.context)
        obj2.objects = [obj1]
        result = obj2.as_dict()
        expected = {'objects': [{'foo': 1}]}
        self.assertEqual(expected, result)


class TestObject(_LocalTest, _TestObject):
    pass


# The hashes are to help developers to check if a change in an object needs a
# version bump. It is an MD5 hash of the object fields and remotable methods.
# The fingerprint values should only be changed if there is a version bump.
expected_object_fingerprints = {
    'Node': '1.34-ae873e627cf30bf28fe9f98a807b6200',
    'MyObj': '1.5-9459d30d6954bffc7a9afd347a807ca6',
    'Chassis': '1.3-d656e039fd8ae9f34efc232ab3980905',
    'Port': '1.9-0cb9202a4ec442e8c0d87a324155eaaf',
    'Portgroup': '1.4-71923a81a86743b313b190f5c675e258',
    'Conductor': '1.3-d3f53e853b4d58cae5bfbd9a8341af4a',
    'EventType': '1.1-aa2ba1afd38553e3880c267404e8d370',
    'NotificationPublisher': '1.0-51a09397d6c0687771fb5be9a999605d',
    'NodePayload': '1.15-86ee30dbf374be4cf17c5b501d9e2e7b',
    'NodeSetPowerStateNotification': '1.0-59acc533c11d306f149846f922739c15',
    'NodeSetPowerStatePayload': '1.15-3c64b07a2b96c2661e7743b47ed43705',
    'NodeCorrectedPowerStateNotification':
        '1.0-59acc533c11d306f149846f922739c15',
    'NodeCorrectedPowerStatePayload': '1.15-59a224a9191cdc9f1acc2e0dcd2d3adb',
    'NodeSetProvisionStateNotification':
        '1.0-59acc533c11d306f149846f922739c15',
    'NodeSetProvisionStatePayload': '1.15-488a3d62a0643d17e288ecf89ed5bbb4',
    'VolumeConnector': '1.0-3e0252c0ab6e6b9d158d09238a577d97',
    'VolumeTarget': '1.0-0b10d663d8dae675900b2c7548f76f5e',
    'ChassisCRUDNotification': '1.0-59acc533c11d306f149846f922739c15',
    'ChassisCRUDPayload': '1.0-dce63895d8186279a7dd577cffccb202',
    'NodeCRUDNotification': '1.0-59acc533c11d306f149846f922739c15',
    'NodeCRUDPayload': '1.13-8f673253ff8d7389897a6a80d224ac33',
    'PortCRUDNotification': '1.0-59acc533c11d306f149846f922739c15',
    'PortCRUDPayload': '1.3-21235916ed54a91b2a122f59571194e7',
    'NodeMaintenanceNotification': '1.0-59acc533c11d306f149846f922739c15',
    'NodeConsoleNotification': '1.0-59acc533c11d306f149846f922739c15',
    'PortgroupCRUDNotification': '1.0-59acc533c11d306f149846f922739c15',
    'PortgroupCRUDPayload': '1.0-b73c1fecf0cef3aa56bbe3c7e2275018',
    'VolumeConnectorCRUDNotification':
        '1.0-59acc533c11d306f149846f922739c15',
    'VolumeConnectorCRUDPayload': '1.0-5e8dbb41e05b6149d8f7bfd4daff9339',
    'VolumeTargetCRUDNotification': '1.0-59acc533c11d306f149846f922739c15',
    'VolumeTargetCRUDPayload': '1.0-30dcc4735512c104a3a36a2ae1e2aeb2',
    'Trait': '1.0-3f26cb70c8a10a3807d64c219453e347',
    'TraitList': '1.0-33a2e1bb91ad4082f9f63429b77c1244',
    'BIOSSetting': '1.0-fd4a791dc2139a7cc21cefbbaedfd9e7',
    'BIOSSettingList': '1.0-33a2e1bb91ad4082f9f63429b77c1244',
    'Allocation': '1.1-38937f2854722f1057ec667b12878708',
    'AllocationCRUDNotification': '1.0-59acc533c11d306f149846f922739c15',
    'AllocationCRUDPayload': '1.1-3c8849932b80380bb96587ff62e8f087',
    'DeployTemplate': '1.1-4e30c8e9098595e359bb907f095bf1a9',
    'DeployTemplateCRUDNotification':
        '1.0-59acc533c11d306f149846f922739c15',
    'DeployTemplateCRUDPayload': '1.0-200857e7e715f58a5b6d6b700ab73a3b',
}


class TestObjectVersions(test_base.TestCase):

    def test_object_version_check(self):
        classes = base.IronicObjectRegistry.obj_classes()
        checker = object_fixture.ObjectVersionChecker(obj_classes=classes)
        # Compute the difference between actual fingerprints and
        # expected fingerprints. expect = actual = {} if there is no change.
        expect, actual = checker.test_hashes(expected_object_fingerprints)
        self.assertEqual(expect, actual,
                         "Some objects fields or remotable methods have been "
                         "modified. Please make sure the version of those "
                         "objects have been bumped and then update "
                         "expected_object_fingerprints with the new hashes. ")


class TestObjectSerializer(test_base.TestCase):

    def test_object_serialization(self):
        ser = base.IronicObjectSerializer()
        obj = MyObj(self.context)
        primitive = ser.serialize_entity(self.context, obj)
        self.assertIn('ironic_object.name', primitive)
        obj2 = ser.deserialize_entity(self.context, primitive)
        self.assertIsInstance(obj2, MyObj)
        self.assertEqual(self.context, obj2._context)

    def test_object_serialization_iterables(self):
        ser = base.IronicObjectSerializer()
        obj = MyObj(self.context)
        for iterable in (list, tuple, set):
            thing = iterable([obj])
            primitive = ser.serialize_entity(self.context, thing)
            self.assertEqual(1, len(primitive))
            for item in primitive:
                self.assertNotIsInstance(item, base.IronicObject)
            thing2 = ser.deserialize_entity(self.context, primitive)
            self.assertEqual(1, len(thing2))
            for item in thing2:
                self.assertIsInstance(item, MyObj)

    @mock.patch('ironic.objects.base.IronicObject.indirection_api',
                autospec=True)
    def _test_deserialize_entity_newer(self, obj_version, backported_to,
                                       mock_indirection_api,
                                       my_version='1.6'):
        ser = base.IronicObjectSerializer()
        backported_obj = MyObj()
        mock_indirection_api.object_backport_versions.return_value \
            = backported_obj

        @base.IronicObjectRegistry.register
        class MyTestObj(MyObj):
            VERSION = my_version

        obj = MyTestObj(self.context)
        obj.VERSION = obj_version
        primitive = obj.obj_to_primitive()
        result = ser.deserialize_entity(self.context, primitive)
        if backported_to is None:
            self.assertFalse(
                mock_indirection_api.object_backport_versions.called)
        else:
            self.assertEqual(backported_obj, result)
            versions = object_base.obj_tree_get_versions('MyTestObj')
            mock_indirection_api.object_backport_versions.assert_called_with(
                self.context, primitive, versions)

    def test_deserialize_entity_newer_version_backports(self):
        "Test object with unsupported (newer) version"
        self._test_deserialize_entity_newer('1.25', '1.6')

    def test_deserialize_entity_same_revision_does_not_backport(self):
        "Test object with supported revision"
        self._test_deserialize_entity_newer('1.6', None)

    def test_deserialize_entity_newer_revision_does_not_backport_zero(self):
        "Test object with supported revision"
        self._test_deserialize_entity_newer('1.6.0', None)

    def test_deserialize_entity_newer_revision_does_not_backport(self):
        "Test object with supported (newer) revision"
        self._test_deserialize_entity_newer('1.6.1', None)

    def test_deserialize_entity_newer_version_passes_revision(self):
        "Test object with unsupported (newer) version and revision"
        self._test_deserialize_entity_newer('1.7', '1.6.1', my_version='1.6.1')

    @mock.patch('ironic.common.release_mappings.RELEASE_MAPPING',
                autospec=True)
    def test_deserialize_entity_pin_ignored(self, mock_release_mapping):
        # Deserializing doesn't look at pinning
        CONF.set_override('pin_release_version',
                          release_mappings.RELEASE_VERSIONS[-1])
        mock_release_mapping.__getitem__.return_value = {
            'objects': {
                'MyTestObj': ['1.0'],
            }
        }
        ser = base.IronicObjectSerializer()

        @base.IronicObjectRegistry.register
        class MyTestObj(MyObj):
            VERSION = '1.1'

        obj = MyTestObj(self.context)
        primitive = obj.obj_to_primitive()
        result = ser.deserialize_entity(self.context, primitive)
        self.assertEqual('1.1', result.VERSION)
        self.assertEqual('1.0', result.get_target_version())
        self.assertFalse(mock_release_mapping.called)

    @mock.patch.object(base.IronicObject, 'convert_to_version', autospec=True)
    @mock.patch.object(base.IronicObject, 'get_target_version', autospec=True)
    def test_serialize_entity_unpinned_api(self, mock_version, mock_convert):
        """Test single element serializer with no backport, unpinned."""
        mock_version.return_value = MyObj.VERSION
        serializer = base.IronicObjectSerializer(is_server=False)
        obj = MyObj(self.context)
        obj.foo = 1
        obj.bar = 'text'
        obj.missing = 'textt'
        primitive = serializer.serialize_entity(self.context, obj)
        self.assertEqual('1.5', primitive['ironic_object.version'])
        data = primitive['ironic_object.data']
        self.assertEqual(1, data['foo'])
        self.assertEqual('text', data['bar'])
        self.assertEqual('textt', data['missing'])
        changes = primitive['ironic_object.changes']
        self.assertEqual(set(['foo', 'bar', 'missing']), set(changes))
        self.assertFalse(mock_version.called)
        self.assertFalse(mock_convert.called)

    @mock.patch.object(base.IronicObject, 'convert_to_version', autospec=True)
    @mock.patch.object(base.IronicObject, 'get_target_version',
                       spec_set=types.FunctionType)
    def test_serialize_entity_unpinned_conductor(self, mock_version,
                                                 mock_convert):
        """Test single element serializer with no backport, unpinned."""
        mock_version.return_value = MyObj.VERSION
        serializer = base.IronicObjectSerializer(is_server=True)
        obj = MyObj(self.context)
        obj.foo = 1
        obj.bar = 'text'
        obj.missing = 'textt'
        primitive = serializer.serialize_entity(self.context, obj)
        self.assertEqual('1.5', primitive['ironic_object.version'])
        data = primitive['ironic_object.data']
        self.assertEqual(1, data['foo'])
        self.assertEqual('text', data['bar'])
        self.assertEqual('textt', data['missing'])
        changes = primitive['ironic_object.changes']
        self.assertEqual(set(['foo', 'bar', 'missing']), set(changes))
        mock_version.assert_called_once_with()
        self.assertFalse(mock_convert.called)

    @mock.patch.object(base.IronicObject, 'get_target_version', autospec=True)
    def test_serialize_entity_pinned_api(self, mock_version):
        """Test single element serializer with backport to pinned version."""
        mock_version.return_value = '1.4'
        serializer = base.IronicObjectSerializer(is_server=False)
        obj = MyObj(self.context)
        obj.foo = 1
        obj.bar = 'text'
        obj.missing = 'miss'
        self.assertEqual('1.5', obj.VERSION)
        primitive = serializer.serialize_entity(self.context, obj)
        self.assertEqual('1.5', primitive['ironic_object.version'])
        data = primitive['ironic_object.data']
        self.assertEqual(1, data['foo'])
        self.assertEqual('text', data['bar'])
        self.assertEqual('miss', data['missing'])
        self.assertFalse(mock_version.called)

    @mock.patch.object(base.IronicObject, 'get_target_version',
                       spec_set=types.FunctionType)
    def test_serialize_entity_pinned_conductor(self, mock_version):
        """Test single element serializer with backport to pinned version."""
        mock_version.return_value = '1.4'
        serializer = base.IronicObjectSerializer(is_server=True)
        obj = MyObj(self.context)
        obj.foo = 1
        obj.bar = 'text'
        obj.missing = 'miss'
        self.assertEqual('1.5', obj.VERSION)
        primitive = serializer.serialize_entity(self.context, obj)
        self.assertEqual('1.4', primitive['ironic_object.version'])
        data = primitive['ironic_object.data']
        self.assertEqual(1, data['foo'])
        self.assertEqual('text', data['bar'])
        self.assertNotIn('missing', data)
        self.assertNotIn('ironic_object.changes', primitive)
        mock_version.assert_called_once_with()

    @mock.patch.object(base.IronicObject, 'get_target_version',
                       spec_set=types.FunctionType)
    def test_serialize_entity_invalid_pin(self, mock_version):
        mock_version.side_effect = object_exception.InvalidTargetVersion(
            version='1.6')
        serializer = base.IronicObjectSerializer(is_server=True)
        obj = MyObj(self.context)
        self.assertRaises(object_exception.InvalidTargetVersion,
                          serializer.serialize_entity, self.context, obj)
        mock_version.assert_called_once_with()

    @mock.patch.object(base.IronicObject, 'convert_to_version', autospec=True)
    def _test__process_object(self, mock_convert, is_server=True):
        obj = MyObj(self.context)
        obj.foo = 1
        obj.bar = 'text'
        obj.missing = 'miss'
        primitive = obj.obj_to_primitive()
        serializer = base.IronicObjectSerializer(is_server=is_server)
        obj2 = serializer._process_object(self.context, primitive)
        self.assertEqual(obj.foo, obj2.foo)
        self.assertEqual(obj.bar, obj2.bar)
        self.assertEqual(obj.missing, obj2.missing)
        self.assertEqual(obj.VERSION, obj2.VERSION)
        self.assertFalse(mock_convert.called)

    def test__process_object_api(self):
        self._test__process_object(is_server=False)

    def test__process_object_conductor(self):
        self._test__process_object(is_server=True)

    @mock.patch.object(base.IronicObject, 'convert_to_version', autospec=True)
    def _test__process_object_convert(self, is_server, mock_convert):
        obj = MyObj(self.context)
        obj.foo = 1
        obj.bar = 'text'
        obj.missing = ''
        obj.VERSION = '1.4'
        primitive = obj.obj_to_primitive()
        serializer = base.IronicObjectSerializer(is_server=is_server)
        serializer._process_object(self.context, primitive)
        mock_convert.assert_called_once_with(
            mock.ANY, '1.5', remove_unavailable_fields=not is_server)

    def test__process_object_convert_api(self):
        self._test__process_object_convert(False)

    def test__process_object_convert_conductor(self):
        self._test__process_object_convert(True)


class TestRegistry(test_base.TestCase):
    @mock.patch('ironic.objects.base.objects', autospec=True)
    def test_hook_chooses_newer_properly(self, mock_objects):
        reg = base.IronicObjectRegistry()
        reg.registration_hook(MyObj, 0)

        class MyNewerObj(object):
            VERSION = '1.123'

            @classmethod
            def obj_name(cls):
                return 'MyObj'

        self.assertEqual(MyObj, mock_objects.MyObj)
        reg.registration_hook(MyNewerObj, 0)
        self.assertEqual(MyNewerObj, mock_objects.MyObj)

    @mock.patch('ironic.objects.base.objects', autospec=True)
    def test_hook_keeps_newer_properly(self, mock_objects):
        reg = base.IronicObjectRegistry()
        reg.registration_hook(MyObj, 0)

        class MyOlderObj(object):
            VERSION = '1.1'

            @classmethod
            def obj_name(cls):
                return 'MyObj'

        self.assertEqual(MyObj, mock_objects.MyObj)
        reg.registration_hook(MyOlderObj, 0)
        self.assertEqual(MyObj, mock_objects.MyObj)


class TestMisc(test_base.TestCase):
    def test_max_version(self):
        versions = ['1.25', '1.33', '1.3']
        maxv = base.max_version(versions)
        self.assertEqual('1.33', maxv)

    def test_max_version_one(self):
        versions = ['1.25']
        maxv = base.max_version(versions)
        self.assertEqual('1.25', maxv)

    def test_max_version_two(self):
        versions = ['1.25', '1.26']
        maxv = base.max_version(versions)
        self.assertEqual('1.26', maxv)

ironic-15.0.0/ironic/tests/unit/objects/test_portgroup.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not
use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from testtools import matchers from ironic.common import exception from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils class TestPortgroupObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn): def setUp(self): super(TestPortgroupObject, self).setUp() self.fake_portgroup = db_utils.get_test_portgroup() def test_get_by_id(self): portgroup_id = self.fake_portgroup['id'] with mock.patch.object(self.dbapi, 'get_portgroup_by_id', autospec=True) as mock_get_portgroup: mock_get_portgroup.return_value = self.fake_portgroup portgroup = objects.Portgroup.get(self.context, portgroup_id) mock_get_portgroup.assert_called_once_with(portgroup_id) self.assertEqual(self.context, portgroup._context) def test_get_by_uuid(self): uuid = self.fake_portgroup['uuid'] with mock.patch.object(self.dbapi, 'get_portgroup_by_uuid', autospec=True) as mock_get_portgroup: mock_get_portgroup.return_value = self.fake_portgroup portgroup = objects.Portgroup.get(self.context, uuid) mock_get_portgroup.assert_called_once_with(uuid) self.assertEqual(self.context, portgroup._context) def test_get_by_address(self): address = self.fake_portgroup['address'] with mock.patch.object(self.dbapi, 'get_portgroup_by_address', autospec=True) as mock_get_portgroup: mock_get_portgroup.return_value = self.fake_portgroup portgroup = objects.Portgroup.get(self.context, address) 
mock_get_portgroup.assert_called_once_with(address) self.assertEqual(self.context, portgroup._context) def test_get_by_name(self): name = self.fake_portgroup['name'] with mock.patch.object(self.dbapi, 'get_portgroup_by_name', autospec=True) as mock_get_portgroup: mock_get_portgroup.return_value = self.fake_portgroup portgroup = objects.Portgroup.get(self.context, name) mock_get_portgroup.assert_called_once_with(name) self.assertEqual(self.context, portgroup._context) def test_get_bad_id_and_uuid_and_address_and_name(self): self.assertRaises(exception.InvalidIdentity, objects.Portgroup.get, self.context, 'not:a_name_or_uuid') def test_create(self): portgroup = objects.Portgroup(self.context, **self.fake_portgroup) with mock.patch.object(self.dbapi, 'create_portgroup', autospec=True) as mock_create_portgroup: mock_create_portgroup.return_value = db_utils.get_test_portgroup() portgroup.create() args, _kwargs = mock_create_portgroup.call_args self.assertEqual(objects.Portgroup.VERSION, args[0]['version']) def test_save(self): uuid = self.fake_portgroup['uuid'] address = "b2:54:00:cf:2d:40" test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_portgroup_by_uuid', autospec=True) as mock_get_portgroup: mock_get_portgroup.return_value = self.fake_portgroup with mock.patch.object(self.dbapi, 'update_portgroup', autospec=True) as mock_update_portgroup: mock_update_portgroup.return_value = ( db_utils.get_test_portgroup(address=address, updated_at=test_time)) p = objects.Portgroup.get_by_uuid(self.context, uuid) p.address = address p.save() mock_get_portgroup.assert_called_once_with(uuid) mock_update_portgroup.assert_called_once_with( uuid, {'version': objects.Portgroup.VERSION, 'address': "b2:54:00:cf:2d:40"}) self.assertEqual(self.context, p._context) res_updated_at = (p.updated_at).replace(tzinfo=None) self.assertEqual(test_time, res_updated_at) def test_refresh(self): uuid = self.fake_portgroup['uuid'] returns = [self.fake_portgroup, 
db_utils.get_test_portgroup(address="c3:54:00:cf:2d:40")] expected = [mock.call(uuid), mock.call(uuid)] with mock.patch.object(self.dbapi, 'get_portgroup_by_uuid', side_effect=returns, autospec=True) as mock_get_portgroup: p = objects.Portgroup.get_by_uuid(self.context, uuid) self.assertEqual("52:54:00:cf:2d:31", p.address) p.refresh() self.assertEqual("c3:54:00:cf:2d:40", p.address) self.assertEqual(expected, mock_get_portgroup.call_args_list) self.assertEqual(self.context, p._context) def test_save_after_refresh(self): # Ensure that it's possible to do object.save() after object.refresh() address = "b2:54:00:cf:2d:40" db_node = db_utils.create_test_node() db_portgroup = db_utils.create_test_portgroup(node_id=db_node.id) p = objects.Portgroup.get_by_uuid(self.context, db_portgroup.uuid) p_copy = objects.Portgroup.get_by_uuid(self.context, db_portgroup.uuid) p.address = address p.save() p_copy.refresh() p_copy.address = 'aa:bb:cc:dd:ee:ff' # Ensure this passes and an exception is not generated p_copy.save() def test_list(self): with mock.patch.object(self.dbapi, 'get_portgroup_list', autospec=True) as mock_get_list: mock_get_list.return_value = [self.fake_portgroup] portgroups = objects.Portgroup.list(self.context) self.assertThat(portgroups, matchers.HasLength(1)) self.assertIsInstance(portgroups[0], objects.Portgroup) self.assertEqual(self.context, portgroups[0]._context) def test_list_by_node_id(self): with mock.patch.object(self.dbapi, 'get_portgroups_by_node_id', autospec=True) as mock_get_list: mock_get_list.return_value = [self.fake_portgroup] node_id = self.fake_portgroup['node_id'] portgroups = objects.Portgroup.list_by_node_id(self.context, node_id) self.assertThat(portgroups, matchers.HasLength(1)) self.assertIsInstance(portgroups[0], objects.Portgroup) self.assertEqual(self.context, portgroups[0]._context) def test_payload_schemas(self): self._check_payload_schemas(objects.portgroup, objects.Portgroup.fields) class 
TestConvertToVersion(db_base.DbTestCase): def setUp(self): super(TestConvertToVersion, self).setUp() self.vif_id = 'some_uuid' extra = {'vif_port_id': self.vif_id} self.fake_portgroup = db_utils.get_test_portgroup(extra=extra) def test_vif_in_extra_lower_version(self): # no conversion portgroup = objects.Portgroup(self.context, **self.fake_portgroup) portgroup._convert_to_version("1.3", False) self.assertFalse('tenant_vif_port_id' in portgroup.internal_info) def test_vif_in_extra(self): for v in ['1.4', '1.5']: portgroup = objects.Portgroup(self.context, **self.fake_portgroup) portgroup._convert_to_version(v, False) self.assertEqual(self.vif_id, portgroup.internal_info['tenant_vif_port_id']) def test_vif_in_extra_not_in_extra(self): portgroup = objects.Portgroup(self.context, **self.fake_portgroup) portgroup.extra.pop('vif_port_id') portgroup._convert_to_version('1.4', False) self.assertFalse('tenant_vif_port_id' in portgroup.internal_info) def test_vif_in_extra_in_internal_info(self): vif2 = 'another_uuid' portgroup = objects.Portgroup(self.context, **self.fake_portgroup) portgroup.internal_info['tenant_vif_port_id'] = vif2 portgroup._convert_to_version('1.4', False) # no change self.assertEqual(vif2, portgroup.internal_info['tenant_vif_port_id']) ironic-15.0.0/ironic/tests/unit/objects/test_volume_target.py0000664000175000017500000002346213652514273024444 0ustar zuulzuul00000000000000# Copyright 2016 Hitachi, Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime import types import mock from testtools.matchers import HasLength from ironic.common import exception from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils class TestVolumeTargetObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn): def setUp(self): super(TestVolumeTargetObject, self).setUp() self.volume_target_dict = db_utils.get_test_volume_target() @mock.patch('ironic.objects.VolumeTarget.get_by_uuid', spec_set=types.FunctionType) @mock.patch('ironic.objects.VolumeTarget.get_by_id', spec_set=types.FunctionType) def test_get(self, mock_get_by_id, mock_get_by_uuid): id = self.volume_target_dict['id'] uuid = self.volume_target_dict['uuid'] objects.VolumeTarget.get(self.context, id) mock_get_by_id.assert_called_once_with(self.context, id) self.assertFalse(mock_get_by_uuid.called) objects.VolumeTarget.get(self.context, uuid) mock_get_by_uuid.assert_called_once_with(self.context, uuid) # Invalid identifier (not ID or UUID) self.assertRaises(exception.InvalidIdentity, objects.VolumeTarget.get, self.context, 'not-valid-identifier') def test_get_by_id(self): id = self.volume_target_dict['id'] with mock.patch.object(self.dbapi, 'get_volume_target_by_id', autospec=True) as mock_get_volume_target: mock_get_volume_target.return_value = self.volume_target_dict target = objects.VolumeTarget.get(self.context, id) mock_get_volume_target.assert_called_once_with(id) self.assertIsInstance(target, objects.VolumeTarget) self.assertEqual(self.context, target._context) def test_get_by_uuid(self): uuid = self.volume_target_dict['uuid'] with mock.patch.object(self.dbapi, 'get_volume_target_by_uuid', autospec=True) as mock_get_volume_target: mock_get_volume_target.return_value = self.volume_target_dict target = objects.VolumeTarget.get(self.context, uuid) mock_get_volume_target.assert_called_once_with(uuid) self.assertIsInstance(target, 
objects.VolumeTarget) self.assertEqual(self.context, target._context) def test_list(self): with mock.patch.object(self.dbapi, 'get_volume_target_list', autospec=True) as mock_get_list: mock_get_list.return_value = [self.volume_target_dict] volume_targets = objects.VolumeTarget.list( self.context, limit=4, sort_key='uuid', sort_dir='asc') mock_get_list.assert_called_once_with( limit=4, marker=None, sort_key='uuid', sort_dir='asc') self.assertThat(volume_targets, HasLength(1)) self.assertIsInstance(volume_targets[0], objects.VolumeTarget) self.assertEqual(self.context, volume_targets[0]._context) def test_list_none(self): with mock.patch.object(self.dbapi, 'get_volume_target_list', autospec=True) as mock_get_list: mock_get_list.return_value = [] volume_targets = objects.VolumeTarget.list( self.context, limit=4, sort_key='uuid', sort_dir='asc') mock_get_list.assert_called_once_with( limit=4, marker=None, sort_key='uuid', sort_dir='asc') self.assertEqual([], volume_targets) def test_list_by_node_id(self): with mock.patch.object(self.dbapi, 'get_volume_targets_by_node_id', autospec=True) as mock_get_list_by_node_id: mock_get_list_by_node_id.return_value = [self.volume_target_dict] node_id = self.volume_target_dict['node_id'] volume_targets = objects.VolumeTarget.list_by_node_id( self.context, node_id, limit=10, sort_dir='desc') mock_get_list_by_node_id.assert_called_once_with( node_id, limit=10, marker=None, sort_key=None, sort_dir='desc') self.assertThat(volume_targets, HasLength(1)) self.assertIsInstance(volume_targets[0], objects.VolumeTarget) self.assertEqual(self.context, volume_targets[0]._context) def test_list_by_volume_id(self): with mock.patch.object(self.dbapi, 'get_volume_targets_by_volume_id', autospec=True) as mock_get_list_by_volume_id: mock_get_list_by_volume_id.return_value = [self.volume_target_dict] volume_id = self.volume_target_dict['volume_id'] volume_targets = objects.VolumeTarget.list_by_volume_id( self.context, volume_id, limit=10, 
sort_dir='desc') mock_get_list_by_volume_id.assert_called_once_with( volume_id, limit=10, marker=None, sort_key=None, sort_dir='desc') self.assertThat(volume_targets, HasLength(1)) self.assertIsInstance(volume_targets[0], objects.VolumeTarget) self.assertEqual(self.context, volume_targets[0]._context) def test_create(self): with mock.patch.object(self.dbapi, 'create_volume_target', autospec=True) as mock_db_create: mock_db_create.return_value = self.volume_target_dict new_target = objects.VolumeTarget( self.context, **self.volume_target_dict) new_target.create() mock_db_create.assert_called_once_with(self.volume_target_dict) def test_destroy(self): uuid = self.volume_target_dict['uuid'] with mock.patch.object(self.dbapi, 'get_volume_target_by_uuid', autospec=True) as mock_get_volume_target: mock_get_volume_target.return_value = self.volume_target_dict with mock.patch.object(self.dbapi, 'destroy_volume_target', autospec=True) as mock_db_destroy: target = objects.VolumeTarget.get_by_uuid(self.context, uuid) target.destroy() mock_db_destroy.assert_called_once_with(uuid) def test_save(self): uuid = self.volume_target_dict['uuid'] boot_index = 100 test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_volume_target_by_uuid', autospec=True) as mock_get_volume_target: mock_get_volume_target.return_value = self.volume_target_dict with mock.patch.object(self.dbapi, 'update_volume_target', autospec=True) as mock_update_target: mock_update_target.return_value = ( db_utils.get_test_volume_target(boot_index=boot_index, updated_at=test_time)) target = objects.VolumeTarget.get_by_uuid(self.context, uuid) target.boot_index = boot_index target.save() mock_get_volume_target.assert_called_once_with(uuid) mock_update_target.assert_called_once_with( uuid, {'version': objects.VolumeTarget.VERSION, 'boot_index': boot_index}) self.assertEqual(self.context, target._context) res_updated_at = (target.updated_at).replace(tzinfo=None) 
self.assertEqual(test_time, res_updated_at) def test_refresh(self): uuid = self.volume_target_dict['uuid'] old_boot_index = self.volume_target_dict['boot_index'] returns = [self.volume_target_dict, db_utils.get_test_volume_target(boot_index=100)] expected = [mock.call(uuid), mock.call(uuid)] with mock.patch.object(self.dbapi, 'get_volume_target_by_uuid', side_effect=returns, autospec=True) as mock_get_volume_target: target = objects.VolumeTarget.get_by_uuid(self.context, uuid) self.assertEqual(old_boot_index, target.boot_index) target.refresh() self.assertEqual(100, target.boot_index) self.assertEqual(expected, mock_get_volume_target.call_args_list) self.assertEqual(self.context, target._context) def test_save_after_refresh(self): # Ensure that it's possible to do object.save() after object.refresh() db_volume_target = db_utils.create_test_volume_target() vt = objects.VolumeTarget.get_by_uuid(self.context, db_volume_target.uuid) vt_copy = objects.VolumeTarget.get_by_uuid(self.context, db_volume_target.uuid) vt.name = 'b240' vt.save() vt_copy.refresh() vt_copy.name = 'aaff' # Ensure this passes and an exception is not generated vt_copy.save() def test_payload_schemas(self): self._check_payload_schemas(objects.volume_target, objects.VolumeTarget.fields) ironic-15.0.0/ironic/tests/unit/objects/test_conductor.py0000664000175000017500000002151313652514273023562 0ustar zuulzuul00000000000000# coding=utf-8 # # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import datetime import types import mock from oslo_utils import timeutils from ironic.common import exception from ironic import objects from ironic.objects import base from ironic.objects import fields from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils class TestConductorObject(db_base.DbTestCase): def setUp(self): super(TestConductorObject, self).setUp() self.fake_conductor = ( db_utils.get_test_conductor(updated_at=timeutils.utcnow())) def test_load(self): host = self.fake_conductor['hostname'] with mock.patch.object(self.dbapi, 'get_conductor', autospec=True) as mock_get_cdr: mock_get_cdr.return_value = self.fake_conductor objects.Conductor.get_by_hostname(self.context, host) mock_get_cdr.assert_called_once_with(host, online=True) def test_list(self): conductor1 = db_utils.get_test_conductor(hostname='cond1') conductor2 = db_utils.get_test_conductor(hostname='cond2') with mock.patch.object(self.dbapi, 'get_conductor_list', autospec=True) as mock_cond_list: mock_cond_list.return_value = [conductor1, conductor2] conductors = objects.Conductor.list(self.context) self.assertEqual(2, len(conductors)) self.assertIsInstance(conductors[0], objects.Conductor) self.assertIsInstance(conductors[1], objects.Conductor) self.assertEqual(conductors[0].hostname, 'cond1') self.assertEqual(conductors[1].hostname, 'cond2') def test_save(self): host = self.fake_conductor['hostname'] with mock.patch.object(self.dbapi, 'get_conductor', autospec=True) as mock_get_cdr: mock_get_cdr.return_value = self.fake_conductor c = objects.Conductor.get_by_hostname(self.context, host) c.hostname = 'another-hostname' self.assertRaises(NotImplementedError, c.save, self.context) mock_get_cdr.assert_called_once_with(host, online=True) def test_touch(self): host = self.fake_conductor['hostname'] with mock.patch.object(self.dbapi, 'get_conductor', 
autospec=True) as mock_get_cdr: with mock.patch.object(self.dbapi, 'touch_conductor', autospec=True) as mock_touch_cdr: mock_get_cdr.return_value = self.fake_conductor c = objects.Conductor.get_by_hostname(self.context, host) c.touch(self.context) mock_get_cdr.assert_called_once_with(host, online=True) mock_touch_cdr.assert_called_once_with(host) def test_refresh(self): host = self.fake_conductor['hostname'] t0 = self.fake_conductor['updated_at'] t1 = t0 + datetime.timedelta(seconds=10) returns = [dict(self.fake_conductor, updated_at=t0), dict(self.fake_conductor, updated_at=t1)] expected = [mock.call(host, online=True), mock.call(host, online=True)] with mock.patch.object(self.dbapi, 'get_conductor', side_effect=returns, autospec=True) as mock_get_cdr: c = objects.Conductor.get_by_hostname(self.context, host) # ensure timestamps have tzinfo datetime_field = fields.DateTimeField() self.assertEqual( datetime_field.coerce(datetime_field, 'updated_at', t0), c.updated_at) c.refresh() self.assertEqual( datetime_field.coerce(datetime_field, 'updated_at', t1), c.updated_at) self.assertEqual(expected, mock_get_cdr.call_args_list) self.assertEqual(self.context, c._context) @mock.patch.object(base.IronicObject, 'get_target_version', spec_set=types.FunctionType) def _test_register(self, mock_target_version, update_existing=False, conductor_group=''): mock_target_version.return_value = '1.5' host = self.fake_conductor['hostname'] drivers = self.fake_conductor['drivers'] with mock.patch.object(self.dbapi, 'register_conductor', autospec=True) as mock_register_cdr: mock_register_cdr.return_value = self.fake_conductor c = objects.Conductor.register(self.context, host, drivers, conductor_group, update_existing=update_existing) self.assertIsInstance(c, objects.Conductor) mock_register_cdr.assert_called_once_with( {'drivers': drivers, 'hostname': host, 'conductor_group': conductor_group.lower(), 'version': '1.5'}, update_existing=update_existing) def test_register(self): 
self._test_register() def test_register_update_existing_true(self): self._test_register(update_existing=True) def test_register_into_group(self): self._test_register(conductor_group='dc1') def test_register_into_group_uppercased(self): self._test_register(conductor_group='DC1') def test_register_into_group_with_update(self): self._test_register(conductor_group='dc1', update_existing=True) @mock.patch.object(base.IronicObject, 'get_target_version', spec_set=types.FunctionType) def test_register_with_invalid_group(self, mock_target_version): mock_target_version.return_value = '1.5' host = self.fake_conductor['hostname'] drivers = self.fake_conductor['drivers'] self.assertRaises(exception.InvalidConductorGroup, objects.Conductor.register, self.context, host, drivers, 'invalid:group') @mock.patch.object(objects.Conductor, 'unregister_all_hardware_interfaces', autospec=True) def test_unregister(self, mock_unreg_ifaces): host = self.fake_conductor['hostname'] with mock.patch.object(self.dbapi, 'get_conductor', autospec=True) as mock_get_cdr: with mock.patch.object(self.dbapi, 'unregister_conductor', autospec=True) as mock_unregister_cdr: mock_get_cdr.return_value = self.fake_conductor c = objects.Conductor.get_by_hostname(self.context, host) c.unregister() mock_unregister_cdr.assert_called_once_with(host) mock_unreg_ifaces.assert_called_once_with(mock.ANY) def test_register_hardware_interfaces(self): host = self.fake_conductor['hostname'] self.config(default_deploy_interface='iscsi') with mock.patch.object(self.dbapi, 'get_conductor', autospec=True) as mock_get_cdr: with mock.patch.object(self.dbapi, 'register_conductor_hardware_interfaces', autospec=True) as mock_register: mock_get_cdr.return_value = self.fake_conductor c = objects.Conductor.get_by_hostname(self.context, host) args = ('hardware-type', 'deploy', ['iscsi', 'direct'], 'iscsi') c.register_hardware_interfaces(*args) mock_register.assert_called_once_with(c.id, *args) def 
test_unregister_all_hardware_interfaces(self): host = self.fake_conductor['hostname'] with mock.patch.object(self.dbapi, 'get_conductor', autospec=True) as mock_get_cdr: with mock.patch.object(self.dbapi, 'unregister_conductor_hardware_interfaces', autospec=True) as mock_unregister: mock_get_cdr.return_value = self.fake_conductor c = objects.Conductor.get_by_hostname(self.context, host) c.unregister_all_hardware_interfaces() mock_unregister.assert_called_once_with(c.id) ironic-15.0.0/ironic/tests/unit/objects/test_allocation.py0000664000175000017500000002174013652514273023711 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime import mock from testtools import matchers from ironic.common import exception from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils class TestAllocationObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn): def setUp(self): super(TestAllocationObject, self).setUp() self.fake_allocation = db_utils.get_test_allocation(name='host1') def test_get_by_id(self): allocation_id = self.fake_allocation['id'] with mock.patch.object(self.dbapi, 'get_allocation_by_id', autospec=True) as mock_get_allocation: mock_get_allocation.return_value = self.fake_allocation allocation = objects.Allocation.get(self.context, allocation_id) mock_get_allocation.assert_called_once_with(allocation_id) self.assertEqual(self.context, allocation._context) def test_get_by_uuid(self): uuid = self.fake_allocation['uuid'] with mock.patch.object(self.dbapi, 'get_allocation_by_uuid', autospec=True) as mock_get_allocation: mock_get_allocation.return_value = self.fake_allocation allocation = objects.Allocation.get(self.context, uuid) mock_get_allocation.assert_called_once_with(uuid) self.assertEqual(self.context, allocation._context) def test_get_by_name(self): name = self.fake_allocation['name'] with mock.patch.object(self.dbapi, 'get_allocation_by_name', autospec=True) as mock_get_allocation: mock_get_allocation.return_value = self.fake_allocation allocation = objects.Allocation.get(self.context, name) mock_get_allocation.assert_called_once_with(name) self.assertEqual(self.context, allocation._context) def test_get_bad_id_and_uuid_and_name(self): self.assertRaises(exception.InvalidIdentity, objects.Allocation.get, self.context, 'not:a_name_or_uuid') def test_create(self): allocation = objects.Allocation(self.context, **self.fake_allocation) with mock.patch.object(self.dbapi, 'create_allocation', autospec=True) as mock_create_allocation: 
mock_create_allocation.return_value = ( db_utils.get_test_allocation()) allocation.create() args, _kwargs = mock_create_allocation.call_args self.assertEqual(objects.Allocation.VERSION, args[0]['version']) def test_save(self): uuid = self.fake_allocation['uuid'] test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_allocation_by_uuid', autospec=True) as mock_get_allocation: mock_get_allocation.return_value = self.fake_allocation with mock.patch.object(self.dbapi, 'update_allocation', autospec=True) as mock_update_allocation: mock_update_allocation.return_value = ( db_utils.get_test_allocation(name='newname', updated_at=test_time)) p = objects.Allocation.get_by_uuid(self.context, uuid) p.name = 'newname' p.save() mock_get_allocation.assert_called_once_with(uuid) mock_update_allocation.assert_called_once_with( uuid, {'version': objects.Allocation.VERSION, 'name': 'newname'}) self.assertEqual(self.context, p._context) res_updated_at = (p.updated_at).replace(tzinfo=None) self.assertEqual(test_time, res_updated_at) def test_refresh(self): uuid = self.fake_allocation['uuid'] returns = [self.fake_allocation, db_utils.get_test_allocation(name='newname')] expected = [mock.call(uuid), mock.call(uuid)] with mock.patch.object(self.dbapi, 'get_allocation_by_uuid', side_effect=returns, autospec=True) as mock_get_allocation: p = objects.Allocation.get_by_uuid(self.context, uuid) self.assertEqual(self.fake_allocation['name'], p.name) p.refresh() self.assertEqual('newname', p.name) self.assertEqual(expected, mock_get_allocation.call_args_list) self.assertEqual(self.context, p._context) def test_save_after_refresh(self): # Ensure that it's possible to do object.save() after object.refresh() db_allocation = db_utils.create_test_allocation() p = objects.Allocation.get_by_uuid(self.context, db_allocation.uuid) p_copy = objects.Allocation.get_by_uuid(self.context, db_allocation.uuid) p.name = 'newname' p.save() p_copy.refresh() p_copy.name = 'newname2' #
Ensure this passes and an exception is not generated p_copy.save() def test_list(self): with mock.patch.object(self.dbapi, 'get_allocation_list', autospec=True) as mock_get_list: mock_get_list.return_value = [self.fake_allocation] allocations = objects.Allocation.list(self.context) self.assertThat(allocations, matchers.HasLength(1)) self.assertIsInstance(allocations[0], objects.Allocation) self.assertEqual(self.context, allocations[0]._context) def test_payload_schemas(self): self._check_payload_schemas(objects.allocation, objects.Allocation.fields) class TestConvertToVersion(db_base.DbTestCase): def setUp(self): super(TestConvertToVersion, self).setUp() self.fake_allocation = db_utils.get_test_allocation() def test_owner_supported_missing(self): # Owner not set, should be set to default. allocation = objects.Allocation(self.context, **self.fake_allocation) delattr(allocation, 'owner') allocation.obj_reset_changes() allocation._convert_to_version("1.1") self.assertIsNone(allocation.owner) self.assertEqual({'owner': None}, allocation.obj_get_changes()) def test_owner_supported_set(self): # Owner set, no change required. allocation = objects.Allocation(self.context, **self.fake_allocation) allocation.owner = 'owner1' allocation.obj_reset_changes() allocation._convert_to_version("1.1") self.assertEqual('owner1', allocation.owner) self.assertEqual({}, allocation.obj_get_changes()) def test_owner_unsupported_missing(self): # Owner not set, no change required. allocation = objects.Allocation(self.context, **self.fake_allocation) delattr(allocation, 'owner') allocation.obj_reset_changes() allocation._convert_to_version("1.0") self.assertNotIn('owner', allocation) self.assertEqual({}, allocation.obj_get_changes()) def test_owner_unsupported_set_remove(self): # Owner set, should be removed. allocation = objects.Allocation(self.context, **self.fake_allocation) allocation.owner = 'owner1' allocation.obj_reset_changes() allocation._convert_to_version("1.0") self.assertNotIn('owner', allocation) self.assertEqual({}, allocation.obj_get_changes()) def test_owner_unsupported_set_no_remove_non_default(self): # Owner set, should be set to default. allocation = objects.Allocation(self.context, **self.fake_allocation) allocation.owner = 'owner1' allocation.obj_reset_changes() allocation._convert_to_version("1.0", False) self.assertIsNone(allocation.owner) self.assertEqual({'owner': None}, allocation.obj_get_changes()) def test_owner_unsupported_set_no_remove_default(self): # Owner not set (default), no change required. allocation = objects.Allocation(self.context, **self.fake_allocation) allocation.owner = None allocation.obj_reset_changes() allocation._convert_to_version("1.0", False) self.assertIsNone(allocation.owner) self.assertEqual({}, allocation.obj_get_changes()) ironic-15.0.0/ironic/tests/unit/objects/test_bios.py0000664000175000017500000002431413652514273022520 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
import types

import mock

from ironic.common import context
from ironic.db import api as dbapi
from ironic import objects
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


class TestBIOSSettingObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn):

    def setUp(self):
        super(TestBIOSSettingObject, self).setUp()
        self.ctxt = context.get_admin_context()
        self.bios_setting = db_utils.get_test_bios_setting()
        self.node_id = self.bios_setting['node_id']

    @mock.patch.object(dbapi.IMPL, 'get_bios_setting', autospec=True)
    def test_get(self, mock_get_setting):
        mock_get_setting.return_value = self.bios_setting
        bios_obj = objects.BIOSSetting.get(self.context, self.node_id,
                                           self.bios_setting['name'])
        mock_get_setting.assert_called_once_with(self.node_id,
                                                 self.bios_setting['name'])
        self.assertEqual(self.context, bios_obj._context)
        self.assertEqual(self.bios_setting['node_id'], bios_obj.node_id)
        self.assertEqual(self.bios_setting['name'], bios_obj.name)
        self.assertEqual(self.bios_setting['value'], bios_obj.value)

    @mock.patch.object(dbapi.IMPL, 'get_bios_setting_list', autospec=True)
    def test_get_by_node_id(self, mock_get_setting_list):
        bios_setting2 = db_utils.get_test_bios_setting(name='hyperthread',
                                                       value='enabled')
        mock_get_setting_list.return_value = [self.bios_setting,
                                              bios_setting2]
        bios_obj_list = objects.BIOSSettingList.get_by_node_id(
            self.context, self.node_id)
        mock_get_setting_list.assert_called_once_with(self.node_id)
        self.assertEqual(self.context, bios_obj_list._context)
        self.assertEqual(2, len(bios_obj_list))
        self.assertEqual(self.bios_setting['node_id'],
                         bios_obj_list[0].node_id)
        self.assertEqual(self.bios_setting['name'], bios_obj_list[0].name)
        self.assertEqual(self.bios_setting['value'], bios_obj_list[0].value)
        self.assertEqual(bios_setting2['node_id'], bios_obj_list[1].node_id)
        self.assertEqual(bios_setting2['name'], bios_obj_list[1].name)
        self.assertEqual(bios_setting2['value'], bios_obj_list[1].value)

    @mock.patch.object(dbapi.IMPL, 'create_bios_setting_list', autospec=True)
    def test_create(self, mock_create_list):
        fake_call_args = {'node_id': self.bios_setting['node_id'],
                          'name': self.bios_setting['name'],
                          'value': self.bios_setting['value'],
                          'version': self.bios_setting['version']}
        setting = [{'name': self.bios_setting['name'],
                    'value': self.bios_setting['value']}]
        bios_obj = objects.BIOSSetting(context=self.context,
                                       **fake_call_args)
        mock_create_list.return_value = [self.bios_setting]
        bios_obj.create()
        mock_create_list.assert_called_once_with(self.bios_setting['node_id'],
                                                 setting,
                                                 self.bios_setting['version'])
        self.assertEqual(self.bios_setting['node_id'], bios_obj.node_id)
        self.assertEqual(self.bios_setting['name'], bios_obj.name)
        self.assertEqual(self.bios_setting['value'], bios_obj.value)

    @mock.patch.object(dbapi.IMPL, 'update_bios_setting_list', autospec=True)
    def test_save(self, mock_update_list):
        fake_call_args = {'node_id': self.bios_setting['node_id'],
                          'name': self.bios_setting['name'],
                          'value': self.bios_setting['value'],
                          'version': self.bios_setting['version']}
        setting = [{'name': self.bios_setting['name'],
                    'value': self.bios_setting['value']}]
        bios_obj = objects.BIOSSetting(context=self.context,
                                       **fake_call_args)
        mock_update_list.return_value = [self.bios_setting]
        bios_obj.save()
        mock_update_list.assert_called_once_with(self.bios_setting['node_id'],
                                                 setting,
                                                 self.bios_setting['version'])
        self.assertEqual(self.bios_setting['node_id'], bios_obj.node_id)
        self.assertEqual(self.bios_setting['name'], bios_obj.name)
        self.assertEqual(self.bios_setting['value'], bios_obj.value)

    @mock.patch.object(dbapi.IMPL, 'create_bios_setting_list', autospec=True)
    def test_list_create(self, mock_create_list):
        bios_setting2 = db_utils.get_test_bios_setting(name='hyperthread',
                                                       value='enabled')
        settings = db_utils.get_test_bios_setting_setting_list()[:-1]
        mock_create_list.return_value = [self.bios_setting, bios_setting2]
        bios_obj_list = objects.BIOSSettingList.create(
            self.context, self.node_id, settings)
        mock_create_list.assert_called_once_with(self.node_id, settings,
                                                 '1.0')
        self.assertEqual(self.context, bios_obj_list._context)
        self.assertEqual(2, len(bios_obj_list))
        self.assertEqual(self.bios_setting['node_id'],
                         bios_obj_list[0].node_id)
        self.assertEqual(self.bios_setting['name'], bios_obj_list[0].name)
        self.assertEqual(self.bios_setting['value'], bios_obj_list[0].value)
        self.assertEqual(bios_setting2['node_id'], bios_obj_list[1].node_id)
        self.assertEqual(bios_setting2['name'], bios_obj_list[1].name)
        self.assertEqual(bios_setting2['value'], bios_obj_list[1].value)

    @mock.patch.object(dbapi.IMPL, 'update_bios_setting_list', autospec=True)
    def test_list_save(self, mock_update_list):
        bios_setting2 = db_utils.get_test_bios_setting(name='hyperthread',
                                                       value='enabled')
        settings = db_utils.get_test_bios_setting_setting_list()[:-1]
        mock_update_list.return_value = [self.bios_setting, bios_setting2]
        bios_obj_list = objects.BIOSSettingList.save(
            self.context, self.node_id, settings)
        mock_update_list.assert_called_once_with(self.node_id, settings,
                                                 '1.0')
        self.assertEqual(self.context, bios_obj_list._context)
        self.assertEqual(2, len(bios_obj_list))
        self.assertEqual(self.bios_setting['node_id'],
                         bios_obj_list[0].node_id)
        self.assertEqual(self.bios_setting['name'], bios_obj_list[0].name)
        self.assertEqual(self.bios_setting['value'], bios_obj_list[0].value)
        self.assertEqual(bios_setting2['node_id'], bios_obj_list[1].node_id)
        self.assertEqual(bios_setting2['name'], bios_obj_list[1].name)
        self.assertEqual(bios_setting2['value'], bios_obj_list[1].value)

    @mock.patch.object(dbapi.IMPL, 'delete_bios_setting_list', autospec=True)
    def test_delete(self, mock_delete):
        objects.BIOSSetting.delete(self.context, self.node_id,
                                   self.bios_setting['name'])
        mock_delete.assert_called_once_with(self.node_id,
                                            [self.bios_setting['name']])

    @mock.patch.object(dbapi.IMPL, 'delete_bios_setting_list', autospec=True)
    def test_list_delete(self, mock_delete):
        bios_setting2 = db_utils.get_test_bios_setting(name='hyperthread')
        name_list = [self.bios_setting['name'], bios_setting2['name']]
        objects.BIOSSettingList.delete(self.context, self.node_id, name_list)
        mock_delete.assert_called_once_with(self.node_id, name_list)

    @mock.patch('ironic.objects.bios.BIOSSettingList.get_by_node_id',
                spec_set=types.FunctionType)
    def test_sync_node_setting_create_and_update(self, mock_get):
        node = obj_utils.create_test_node(self.ctxt)
        bios_obj = [obj_utils.create_test_bios_setting(
            self.ctxt, node_id=node.id)]
        mock_get.return_value = bios_obj
        settings = db_utils.get_test_bios_setting_setting_list()
        settings[0]['value'] = 'off'
        create, update, delete, nochange = (
            objects.BIOSSettingList.sync_node_setting(self.ctxt, node.id,
                                                      settings))
        self.assertEqual(create, settings[1:])
        self.assertEqual(update, [settings[0]])
        self.assertEqual(delete, [])
        self.assertEqual(nochange, [])

    @mock.patch('ironic.objects.bios.BIOSSettingList.get_by_node_id',
                spec_set=types.FunctionType)
    def test_sync_node_setting_delete_nochange(self, mock_get):
        node = obj_utils.create_test_node(self.ctxt)
        bios_obj_1 = obj_utils.create_test_bios_setting(
            self.ctxt, node_id=node.id)
        bios_obj_2 = obj_utils.create_test_bios_setting(
            self.ctxt, node_id=node.id, name='numlock', value='off')
        mock_get.return_value = [bios_obj_1, bios_obj_2]
        settings = db_utils.get_test_bios_setting_setting_list()
        settings[0]['name'] = 'fake-bios-option'
        create, update, delete, nochange = (
            objects.BIOSSettingList.sync_node_setting(self.ctxt, node.id,
                                                      settings))
        expected_delete = [{'name': bios_obj_1.name,
                            'value': bios_obj_1.value}]
        self.assertEqual(create, settings[:2])
        self.assertEqual(update, [])
        self.assertEqual(delete, expected_delete)
        self.assertEqual(nochange, [settings[2]])

ironic-15.0.0/ironic/tests/unit/objects/test_notification.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from ironic.common import exception
from ironic.objects import base
from ironic.objects import fields
from ironic.objects import notification
from ironic.tests import base as test_base


class TestNotificationBase(test_base.TestCase):

    @base.IronicObjectRegistry.register_if(False)
    class TestObject(base.IronicObject):
        VERSION = '1.0'
        fields = {
            'fake_field_1': fields.StringField(nullable=True),
            'fake_field_2': fields.IntegerField(nullable=True)
        }

    @base.IronicObjectRegistry.register_if(False)
    class TestObjectMissingField(base.IronicObject):
        VERSION = '1.0'
        fields = {
            'fake_field_1': fields.StringField(nullable=True),
        }

    @base.IronicObjectRegistry.register_if(False)
    class TestNotificationPayload(notification.NotificationPayloadBase):
        VERSION = '1.0'

        SCHEMA = {
            'fake_field_a': ('test_obj', 'fake_field_1'),
            'fake_field_b': ('test_obj', 'fake_field_2')
        }

        fields = {
            'fake_field_a': fields.StringField(nullable=True),
            'fake_field_b': fields.IntegerField(nullable=False),
            'an_extra_field': fields.StringField(nullable=False),
            'an_optional_field': fields.IntegerField(nullable=True)
        }

    @base.IronicObjectRegistry.register_if(False)
    class TestNotificationPayloadEmptySchema(
            notification.NotificationPayloadBase):
        VERSION = '1.0'

        fields = {
            'fake_field': fields.StringField()
        }

    @base.IronicObjectRegistry.register_if(False)
    class TestNotification(notification.NotificationBase):
        VERSION = '1.0'
        fields = {
            'payload': fields.ObjectField('TestNotificationPayload')
        }

    @base.IronicObjectRegistry.register_if(False)
    class TestNotificationEmptySchema(notification.NotificationBase):
        VERSION = '1.0'
        fields = {
            'payload': fields.ObjectField('TestNotificationPayloadEmptySchema')
        }

    def setUp(self):
        super(TestNotificationBase, self).setUp()
        self.fake_obj = self.TestObject(fake_field_1='fake1', fake_field_2=2)

    def _verify_notification(self, mock_notifier, mock_context,
                             expected_event_type, expected_payload,
                             expected_publisher, notif_level):
        mock_notifier.prepare.assert_called_once_with(
            publisher_id=expected_publisher)
        # Handler actually sending out the notification depends on the
        # notification level
        mock_notify = getattr(mock_notifier.prepare.return_value, notif_level)
        self.assertTrue(mock_notify.called)
        self.assertEqual(mock_context, mock_notify.call_args[0][0])
        self.assertEqual(expected_event_type,
                         mock_notify.call_args[1]['event_type'])
        actual_payload = mock_notify.call_args[1]['payload']
        self.assertJsonEqual(expected_payload, actual_payload)

    @mock.patch('ironic.common.rpc.VERSIONED_NOTIFIER', autospec=True)
    def test_emit_notification(self, mock_notifier):
        self.config(notification_level='debug')
        payload = self.TestNotificationPayload(an_extra_field='extra',
                                               an_optional_field=1)
        payload.populate_schema(test_obj=self.fake_obj)
        notif = self.TestNotification(
            event_type=notification.EventType(
                object='test_object', action='test',
                status=fields.NotificationStatus.START),
            level=fields.NotificationLevel.DEBUG,
            publisher=notification.NotificationPublisher(
                service='ironic-conductor', host='host'),
            payload=payload)

        mock_context = mock.Mock()
        notif.emit(mock_context)

        self._verify_notification(
            mock_notifier, mock_context,
            expected_event_type='baremetal.test_object.test.start',
            expected_payload={
                'ironic_object.name': 'TestNotificationPayload',
                'ironic_object.data': {
                    'fake_field_a': 'fake1',
                    'fake_field_b': 2,
                    'an_extra_field': 'extra',
                    'an_optional_field': 1
                },
                'ironic_object.version': '1.0',
                'ironic_object.namespace': 'ironic'},
            expected_publisher='ironic-conductor.host',
            notif_level=fields.NotificationLevel.DEBUG)

    @mock.patch('ironic.common.rpc.VERSIONED_NOTIFIER', autospec=True)
    def test_no_emit_level_too_low(self, mock_notifier):
        # Make sure notification doesn't emit when set notification
        # level < config level
        self.config(notification_level='warning')
        payload = self.TestNotificationPayload(an_extra_field='extra',
                                               an_optional_field=1)
        payload.populate_schema(test_obj=self.fake_obj)
        notif = self.TestNotification(
            event_type=notification.EventType(
                object='test_object', action='test',
                status=fields.NotificationStatus.START),
            level=fields.NotificationLevel.DEBUG,
            publisher=notification.NotificationPublisher(
                service='ironic-conductor', host='host'),
            payload=payload)

        mock_context = mock.Mock()
        notif.emit(mock_context)

        self.assertFalse(mock_notifier.called)

    @mock.patch('ironic.common.rpc.VERSIONED_NOTIFIER', autospec=True)
    def test_no_emit_notifs_disabled(self, mock_notifier):
        # Make sure notifications aren't emitted when notification_level
        # isn't defined, indicating notifications should be disabled
        payload = self.TestNotificationPayload(an_extra_field='extra',
                                               an_optional_field=1)
        payload.populate_schema(test_obj=self.fake_obj)
        notif = self.TestNotification(
            event_type=notification.EventType(
                object='test_object', action='test',
                status=fields.NotificationStatus.START),
            level=fields.NotificationLevel.DEBUG,
            publisher=notification.NotificationPublisher(
                service='ironic-conductor', host='host'),
            payload=payload)

        mock_context = mock.Mock()
        notif.emit(mock_context)

        self.assertFalse(mock_notifier.called)

    @mock.patch('ironic.common.rpc.VERSIONED_NOTIFIER', autospec=True)
    def test_no_emit_schema_not_populated(self, mock_notifier):
        self.config(notification_level='debug')
        payload = self.TestNotificationPayload(an_extra_field='extra',
                                               an_optional_field=1)
        notif = self.TestNotification(
            event_type=notification.EventType(
                object='test_object', action='test',
                status=fields.NotificationStatus.START),
            level=fields.NotificationLevel.DEBUG,
            publisher=notification.NotificationPublisher(
                service='ironic-conductor', host='host'),
            payload=payload)

        mock_context = mock.Mock()
        self.assertRaises(exception.NotificationPayloadError, notif.emit,
                          mock_context)
        self.assertFalse(mock_notifier.called)

    @mock.patch('ironic.common.rpc.VERSIONED_NOTIFIER', autospec=True)
    def test_emit_notification_empty_schema(self, mock_notifier):
        self.config(notification_level='debug')
        payload = self.TestNotificationPayloadEmptySchema(fake_field='123')
        notif = self.TestNotificationEmptySchema(
            event_type=notification.EventType(
                object='test_object', action='test',
                status=fields.NotificationStatus.ERROR),
            level=fields.NotificationLevel.ERROR,
            publisher=notification.NotificationPublisher(
                service='ironic-conductor', host='host'),
            payload=payload)

        mock_context = mock.Mock()
        notif.emit(mock_context)

        self._verify_notification(
            mock_notifier, mock_context,
            expected_event_type='baremetal.test_object.test.error',
            expected_payload={
                'ironic_object.name': 'TestNotificationPayloadEmptySchema',
                'ironic_object.data': {
                    'fake_field': '123',
                },
                'ironic_object.version': '1.0',
                'ironic_object.namespace': 'ironic'},
            expected_publisher='ironic-conductor.host',
            notif_level=fields.NotificationLevel.ERROR)

    def test_populate_schema(self):
        payload = self.TestNotificationPayload(an_extra_field='extra',
                                               an_optional_field=1)
        payload.populate_schema(test_obj=self.fake_obj)
        self.assertEqual('extra', payload.an_extra_field)
        self.assertEqual(1, payload.an_optional_field)
        self.assertEqual(self.fake_obj.fake_field_1, payload.fake_field_a)
        self.assertEqual(self.fake_obj.fake_field_2, payload.fake_field_b)

    def test_populate_schema_missing_required_obj_field(self):
        test_obj = self.TestObject(fake_field_1='populated')
        # this payload requires missing fake_field_b
        payload = self.TestNotificationPayload(an_extra_field='too extra')
        self.assertRaises(exception.NotificationSchemaKeyError,
                          payload.populate_schema, test_obj=test_obj)

    def test_populate_schema_nullable_field_auto_populates(self):
        """Test that nullable fields always end up in the payload."""
        test_obj = self.TestObject(fake_field_2=123)
        payload = self.TestNotificationPayload()
        payload.populate_schema(test_obj=test_obj)
        self.assertIsNone(payload.fake_field_a)

    def test_populate_schema_no_object_field(self):
        test_obj = self.TestObjectMissingField(fake_field_1='foo')
        payload = self.TestNotificationPayload()
        self.assertRaises(exception.NotificationSchemaKeyError,
                          payload.populate_schema, test_obj=test_obj)

    def test_event_type_with_status(self):
        event_type = notification.EventType(
            object="some_obj", action="some_action", status="success")
        self.assertEqual("baremetal.some_obj.some_action.success",
                         event_type.to_event_type_field())

    def test_event_type_without_status_fails(self):
        event_type = notification.EventType(
            object="some_obj", action="some_action")
        self.assertRaises(NotImplementedError,
                          event_type.to_event_type_field)

    def test_event_type_invalid_status_fails(self):
        self.assertRaises(ValueError, notification.EventType,
                          object="some_obj", action="some_action",
                          status="invalid")

    def test_event_type_make_status_invalid(self):
        def make_status_invalid():
            event_type.status = "Roar"

        event_type = notification.EventType(
            object='test_object', action='test', status='start')
        self.assertRaises(ValueError, make_status_invalid)

ironic-15.0.0/ironic/tests/unit/objects/test_volume_connector.py

# Copyright 2015 Hitachi Data Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime
import types

import mock
from testtools.matchers import HasLength

from ironic.common import exception
from ironic import objects
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


class TestVolumeConnectorObject(db_base.DbTestCase,
                                obj_utils.SchemasTestMixIn):

    def setUp(self):
        super(TestVolumeConnectorObject, self).setUp()
        self.volume_connector_dict = db_utils.get_test_volume_connector()

    @mock.patch('ironic.objects.VolumeConnector.get_by_uuid',
                spec_set=types.FunctionType)
    @mock.patch('ironic.objects.VolumeConnector.get_by_id',
                spec_set=types.FunctionType)
    def test_get(self, mock_get_by_id, mock_get_by_uuid):
        id = self.volume_connector_dict['id']
        uuid = self.volume_connector_dict['uuid']

        objects.VolumeConnector.get(self.context, id)
        mock_get_by_id.assert_called_once_with(self.context, id)
        self.assertFalse(mock_get_by_uuid.called)

        objects.VolumeConnector.get(self.context, uuid)
        mock_get_by_uuid.assert_called_once_with(self.context, uuid)

        # Invalid identifier (not ID or UUID)
        self.assertRaises(exception.InvalidIdentity,
                          objects.VolumeConnector.get,
                          self.context, 'not-valid-identifier')

    def test_get_by_id(self):
        id = self.volume_connector_dict['id']
        with mock.patch.object(self.dbapi, 'get_volume_connector_by_id',
                               autospec=True) as mock_get_volume_connector:
            mock_get_volume_connector.return_value = \
                self.volume_connector_dict

            connector = objects.VolumeConnector.get_by_id(self.context, id)

            mock_get_volume_connector.assert_called_once_with(id)
            self.assertIsInstance(connector, objects.VolumeConnector)
            self.assertEqual(self.context, connector._context)

    def test_get_by_uuid(self):
        uuid = self.volume_connector_dict['uuid']
        with mock.patch.object(self.dbapi, 'get_volume_connector_by_uuid',
                               autospec=True) as mock_get_volume_connector:
            mock_get_volume_connector.return_value = \
                self.volume_connector_dict

            connector = objects.VolumeConnector.get_by_uuid(self.context,
                                                            uuid)

            mock_get_volume_connector.assert_called_once_with(uuid)
            self.assertIsInstance(connector, objects.VolumeConnector)
            self.assertEqual(self.context, connector._context)

    def test_list(self):
        with mock.patch.object(self.dbapi, 'get_volume_connector_list',
                               autospec=True) as mock_get_list:
            mock_get_list.return_value = [self.volume_connector_dict]
            volume_connectors = objects.VolumeConnector.list(
                self.context, limit=4, sort_key='uuid', sort_dir='asc')

            mock_get_list.assert_called_once_with(
                limit=4, marker=None, sort_key='uuid', sort_dir='asc')
            self.assertThat(volume_connectors, HasLength(1))
            self.assertIsInstance(volume_connectors[0],
                                  objects.VolumeConnector)
            self.assertEqual(self.context, volume_connectors[0]._context)

    def test_list_none(self):
        with mock.patch.object(self.dbapi, 'get_volume_connector_list',
                               autospec=True) as mock_get_list:
            mock_get_list.return_value = []
            volume_connectors = objects.VolumeConnector.list(
                self.context, limit=4, sort_key='uuid', sort_dir='asc')

            mock_get_list.assert_called_once_with(
                limit=4, marker=None, sort_key='uuid', sort_dir='asc')
            self.assertEqual([], volume_connectors)

    def test_list_by_node_id(self):
        with mock.patch.object(self.dbapi, 'get_volume_connectors_by_node_id',
                               autospec=True) as mock_get_list_by_node_id:
            mock_get_list_by_node_id.return_value = [
                self.volume_connector_dict]
            node_id = self.volume_connector_dict['node_id']
            volume_connectors = objects.VolumeConnector.list_by_node_id(
                self.context, node_id, limit=10, sort_dir='desc')

            mock_get_list_by_node_id.assert_called_once_with(
                node_id, limit=10, marker=None, sort_key=None,
                sort_dir='desc')
            self.assertThat(volume_connectors, HasLength(1))
            self.assertIsInstance(volume_connectors[0],
                                  objects.VolumeConnector)
            self.assertEqual(self.context, volume_connectors[0]._context)

    def test_create(self):
        with mock.patch.object(self.dbapi, 'create_volume_connector',
                               autospec=True) as mock_db_create:
            mock_db_create.return_value = self.volume_connector_dict
            new_connector = objects.VolumeConnector(
                self.context, **self.volume_connector_dict)
            new_connector.create()

            mock_db_create.assert_called_once_with(
                self.volume_connector_dict)

    def test_destroy(self):
        uuid = self.volume_connector_dict['uuid']
        with mock.patch.object(self.dbapi, 'get_volume_connector_by_uuid',
                               autospec=True) as mock_get_volume_connector:
            mock_get_volume_connector.return_value = \
                self.volume_connector_dict
            with mock.patch.object(self.dbapi, 'destroy_volume_connector',
                                   autospec=True) as mock_db_destroy:
                connector = objects.VolumeConnector.get_by_uuid(self.context,
                                                                uuid)
                connector.destroy()

                mock_db_destroy.assert_called_once_with(uuid)

    def test_save(self):
        uuid = self.volume_connector_dict['uuid']
        connector_id = "new_connector_id"
        test_time = datetime.datetime(2000, 1, 1, 0, 0)
        with mock.patch.object(self.dbapi, 'get_volume_connector_by_uuid',
                               autospec=True) as mock_get_volume_connector:
            mock_get_volume_connector.return_value = \
                self.volume_connector_dict
            with mock.patch.object(self.dbapi, 'update_volume_connector',
                                   autospec=True) as mock_update_connector:
                mock_update_connector.return_value = (
                    db_utils.get_test_volume_connector(
                        connector_id=connector_id, updated_at=test_time))
                c = objects.VolumeConnector.get_by_uuid(self.context, uuid)
                c.connector_id = connector_id
                c.save()

                mock_get_volume_connector.assert_called_once_with(uuid)
                mock_update_connector.assert_called_once_with(
                    uuid,
                    {'version': objects.VolumeConnector.VERSION,
                     'connector_id': connector_id})
                self.assertEqual(self.context, c._context)
                res_updated_at = (c.updated_at).replace(tzinfo=None)
                self.assertEqual(test_time, res_updated_at)

    def test_refresh(self):
        uuid = self.volume_connector_dict['uuid']
        old_connector_id = self.volume_connector_dict['connector_id']
        returns = [self.volume_connector_dict,
                   db_utils.get_test_volume_connector(
                       connector_id="new_connector_id")]
        expected = [mock.call(uuid), mock.call(uuid)]
        with mock.patch.object(self.dbapi, 'get_volume_connector_by_uuid',
                               side_effect=returns,
                               autospec=True) as mock_get_volume_connector:
            c = objects.VolumeConnector.get_by_uuid(self.context, uuid)
            self.assertEqual(old_connector_id, c.connector_id)
            c.refresh()
            self.assertEqual('new_connector_id', c.connector_id)

            self.assertEqual(expected,
                             mock_get_volume_connector.call_args_list)
            self.assertEqual(self.context, c._context)

    def test_save_after_refresh(self):
        # Ensure that it's possible to do object.save() after object.refresh()
        db_volume_connector = db_utils.create_test_volume_connector()
        vc = objects.VolumeConnector.get_by_uuid(self.context,
                                                 db_volume_connector.uuid)
        vc_copy = objects.VolumeConnector.get_by_uuid(self.context,
                                                      db_volume_connector.uuid)
        vc.name = 'b240'
        vc.save()
        vc_copy.refresh()
        vc_copy.name = 'aaff'
        # Ensure this passes and an exception is not generated
        vc_copy.save()

    def test_payload_schemas(self):
        self._check_payload_schemas(objects.volume_connector,
                                    objects.VolumeConnector.fields)

ironic-15.0.0/ironic/tests/unit/objects/test_fields.py

# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

import hashlib
import inspect

from ironic.common import exception
from ironic.objects import fields
from ironic.tests import base as test_base


class TestMacAddressField(test_base.TestCase):

    def setUp(self):
        super(TestMacAddressField, self).setUp()
        self.field = fields.MACAddressField()

    def test_coerce(self):
        values = {'aa:bb:cc:dd:ee:ff': 'aa:bb:cc:dd:ee:ff',
                  'AA:BB:CC:DD:EE:FF': 'aa:bb:cc:dd:ee:ff',
                  'AA:bb:cc:11:22:33': 'aa:bb:cc:11:22:33'}
        for k in values:
            self.assertEqual(values[k], self.field.coerce('obj', 'attr', k))

    def test_coerce_bad_values(self):
        for v in ('invalid-mac', 'aa-bb-cc-dd-ee-ff'):
            self.assertRaises(exception.InvalidMAC,
                              self.field.coerce, 'obj', 'attr', v)


class TestFlexibleDictField(test_base.TestCase):

    def setUp(self):
        super(TestFlexibleDictField, self).setUp()
        self.field = fields.FlexibleDictField()

    def test_coerce(self):
        d = {'foo_1': 'bar', 'foo_2': 2, 'foo_3': [], 'foo_4': {}}
        self.assertEqual(d, self.field.coerce('obj', 'attr', d))

        self.assertEqual({'foo': 'bar'},
                         self.field.coerce('obj', 'attr', '{"foo": "bar"}'))

    def test_coerce_bad_values(self):
        self.assertRaises(TypeError, self.field.coerce, 'obj', 'attr', 123)
        self.assertRaises(TypeError, self.field.coerce, 'obj', 'attr', True)

    def test_coerce_nullable_translation(self):
        # non-nullable
        self.assertRaises(ValueError, self.field.coerce, 'obj', 'attr', None)

        # nullable
        self.field = fields.FlexibleDictField(nullable=True)
        self.assertEqual({}, self.field.coerce('obj', 'attr', None))


class TestStringFieldThatAcceptsCallable(test_base.TestCase):

    def setUp(self):
        super(TestStringFieldThatAcceptsCallable, self).setUp()

        def test_default_function():
            return "default value"

        self.test_default_function_hash = hashlib.md5(
            inspect.getsource(test_default_function).encode()).hexdigest()
        self.field = fields.StringFieldThatAcceptsCallable(
            default=test_default_function)

    def test_coerce_string(self):
        self.assertEqual("value", self.field.coerce('obj', 'attr', "value"))

    def test_coerce_function(self):
        def test_function():
            return "value"
        self.assertEqual("value",
                         self.field.coerce('obj', 'attr', test_function))

    def test_coerce_invalid_type(self):
        self.assertRaises(ValueError, self.field.coerce,
                          'obj', 'attr', ('invalid', 'tuple'))

    def test_coerce_function_invalid_type(self):
        def test_function():
            return ('invalid', 'tuple',)
        self.assertRaises(ValueError,
                          self.field.coerce, 'obj', 'attr', test_function)

    def test_coerce_default_as_function(self):
        self.assertEqual("default value",
                         self.field.coerce('obj', 'attr', None))

    def test__repr__includes_default_function_name_and_source_hash(self):
        expected = ('StringAcceptsCallable(default=test_default_function-%s,'
                    'nullable=False)' % self.test_default_function_hash)
        self.assertEqual(expected, repr(self.field))


class TestNotificationLevelField(test_base.TestCase):

    def setUp(self):
        super(TestNotificationLevelField, self).setUp()
        self.field = fields.NotificationLevelField()

    def test_coerce_good_value(self):
        self.assertEqual(fields.NotificationLevel.WARNING,
                         self.field.coerce('obj', 'attr', 'warning'))

    def test_coerce_bad_value(self):
        self.assertRaises(ValueError, self.field.coerce, 'obj', 'attr',
                          'not_a_priority')


class TestNotificationStatusField(test_base.TestCase):

    def setUp(self):
        super(TestNotificationStatusField, self).setUp()
        self.field = fields.NotificationStatusField()

    def test_coerce_good_value(self):
        self.assertEqual(fields.NotificationStatus.START,
                         self.field.coerce('obj', 'attr', 'start'))

    def test_coerce_bad_value(self):
        self.assertRaises(ValueError, self.field.coerce, 'obj', 'attr',
                          'not_a_priority')

ironic-15.0.0/ironic/tests/unit/objects/test_deploy_template.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from ironic.common import context
from ironic.db import api as dbapi
from ironic import objects
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


class TestDeployTemplateObject(db_base.DbTestCase,
                               obj_utils.SchemasTestMixIn):

    def setUp(self):
        super(TestDeployTemplateObject, self).setUp()
        self.ctxt = context.get_admin_context()
        self.fake_template = db_utils.get_test_deploy_template()

    @mock.patch.object(dbapi.IMPL, 'create_deploy_template', autospec=True)
    def test_create(self, mock_create):
        template = objects.DeployTemplate(context=self.context,
                                          **self.fake_template)
        mock_create.return_value = db_utils.get_test_deploy_template()

        template.create()

        args, _kwargs = mock_create.call_args
        self.assertEqual(1, mock_create.call_count)

        self.assertEqual(self.fake_template['name'], template.name)
        self.assertEqual(self.fake_template['steps'], template.steps)
        self.assertEqual(self.fake_template['extra'], template.extra)

    @mock.patch.object(dbapi.IMPL, 'update_deploy_template', autospec=True)
    def test_save(self, mock_update):
        template = objects.DeployTemplate(context=self.context,
                                          **self.fake_template)
        template.obj_reset_changes()

        mock_update.return_value = db_utils.get_test_deploy_template(
            name='CUSTOM_DT2')

        template.name = 'CUSTOM_DT2'
        template.save()

        mock_update.assert_called_once_with(
            self.fake_template['uuid'],
            {'name': 'CUSTOM_DT2',
             'version': objects.DeployTemplate.VERSION})

        self.assertEqual('CUSTOM_DT2', template.name)

    @mock.patch.object(dbapi.IMPL, 'destroy_deploy_template', autospec=True)
    def test_destroy(self, mock_destroy):
        template = objects.DeployTemplate(context=self.context,
                                          id=self.fake_template['id'])

        template.destroy()

        mock_destroy.assert_called_once_with(self.fake_template['id'])

    @mock.patch.object(dbapi.IMPL, 'get_deploy_template_by_id',
                       autospec=True)
    def test_get_by_id(self, mock_get):
        mock_get.return_value = self.fake_template

        template = objects.DeployTemplate.get_by_id(
            self.context, self.fake_template['id'])

        mock_get.assert_called_once_with(self.fake_template['id'])
        self.assertEqual(self.fake_template['name'], template.name)
        self.assertEqual(self.fake_template['uuid'], template.uuid)
        self.assertEqual(self.fake_template['steps'], template.steps)
        self.assertEqual(self.fake_template['extra'], template.extra)

    @mock.patch.object(dbapi.IMPL, 'get_deploy_template_by_uuid',
                       autospec=True)
    def test_get_by_uuid(self, mock_get):
        mock_get.return_value = self.fake_template

        template = objects.DeployTemplate.get_by_uuid(
            self.context, self.fake_template['uuid'])

        mock_get.assert_called_once_with(self.fake_template['uuid'])
        self.assertEqual(self.fake_template['name'], template.name)
        self.assertEqual(self.fake_template['uuid'], template.uuid)
        self.assertEqual(self.fake_template['steps'], template.steps)
        self.assertEqual(self.fake_template['extra'], template.extra)

    @mock.patch.object(dbapi.IMPL, 'get_deploy_template_by_name',
                       autospec=True)
    def test_get_by_name(self, mock_get):
        mock_get.return_value = self.fake_template

        template = objects.DeployTemplate.get_by_name(
            self.context, self.fake_template['name'])

        mock_get.assert_called_once_with(self.fake_template['name'])
        self.assertEqual(self.fake_template['name'], template.name)
        self.assertEqual(self.fake_template['uuid'], template.uuid)
        self.assertEqual(self.fake_template['steps'], template.steps)
        self.assertEqual(self.fake_template['extra'], template.extra)

    @mock.patch.object(dbapi.IMPL, 'get_deploy_template_list',
                       autospec=True)
    def test_list(self, mock_list):
        mock_list.return_value = [self.fake_template]

        templates = objects.DeployTemplate.list(self.context)

        mock_list.assert_called_once_with(limit=None, marker=None,
                                          sort_dir=None, sort_key=None)
        self.assertEqual(1, len(templates))
        self.assertEqual(self.fake_template['name'], templates[0].name)
        self.assertEqual(self.fake_template['uuid'], templates[0].uuid)
        self.assertEqual(self.fake_template['steps'], templates[0].steps)
        self.assertEqual(self.fake_template['extra'], templates[0].extra)

    @mock.patch.object(dbapi.IMPL, 'get_deploy_template_list_by_names',
                       autospec=True)
    def test_list_by_names(self, mock_list):
        mock_list.return_value = [self.fake_template]

        names = [self.fake_template['name']]
        templates = objects.DeployTemplate.list_by_names(self.context, names)

        mock_list.assert_called_once_with(names)
        self.assertEqual(1, len(templates))
        self.assertEqual(self.fake_template['name'], templates[0].name)
        self.assertEqual(self.fake_template['uuid'], templates[0].uuid)
        self.assertEqual(self.fake_template['steps'], templates[0].steps)
        self.assertEqual(self.fake_template['extra'], templates[0].extra)

    @mock.patch.object(dbapi.IMPL, 'get_deploy_template_by_uuid',
                       autospec=True)
    def test_refresh(self, mock_get):
        uuid = self.fake_template['uuid']
        mock_get.side_effect = [dict(self.fake_template),
                                dict(self.fake_template, name='CUSTOM_DT2')]

        template = objects.DeployTemplate.get_by_uuid(self.context, uuid)
        self.assertEqual(self.fake_template['name'], template.name)

        template.refresh()
        self.assertEqual('CUSTOM_DT2', template.name)

        expected = [mock.call(uuid), mock.call(uuid)]
        self.assertEqual(expected, mock_get.call_args_list)
        self.assertEqual(self.context, template._context)

ironic-15.0.0/ironic/tests/unit/objects/utils.py

# Copyright 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Ironic object test utilities.""" import functools import inspect from ironic.common import exception from ironic.common.i18n import _ from ironic import objects from ironic.objects import notification from ironic.tests.unit.db import utils as db_utils def check_keyword_arguments(func): @functools.wraps(func) def wrapper(**kw): obj_type = kw.pop('object_type') result = func(**kw) extra_args = set(kw) - set(result) if extra_args: raise exception.InvalidParameterValue( _("Unknown keyword arguments (%(extra)s) were passed " "while creating a test %(object_type)s object.") % {"extra": ", ".join(extra_args), "object_type": obj_type}) return result return wrapper def get_test_node(ctxt, **kw): """Return a Node object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. """ kw['object_type'] = 'node' get_db_node_checked = check_keyword_arguments(db_utils.get_test_node) db_node = get_db_node_checked(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_node['id'] node = objects.Node(ctxt) for key in db_node: if key == 'traits': # convert list of strings to object raw_traits = db_node['traits'] trait_list = [] for raw_trait in raw_traits: trait = objects.Trait(ctxt, trait=raw_trait) trait_list.append(trait) node.traits = objects.TraitList(ctxt, objects=trait_list) node.traits.obj_reset_changes() else: setattr(node, key, db_node[key]) return node def create_test_node(ctxt, **kw): """Create and return a test node object. 
Create a node in the DB and return a Node object with appropriate attributes. """ node = get_test_node(ctxt, **kw) node.create() return node def get_test_port(ctxt, **kw): """Return a Port object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. """ kw['object_type'] = 'port' get_db_port_checked = check_keyword_arguments( db_utils.get_test_port) db_port = get_db_port_checked(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_port['id'] port = objects.Port(ctxt) for key in db_port: setattr(port, key, db_port[key]) return port def create_test_port(ctxt, **kw): """Create and return a test port object. Create a port in the DB and return a Port object with appropriate attributes. """ port = get_test_port(ctxt, **kw) port.create() return port def get_test_chassis(ctxt, **kw): """Return a Chassis object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. """ kw['object_type'] = 'chassis' get_db_chassis_checked = check_keyword_arguments( db_utils.get_test_chassis) db_chassis = get_db_chassis_checked(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_chassis['id'] chassis = objects.Chassis(ctxt) for key in db_chassis: setattr(chassis, key, db_chassis[key]) return chassis def create_test_chassis(ctxt, **kw): """Create and return a test chassis object. Create a chassis in the DB and return a Chassis object with appropriate attributes. """ chassis = get_test_chassis(ctxt, **kw) chassis.create() return chassis def get_test_portgroup(ctxt, **kw): """Return a Portgroup object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. 
""" kw['object_type'] = 'portgroup' get_db_port_group_checked = check_keyword_arguments( db_utils.get_test_portgroup) db_portgroup = get_db_port_group_checked(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_portgroup['id'] portgroup = objects.Portgroup(ctxt) for key in db_portgroup: setattr(portgroup, key, db_portgroup[key]) return portgroup def create_test_portgroup(ctxt, **kw): """Create and return a test portgroup object. Create a portgroup in the DB and return a Portgroup object with appropriate attributes. """ portgroup = get_test_portgroup(ctxt, **kw) portgroup.create() return portgroup def get_test_volume_connector(ctxt, **kw): """Return a VolumeConnector object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. """ db_volume_connector = db_utils.get_test_volume_connector(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_volume_connector['id'] volume_connector = objects.VolumeConnector(ctxt) for key in db_volume_connector: setattr(volume_connector, key, db_volume_connector[key]) return volume_connector def create_test_volume_connector(ctxt, **kw): """Create and return a test volume connector object. Create a volume connector in the DB and return a VolumeConnector object with appropriate attributes. """ volume_connector = get_test_volume_connector(ctxt, **kw) volume_connector.create() return volume_connector def get_test_volume_target(ctxt, **kw): """Return a VolumeTarget object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. 
""" db_volume_target = db_utils.get_test_volume_target(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_volume_target['id'] volume_target = objects.VolumeTarget(ctxt) for key in db_volume_target: setattr(volume_target, key, db_volume_target[key]) return volume_target def create_test_volume_target(ctxt, **kw): """Create and return a test volume target object. Create a volume target in the DB and return a VolumeTarget object with appropriate attributes. """ volume_target = get_test_volume_target(ctxt, **kw) volume_target.create() return volume_target def get_test_bios_setting(ctxt, **kw): """Return a BiosSettingList object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. """ kw['object_type'] = 'bios' db_bios_setting = db_utils.get_test_bios_setting(**kw) bios_setting = objects.BIOSSetting(ctxt) for key in db_bios_setting: setattr(bios_setting, key, db_bios_setting[key]) return bios_setting def create_test_bios_setting(ctxt, **kw): """Create and return a test bios setting list object. Create a BIOS setting list in the DB and return a BIOSSettingList object with appropriate attributes. """ bios_setting = get_test_bios_setting(ctxt, **kw) bios_setting.create() return bios_setting def create_test_conductor(ctxt, **kw): """Register and return a test conductor object.""" args = db_utils.get_test_conductor(**kw) conductor = objects.Conductor.register(ctxt, args['hostname'], args['drivers'], args['conductor_group'], update_existing=True) return conductor def get_test_allocation(ctxt, **kw): """Return an Allocation object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. 
""" kw['object_type'] = 'allocation' get_db_allocation_checked = check_keyword_arguments( db_utils.get_test_allocation) db_allocation = get_db_allocation_checked(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_allocation['id'] allocation = objects.Allocation(ctxt) for key in db_allocation: setattr(allocation, key, db_allocation[key]) return allocation def create_test_allocation(ctxt, **kw): """Create and return a test allocation object. Create an allocation in the DB and return an Allocation object with appropriate attributes. """ allocation = get_test_allocation(ctxt, **kw) allocation.create() return allocation def get_test_deploy_template(ctxt, **kw): """Return a DeployTemplate object with appropriate attributes. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. """ db_template = db_utils.get_test_deploy_template(**kw) # Let DB generate ID if it isn't specified explicitly if 'id' not in kw: del db_template['id'] if 'steps' not in kw: for step in db_template['steps']: del step['id'] del step['deploy_template_id'] else: for kw_step, template_step in zip(kw['steps'], db_template['steps']): if 'id' not in kw_step and 'id' in template_step: del template_step['id'] template = objects.DeployTemplate(ctxt) for key in db_template: setattr(template, key, db_template[key]) return template def create_test_deploy_template(ctxt, **kw): """Create and return a test deploy template object. NOTE: The object leaves the attributes marked as changed, such that a create() could be used to commit it to the DB. """ template = get_test_deploy_template(ctxt, **kw) template.create() return template def get_payloads_with_schemas(from_module): """Get the Payload classes with SCHEMAs defined. :param from_module: module from which to get the classes. :returns: list of Payload classes that have SCHEMAs defined. 
""" payloads = [] for name, payload in inspect.getmembers(from_module, inspect.isclass): # Assume that Payload class names end in 'Payload'. if name.endswith("Payload"): base_classes = inspect.getmro(payload) if notification.NotificationPayloadBase not in base_classes: # The class may have the desired name but it isn't a REAL # Payload class; skip it. continue # First class is this payload class, parent class is the 2nd # one in the tuple parent = base_classes[1] if (not hasattr(parent, 'SCHEMA') or parent.SCHEMA != payload.SCHEMA): payloads.append(payload) return payloads class SchemasTestMixIn(object): def _check_payload_schemas(self, from_module, fields): """Assert that the Payload SCHEMAs have the expected properties. A payload's SCHEMA should: 1. Have each of its keys in the payload's fields 2. Have each member of the schema match with a corresponding field in the object """ resource = from_module.__name__.split('.')[-1] payloads = get_payloads_with_schemas(from_module) for payload in payloads: for schema_key in payload.SCHEMA: self.assertIn(schema_key, payload.fields, "for %s, schema key %s is not in fields" % (payload, schema_key)) key = payload.SCHEMA[schema_key][1] self.assertIn(key, fields, "for %s, schema key %s has invalid %s " "field %s" % (payload, schema_key, resource, key)) ironic-15.0.0/ironic/tests/unit/objects/__init__.py0000664000175000017500000000000013652514273022246 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/objects/test_chassis.py0000664000175000017500000001357313652514273023226 0ustar zuulzuul00000000000000# coding=utf-8 # # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_utils import uuidutils from testtools import matchers from ironic.common import exception from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils class TestChassisObject(db_base.DbTestCase, obj_utils.SchemasTestMixIn): def setUp(self): super(TestChassisObject, self).setUp() self.fake_chassis = db_utils.get_test_chassis() def test_get_by_id(self): chassis_id = self.fake_chassis['id'] with mock.patch.object(self.dbapi, 'get_chassis_by_id', autospec=True) as mock_get_chassis: mock_get_chassis.return_value = self.fake_chassis chassis = objects.Chassis.get(self.context, chassis_id) mock_get_chassis.assert_called_once_with(chassis_id) self.assertEqual(self.context, chassis._context) def test_get_by_uuid(self): uuid = self.fake_chassis['uuid'] with mock.patch.object(self.dbapi, 'get_chassis_by_uuid', autospec=True) as mock_get_chassis: mock_get_chassis.return_value = self.fake_chassis chassis = objects.Chassis.get(self.context, uuid) mock_get_chassis.assert_called_once_with(uuid) self.assertEqual(self.context, chassis._context) def test_get_bad_id_and_uuid(self): self.assertRaises(exception.InvalidIdentity, objects.Chassis.get, self.context, 'not-a-uuid') def test_create(self): chassis = objects.Chassis(self.context, **self.fake_chassis) with mock.patch.object(self.dbapi, 'create_chassis', autospec=True) as mock_create_chassis: mock_create_chassis.return_value = db_utils.get_test_chassis() chassis.create() 
args, _kwargs = mock_create_chassis.call_args self.assertEqual(objects.Chassis.VERSION, args[0]['version']) def test_save(self): uuid = self.fake_chassis['uuid'] extra = {"test": 123} test_time = datetime.datetime(2000, 1, 1, 0, 0) with mock.patch.object(self.dbapi, 'get_chassis_by_uuid', autospec=True) as mock_get_chassis: mock_get_chassis.return_value = self.fake_chassis with mock.patch.object(self.dbapi, 'update_chassis', autospec=True) as mock_update_chassis: mock_update_chassis.return_value = ( db_utils.get_test_chassis(extra=extra, updated_at=test_time)) c = objects.Chassis.get_by_uuid(self.context, uuid) c.extra = extra c.save() mock_get_chassis.assert_called_once_with(uuid) mock_update_chassis.assert_called_once_with( uuid, {'version': objects.Chassis.VERSION, 'extra': {"test": 123}}) self.assertEqual(self.context, c._context) res_updated_at = (c.updated_at).replace(tzinfo=None) self.assertEqual(test_time, res_updated_at) def test_refresh(self): uuid = self.fake_chassis['uuid'] new_uuid = uuidutils.generate_uuid() returns = [dict(self.fake_chassis, uuid=uuid), dict(self.fake_chassis, uuid=new_uuid)] expected = [mock.call(uuid), mock.call(uuid)] with mock.patch.object(self.dbapi, 'get_chassis_by_uuid', side_effect=returns, autospec=True) as mock_get_chassis: c = objects.Chassis.get_by_uuid(self.context, uuid) self.assertEqual(uuid, c.uuid) c.refresh() self.assertEqual(new_uuid, c.uuid) self.assertEqual(expected, mock_get_chassis.call_args_list) self.assertEqual(self.context, c._context) # NOTE(vsaienko) current implementation of update_chassis() dbapi is # differ from other object like update_port() or node_update() which # allows to perform object.save() after object.refresh() # This test will avoid update_chassis() regressions in future. 
def test_save_after_refresh(self): # Ensure that it's possible to do object.save() after object.refresh() db_chassis = db_utils.create_test_chassis() c = objects.Chassis.get_by_uuid(self.context, db_chassis.uuid) c_copy = objects.Chassis.get_by_uuid(self.context, db_chassis.uuid) c.description = 'b240' c.save() c_copy.refresh() c_copy.description = 'aaff' # Ensure this passes and an exception is not generated c_copy.save() def test_list(self): with mock.patch.object(self.dbapi, 'get_chassis_list', autospec=True) as mock_get_list: mock_get_list.return_value = [self.fake_chassis] chassis = objects.Chassis.list(self.context) self.assertThat(chassis, matchers.HasLength(1)) self.assertIsInstance(chassis[0], objects.Chassis) self.assertEqual(self.context, chassis[0]._context) def test_payload_schemas(self): self._check_payload_schemas(objects.chassis, objects.Chassis.fields) ironic-15.0.0/ironic/tests/unit/drivers/0000775000175000017500000000000013652514443020173 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/test_utils.py0000664000175000017500000004163113652514273022752 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
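The test_utils.py tests below exercise `driver_utils.capabilities_to_dict`, which parses the comma-separated `key:value` capabilities string stored in node properties and rejects malformed input. A minimal standalone sketch of that parsing behavior (`parse_capabilities` is a hypothetical helper name for illustration, not the ironic implementation):

```python
# Standalone sketch of the "capabilities" string parsing that
# driver_utils.capabilities_to_dict is tested against below.
# parse_capabilities is a hypothetical name, not an ironic API.
def parse_capabilities(capabilities):
    # Non-string input (dict, int, ...) is rejected outright.
    if not isinstance(capabilities, str):
        raise ValueError("Value of 'capabilities' must be string. "
                         "Got %s" % type(capabilities))
    caps = {}
    if not capabilities:
        return caps
    for pair in capabilities.split(','):
        key, sep, value = pair.partition(':')
        # Both a key and a value are required around the colon.
        if not sep or not key.strip() or not value.strip():
            raise ValueError('Malformed capabilities value: %s' % pair)
        caps[key.strip()] = value.strip()
    return caps
```

Usage mirrors the test cases: `parse_capabilities('a:b, c: d')` yields `{'a': 'b', 'c': 'd'}`, the empty string yields `{}`, and inputs such as `'xpto'` or `'xpto:a,'` raise an error.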
import datetime import os import mock from oslo_config import cfg from oslo_utils import timeutils from ironic.common import exception from ironic.common import swift from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent_client from ironic.drivers.modules import fake from ironic.drivers import utils as driver_utils from ironic.tests import base as tests_base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class UtilsTestCase(db_base.DbTestCase): def setUp(self): super(UtilsTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context) def test_get_node_mac_addresses(self): ports = [] ports.append( obj_utils.create_test_port( self.context, address='aa:bb:cc:dd:ee:ff', uuid='bb43dc0b-03f2-4d2e-ae87-c02d7f33cc53', node_id=self.node.id) ) ports.append( obj_utils.create_test_port( self.context, address='dd:ee:ff:aa:bb:cc', uuid='4fc26c0b-03f2-4d2e-ae87-c02d7f33c234', node_id=self.node.id) ) with task_manager.acquire(self.context, self.node.uuid) as task: node_macs = driver_utils.get_node_mac_addresses(task) self.assertEqual(sorted([p.address for p in ports]), sorted(node_macs)) def test_get_node_capability(self): properties = {'capabilities': 'cap1:value1, cap2: value2'} self.node.properties = properties expected = 'value1' expected2 = 'value2' result = driver_utils.get_node_capability(self.node, 'cap1') result2 = driver_utils.get_node_capability(self.node, 'cap2') self.assertEqual(expected, result) self.assertEqual(expected2, result2) def test_get_node_capability_returns_none(self): properties = {'capabilities': 'cap1:value1,cap2:value2'} self.node.properties = properties result = driver_utils.get_node_capability(self.node, 'capX') self.assertIsNone(result) def test_add_node_capability(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['capabilities'] = '' 
driver_utils.add_node_capability(task, 'boot_mode', 'bios') self.assertEqual('boot_mode:bios', task.node.properties['capabilities']) def test_add_node_capability_append(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['capabilities'] = 'a:b,c:d' driver_utils.add_node_capability(task, 'boot_mode', 'bios') self.assertEqual('a:b,c:d,boot_mode:bios', task.node.properties['capabilities']) def test_add_node_capability_append_duplicate(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['capabilities'] = 'a:b,c:d' driver_utils.add_node_capability(task, 'a', 'b') self.assertEqual('a:b,c:d,a:b', task.node.properties['capabilities']) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) def test_ensure_next_boot_device(self, node_set_boot_device_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_internal_info['persistent_boot_device'] = 'pxe' driver_utils.ensure_next_boot_device( task, {'force_boot_device': True} ) node_set_boot_device_mock.assert_called_once_with(task, 'pxe') def test_ensure_next_boot_device_clears_is_next_boot_persistent(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_internal_info['persistent_boot_device'] = 'pxe' task.node.driver_internal_info['is_next_boot_persistent'] = False driver_utils.ensure_next_boot_device( task, {'force_boot_device': True} ) task.node.refresh() self.assertNotIn('is_next_boot_persistent', task.node.driver_internal_info) def test_force_persistent_boot_true(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_info['ipmi_force_boot_device'] = True ret = driver_utils.force_persistent_boot(task, 'pxe', True) self.assertIsNone(ret) task.node.refresh() self.assertIn(('persistent_boot_device', 'pxe'), task.node.driver_internal_info.items()) 
self.assertNotIn('is_next_boot_persistent', task.node.driver_internal_info) def test_force_persistent_boot_false(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ret = driver_utils.force_persistent_boot(task, 'pxe', False) self.assertIsNone(ret) task.node.refresh() self.assertIs( False, task.node.driver_internal_info['is_next_boot_persistent']) def test_capabilities_to_dict(self): capabilities_more_than_one_item = 'a:b,c:d' capabilities_exactly_one_item = 'e:f' # Testing empty capabilities self.assertEqual( {}, driver_utils.capabilities_to_dict('') ) self.assertEqual( {'e': 'f'}, driver_utils.capabilities_to_dict(capabilities_exactly_one_item) ) self.assertEqual( {'a': 'b', 'c': 'd'}, driver_utils.capabilities_to_dict(capabilities_more_than_one_item) ) def test_capabilities_to_dict_with_only_key_or_value_fail(self): capabilities_only_key_or_value = 'xpto' exc = self.assertRaises( exception.InvalidParameterValue, driver_utils.capabilities_to_dict, capabilities_only_key_or_value ) self.assertEqual('Malformed capabilities value: xpto', str(exc)) def test_capabilities_to_dict_with_invalid_character_fail(self): for test_capabilities in ('xpto:a,', ',xpto:a'): exc = self.assertRaises( exception.InvalidParameterValue, driver_utils.capabilities_to_dict, test_capabilities ) self.assertEqual('Malformed capabilities value: ', str(exc)) def test_capabilities_to_dict_with_incorrect_format_fail(self): for test_capabilities in (':xpto,', 'xpto:,', ':,'): exc = self.assertRaises( exception.InvalidParameterValue, driver_utils.capabilities_to_dict, test_capabilities ) self.assertEqual('Malformed capabilities value: ', str(exc)) def test_capabilities_not_string(self): capabilities_already_dict = {'a': 'b'} capabilities_something_else = 42 exc = self.assertRaises( exception.InvalidParameterValue, driver_utils.capabilities_to_dict, capabilities_already_dict ) self.assertEqual("Value of 'capabilities' must be string. 
Got " + str(dict), str(exc)) exc = self.assertRaises( exception.InvalidParameterValue, driver_utils.capabilities_to_dict, capabilities_something_else ) self.assertEqual("Value of 'capabilities' must be string. Got " + str(int), str(exc)) def test_normalize_mac_string(self): mac_raw = "0A:1B-2C-3D:4F" mac_clean = driver_utils.normalize_mac(mac_raw) self.assertEqual("0a1b2c3d4f", mac_clean) def test_normalize_mac_unicode(self): mac_raw = u"0A:1B-2C-3D:4F" mac_clean = driver_utils.normalize_mac(mac_raw) self.assertEqual("0a1b2c3d4f", mac_clean) class UtilsRamdiskLogsTestCase(tests_base.TestCase): def setUp(self): super(UtilsRamdiskLogsTestCase, self).setUp() self.node = obj_utils.get_test_node(self.context) @mock.patch.object(timeutils, 'utcnow', autospec=True) def test_get_ramdisk_logs_file_name(self, mock_utcnow): mock_utcnow.return_value = datetime.datetime(2000, 1, 1, 0, 0) name = driver_utils.get_ramdisk_logs_file_name(self.node) expected_name = ('1be26c0b-03f2-4d2e-ae87-c02d7f33c123_' '2000-01-01-00-00-00.tar.gz') self.assertEqual(expected_name, name) # with instance_info instance_uuid = '7a5641ba-d264-424a-a9d7-e2a293ca482b' node2 = obj_utils.get_test_node( self.context, instance_uuid=instance_uuid) name = driver_utils.get_ramdisk_logs_file_name(node2) expected_name = ('1be26c0b-03f2-4d2e-ae87-c02d7f33c123_' + instance_uuid + '_2000-01-01-00-00-00.tar.gz') self.assertEqual(expected_name, name) @mock.patch.object(driver_utils, 'store_ramdisk_logs', autospec=True) @mock.patch.object(agent_client.AgentClient, 'collect_system_logs', autospec=True) def test_collect_ramdisk_logs(self, mock_collect, mock_store): logs = 'Gary the Snail' mock_collect.return_value = {'command_result': {'system_logs': logs}} driver_utils.collect_ramdisk_logs(self.node) mock_store.assert_called_once_with(self.node, logs) @mock.patch.object(driver_utils.LOG, 'error', autospec=True) @mock.patch.object(driver_utils, 'store_ramdisk_logs', autospec=True) 
@mock.patch.object(agent_client.AgentClient, 'collect_system_logs', autospec=True) def test_collect_ramdisk_logs_IPA_command_fail( self, mock_collect, mock_store, mock_log): error_str = 'MR. KRABS! I WANNA GO TO BED!' mock_collect.return_value = {'faultstring': error_str} driver_utils.collect_ramdisk_logs(self.node) # assert store was never invoked self.assertFalse(mock_store.called) mock_log.assert_called_once_with( mock.ANY, {'node': self.node.uuid, 'error': error_str}) @mock.patch.object(driver_utils, 'store_ramdisk_logs', autospec=True) @mock.patch.object(agent_client.AgentClient, 'collect_system_logs', autospec=True) def test_collect_ramdisk_logs_storage_command_fail( self, mock_collect, mock_store): mock_collect.side_effect = exception.IronicException('boom') self.assertIsNone(driver_utils.collect_ramdisk_logs(self.node)) self.assertFalse(mock_store.called) @mock.patch.object(driver_utils, 'store_ramdisk_logs', autospec=True) @mock.patch.object(agent_client.AgentClient, 'collect_system_logs', autospec=True) def _collect_ramdisk_logs_storage_fail( self, expected_exception, mock_collect, mock_store): mock_store.side_effect = expected_exception logs = 'Gary the Snail' mock_collect.return_value = {'command_result': {'system_logs': logs}} driver_utils.collect_ramdisk_logs(self.node) mock_store.assert_called_once_with(self.node, logs) @mock.patch.object(driver_utils.LOG, 'exception', autospec=True) def test_collect_ramdisk_logs_storage_fail_fs(self, mock_log): error = IOError('boom') self._collect_ramdisk_logs_storage_fail(error) mock_log.assert_called_once_with( mock.ANY, {'node': self.node.uuid, 'error': error}) self.assertIn('file-system', mock_log.call_args[0][0]) @mock.patch.object(driver_utils.LOG, 'error', autospec=True) def test_collect_ramdisk_logs_storage_fail_swift(self, mock_log): error = exception.SwiftOperationError('boom') self._collect_ramdisk_logs_storage_fail(error) mock_log.assert_called_once_with( mock.ANY, {'node': self.node.uuid, 'error': 
error}) self.assertIn('Swift', mock_log.call_args[0][0]) @mock.patch.object(driver_utils.LOG, 'exception', autospec=True) def test_collect_ramdisk_logs_storage_fail_unkown(self, mock_log): error = Exception('boom') self._collect_ramdisk_logs_storage_fail(error) mock_log.assert_called_once_with( mock.ANY, {'node': self.node.uuid, 'error': error}) self.assertIn('Unknown error', mock_log.call_args[0][0]) @mock.patch.object(swift, 'SwiftAPI', autospec=True) @mock.patch.object(driver_utils, 'get_ramdisk_logs_file_name', autospec=True) def test_store_ramdisk_logs_swift(self, mock_logs_name, mock_swift): container_name = 'ironic_test_container' file_name = 'ironic_test_file.tar.gz' b64str = 'ZW5jb2RlZHN0cmluZw==\n' cfg.CONF.set_override('deploy_logs_storage_backend', 'swift', 'agent') cfg.CONF.set_override( 'deploy_logs_swift_container', container_name, 'agent') cfg.CONF.set_override('deploy_logs_swift_days_to_expire', 1, 'agent') mock_logs_name.return_value = file_name driver_utils.store_ramdisk_logs(self.node, b64str) mock_swift.return_value.create_object.assert_called_once_with( container_name, file_name, mock.ANY, object_headers={'X-Delete-After': '86400'}) mock_logs_name.assert_called_once_with(self.node, label=None) @mock.patch.object(os, 'makedirs', autospec=True) @mock.patch.object(driver_utils, 'get_ramdisk_logs_file_name', autospec=True) def test_store_ramdisk_logs_local(self, mock_logs_name, mock_makedirs): file_name = 'ironic_test_file.tar.gz' b64str = 'ZW5jb2RlZHN0cmluZw==\n' log_path = '/foo/bar' cfg.CONF.set_override('deploy_logs_local_path', log_path, 'agent') mock_logs_name.return_value = file_name with mock.patch.object(driver_utils, 'open', new=mock.mock_open(), create=True) as mock_open: driver_utils.store_ramdisk_logs(self.node, b64str) expected_path = os.path.join(log_path, file_name) mock_open.assert_called_once_with(expected_path, 'wb') mock_makedirs.assert_called_once_with(log_path) mock_logs_name.assert_called_once_with(self.node, label=None) 
class MixinVendorInterfaceTestCase(db_base.DbTestCase): def setUp(self): super(MixinVendorInterfaceTestCase, self).setUp() self.a = fake.FakeVendorA() self.b = fake.FakeVendorB() self.mapping = {'first_method': self.a, 'second_method': self.b, 'third_method_sync': self.b, 'fourth_method_shared_lock': self.b} self.vendor = driver_utils.MixinVendorInterface(self.mapping) self.node = obj_utils.create_test_node(self.context, driver='fake-hardware') def test_vendor_interface_get_properties(self): expected = {'A1': 'A1 description. Required.', 'A2': 'A2 description. Optional.', 'B1': 'B1 description. Required.', 'B2': 'B2 description. Required.'} props = self.vendor.get_properties() self.assertEqual(expected, props) @mock.patch.object(fake.FakeVendorA, 'validate', autospec=True) def test_vendor_interface_validate_valid_methods(self, mock_fakea_validate): with task_manager.acquire(self.context, self.node.uuid) as task: self.vendor.validate(task, method='first_method') mock_fakea_validate.assert_called_once_with( self.vendor.mapping['first_method'], task, method='first_method') def test_vendor_interface_validate_bad_method(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, self.vendor.validate, task, method='fake_method') ironic-15.0.0/ironic/tests/unit/drivers/test_ilo.py0000664000175000017500000002452313652514273022376 0ustar zuulzuul00000000000000# Copyright 2017 Hewlett-Packard Enterprise Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Test class for iLO Drivers """ from ironic.conductor import task_manager from ironic.drivers import ilo from ironic.drivers.modules import agent from ironic.drivers.modules.ilo import management from ironic.drivers.modules.ilo import raid from ironic.drivers.modules import inspector from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class IloHardwareTestCase(db_base.DbTestCase): def setUp(self): super(IloHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['ilo'], enabled_boot_interfaces=['ilo-virtual-media', 'ilo-pxe'], enabled_bios_interfaces=['no-bios', 'ilo'], enabled_console_interfaces=['ilo'], enabled_deploy_interfaces=['iscsi', 'direct'], enabled_inspect_interfaces=['ilo'], enabled_management_interfaces=['ilo'], enabled_power_interfaces=['ilo'], enabled_raid_interfaces=['no-raid', 'agent'], enabled_rescue_interfaces=['no-rescue', 'agent'], enabled_vendor_interfaces=['ilo', 'no-vendor']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='ilo') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, ilo.boot.IloVirtualMediaBoot) self.assertIsInstance(task.driver.bios, ilo.bios.IloBIOS) self.assertIsInstance(task.driver.console, ilo.console.IloConsoleInterface) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.inspect, ilo.inspect.IloInspect) self.assertIsInstance(task.driver.management, ilo.management.IloManagement) self.assertIsInstance(task.driver.power, ilo.power.IloPower) self.assertIsInstance(task.driver.raid, noop.NoRAID) self.assertIsInstance(task.driver.vendor, ilo.vendor.VendorPassthru) self.assertIsInstance(task.driver.rescue, noop.NoRescue) def 
test_override_with_inspector(self): self.config(enabled_inspect_interfaces=['inspector', 'ilo']) node = obj_utils.create_test_node( self.context, driver='ilo', deploy_interface='direct', inspect_interface='inspector', raid_interface='agent', vendor_interface='no-vendor') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, ilo.boot.IloVirtualMediaBoot) self.assertIsInstance(task.driver.console, ilo.console.IloConsoleInterface) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.inspect, inspector.Inspector) self.assertIsInstance(task.driver.management, ilo.management.IloManagement) self.assertIsInstance(task.driver.power, ilo.power.IloPower) self.assertIsInstance(task.driver.raid, agent.AgentRAID) self.assertIsInstance(task.driver.rescue, noop.NoRescue) self.assertIsInstance(task.driver.vendor, noop.NoVendor) def test_override_with_pxe(self): node = obj_utils.create_test_node( self.context, driver='ilo', boot_interface='ilo-pxe', raid_interface='agent') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, ilo.boot.IloPXEBoot) self.assertIsInstance(task.driver.console, ilo.console.IloConsoleInterface) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.inspect, ilo.inspect.IloInspect) self.assertIsInstance(task.driver.management, ilo.management.IloManagement) self.assertIsInstance(task.driver.power, ilo.power.IloPower) self.assertIsInstance(task.driver.raid, agent.AgentRAID) self.assertIsInstance(task.driver.rescue, noop.NoRescue) self.assertIsInstance(task.driver.vendor, ilo.vendor.VendorPassthru) def test_override_with_agent_rescue(self): self.config(enabled_inspect_interfaces=['inspector', 'ilo']) node = obj_utils.create_test_node( self.context, driver='ilo', deploy_interface='direct', rescue_interface='agent', raid_interface='agent') with task_manager.acquire(self.context, node.id) 
as task: self.assertIsInstance(task.driver.boot, ilo.boot.IloVirtualMediaBoot) self.assertIsInstance(task.driver.console, ilo.console.IloConsoleInterface) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.inspect, ilo.inspect.IloInspect) self.assertIsInstance(task.driver.management, ilo.management.IloManagement) self.assertIsInstance(task.driver.power, ilo.power.IloPower) self.assertIsInstance(task.driver.raid, agent.AgentRAID) self.assertIsInstance(task.driver.rescue, agent.AgentRescue) self.assertIsInstance(task.driver.vendor, ilo.vendor.VendorPassthru) def test_override_with_no_bios(self): node = obj_utils.create_test_node( self.context, driver='ilo', boot_interface='ilo-pxe', bios_interface='no-bios', deploy_interface='direct', raid_interface='agent') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, ilo.boot.IloPXEBoot) self.assertIsInstance(task.driver.bios, noop.NoBIOS) self.assertIsInstance(task.driver.console, ilo.console.IloConsoleInterface) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.raid, agent.AgentRAID) class Ilo5HardwareTestCase(db_base.DbTestCase): def setUp(self): super(Ilo5HardwareTestCase, self).setUp() self.config(enabled_hardware_types=['ilo5'], enabled_boot_interfaces=['ilo-virtual-media', 'ilo-pxe'], enabled_console_interfaces=['ilo'], enabled_deploy_interfaces=['iscsi', 'direct'], enabled_inspect_interfaces=['ilo'], enabled_management_interfaces=['ilo5'], enabled_power_interfaces=['ilo'], enabled_raid_interfaces=['ilo5'], enabled_rescue_interfaces=['no-rescue', 'agent'], enabled_vendor_interfaces=['ilo', 'no-vendor']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='ilo5') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.raid, raid.Ilo5RAID) self.assertIsInstance(task.driver.management, management.Ilo5Management) def 
test_override_with_no_raid(self): self.config(enabled_raid_interfaces=['no-raid', 'ilo5']) node = obj_utils.create_test_node(self.context, driver='ilo5', raid_interface='no-raid') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.raid, noop.NoRAID) self.assertIsInstance(task.driver.boot, ilo.boot.IloVirtualMediaBoot) self.assertIsInstance(task.driver.console, ilo.console.IloConsoleInterface) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.inspect, ilo.inspect.IloInspect) self.assertIsInstance(task.driver.management, ilo.management.IloManagement) self.assertIsInstance(task.driver.power, ilo.power.IloPower) self.assertIsInstance(task.driver.rescue, noop.NoRescue) self.assertIsInstance(task.driver.vendor, ilo.vendor.VendorPassthru) ironic-15.0.0/ironic/tests/unit/drivers/test_drac.py0000664000175000017500000001654613652514273022532 0ustar zuulzuul00000000000000# Copyright (c) 2017-2019 Dell Inc. or its subsidiaries. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from ironic.conductor import task_manager from ironic.drivers.modules import agent from ironic.drivers.modules import drac from ironic.drivers.modules import inspector from ironic.drivers.modules import ipxe from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules.network import flat as flat_net from ironic.drivers.modules import noop from ironic.drivers.modules.storage import noop as noop_storage from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class IDRACHardwareTestCase(db_base.DbTestCase): def setUp(self): super(IDRACHardwareTestCase, self).setUp() self.config_temp_dir('http_root', group='deploy') self.config(enabled_hardware_types=['idrac'], enabled_boot_interfaces=[ 'idrac-redfish-virtual-media', 'ipxe', 'pxe'], enabled_management_interfaces=[ 'idrac', 'idrac-redfish', 'idrac-wsman'], enabled_power_interfaces=[ 'idrac', 'idrac-redfish', 'idrac-wsman'], enabled_inspect_interfaces=[ 'idrac', 'idrac-redfish', 'idrac-wsman', 'inspector', 'no-inspect'], enabled_network_interfaces=['flat', 'neutron', 'noop'], enabled_raid_interfaces=[ 'idrac', 'idrac-wsman', 'no-raid'], enabled_vendor_interfaces=[ 'idrac', 'idrac-wsman', 'no-vendor'], enabled_bios_interfaces=[ 'idrac-wsman', 'no-bios']) def _validate_interfaces(self, driver, **kwargs): self.assertIsInstance( driver.boot, kwargs.get('boot', ipxe.iPXEBoot)) self.assertIsInstance( driver.deploy, kwargs.get('deploy', iscsi_deploy.ISCSIDeploy)) self.assertIsInstance( driver.management, kwargs.get('management', drac.management.DracWSManManagement)) self.assertIsInstance( driver.power, kwargs.get('power', drac.power.DracWSManPower)) self.assertIsInstance( driver.console, kwargs.get('console', noop.NoConsole)) self.assertIsInstance( driver.inspect, kwargs.get('inspect', drac.inspect.DracWSManInspect)) self.assertIsInstance( driver.network, kwargs.get('network', flat_net.FlatNetwork)) self.assertIsInstance( driver.raid, kwargs.get('raid', 
drac.raid.DracWSManRAID)) self.assertIsInstance( driver.storage, kwargs.get('storage', noop_storage.NoopStorage)) self.assertIsInstance( driver.vendor, kwargs.get('vendor', drac.vendor_passthru.DracWSManVendorPassthru)) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='idrac') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task.driver) def test_override_with_inspector(self): node = obj_utils.create_test_node(self.context, driver='idrac', inspect_interface='inspector') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task.driver, inspect=inspector.Inspector) def test_override_with_agent(self): node = obj_utils.create_test_node(self.context, driver='idrac', deploy_interface='direct', inspect_interface='inspector') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task.driver, deploy=agent.AgentDeploy, inspect=inspector.Inspector) def test_override_with_raid(self): node = obj_utils.create_test_node(self.context, driver='idrac', raid_interface='no-raid') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task.driver, raid=noop.NoRAID) def test_override_no_vendor(self): node = obj_utils.create_test_node(self.context, driver='idrac', vendor_interface='no-vendor') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task.driver, vendor=noop.NoVendor) def test_override_with_idrac(self): node = obj_utils.create_test_node(self.context, driver='idrac', management_interface='idrac', power_interface='idrac', inspect_interface='idrac', raid_interface='idrac', vendor_interface='idrac') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces( task.driver, management=drac.management.DracManagement, power=drac.power.DracPower, inspect=drac.inspect.DracInspect, raid=drac.raid.DracRAID, vendor=drac.vendor_passthru.DracVendorPassthru) def 
test_override_with_redfish_management_and_power(self): node = obj_utils.create_test_node(self.context, driver='idrac', management_interface='idrac-redfish', power_interface='idrac-redfish') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces( task.driver, management=drac.management.DracRedfishManagement, power=drac.power.DracRedfishPower) def test_override_with_redfish_inspect(self): node = obj_utils.create_test_node(self.context, driver='idrac', inspect_interface='idrac-redfish') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces( task.driver, inspect=drac.inspect.DracRedfishInspect) def test_override_with_redfish_virtual_media_boot(self): node = obj_utils.create_test_node( self.context, driver='idrac', boot_interface='idrac-redfish-virtual-media') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces( task.driver, boot=drac.boot.DracRedfishVirtualMediaBoot) ironic-15.0.0/ironic/tests/unit/drivers/test_redfish.py0000664000175000017500000000471413652514273023237 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from ironic.conductor import task_manager from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.drivers.modules.redfish import boot as redfish_boot from ironic.drivers.modules.redfish import inspect as redfish_inspect from ironic.drivers.modules.redfish import management as redfish_mgmt from ironic.drivers.modules.redfish import power as redfish_power from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class RedfishHardwareTestCase(db_base.DbTestCase): def setUp(self): super(RedfishHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['redfish'], enabled_power_interfaces=['redfish'], enabled_boot_interfaces=['redfish-virtual-media'], enabled_management_interfaces=['redfish'], enabled_inspect_interfaces=['redfish'], enabled_bios_interfaces=['redfish']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='redfish') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.inspect, redfish_inspect.RedfishInspect) self.assertIsInstance(task.driver.management, redfish_mgmt.RedfishManagement) self.assertIsInstance(task.driver.power, redfish_power.RedfishPower) self.assertIsInstance(task.driver.boot, redfish_boot.RedfishVirtualMediaBoot) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.console, noop.NoConsole) self.assertIsInstance(task.driver.raid, noop.NoRAID) ironic-15.0.0/ironic/tests/unit/drivers/boot.ipxe0000664000175000017500000000136713652514273022035 0ustar zuulzuul00000000000000#!ipxe # NOTE(lucasagomes): Loop over all network devices and boot from # the first one capable of booting. 
For more information see: # https://bugs.launchpad.net/ironic/+bug/1504482 set netid:int32 -1 :loop inc netid || chain pxelinux.cfg/${mac:hexhyp} || goto old_rom isset ${net${netid}/mac} || goto loop_done echo Attempting to boot from MAC ${net${netid}/mac:hexhyp} chain pxelinux.cfg/${net${netid}/mac:hexhyp} || goto loop :loop_done echo PXE boot failed! No configuration found for any of the present NICs. echo Press any key to reboot... prompt --timeout 180 reboot :old_rom echo PXE boot failed! No configuration found for NIC ${mac:hexhyp}. echo Please update your iPXE ROM and retry. echo Press any key to reboot... prompt --timeout 180 reboot ironic-15.0.0/ironic/tests/unit/drivers/ipxe_config_timeout.template0000664000175000017500000000177313652514273026001 0ustar zuulzuul00000000000000#!ipxe set attempts:int32 10 set i:int32 0 goto deploy :deploy imgfree kernel --timeout 120 http://1.2.3.4:1234/deploy_kernel selinux=0 troubleshoot=0 text test_param BOOTIF=${mac} initrd=deploy_ramdisk || goto retry initrd --timeout 120 http://1.2.3.4:1234/deploy_ramdisk || goto retry boot :retry iseq ${i} ${attempts} && goto fail || inc i echo No response, retrying in ${i} seconds. sleep ${i} goto deploy :fail echo Failed to get a response after ${attempts} attempts echo Powering off in 30 seconds.
sleep 30 poweroff :boot_partition imgfree kernel --timeout 120 http://1.2.3.4:1234/kernel root={{ ROOT }} ro text test_param initrd=ramdisk || goto boot_partition initrd --timeout 120 http://1.2.3.4:1234/ramdisk || goto boot_partition boot :boot_ramdisk imgfree kernel --timeout 120 http://1.2.3.4:1234/kernel root=/dev/ram0 text test_param ramdisk_param initrd=ramdisk || goto boot_ramdisk initrd --timeout 120 http://1.2.3.4:1234/ramdisk || goto boot_ramdisk boot :boot_whole_disk sanboot --no-describe ironic-15.0.0/ironic/tests/unit/drivers/ipxe_config.template0000664000175000017500000000164713652514273024231 0ustar zuulzuul00000000000000#!ipxe set attempts:int32 10 set i:int32 0 goto deploy :deploy imgfree kernel http://1.2.3.4:1234/deploy_kernel selinux=0 troubleshoot=0 text test_param BOOTIF=${mac} initrd=deploy_ramdisk || goto retry initrd http://1.2.3.4:1234/deploy_ramdisk || goto retry boot :retry iseq ${i} ${attempts} && goto fail || inc i echo No response, retrying in ${i} seconds. sleep ${i} goto deploy :fail echo Failed to get a response after ${attempts} attempts echo Powering off in 30 seconds.
sleep 30 poweroff :boot_partition imgfree kernel http://1.2.3.4:1234/kernel root={{ ROOT }} ro text test_param initrd=ramdisk || goto boot_partition initrd http://1.2.3.4:1234/ramdisk || goto boot_partition boot :boot_ramdisk imgfree kernel http://1.2.3.4:1234/kernel root=/dev/ram0 text test_param ramdisk_param initrd=ramdisk || goto boot_ramdisk initrd http://1.2.3.4:1234/ramdisk || goto boot_ramdisk boot :boot_whole_disk sanboot --no-describe ironic-15.0.0/ironic/tests/unit/drivers/pxe_grub_config.template0000664000175000017500000000147013652514273025073 0ustar zuulzuul00000000000000set default=deploy set timeout=5 set hidden_timeout_quiet=false menuentry "deploy" { linuxefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_kernel selinux=0 troubleshoot=0 text test_param boot_server=192.0.2.1 initrdefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_ramdisk } menuentry "boot_partition" { linuxefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel root=(( ROOT )) ro text test_param boot_server=192.0.2.1 initrdefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk } menuentry "boot_ramdisk" { linuxefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel root=/dev/ram0 text test_param ramdisk_param initrdefi /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk } menuentry "boot_whole_disk" { linuxefi chain.c32 mbr:(( DISK_IDENTIFIER )) } ironic-15.0.0/ironic/tests/unit/drivers/test_base.py0000664000175000017500000010447513652514273022532 0ustar zuulzuul00000000000000# Copyright 2014 Cisco Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import mock from ironic.common import components from ironic.common import exception from ironic.common import indicator_states from ironic.common import raid from ironic.common import states from ironic.drivers import base as driver_base from ironic.drivers.modules import fake from ironic.tests import base class FakeVendorInterface(driver_base.VendorInterface): def get_properties(self): pass @driver_base.passthru(['POST']) def noexception(self): return "Fake" @driver_base.driver_passthru(['POST']) def driver_noexception(self): return "Fake" @driver_base.passthru(['POST']) def ironicexception(self): raise exception.IronicException("Fake!") @driver_base.passthru(['POST']) def normalexception(self): raise Exception("Fake!") @driver_base.passthru(['POST'], require_exclusive_lock=False) def shared_task(self): return "shared fake" def validate(self, task, **kwargs): pass def driver_validate(self, **kwargs): pass class PassthruDecoratorTestCase(base.TestCase): def setUp(self): super(PassthruDecoratorTestCase, self).setUp() self.fvi = FakeVendorInterface() def test_passthru_noexception(self): result = self.fvi.noexception() self.assertEqual("Fake", result) @mock.patch.object(driver_base, 'LOG', autospec=True) def test_passthru_ironicexception(self, mock_log): self.assertRaises(exception.IronicException, self.fvi.ironicexception, mock.ANY) mock_log.exception.assert_called_with( mock.ANY, 'ironicexception') @mock.patch.object(driver_base, 'LOG', autospec=True) def test_passthru_nonironicexception(self, mock_log): 
self.assertRaises(exception.VendorPassthruException, self.fvi.normalexception, mock.ANY) mock_log.exception.assert_called_with( mock.ANY, 'normalexception') def test_passthru_shared_task_metadata(self): self.assertIn('require_exclusive_lock', self.fvi.shared_task._vendor_metadata[1]) self.assertFalse( self.fvi.shared_task._vendor_metadata[1]['require_exclusive_lock']) def test_passthru_exclusive_task_metadata(self): self.assertIn('require_exclusive_lock', self.fvi.noexception._vendor_metadata[1]) self.assertTrue( self.fvi.noexception._vendor_metadata[1]['require_exclusive_lock']) def test_passthru_check_func_references(self): inst1 = FakeVendorInterface() inst2 = FakeVendorInterface() self.assertNotEqual(inst1.vendor_routes['noexception']['func'], inst2.vendor_routes['noexception']['func']) self.assertNotEqual(inst1.driver_routes['driver_noexception']['func'], inst2.driver_routes['driver_noexception']['func']) class CleanStepDecoratorTestCase(base.TestCase): def setUp(self): super(CleanStepDecoratorTestCase, self).setUp() method_mock = mock.MagicMock() del method_mock._is_clean_step del method_mock._clean_step_priority del method_mock._clean_step_abortable del method_mock._clean_step_argsinfo self.method = method_mock def test__validate_argsinfo(self): # None, empty dict driver_base._validate_argsinfo(None) driver_base._validate_argsinfo({}) # Only description specified driver_base._validate_argsinfo({'arg1': {'description': 'desc1'}}) # Multiple args driver_base._validate_argsinfo({'arg1': {'description': 'desc1', 'required': True}, 'arg2': {'description': 'desc2'}}) def test__validate_argsinfo_not_dict(self): self.assertRaisesRegex(exception.InvalidParameterValue, 'argsinfo.+dictionary', driver_base._validate_argsinfo, 'not-a-dict') def test__validate_argsinfo_arg_not_dict(self): self.assertRaisesRegex(exception.InvalidParameterValue, 'Argument.+dictionary', driver_base._validate_argsinfo, {'arg1': 'not-a-dict'}) def test__validate_argsinfo_arg_empty_dict(self): 
self.assertRaisesRegex(exception.InvalidParameterValue, 'description', driver_base._validate_argsinfo, {'arg1': {}}) def test__validate_argsinfo_arg_missing_description(self): self.assertRaisesRegex(exception.InvalidParameterValue, 'description', driver_base._validate_argsinfo, {'arg1': {'required': True}}) def test__validate_argsinfo_arg_description_invalid(self): self.assertRaisesRegex(exception.InvalidParameterValue, 'string', driver_base._validate_argsinfo, {'arg1': {'description': True}}) def test__validate_argsinfo_arg_required_invalid(self): self.assertRaisesRegex(exception.InvalidParameterValue, 'Boolean', driver_base._validate_argsinfo, {'arg1': {'description': 'desc1', 'required': 'maybe'}}) def test__validate_argsinfo_arg_unknown_key(self): self.assertRaisesRegex(exception.InvalidParameterValue, 'invalid', driver_base._validate_argsinfo, {'arg1': {'description': 'desc1', 'unknown': 'bad'}}) def test_clean_step_priority_only(self): d = driver_base.clean_step(priority=10) d(self.method) self.assertTrue(self.method._is_clean_step) self.assertEqual(10, self.method._clean_step_priority) self.assertFalse(self.method._clean_step_abortable) self.assertIsNone(self.method._clean_step_argsinfo) def test_clean_step_all_args(self): argsinfo = {'arg1': {'description': 'desc1', 'required': True}} d = driver_base.clean_step(priority=0, abortable=True, argsinfo=argsinfo) d(self.method) self.assertTrue(self.method._is_clean_step) self.assertEqual(0, self.method._clean_step_priority) self.assertTrue(self.method._clean_step_abortable) self.assertEqual(argsinfo, self.method._clean_step_argsinfo) def test_clean_step_bad_priority(self): d = driver_base.clean_step(priority='hi') self.assertRaisesRegex(exception.InvalidParameterValue, 'priority', d, self.method) self.assertTrue(self.method._is_clean_step) self.assertFalse(hasattr(self.method, '_clean_step_priority')) self.assertFalse(hasattr(self.method, '_clean_step_abortable')) self.assertFalse(hasattr(self.method, 
'_clean_step_argsinfo')) def test_clean_step_bad_abortable(self): d = driver_base.clean_step(priority=0, abortable='blue') self.assertRaisesRegex(exception.InvalidParameterValue, 'abortable', d, self.method) self.assertTrue(self.method._is_clean_step) self.assertEqual(0, self.method._clean_step_priority) self.assertFalse(hasattr(self.method, '_clean_step_abortable')) self.assertFalse(hasattr(self.method, '_clean_step_argsinfo')) @mock.patch.object(driver_base, '_validate_argsinfo', spec_set=True, autospec=True) def test_clean_step_bad_argsinfo(self, mock_valid): mock_valid.side_effect = exception.InvalidParameterValue('bad') d = driver_base.clean_step(priority=0, argsinfo=100) self.assertRaises(exception.InvalidParameterValue, d, self.method) self.assertTrue(self.method._is_clean_step) self.assertEqual(0, self.method._clean_step_priority) self.assertFalse(self.method._clean_step_abortable) self.assertFalse(hasattr(self.method, '_clean_step_argsinfo')) class CleanStepTestCase(base.TestCase): def test_get_and_execute_clean_steps(self): # Create a fake Driver class, create some clean steps, make sure # they are listed correctly, and attempt to execute one of them method_mock = mock.MagicMock(spec_set=[]) method_args_mock = mock.MagicMock(spec_set=[]) task_mock = mock.MagicMock(spec_set=[]) class BaseTestClass(driver_base.BaseInterface): def get_properties(self): return {} def validate(self, task): pass class TestClass(BaseTestClass): interface_type = 'test' @driver_base.clean_step(priority=0) def manual_method(self, task): pass @driver_base.clean_step(priority=10, abortable=True) def automated_method(self, task): method_mock(task) def not_clean_method(self, task): pass class TestClass2(BaseTestClass): interface_type = 'test2' @driver_base.clean_step(priority=0) def manual_method2(self, task): pass @driver_base.clean_step(priority=20, abortable=True) def automated_method2(self, task): method_mock(task) def not_clean_method2(self, task): pass class 
TestClass3(BaseTestClass): interface_type = 'test3' @driver_base.clean_step(priority=0, abortable=True, argsinfo={ 'arg1': {'description': 'desc1', 'required': True}}) def manual_method3(self, task, **kwargs): method_args_mock(task, **kwargs) @driver_base.clean_step(priority=15, argsinfo={ 'arg10': {'description': 'desc10'}}) def automated_method3(self, task, **kwargs): pass def not_clean_method3(self, task): pass obj = TestClass() obj2 = TestClass2() obj3 = TestClass3() self.assertEqual(2, len(obj.get_clean_steps(task_mock))) # Ensure the steps look correct self.assertEqual(10, obj.get_clean_steps(task_mock)[0]['priority']) self.assertTrue(obj.get_clean_steps(task_mock)[0]['abortable']) self.assertEqual('test', obj.get_clean_steps( task_mock)[0]['interface']) self.assertEqual('automated_method', obj.get_clean_steps( task_mock)[0]['step']) self.assertEqual(0, obj.get_clean_steps(task_mock)[1]['priority']) self.assertFalse(obj.get_clean_steps(task_mock)[1]['abortable']) self.assertEqual('test', obj.get_clean_steps( task_mock)[1]['interface']) self.assertEqual('manual_method', obj.get_clean_steps( task_mock)[1]['step']) # Ensure the second obj get different clean steps self.assertEqual(2, len(obj2.get_clean_steps(task_mock))) # Ensure the steps look correct self.assertEqual(20, obj2.get_clean_steps(task_mock)[0]['priority']) self.assertTrue(obj2.get_clean_steps(task_mock)[0]['abortable']) self.assertEqual('test2', obj2.get_clean_steps( task_mock)[0]['interface']) self.assertEqual('automated_method2', obj2.get_clean_steps( task_mock)[0]['step']) self.assertEqual(0, obj2.get_clean_steps(task_mock)[1]['priority']) self.assertFalse(obj2.get_clean_steps(task_mock)[1]['abortable']) self.assertEqual('test2', obj2.get_clean_steps( task_mock)[1]['interface']) self.assertEqual('manual_method2', obj2.get_clean_steps( task_mock)[1]['step']) self.assertIsNone(obj2.get_clean_steps(task_mock)[0]['argsinfo']) # Ensure the third obj has different clean steps self.assertEqual(2, 
len(obj3.get_clean_steps(task_mock))) self.assertEqual(15, obj3.get_clean_steps(task_mock)[0]['priority']) self.assertFalse(obj3.get_clean_steps(task_mock)[0]['abortable']) self.assertEqual('test3', obj3.get_clean_steps( task_mock)[0]['interface']) self.assertEqual('automated_method3', obj3.get_clean_steps( task_mock)[0]['step']) self.assertEqual({'arg10': {'description': 'desc10'}}, obj3.get_clean_steps(task_mock)[0]['argsinfo']) self.assertEqual(0, obj3.get_clean_steps(task_mock)[1]['priority']) self.assertTrue(obj3.get_clean_steps(task_mock)[1]['abortable']) self.assertEqual(obj3.interface_type, obj3.get_clean_steps( task_mock)[1]['interface']) self.assertEqual('manual_method3', obj3.get_clean_steps( task_mock)[1]['step']) self.assertEqual({'arg1': {'description': 'desc1', 'required': True}}, obj3.get_clean_steps(task_mock)[1]['argsinfo']) # Ensure we can execute the function. obj.execute_clean_step(task_mock, obj.get_clean_steps(task_mock)[0]) method_mock.assert_called_once_with(task_mock) args = {'arg1': 'val1'} clean_step = {'interface': 'test3', 'step': 'manual_method3', 'args': args} obj3.execute_clean_step(task_mock, clean_step) method_args_mock.assert_called_once_with(task_mock, **args) class DeployStepDecoratorTestCase(base.TestCase): def setUp(self): super(DeployStepDecoratorTestCase, self).setUp() method_mock = mock.MagicMock() del method_mock._is_deploy_step del method_mock._deploy_step_priority del method_mock._deploy_step_argsinfo self.method = method_mock def test_deploy_step_priority_only(self): d = driver_base.deploy_step(priority=10) d(self.method) self.assertTrue(self.method._is_deploy_step) self.assertEqual(10, self.method._deploy_step_priority) self.assertIsNone(self.method._deploy_step_argsinfo) def test_deploy_step_all_args(self): argsinfo = {'arg1': {'description': 'desc1', 'required': True}} d = driver_base.deploy_step(priority=0, argsinfo=argsinfo) d(self.method) self.assertTrue(self.method._is_deploy_step) self.assertEqual(0, 
self.method._deploy_step_priority) self.assertEqual(argsinfo, self.method._deploy_step_argsinfo) def test_deploy_step_bad_priority(self): d = driver_base.deploy_step(priority='hi') self.assertRaisesRegex(exception.InvalidParameterValue, 'priority', d, self.method) self.assertTrue(self.method._is_deploy_step) self.assertFalse(hasattr(self.method, '_deploy_step_priority')) self.assertFalse(hasattr(self.method, '_deploy_step_argsinfo')) @mock.patch.object(driver_base, '_validate_argsinfo', spec_set=True, autospec=True) def test_deploy_step_bad_argsinfo(self, mock_valid): mock_valid.side_effect = exception.InvalidParameterValue('bad') d = driver_base.deploy_step(priority=0, argsinfo=100) self.assertRaises(exception.InvalidParameterValue, d, self.method) self.assertTrue(self.method._is_deploy_step) self.assertEqual(0, self.method._deploy_step_priority) self.assertFalse(hasattr(self.method, '_deploy_step_argsinfo')) class DeployAndCleanStepDecoratorTestCase(base.TestCase): def setUp(self): super(DeployAndCleanStepDecoratorTestCase, self).setUp() method_mock = mock.MagicMock() del method_mock._is_deploy_step del method_mock._deploy_step_priority del method_mock._deploy_step_argsinfo del method_mock._is_clean_step del method_mock._clean_step_priority del method_mock._clean_step_abortable del method_mock._clean_step_argsinfo self.method = method_mock def test_deploy_and_clean_step_priority_only(self): dd = driver_base.deploy_step(priority=10) dc = driver_base.clean_step(priority=11) dd(dc(self.method)) self.assertTrue(self.method._is_deploy_step) self.assertEqual(10, self.method._deploy_step_priority) self.assertIsNone(self.method._deploy_step_argsinfo) self.assertTrue(self.method._is_clean_step) self.assertEqual(11, self.method._clean_step_priority) self.assertFalse(self.method._clean_step_abortable) self.assertIsNone(self.method._clean_step_argsinfo) def test_deploy_and_clean_step_all_args(self): dargsinfo = {'arg1': {'description': 'desc1', 'required': True}} cargsinfo = 
{'arg2': {'description': 'desc2', 'required': False}} dd = driver_base.deploy_step(priority=0, argsinfo=dargsinfo) dc = driver_base.clean_step(priority=0, argsinfo=cargsinfo) dd(dc(self.method)) self.assertTrue(self.method._is_deploy_step) self.assertEqual(0, self.method._deploy_step_priority) self.assertEqual(dargsinfo, self.method._deploy_step_argsinfo) self.assertTrue(self.method._is_clean_step) self.assertEqual(0, self.method._clean_step_priority) self.assertFalse(self.method._clean_step_abortable) self.assertEqual(cargsinfo, self.method._clean_step_argsinfo) def test_clean_and_deploy_step_all_args(self): # Opposite ordering, should make no difference. dargsinfo = {'arg1': {'description': 'desc1', 'required': True}} cargsinfo = {'arg2': {'description': 'desc2', 'required': False}} dd = driver_base.deploy_step(priority=0, argsinfo=dargsinfo) dc = driver_base.clean_step(priority=0, argsinfo=cargsinfo) dc(dd(self.method)) self.assertTrue(self.method._is_deploy_step) self.assertEqual(0, self.method._deploy_step_priority) self.assertEqual(dargsinfo, self.method._deploy_step_argsinfo) self.assertTrue(self.method._is_clean_step) self.assertEqual(0, self.method._clean_step_priority) self.assertFalse(self.method._clean_step_abortable) self.assertEqual(cargsinfo, self.method._clean_step_argsinfo) class DeployStepTestCase(base.TestCase): def test_get_and_execute_deploy_steps(self): # Create a fake Driver class, create some deploy steps, make sure # they are listed correctly, and attempt to execute one of them method_mock = mock.MagicMock(spec_set=[]) method_args_mock = mock.MagicMock(spec_set=[]) task_mock = mock.MagicMock(spec_set=[]) class BaseTestClass(driver_base.BaseInterface): def get_properties(self): return {} def validate(self, task): pass class TestClass(BaseTestClass): interface_type = 'test' @driver_base.deploy_step(priority=0) def deploy_zero(self, task): pass @driver_base.deploy_step(priority=10) def deploy_ten(self, task): method_mock(task) def 
not_deploy_method(self, task): pass class TestClass2(BaseTestClass): interface_type = 'test2' @driver_base.deploy_step(priority=0) def deploy_zero2(self, task): pass @driver_base.deploy_step(priority=20) def deploy_twenty(self, task): method_mock(task) def not_deploy_method2(self, task): pass class TestClass3(BaseTestClass): interface_type = 'test3' @driver_base.deploy_step(priority=0, argsinfo={ 'arg1': {'description': 'desc1', 'required': True}}) def deploy_zero3(self, task, **kwargs): method_args_mock(task, **kwargs) @driver_base.deploy_step(priority=15, argsinfo={ 'arg10': {'description': 'desc10'}}) def deploy_fifteen(self, task, **kwargs): pass def not_deploy_method3(self, task): pass obj = TestClass() obj2 = TestClass2() obj3 = TestClass3() self.assertEqual(2, len(obj.get_deploy_steps(task_mock))) # Ensure the steps look correct self.assertEqual(10, obj.get_deploy_steps(task_mock)[0]['priority']) self.assertEqual('test', obj.get_deploy_steps( task_mock)[0]['interface']) self.assertEqual('deploy_ten', obj.get_deploy_steps( task_mock)[0]['step']) self.assertEqual(0, obj.get_deploy_steps(task_mock)[1]['priority']) self.assertEqual('test', obj.get_deploy_steps( task_mock)[1]['interface']) self.assertEqual('deploy_zero', obj.get_deploy_steps( task_mock)[1]['step']) # Ensure the second obj has different deploy steps self.assertEqual(2, len(obj2.get_deploy_steps(task_mock))) # Ensure the steps look correct self.assertEqual(20, obj2.get_deploy_steps(task_mock)[0]['priority']) self.assertEqual('test2', obj2.get_deploy_steps( task_mock)[0]['interface']) self.assertEqual('deploy_twenty', obj2.get_deploy_steps( task_mock)[0]['step']) self.assertEqual(0, obj2.get_deploy_steps(task_mock)[1]['priority']) self.assertEqual('test2', obj2.get_deploy_steps( task_mock)[1]['interface']) self.assertEqual('deploy_zero2', obj2.get_deploy_steps( task_mock)[1]['step']) self.assertIsNone(obj2.get_deploy_steps(task_mock)[0]['argsinfo']) # Ensure the third obj has different deploy steps 
self.assertEqual(2, len(obj3.get_deploy_steps(task_mock))) self.assertEqual(15, obj3.get_deploy_steps(task_mock)[0]['priority']) self.assertEqual('test3', obj3.get_deploy_steps( task_mock)[0]['interface']) self.assertEqual('deploy_fifteen', obj3.get_deploy_steps( task_mock)[0]['step']) self.assertEqual({'arg10': {'description': 'desc10'}}, obj3.get_deploy_steps(task_mock)[0]['argsinfo']) self.assertEqual(0, obj3.get_deploy_steps(task_mock)[1]['priority']) self.assertEqual(obj3.interface_type, obj3.get_deploy_steps( task_mock)[1]['interface']) self.assertEqual('deploy_zero3', obj3.get_deploy_steps( task_mock)[1]['step']) self.assertEqual({'arg1': {'description': 'desc1', 'required': True}}, obj3.get_deploy_steps(task_mock)[1]['argsinfo']) # Ensure we can execute the function. obj.execute_deploy_step(task_mock, obj.get_deploy_steps(task_mock)[0]) method_mock.assert_called_once_with(task_mock) args = {'arg1': 'val1'} deploy_step = {'interface': 'test3', 'step': 'deploy_zero3', 'args': args} obj3.execute_deploy_step(task_mock, deploy_step) method_args_mock.assert_called_once_with(task_mock, **args) class MyRAIDInterface(driver_base.RAIDInterface): def create_configuration(self, task, create_root_volume=True, create_nonroot_volumes=True, delete_existing=True): pass def delete_configuration(self, task): pass class RAIDInterfaceTestCase(base.TestCase): @mock.patch.object(driver_base.RAIDInterface, 'validate_raid_config', autospec=True) def test_validate(self, validate_raid_config_mock): raid_interface = MyRAIDInterface() node_mock = mock.MagicMock(target_raid_config='some_raid_config') task_mock = mock.MagicMock(node=node_mock) raid_interface.validate(task_mock) validate_raid_config_mock.assert_called_once_with( raid_interface, task_mock, 'some_raid_config') @mock.patch.object(driver_base.RAIDInterface, 'validate_raid_config', autospec=True) def test_validate_no_target_raid_config(self, validate_raid_config_mock): raid_interface = MyRAIDInterface() node_mock = 
mock.MagicMock(target_raid_config={}) task_mock = mock.MagicMock(node=node_mock) raid_interface.validate(task_mock) self.assertFalse(validate_raid_config_mock.called) @mock.patch.object(raid, 'validate_configuration', autospec=True) def test_validate_raid_config(self, common_validate_mock): with open(driver_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj: raid_schema = json.load(raid_schema_fobj) raid_interface = MyRAIDInterface() raid_interface.validate_raid_config('task', 'some_raid_config') common_validate_mock.assert_called_once_with( 'some_raid_config', raid_schema) @mock.patch.object(raid, 'get_logical_disk_properties', autospec=True) def test_get_logical_disk_properties(self, get_properties_mock): with open(driver_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj: raid_schema = json.load(raid_schema_fobj) raid_interface = MyRAIDInterface() raid_interface.get_logical_disk_properties() get_properties_mock.assert_called_once_with(raid_schema) @mock.patch.object(MyRAIDInterface, 'create_configuration', autospec=True) @mock.patch.object(MyRAIDInterface, 'validate_raid_config', autospec=True) def test_apply_configuration(self, mock_validate, mock_create): raid_interface = MyRAIDInterface() node_mock = mock.MagicMock(target_raid_config=None) task_mock = mock.MagicMock(node=node_mock) mock_create.return_value = states.DEPLOYWAIT raid_config = 'some_raid_config' result = raid_interface.apply_configuration(task_mock, raid_config) self.assertEqual(states.DEPLOYWAIT, result) mock_validate.assert_called_once_with(raid_interface, task_mock, raid_config) mock_create.assert_called_once_with(raid_interface, task_mock, create_root_volume=True, create_nonroot_volumes=True, delete_existing=True) self.assertEqual(raid_config, node_mock.target_raid_config) @mock.patch.object(MyRAIDInterface, 'create_configuration', autospec=True) @mock.patch.object(MyRAIDInterface, 'validate_raid_config', autospec=True) def test_apply_configuration_delete_existing(self, mock_validate, 
mock_create): raid_interface = MyRAIDInterface() node_mock = mock.MagicMock(target_raid_config=None) task_mock = mock.MagicMock(node=node_mock) mock_create.return_value = states.DEPLOYWAIT raid_config = 'some_raid_config' result = raid_interface.apply_configuration(task_mock, raid_config, delete_existing=True) self.assertEqual(states.DEPLOYWAIT, result) mock_validate.assert_called_once_with(raid_interface, task_mock, raid_config) mock_create.assert_called_once_with(raid_interface, task_mock, create_root_volume=True, create_nonroot_volumes=True, delete_existing=True) self.assertEqual(raid_config, node_mock.target_raid_config) @mock.patch.object(MyRAIDInterface, 'create_configuration', autospec=True) @mock.patch.object(MyRAIDInterface, 'validate_raid_config', autospec=True) def test_apply_configuration_invalid(self, mock_validate, mock_create): raid_interface = MyRAIDInterface() node_mock = mock.MagicMock(target_raid_config=None) task_mock = mock.MagicMock(node=node_mock) mock_validate.side_effect = exception.InvalidParameterValue('bad') raid_config = 'some_raid_config' self.assertRaises(exception.InvalidParameterValue, raid_interface.apply_configuration, task_mock, raid_config) mock_validate.assert_called_once_with(raid_interface, task_mock, raid_config) self.assertFalse(mock_create.called) self.assertIsNone(node_mock.target_raid_config) class TestDeployInterface(base.TestCase): @mock.patch.object(driver_base.LOG, 'warning', autospec=True) def test_warning_on_heartbeat(self, mock_log): # NOTE(dtantsur): FakeDeploy does not override heartbeat deploy = fake.FakeDeploy() deploy.heartbeat(mock.Mock(node=mock.Mock(uuid='uuid', driver='driver')), 'url', '3.2.0') self.assertTrue(mock_log.called) class MyBIOSInterface(driver_base.BIOSInterface): def get_properties(self): pass def validate(self, task): pass @driver_base.cache_bios_settings def apply_configuration(self, task, settings): return "return_value_apply_configuration" @driver_base.cache_bios_settings def 
factory_reset(self, task): return "return_value_factory_reset" def cache_bios_settings(self, task): pass class TestBIOSInterface(base.TestCase): @mock.patch.object(MyBIOSInterface, 'cache_bios_settings', autospec=True) def test_apply_configuration_wrapper(self, cache_bios_settings_mock): bios = MyBIOSInterface() task_mock = mock.MagicMock() actual = bios.apply_configuration(task_mock, "") cache_bios_settings_mock.assert_called_once_with(bios, task_mock) self.assertEqual(actual, "return_value_apply_configuration") @mock.patch.object(MyBIOSInterface, 'cache_bios_settings', autospec=True) def test_factory_reset_wrapper(self, cache_bios_settings_mock): bios = MyBIOSInterface() task_mock = mock.MagicMock() actual = bios.factory_reset(task_mock) cache_bios_settings_mock.assert_called_once_with(bios, task_mock) self.assertEqual(actual, "return_value_factory_reset") class TestBootInterface(base.TestCase): def test_validate_rescue_default_impl(self): boot = fake.FakeBoot() task_mock = mock.MagicMock(spec_set=['node']) self.assertRaises(exception.UnsupportedDriverExtension, boot.validate_rescue, task_mock) class TestManagementInterface(base.TestCase): def test_inject_nmi_default_impl(self): management = fake.FakeManagement() task_mock = mock.MagicMock(spec_set=['node']) self.assertRaises(exception.UnsupportedDriverExtension, management.inject_nmi, task_mock) def test_get_supported_boot_modes_default_impl(self): management = fake.FakeManagement() task_mock = mock.MagicMock(spec_set=['node']) self.assertRaises(exception.UnsupportedDriverExtension, management.get_supported_boot_modes, task_mock) def test_set_boot_mode_default_impl(self): management = fake.FakeManagement() task_mock = mock.MagicMock(spec_set=['node']) self.assertRaises(exception.UnsupportedDriverExtension, management.set_boot_mode, task_mock, 'whatever') def test_get_boot_mode_default_impl(self): management = fake.FakeManagement() task_mock = mock.MagicMock(spec_set=['node']) 
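These default-implementation tests build their task mocks with `mock.MagicMock(spec_set=['node'])`. A small standalone sketch of what that restriction buys (the attribute names here are illustrative, not Ironic API):

```python
from unittest import mock

# spec_set=['node'] restricts the mock to the single named attribute:
# touching anything else raises AttributeError, so a test fails fast if
# the code under test reaches for an unexpected field on the task.
task_mock = mock.MagicMock(spec_set=['node'])
task_mock.node.uuid = 'abc'          # 'node' is allowed (a child mock)

unexpected = None
try:
    task_mock.ports                  # not in the spec -> AttributeError
except AttributeError as exc:
    unexpected = exc
```

This is why the tests can assert that a default implementation raises `UnsupportedDriverExtension` without the mock silently absorbing a typo'd attribute access.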
self.assertRaises(exception.UnsupportedDriverExtension, management.get_boot_mode, task_mock) def test_get_supported_indicators_default_impl(self): management = fake.FakeManagement() task_mock = mock.MagicMock(spec_set=['node']) expected = { components.CHASSIS: { 'led-0': { "readonly": True, "states": [ indicator_states.OFF, indicator_states.ON ] } }, components.SYSTEM: { 'led': { "readonly": False, "states": [ indicator_states.BLINKING, indicator_states.OFF, indicator_states.ON ] } } } self.assertEqual( expected, management.get_supported_indicators(task_mock)) def test_set_indicator_state_default_impl(self): management = fake.FakeManagement() task_mock = mock.MagicMock(spec_set=['node']) self.assertRaises(exception.UnsupportedDriverExtension, management.set_indicator_state, task_mock, components.CHASSIS, 'led-0', indicator_states.ON) def test_get_indicator_state_default_impl(self): management = fake.FakeManagement() task_mock = mock.MagicMock(spec_set=['node']) expected = indicator_states.ON self.assertEqual( expected, management.get_indicator_state( task_mock, components.CHASSIS, 'led-0')) class TestBareDriver(base.TestCase): def test_class_variables(self): self.assertEqual(['boot', 'deploy', 'management', 'network', 'power'], driver_base.BareDriver().core_interfaces) self.assertEqual( ['bios', 'console', 'inspect', 'raid', 'rescue', 'storage'], driver_base.BareDriver().optional_interfaces ) ironic-15.0.0/ironic/tests/unit/drivers/test_generic.py0000664000175000017500000001014113652514273023216 0ustar zuulzuul00000000000000# Copyright 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from ironic.common import driver_factory from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers import base as driver_base from ironic.drivers.modules import agent from ironic.drivers.modules import fake from ironic.drivers.modules import inspector from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.drivers.modules import noop_mgmt from ironic.drivers.modules import pxe from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class ManualManagementHardwareTestCase(db_base.DbTestCase): def setUp(self): super(ManualManagementHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['manual-management'], enabled_power_interfaces=['fake'], enabled_management_interfaces=['noop', 'fake']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='manual-management') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.management, noop_mgmt.NoopManagement) self.assertIsInstance(task.driver.power, fake.FakePower) self.assertIsInstance(task.driver.boot, pxe.PXEBoot) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.inspect, noop.NoInspect) self.assertIsInstance(task.driver.raid, noop.NoRAID) def test_supported_interfaces(self): self.config(enabled_inspect_interfaces=['inspector', 'no-inspect'], enabled_raid_interfaces=['agent']) node = obj_utils.create_test_node(self.context, 
driver='manual-management', management_interface='fake', deploy_interface='direct', raid_interface='agent') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.management, fake.FakeManagement) self.assertIsInstance(task.driver.power, fake.FakePower) self.assertIsInstance(task.driver.boot, pxe.PXEBoot) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.inspect, inspector.Inspector) self.assertIsInstance(task.driver.raid, agent.AgentRAID) def test_get_properties(self): # These properties are from vendor (agent) and boot (pxe) interfaces expected_prop_keys = [ 'deploy_forces_oob_reboot', 'deploy_kernel', 'deploy_ramdisk', 'force_persistent_boot_device', 'rescue_kernel', 'rescue_ramdisk'] hardware_type = driver_factory.get_hardware_type("manual-management") properties = hardware_type.get_properties() self.assertEqual(sorted(expected_prop_keys), sorted(properties)) @mock.patch.object(driver_factory, 'default_interface', autospec=True) def test_get_properties_none(self, mock_def_iface): hardware_type = driver_factory.get_hardware_type("manual-management") mock_def_iface.side_effect = exception.NoValidDefaultForInterface("no") properties = hardware_type.get_properties() self.assertEqual({}, properties) self.assertEqual(len(driver_base.ALL_INTERFACES), mock_def_iface.call_count) ironic-15.0.0/ironic/tests/unit/drivers/test_xclarity.py0000664000175000017500000000376413652514273023456 0ustar zuulzuul00000000000000# Copyright 2017 Lenovo, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Test class for XClarity Driver """ from ironic.conductor import task_manager from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import pxe from ironic.drivers.xclarity import management as xc_management from ironic.drivers.xclarity import power as xc_power from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class XClarityHardwareTestCase(db_base.DbTestCase): def setUp(self): super(XClarityHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['xclarity'], enabled_power_interfaces=['xclarity'], enabled_management_interfaces=['xclarity']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='xclarity') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, pxe.PXEBoot) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.management, xc_management.XClarityManagement) self.assertIsInstance(task.driver.power, xc_power.XClarityPower) ironic-15.0.0/ironic/tests/unit/drivers/test_snmp.py0000664000175000017500000000501113652514273022557 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
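The hardware-type test cases in this directory all exercise the same underlying rule: for each interface kind, the first implementation that is both supported by the hardware type and enabled by the operator becomes the default. A simplified, hypothetical sketch of that resolution (the real logic lives in ironic's `driver_factory`; the function below is illustrative only):

```python
def default_interface(supported, enabled):
    """Return the first implementation name that the hardware type
    supports *and* the operator has enabled (simplified sketch of what
    ironic's driver_factory does when composing a node's driver)."""
    for name in supported:          # hardware type's preference order
        if name in enabled:
            return name
    raise LookupError('no valid default implementation')

# e.g. a type preferring 'noop' management gets it when it is enabled:
choice = default_interface(['noop', 'fake'], enabled={'fake', 'noop'})
```

When nothing in the preference list is enabled, the real code raises `NoValidDefaultForInterface`, which is the case `test_get_properties_none` above simulates with a mock side effect.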
import mock from ironic.conductor import task_manager from ironic.drivers.modules import fake from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.drivers.modules import noop_mgmt from ironic.drivers.modules import pxe from ironic.drivers.modules import snmp from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class SNMPHardwareTestCase(db_base.DbTestCase): def setUp(self): super(SNMPHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['snmp'], enabled_management_interfaces=['noop'], enabled_power_interfaces=['snmp']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='snmp') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.power, snmp.SNMPPower) self.assertIsInstance(task.driver.boot, pxe.PXEBoot) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.management, noop_mgmt.NoopManagement) self.assertIsInstance(task.driver.console, noop.NoConsole) self.assertIsInstance(task.driver.raid, noop.NoRAID) @mock.patch.object(fake.LOG, 'warning', autospec=True) def test_fake_management(self, mock_warn): self.config(enabled_management_interfaces=['noop', 'fake']) node = obj_utils.create_test_node(self.context, driver='snmp', management_interface='fake') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.management, fake.FakeManagement) task.driver.management.validate(task) self.assertTrue(mock_warn.called) ironic-15.0.0/ironic/tests/unit/drivers/ipxe_config_boot_from_volume_no_extra_volumes.template0000664000175000017500000000237013652514273033333 0ustar zuulzuul00000000000000#!ipxe set attempts:int32 10 set i:int32 0 goto deploy :deploy imgfree kernel http://1.2.3.4:1234/deploy_kernel selinux=0 troubleshoot=0 text test_param BOOTIF=${mac} initrd=deploy_ramdisk || goto retry initrd 
http://1.2.3.4:1234/deploy_ramdisk || goto retry boot :retry iseq ${i} ${attempts} && goto fail || inc i echo No response, retrying in {i} seconds. sleep ${i} goto deploy :fail echo Failed to get a response after ${attempts} attempts echo Powering off in 30 seconds. sleep 30 poweroff :boot_partition imgfree kernel http://1.2.3.4:1234/kernel root={{ ROOT }} ro text test_param initrd=ramdisk || goto boot_partition initrd http://1.2.3.4:1234/ramdisk || goto boot_partition boot :boot_ramdisk imgfree kernel http://1.2.3.4:1234/kernel root=/dev/ram0 text test_param ramdisk_param initrd=ramdisk || goto boot_ramdisk initrd http://1.2.3.4:1234/ramdisk || goto boot_ramdisk boot :boot_iscsi imgfree set username fake_username set password fake_password set initiator-iqn fake_iqn sanhook --drive 0x80 iscsi:fake_host::3260:0:fake_iqn || goto fail_iscsi_retry sanboot --no-describe || goto fail_iscsi_retry :fail_iscsi_retry echo Failed to attach iSCSI volume(s), retrying in 10 seconds. sleep 10 goto boot_iscsi :boot_whole_disk sanboot --no-describe ironic-15.0.0/ironic/tests/unit/drivers/pxe_config.template0000664000175000017500000000160113652514273024050 0ustar zuulzuul00000000000000default deploy label deploy kernel /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_kernel append initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/deploy_ramdisk selinux=0 troubleshoot=0 text test_param ipappend 2 label boot_partition kernel /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel append initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk root={{ ROOT }} ro text test_param label boot_whole_disk COM32 chain.c32 append mbr:{{ DISK_IDENTIFIER }} label trusted_boot kernel mboot append tboot.gz --- /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel root={{ ROOT }} ro text test_param intel_iommu=on --- /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk label boot_ramdisk kernel /tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/kernel append 
initrd=/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/ramdisk root=/dev/ram0 text test_param ramdisk_param ironic-15.0.0/ironic/tests/unit/drivers/test_ibmc.py0000664000175000017500000000412713652514273022523 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Version 1.0.0 from ironic.conductor import task_manager from ironic.drivers.modules.ibmc import management as ibmc_mgmt from ironic.drivers.modules.ibmc import power as ibmc_power from ironic.drivers.modules.ibmc import vendor as ibmc_vendor from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.drivers.modules import pxe from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class IBMCHardwareTestCase(db_base.DbTestCase): def setUp(self): super(IBMCHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['ibmc'], enabled_power_interfaces=['ibmc'], enabled_management_interfaces=['ibmc'], enabled_vendor_interfaces=['ibmc']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='ibmc') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.management, ibmc_mgmt.IBMCManagement) self.assertIsInstance(task.driver.power, ibmc_power.IBMCPower) self.assertIsInstance(task.driver.boot, pxe.PXEBoot) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.console, noop.NoConsole) 
self.assertIsInstance(task.driver.raid, noop.NoRAID) self.assertIsInstance(task.driver.vendor, ibmc_vendor.IBMCVendor) ironic-15.0.0/ironic/tests/unit/drivers/test_ipmi.py0000664000175000017500000001133413652514273022545 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic.conductor import task_manager from ironic.drivers.modules import agent from ironic.drivers.modules import ipmitool from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.drivers.modules import noop_mgmt from ironic.drivers.modules import pxe from ironic.drivers.modules.storage import cinder from ironic.drivers.modules.storage import noop as noop_storage from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class IPMIHardwareTestCase(db_base.DbTestCase): def setUp(self): super(IPMIHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['ipmi'], enabled_power_interfaces=['ipmitool'], enabled_management_interfaces=['ipmitool', 'noop'], enabled_raid_interfaces=['no-raid', 'agent'], enabled_console_interfaces=['no-console'], enabled_vendor_interfaces=['ipmitool', 'no-vendor']) def _validate_interfaces(self, task, **kwargs): self.assertIsInstance( task.driver.management, kwargs.get('management', ipmitool.IPMIManagement)) self.assertIsInstance( task.driver.power, kwargs.get('power', ipmitool.IPMIPower)) self.assertIsInstance( task.driver.boot, kwargs.get('boot', pxe.PXEBoot)) 
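The `_validate_interfaces` helper keeps these driver-composition tests DRY by checking every interface against a default expectation while letting each test override only the interfaces it changes. The `kwargs.get` pattern in isolation (class and attribute names below are illustrative):

```python
def validate(obj, **overrides):
    """Assert each attribute's type, using a per-attribute default
    unless the caller overrides it -- the same kwargs.get pattern as
    _validate_interfaces above (hypothetical simplified names)."""
    defaults = {'power': int, 'console': str}
    for attr, default_type in defaults.items():
        expected = overrides.get(attr, default_type)
        assert isinstance(getattr(obj, attr), expected)

class Driver:
    power = 42
    console = 'ttyS0'

validate(Driver())          # all defaults hold, no exception raised
```

A test that swaps one interface then calls, say, `validate(obj, console=float)` and leaves every other expectation implicit, which is exactly how `test_override_with_shellinabox` and friends stay short.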
self.assertIsInstance( task.driver.deploy, kwargs.get('deploy', iscsi_deploy.ISCSIDeploy)) self.assertIsInstance( task.driver.console, kwargs.get('console', noop.NoConsole)) self.assertIsInstance( task.driver.raid, kwargs.get('raid', noop.NoRAID)) self.assertIsInstance( task.driver.vendor, kwargs.get('vendor', ipmitool.VendorPassthru)) self.assertIsInstance( task.driver.storage, kwargs.get('storage', noop_storage.NoopStorage)) self.assertIsInstance( task.driver.rescue, kwargs.get('rescue', noop.NoRescue)) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='ipmi') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task) def test_override_with_shellinabox(self): self.config(enabled_console_interfaces=['ipmitool-shellinabox', 'ipmitool-socat']) node = obj_utils.create_test_node( self.context, driver='ipmi', deploy_interface='direct', raid_interface='agent', console_interface='ipmitool-shellinabox', vendor_interface='no-vendor') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces( task, deploy=agent.AgentDeploy, console=ipmitool.IPMIShellinaboxConsole, raid=agent.AgentRAID, vendor=noop.NoVendor) def test_override_with_cinder_storage(self): self.config(enabled_storage_interfaces=['noop', 'cinder']) node = obj_utils.create_test_node( self.context, driver='ipmi', storage_interface='cinder') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task, storage=cinder.CinderStorage) def test_override_with_agent_rescue(self): self.config(enabled_rescue_interfaces=['no-rescue', 'agent']) node = obj_utils.create_test_node( self.context, driver='ipmi', rescue_interface='agent') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task, rescue=agent.AgentRescue) def test_override_with_noop_mgmt(self): self.config(enabled_management_interfaces=['ipmitool', 'noop']) node = obj_utils.create_test_node( self.context, 
driver='ipmi', management_interface='noop') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task, management=noop_mgmt.NoopManagement) ironic-15.0.0/ironic/tests/unit/drivers/ipxe_config_boot_from_volume_extra_volume.template0000664000175000017500000000266513652514273032463 0ustar zuulzuul00000000000000#!ipxe set attempts:int32 10 set i:int32 0 goto deploy :deploy imgfree kernel http://1.2.3.4:1234/deploy_kernel selinux=0 troubleshoot=0 text test_param BOOTIF=${mac} initrd=deploy_ramdisk || goto retry initrd http://1.2.3.4:1234/deploy_ramdisk || goto retry boot :retry iseq ${i} ${attempts} && goto fail || inc i echo No response, retrying in {i} seconds. sleep ${i} goto deploy :fail echo Failed to get a response after ${attempts} attempts echo Powering off in 30 seconds. sleep 30 poweroff :boot_partition imgfree kernel http://1.2.3.4:1234/kernel root={{ ROOT }} ro text test_param initrd=ramdisk || goto boot_partition initrd http://1.2.3.4:1234/ramdisk || goto boot_partition boot :boot_ramdisk imgfree kernel http://1.2.3.4:1234/kernel root=/dev/ram0 text test_param ramdisk_param initrd=ramdisk || goto boot_ramdisk initrd http://1.2.3.4:1234/ramdisk || goto boot_ramdisk boot :boot_iscsi imgfree set username fake_username set password fake_password set initiator-iqn fake_iqn sanhook --drive 0x80 iscsi:fake_host::3260:0:fake_iqn || goto fail_iscsi_retry set username fake_username_1 set password fake_password_1 sanhook --drive 0x81 iscsi:fake_host::3260:1:fake_iqn || goto fail_iscsi_retry set username fake_username set password fake_password sanboot --no-describe || goto fail_iscsi_retry :fail_iscsi_retry echo Failed to attach iSCSI volume(s), retrying in 10 seconds. sleep 10 goto boot_iscsi :boot_whole_disk sanboot --no-describe ironic-15.0.0/ironic/tests/unit/drivers/third_party_driver_mock_specs.py0000664000175000017500000000754513652514273026673 0ustar zuulzuul00000000000000# Copyright 2015 Intel Corporation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """This module provides mock 'specs' for third party modules that can be used when needing to mock those third party modules""" # python-dracclient DRACCLIENT_SPEC = ( 'client', 'constants', 'exceptions' ) DRACCLIENT_CLIENT_MOD_SPEC = ( 'DRACClient', ) DRACCLIENT_CONSTANTS_MOD_SPEC = ( 'POWER_OFF', 'POWER_ON', 'REBOOT', 'RebootRequired', 'RaidStatus' ) DRACCLIENT_CONSTANTS_REBOOT_REQUIRED_MOD_SPEC = ( 'true', 'optional', 'false' ) DRACCLIENT_CONSTANTS_RAID_STATUS_MOD_SPEC = ( 'jbod', 'raid' ) # proliantutils PROLIANTUTILS_SPEC = ( 'exception', 'ilo', 'utils', ) # pywsnmp PYWSNMP_SPEC = ( 'hlapi', 'error', ) # scciclient SCCICLIENT_SPEC = ( 'irmc', ) SCCICLIENT_IRMC_SCCI_SPEC = ( 'POWER_OFF', 'POWER_ON', 'POWER_RESET', 'POWER_SOFT_CYCLE', 'POWER_SOFT_OFF', 'MOUNT_CD', 'POWER_RAISE_NMI', 'UNMOUNT_CD', 'MOUNT_FD', 'UNMOUNT_FD', 'SCCIError', 'SCCIClientError', 'SCCIError', 'SCCIInvalidInputError', 'get_share_type', 'get_client', 'get_report', 'get_sensor_data', 'get_virtual_cd_set_params_cmd', 'get_virtual_fd_set_params_cmd', 'get_essential_properties', 'get_capabilities_properties', ) SCCICLIENT_IRMC_ELCM_SPEC = ( 'backup_bios_config', 'restore_bios_config', 'set_secure_boot_mode', ) SCCICLIENT_VIOM_SPEC = ( 'validate_physical_port_id', 'VIOMConfiguration', ) SCCICLIENT_VIOM_CONF_SPEC = ( 'set_lan_port', 'set_iscsi_volume', 'set_fc_volume', 'apply', 'dump_json', 'terminate', ) REDFISH_SPEC = ( 'redfish', ) SUSHY_SPEC = ( 'auth', 
'exceptions', 'Sushy', 'BOOT_SOURCE_TARGET_PXE', 'BOOT_SOURCE_TARGET_HDD', 'BOOT_SOURCE_TARGET_CD', 'BOOT_SOURCE_TARGET_BIOS_SETUP', 'CHASSIS_INDICATOR_LED_LIT', 'CHASSIS_INDICATOR_LED_BLINKING', 'CHASSIS_INDICATOR_LED_OFF', 'CHASSIS_INDICATOR_LED_UNKNOWN', 'DRIVE_INDICATOR_LED_LIT', 'DRIVE_INDICATOR_LED_BLINKING', 'DRIVE_INDICATOR_LED_OFF', 'DRIVE_INDICATOR_LED_UNKNOWN', 'INDICATOR_LED_LIT', 'INDICATOR_LED_BLINKING', 'INDICATOR_LED_OFF', 'INDICATOR_LED_UNKNOWN', 'SYSTEM_POWER_STATE_ON', 'SYSTEM_POWER_STATE_POWERING_ON', 'SYSTEM_POWER_STATE_OFF', 'SYSTEM_POWER_STATE_POWERING_OFF', 'RESET_ON', 'RESET_FORCE_OFF', 'RESET_GRACEFUL_SHUTDOWN', 'RESET_GRACEFUL_RESTART', 'RESET_FORCE_RESTART', 'RESET_NMI', 'BOOT_SOURCE_ENABLED_CONTINUOUS', 'BOOT_SOURCE_ENABLED_ONCE', 'BOOT_SOURCE_MODE_BIOS', 'BOOT_SOURCE_MODE_UEFI', 'PROCESSOR_ARCH_x86', 'PROCESSOR_ARCH_IA_64', 'PROCESSOR_ARCH_ARM', 'PROCESSOR_ARCH_MIPS', 'PROCESSOR_ARCH_OEM', 'STATE_ENABLED', 'STATE_DISABLED', 'STATE_ABSENT', 'VIRTUAL_MEDIA_CD', 'VIRTUAL_MEDIA_FLOPPY', ) SUSHY_AUTH_SPEC = ( 'BasicAuth', 'SessionAuth', 'SessionOrBasicAuth', ) XCLARITY_SPEC = ( 'client', 'states', 'exceptions', 'models', 'utils', ) XCLARITY_CLIENT_CLS_SPEC = ( ) XCLARITY_STATES_SPEC = ( 'STATE_POWERING_OFF', 'STATE_POWERING_ON', 'STATE_POWER_OFF', 'STATE_POWER_ON', ) # python-ibmcclient IBMCCLIENT_SPEC = ( 'connect', 'exceptions', 'constants', ) ironic-15.0.0/ironic/tests/unit/drivers/test_fake_hardware.py0000664000175000017500000001453213652514273024375 0ustar zuulzuul00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for Fake driver.""" from ironic.common import boot_devices from ironic.common import boot_modes from ironic.common import components from ironic.common import exception from ironic.common import indicator_states from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base as driver_base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils class FakeHardwareTestCase(db_base.DbTestCase): def setUp(self): super(FakeHardwareTestCase, self).setUp() self.node = db_utils.create_test_node() self.task = task_manager.acquire(self.context, self.node.id) self.addCleanup(self.task.release_resources) self.driver = self.task.driver def test_driver_interfaces(self): self.assertIsInstance(self.driver.power, driver_base.PowerInterface) self.assertIsInstance(self.driver.deploy, driver_base.DeployInterface) self.assertIsInstance(self.driver.boot, driver_base.BootInterface) self.assertIsInstance(self.driver.vendor, driver_base.VendorInterface) self.assertIsInstance(self.driver.console, driver_base.ConsoleInterface) def test_get_properties(self): expected = ['B1', 'B2'] properties = self.driver.get_properties() self.assertEqual(sorted(expected), sorted(properties)) def test_power_interface(self): self.assertEqual({}, self.driver.power.get_properties()) self.driver.power.validate(self.task) self.driver.power.get_power_state(self.task) self.assertRaises(exception.InvalidParameterValue, self.driver.power.set_power_state, self.task, states.NOSTATE) 
self.driver.power.set_power_state(self.task, states.POWER_ON) self.driver.power.reboot(self.task) def test_deploy_interface(self): self.assertEqual({}, self.driver.deploy.get_properties()) self.driver.deploy.validate(None) self.driver.deploy.prepare(None) self.driver.deploy.deploy(None) self.driver.deploy.take_over(None) self.driver.deploy.clean_up(None) self.driver.deploy.tear_down(None) def test_boot_interface(self): self.assertEqual({}, self.driver.boot.get_properties()) self.driver.boot.validate(self.task) self.driver.boot.prepare_ramdisk(self.task, {}) self.driver.boot.clean_up_ramdisk(self.task) self.driver.boot.prepare_instance(self.task) self.driver.boot.clean_up_instance(self.task) def test_console_interface(self): self.assertEqual({}, self.driver.console.get_properties()) self.driver.console.validate(self.task) self.driver.console.start_console(self.task) self.driver.console.stop_console(self.task) self.driver.console.get_console(self.task) def test_management_interface_get_properties(self): self.assertEqual({}, self.driver.management.get_properties()) def test_management_interface_validate(self): self.driver.management.validate(self.task) def test_management_interface_set_boot_device_good(self): self.driver.management.set_boot_device(self.task, boot_devices.PXE) def test_management_interface_set_boot_device_fail(self): self.assertRaises(exception.InvalidParameterValue, self.driver.management.set_boot_device, self.task, 'not-supported') def test_management_interface_get_supported_boot_devices(self): expected = [boot_devices.PXE] self.assertEqual( expected, self.driver.management.get_supported_boot_devices(self.task)) def test_management_interface_get_boot_device(self): expected = {'boot_device': boot_devices.PXE, 'persistent': False} self.assertEqual(expected, self.driver.management.get_boot_device(self.task)) def test_management_interface_set_boot_mode_good(self): self.assertRaises( exception.UnsupportedDriverExtension, 
self.driver.management.set_boot_mode, self.task, boot_modes.LEGACY_BIOS ) def test_management_interface_get_supported_indicators(self): expected = { components.CHASSIS: { 'led-0': { "readonly": True, "states": [ indicator_states.OFF, indicator_states.ON ] } }, components.SYSTEM: { 'led': { "readonly": False, "states": [ indicator_states.BLINKING, indicator_states.OFF, indicator_states.ON ] } } } self.assertEqual( expected, self.driver.management.get_supported_indicators(self.task)) def test_management_interface_get_indicator_state(self): expected = indicator_states.ON self.assertEqual( expected, self.driver.management.get_indicator_state( self.task, components.CHASSIS, 'led-0')) def test_management_interface_set_indicator_state_good(self): self.assertRaises( exception.UnsupportedDriverExtension, self.driver.management.set_indicator_state, self.task, components.CHASSIS, 'led-0', indicator_states.ON) def test_inspect_interface(self): self.assertEqual({}, self.driver.inspect.get_properties()) self.driver.inspect.validate(self.task) self.driver.inspect.inspect_hardware(self.task) ironic-15.0.0/ironic/tests/unit/drivers/third_party_driver_mocks.py0000664000175000017500000003071713652514273025656 0ustar zuulzuul00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """This module detects whether third-party libraries, utilized by third-party drivers, are present on the system. 
If they are not, it mocks them and tinkers with sys.modules so that the drivers can be loaded by unit tests, and the unit tests can continue to test the functionality of those drivers without the respective external libraries' actually being present. Any external library required by a third-party driver should be mocked here. Current list of mocked libraries: - proliantutils - pysnmp - scciclient - python-dracclient - python-ibmcclient """ import importlib import sys import mock from oslo_utils import importutils from ironic.drivers.modules import ipmitool from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs # IPMITool driver checks the system for presence of 'ipmitool' binary during # __init__. We bypass that check in order to run the unit tests, which do not # depend on 'ipmitool' being on the system. ipmitool.TIMING_SUPPORT = False ipmitool.DUAL_BRIDGE_SUPPORT = False ipmitool.SINGLE_BRIDGE_SUPPORT = False proliantutils = importutils.try_import('proliantutils') if not proliantutils: proliantutils = mock.MagicMock(spec_set=mock_specs.PROLIANTUTILS_SPEC) sys.modules['proliantutils'] = proliantutils sys.modules['proliantutils.ilo'] = proliantutils.ilo sys.modules['proliantutils.ilo.client'] = proliantutils.ilo.client sys.modules['proliantutils.exception'] = proliantutils.exception sys.modules['proliantutils.utils'] = proliantutils.utils proliantutils.utils.process_firmware_image = mock.MagicMock() proliantutils.exception.IloError = type('IloError', (Exception,), {}) proliantutils.exception.IloLogicalDriveNotFoundError = ( type('IloLogicalDriveNotFoundError', (Exception,), {})) command_exception = type('IloCommandNotSupportedError', (Exception,), {}) proliantutils.exception.IloCommandNotSupportedError = command_exception proliantutils.exception.IloCommandNotSupportedInBiosError = type( 'IloCommandNotSupportedInBiosError', (Exception,), {}) proliantutils.exception.InvalidInputError = type( 'InvalidInputError', (Exception,), {}) 
proliantutils.exception.ImageExtractionFailed = type( 'ImageExtractionFailed', (Exception,), {}) if 'ironic.drivers.ilo' in sys.modules: importlib.reload(sys.modules['ironic.drivers.ilo']) redfish = importutils.try_import('redfish') if not redfish: redfish = mock.MagicMock(spec_set=mock_specs.REDFISH_SPEC) sys.modules['redfish'] = redfish if 'ironic.drivers.redfish' in sys.modules: importlib.reload(sys.modules['ironic.drivers.modules.redfish']) # attempt to load the external 'python-dracclient' library, which is required # by the optional drivers.modules.drac module dracclient = importutils.try_import('dracclient') if not dracclient: dracclient = mock.MagicMock(spec_set=mock_specs.DRACCLIENT_SPEC) dracclient.client = mock.MagicMock( spec_set=mock_specs.DRACCLIENT_CLIENT_MOD_SPEC) dracclient.constants = mock.MagicMock( spec_set=mock_specs.DRACCLIENT_CONSTANTS_MOD_SPEC, POWER_OFF=mock.sentinel.POWER_OFF, POWER_ON=mock.sentinel.POWER_ON, REBOOT=mock.sentinel.REBOOT) dracclient.constants.RebootRequired = mock.MagicMock( spec_set=mock_specs.DRACCLIENT_CONSTANTS_REBOOT_REQUIRED_MOD_SPEC, true=mock.sentinel.true, optional=mock.sentinel.optional, false=mock.sentinel.false) dracclient.constants.RaidStatus = mock.MagicMock( spec_set=mock_specs.DRACCLIENT_CONSTANTS_RAID_STATUS_MOD_SPEC, jbod=mock.sentinel.jbod, raid=mock.sentinel.raid) sys.modules['dracclient'] = dracclient sys.modules['dracclient.client'] = dracclient.client sys.modules['dracclient.constants'] = dracclient.constants sys.modules['dracclient.exceptions'] = dracclient.exceptions dracclient.exceptions.BaseClientException = type('BaseClientException', (Exception,), {}) dracclient.exceptions.DRACRequestFailed = type( 'DRACRequestFailed', (dracclient.exceptions.BaseClientException,), {}) class DRACOperationFailed(dracclient.exceptions.DRACRequestFailed): def __init__(self, **kwargs): super(DRACOperationFailed, self).__init__( 'DRAC operation failed. 
Messages: %(drac_messages)s' % kwargs) dracclient.exceptions.DRACOperationFailed = DRACOperationFailed # Now that the external library has been mocked, if anything had already # loaded any of the drivers, reload them. if 'ironic.drivers.modules.drac' in sys.modules: importlib.reload(sys.modules['ironic.drivers.modules.drac']) # attempt to load the external 'pysnmp' library, which is required by # the optional drivers.modules.snmp module pysnmp = importutils.try_import("pysnmp") if not pysnmp: pysnmp = mock.MagicMock(spec_set=mock_specs.PYWSNMP_SPEC) sys.modules["pysnmp"] = pysnmp sys.modules["pysnmp.hlapi"] = pysnmp.hlapi sys.modules["pysnmp.error"] = pysnmp.error pysnmp.error.PySnmpError = Exception # Patch the RFC1902 integer class with a python int pysnmp.hlapi.Integer = int # if anything has loaded the snmp driver yet, reload it now that the # external library has been mocked if 'ironic.drivers.modules.snmp' in sys.modules: importlib.reload(sys.modules['ironic.drivers.modules.snmp']) # attempt to load the external 'scciclient' library, which is required by # the optional drivers.modules.irmc module scciclient = importutils.try_import('scciclient') if not scciclient: mock_scciclient = mock.MagicMock(spec_set=mock_specs.SCCICLIENT_SPEC) sys.modules['scciclient'] = mock_scciclient sys.modules['scciclient.irmc'] = mock_scciclient.irmc sys.modules['scciclient.irmc.scci'] = mock.MagicMock( spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC, POWER_OFF=mock.sentinel.POWER_OFF, POWER_ON=mock.sentinel.POWER_ON, POWER_RESET=mock.sentinel.POWER_RESET, MOUNT_CD=mock.sentinel.MOUNT_CD, UNMOUNT_CD=mock.sentinel.UNMOUNT_CD, MOUNT_FD=mock.sentinel.MOUNT_FD, UNMOUNT_FD=mock.sentinel.UNMOUNT_FD) sys.modules['scciclient.irmc.elcm'] = mock.MagicMock( spec_set=mock_specs.SCCICLIENT_IRMC_ELCM_SPEC) # if anything has loaded the iRMC driver yet, reload it now that the # external library has been mocked if 'ironic.drivers.modules.irmc' in sys.modules: 
importlib.reload(sys.modules['ironic.drivers.modules.irmc']) # install mock object to prevent the irmc-virtual-media boot interface from # checking whether NFS/CIFS share file system is mounted or not. irmc_boot = importutils.import_module( 'ironic.drivers.modules.irmc.boot') irmc_boot.check_share_fs_mounted_orig = irmc_boot.check_share_fs_mounted irmc_boot.check_share_fs_mounted_patcher = mock.patch( 'ironic.drivers.modules.irmc.boot.check_share_fs_mounted') irmc_boot.check_share_fs_mounted_patcher.return_value = None class MockKwargsException(Exception): def __init__(self, *args, **kwargs): super(MockKwargsException, self).__init__(*args) self.kwargs = kwargs sushy = importutils.try_import('sushy') if not sushy: sushy = mock.MagicMock( spec_set=mock_specs.SUSHY_SPEC, BOOT_SOURCE_TARGET_PXE='Pxe', BOOT_SOURCE_TARGET_HDD='Hdd', BOOT_SOURCE_TARGET_CD='Cd', BOOT_SOURCE_TARGET_BIOS_SETUP='BiosSetup', INDICATOR_LED_LIT='indicator led lit', INDICATOR_LED_BLINKING='indicator led blinking', INDICATOR_LED_OFF='indicator led off', INDICATOR_LED_UNKNOWN='indicator led unknown', SYSTEM_POWER_STATE_ON='on', SYSTEM_POWER_STATE_POWERING_ON='powering on', SYSTEM_POWER_STATE_OFF='off', SYSTEM_POWER_STATE_POWERING_OFF='powering off', RESET_ON='on', RESET_FORCE_OFF='force off', RESET_GRACEFUL_SHUTDOWN='graceful shutdown', RESET_GRACEFUL_RESTART='graceful restart', RESET_FORCE_RESTART='force restart', RESET_NMI='nmi', BOOT_SOURCE_ENABLED_CONTINUOUS='continuous', BOOT_SOURCE_ENABLED_ONCE='once', BOOT_SOURCE_MODE_BIOS='bios', BOOT_SOURCE_MODE_UEFI='uefi', PROCESSOR_ARCH_x86='x86 or x86-64', PROCESSOR_ARCH_IA_64='Intel Itanium', PROCESSOR_ARCH_ARM='ARM', PROCESSOR_ARCH_MIPS='MIPS', PROCESSOR_ARCH_OEM='OEM-defined', STATE_ENABLED='enabled', STATE_DISABLED='disabled', STATE_ABSENT='absent', VIRTUAL_MEDIA_CD='cd', VIRTUAL_MEDIA_FLOPPY='floppy', ) sys.modules['sushy'] = sushy sys.modules['sushy.exceptions'] = sushy.exceptions sushy.exceptions.SushyError = ( type('SushyError', 
(MockKwargsException,), {})) sushy.exceptions.ConnectionError = ( type('ConnectionError', (sushy.exceptions.SushyError,), {})) sushy.exceptions.ResourceNotFoundError = ( type('ResourceNotFoundError', (sushy.exceptions.SushyError,), {})) sushy.exceptions.MissingAttributeError = ( type('MissingAttributeError', (sushy.exceptions.SushyError,), {})) sushy.exceptions.OEMExtensionNotFoundError = ( type('OEMExtensionNotFoundError', (sushy.exceptions.SushyError,), {})) sushy.auth = mock.MagicMock(spec_set=mock_specs.SUSHY_AUTH_SPEC) sys.modules['sushy.auth'] = sushy.auth if 'ironic.drivers.modules.redfish' in sys.modules: importlib.reload(sys.modules['ironic.drivers.modules.redfish']) xclarity_client = importutils.try_import('xclarity_client') if not xclarity_client: xclarity_client = mock.MagicMock(spec_set=mock_specs.XCLARITY_SPEC) sys.modules['xclarity_client'] = xclarity_client sys.modules['xclarity_client.client'] = xclarity_client.client states = mock.MagicMock( spec_set=mock_specs.XCLARITY_STATES_SPEC, STATE_POWER_ON="power on", STATE_POWER_OFF="power off", STATE_POWERING_ON="powering_on", STATE_POWERING_OFF="powering_off") sys.modules['xclarity_client.states'] = states sys.modules['xclarity_client.exceptions'] = xclarity_client.exceptions sys.modules['xclarity_client.utils'] = xclarity_client.utils xclarity_client.exceptions.XClarityException = type('XClarityException', (Exception,), {}) sys.modules['xclarity_client.models'] = xclarity_client.models # python-ibmcclient mocks for HUAWEI rack server driver ibmc_client = importutils.try_import('ibmc_client') if not ibmc_client: ibmc_client = mock.MagicMock(spec_set=mock_specs.IBMCCLIENT_SPEC) sys.modules['ibmc_client'] = ibmc_client # Mock iBMC client exceptions exceptions = mock.MagicMock() exceptions.ConnectionError = ( type('ConnectionError', (MockKwargsException,), {})) exceptions.IBMCClientError = ( type('IBMCClientError', (MockKwargsException,), {})) sys.modules['ibmc_client.exceptions'] = exceptions # Mock iBMC
client constants constants = mock.MagicMock( SYSTEM_POWER_STATE_ON='On', SYSTEM_POWER_STATE_OFF='Off', BOOT_SOURCE_TARGET_NONE='None', BOOT_SOURCE_TARGET_PXE='Pxe', BOOT_SOURCE_TARGET_FLOPPY='Floppy', BOOT_SOURCE_TARGET_CD='Cd', BOOT_SOURCE_TARGET_HDD='Hdd', BOOT_SOURCE_TARGET_BIOS_SETUP='BiosSetup', BOOT_SOURCE_MODE_BIOS='Legacy', BOOT_SOURCE_MODE_UEFI='UEFI', BOOT_SOURCE_ENABLED_ONCE='Once', BOOT_SOURCE_ENABLED_CONTINUOUS='Continuous', BOOT_SOURCE_ENABLED_DISABLED='Disabled', RESET_NMI='Nmi', RESET_ON='On', RESET_FORCE_OFF='ForceOff', RESET_GRACEFUL_SHUTDOWN='GracefulShutdown', RESET_FORCE_RESTART='ForceRestart', RESET_FORCE_POWER_CYCLE='ForcePowerCycle') sys.modules['ibmc_client.constants'] = constants if 'ironic.drivers.modules.ibmc' in sys.modules: importlib.reload(sys.modules['ironic.drivers.modules.ibmc']) ironic-15.0.0/ironic/tests/unit/drivers/test_irmc.py0000664000175000017500000002267213652514273022550 0ustar zuulzuul00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Test class for iRMC Deploy Driver """ from ironic.conductor import task_manager from ironic.drivers import irmc from ironic.drivers.modules import agent from ironic.drivers.modules import inspector from ironic.drivers.modules import ipmitool from ironic.drivers.modules import ipxe from ironic.drivers.modules.irmc import bios as irmc_bios from ironic.drivers.modules.irmc import raid from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class IRMCHardwareTestCase(db_base.DbTestCase): def setUp(self): irmc.boot.check_share_fs_mounted_patcher.start() self.addCleanup(irmc.boot.check_share_fs_mounted_patcher.stop) super(IRMCHardwareTestCase, self).setUp() self.config_temp_dir('http_root', group='deploy') self.config(enabled_hardware_types=['irmc'], enabled_boot_interfaces=['irmc-virtual-media', 'ipxe'], enabled_console_interfaces=['ipmitool-socat'], enabled_deploy_interfaces=['iscsi', 'direct'], enabled_inspect_interfaces=['irmc'], enabled_management_interfaces=['irmc'], enabled_power_interfaces=['irmc', 'ipmitool'], enabled_raid_interfaces=['no-raid', 'agent', 'irmc'], enabled_rescue_interfaces=['no-rescue', 'agent'], enabled_bios_interfaces=['irmc', 'no-bios', 'fake']) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='irmc') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, irmc.boot.IRMCVirtualMediaBoot) self.assertIsInstance(task.driver.console, ipmitool.IPMISocatConsole) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.inspect, irmc.inspect.IRMCInspect) self.assertIsInstance(task.driver.management, irmc.management.IRMCManagement) self.assertIsInstance(task.driver.power, irmc.power.IRMCPower) self.assertIsInstance(task.driver.raid, noop.NoRAID) self.assertIsInstance(task.driver.rescue, 
noop.NoRescue) self.assertIsInstance(task.driver.bios, irmc_bios.IRMCBIOS) def test_override_with_inspector(self): self.config(enabled_inspect_interfaces=['inspector', 'irmc']) node = obj_utils.create_test_node( self.context, driver='irmc', deploy_interface='direct', inspect_interface='inspector', raid_interface='agent') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, irmc.boot.IRMCVirtualMediaBoot) self.assertIsInstance(task.driver.console, ipmitool.IPMISocatConsole) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.inspect, inspector.Inspector) self.assertIsInstance(task.driver.management, irmc.management.IRMCManagement) self.assertIsInstance(task.driver.power, irmc.power.IRMCPower) self.assertIsInstance(task.driver.raid, agent.AgentRAID) self.assertIsInstance(task.driver.rescue, noop.NoRescue) def test_override_with_agent_rescue(self): node = obj_utils.create_test_node( self.context, driver='irmc', deploy_interface='direct', rescue_interface='agent', raid_interface='agent') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, irmc.boot.IRMCVirtualMediaBoot) self.assertIsInstance(task.driver.console, ipmitool.IPMISocatConsole) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.inspect, irmc.inspect.IRMCInspect) self.assertIsInstance(task.driver.management, irmc.management.IRMCManagement) self.assertIsInstance(task.driver.power, irmc.power.IRMCPower) self.assertIsInstance(task.driver.raid, agent.AgentRAID) self.assertIsInstance(task.driver.rescue, agent.AgentRescue) def test_override_with_ipmitool_power(self): node = obj_utils.create_test_node( self.context, driver='irmc', power_interface='ipmitool') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, irmc.boot.IRMCVirtualMediaBoot) self.assertIsInstance(task.driver.console, 
ipmitool.IPMISocatConsole) self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy) self.assertIsInstance(task.driver.inspect, irmc.inspect.IRMCInspect) self.assertIsInstance(task.driver.management, irmc.management.IRMCManagement) self.assertIsInstance(task.driver.power, ipmitool.IPMIPower) self.assertIsInstance(task.driver.raid, noop.NoRAID) self.assertIsInstance(task.driver.rescue, noop.NoRescue) def test_override_with_raid_configuration(self): node = obj_utils.create_test_node( self.context, driver='irmc', deploy_interface='direct', rescue_interface='agent', raid_interface='irmc') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, irmc.boot.IRMCVirtualMediaBoot) self.assertIsInstance(task.driver.console, ipmitool.IPMISocatConsole) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.inspect, irmc.inspect.IRMCInspect) self.assertIsInstance(task.driver.management, irmc.management.IRMCManagement) self.assertIsInstance(task.driver.power, irmc.power.IRMCPower) self.assertIsInstance(task.driver.raid, raid.IRMCRAID) self.assertIsInstance(task.driver.rescue, agent.AgentRescue) def test_override_with_bios_configuration(self): node = obj_utils.create_test_node( self.context, driver='irmc', deploy_interface='direct', rescue_interface='agent', bios_interface='no-bios') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, irmc.boot.IRMCVirtualMediaBoot) self.assertIsInstance(task.driver.console, ipmitool.IPMISocatConsole) self.assertIsInstance(task.driver.deploy, agent.AgentDeploy) self.assertIsInstance(task.driver.inspect, irmc.inspect.IRMCInspect) self.assertIsInstance(task.driver.management, irmc.management.IRMCManagement) self.assertIsInstance(task.driver.power, irmc.power.IRMCPower) self.assertIsInstance(task.driver.bios, noop.NoBIOS) self.assertIsInstance(task.driver.rescue, agent.AgentRescue) def 
test_override_with_boot_configuration(self): node = obj_utils.create_test_node( self.context, driver='irmc', boot_interface='ipxe') with task_manager.acquire(self.context, node.id) as task: self.assertIsInstance(task.driver.boot, ipxe.iPXEBoot) ironic-15.0.0/ironic/tests/unit/drivers/__init__.py0000664000175000017500000000175413652514273022314 0ustar zuulzuul00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(tenbrae): since __init__ is loaded before the files in the same # directory, and some third-party driver tests may need to have # their external libraries mocked, we load the file which does # that mocking here -- in the __init__. from ironic.tests.unit.drivers import third_party_driver_mocks # noqa ironic-15.0.0/ironic/tests/unit/drivers/modules/0000775000175000017500000000000013652514443021643 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/0000775000175000017500000000000013652514443022426 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_boot.py0000664000175000017500000024376113652514273025020 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for boot methods used by iLO modules.""" import io import tempfile from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg from ironic.common import boot_devices from ironic.common import exception from ironic.common.glance_service import service_utils from ironic.common import image_service from ironic.common import images from ironic.common import states from ironic.common import swift from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import boot_mode_utils from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import boot as ilo_boot from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import management as ilo_management from ironic.drivers.modules import ipxe from ironic.drivers.modules import pxe from ironic.drivers.modules.storage import noop as noop_storage from ironic.drivers import utils as driver_utils from ironic.tests.unit.drivers.modules.ilo import test_common CONF = cfg.CONF class IloBootCommonMethodsTestCase(test_common.BaseIloTest): boot_interface = 'ilo-virtual-media' def test_parse_driver_info(self): self.node.driver_info['ilo_deploy_iso'] = 'deploy-iso' expected_driver_info = {'ilo_deploy_iso': 'deploy-iso'} actual_driver_info = ilo_boot.parse_driver_info(self.node) self.assertEqual(expected_driver_info, actual_driver_info) def test_parse_driver_info_exc(self): self.assertRaises(exception.MissingParameterValue, ilo_boot.parse_driver_info, self.node) class 
IloBootPrivateMethodsTestCase(test_common.BaseIloTest): boot_interface = 'ilo-virtual-media' def test__get_boot_iso_object_name(self): boot_iso_actual = ilo_boot._get_boot_iso_object_name(self.node) boot_iso_expected = "boot-%s" % self.node.uuid self.assertEqual(boot_iso_expected, boot_iso_actual) @mock.patch.object(image_service.HttpImageService, 'validate_href', spec_set=True, autospec=True) def test__get_boot_iso_http_url(self, service_mock): url = 'http://abc.org/image/qcow2' i_info = self.node.instance_info i_info['ilo_boot_iso'] = url self.node.instance_info = i_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid') service_mock.assert_called_once_with(mock.ANY, url) self.assertEqual(url, boot_iso_actual) @mock.patch.object(image_service.HttpImageService, 'validate_href', spec_set=True, autospec=True) def test__get_boot_iso_unsupported_url(self, validate_href_mock): validate_href_mock.side_effect = exception.ImageRefValidationFailed( image_href='file://img.qcow2', reason='fail') url = 'file://img.qcow2' i_info = self.node.instance_info i_info['ilo_boot_iso'] = url self.node.instance_info = i_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.ImageRefValidationFailed, ilo_boot._get_boot_iso, task, 'root-uuid') @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__get_boot_iso_glance_image(self, deploy_info_mock, image_props_mock): deploy_info_mock.return_value = {'image_source': 'image-uuid', 'ilo_deploy_iso': 'deploy_iso_uuid'} image_props_mock.return_value = {'boot_iso': u'glance://uui\u0111', 'kernel_id': None, 'ramdisk_id': None} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: driver_internal_info = task.node.driver_internal_info 
driver_internal_info['boot_iso_created_in_web_server'] = False task.node.driver_internal_info = driver_internal_info task.node.save() boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['boot_iso', 'kernel_id', 'ramdisk_id']) boot_iso_expected = u'glance://uui\u0111' self.assertEqual(boot_iso_expected, boot_iso_actual) @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy', spec_set=True, autospec=True) @mock.patch.object(ilo_boot.LOG, 'error', spec_set=True, autospec=True) @mock.patch.object(images, 'get_image_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True, autospec=True) def test__get_boot_iso_uefi_no_glance_image(self, deploy_info_mock, image_props_mock, log_mock, boot_mode_mock): deploy_info_mock.return_value = {'image_source': 'image-uuid', 'ilo_deploy_iso': 'deploy_iso_uuid'} image_props_mock.return_value = {'boot_iso': None, 'kernel_id': None, 'ramdisk_id': None} properties = {'capabilities': 'boot_mode:uefi'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties = properties boot_iso_result = ilo_boot._get_boot_iso(task, 'root-uuid') deploy_info_mock.assert_called_once_with(task.node) image_props_mock.assert_called_once_with( task.context, 'image-uuid', ['boot_iso', 'kernel_id', 'ramdisk_id']) self.assertTrue(log_mock.called) self.assertFalse(boot_mode_mock.called) self.assertIsNone(boot_iso_result) @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True, autospec=True) @mock.patch.object(images, 'create_boot_iso', spec_set=True, autospec=True) @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True, autospec=True) @mock.patch.object(driver_utils, 'get_node_capability', spec_set=True, autospec=True) 
    @mock.patch.object(images, 'get_image_properties', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    def test__get_boot_iso_create(self, deploy_info_mock, image_props_mock,
                                  capability_mock, boot_object_name_mock,
                                  swift_api_mock, create_boot_iso_mock,
                                  tempfile_mock):
        CONF.ilo.swift_ilo_container = 'ilo-cont'
        CONF.pxe.pxe_append_params = 'kernel-params'

        swift_obj_mock = swift_api_mock.return_value
        fileobj_mock = mock.MagicMock(spec=io.BytesIO)
        fileobj_mock.name = 'tmpfile'
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = fileobj_mock
        tempfile_mock.return_value = mock_file_handle

        deploy_info_mock.return_value = {'image_source': 'image-uuid',
                                         'ilo_deploy_iso': 'deploy_iso_uuid'}
        image_props_mock.return_value = {'boot_iso': None,
                                         'kernel_id': 'kernel_uuid',
                                         'ramdisk_id': 'ramdisk_uuid'}
        boot_object_name_mock.return_value = 'abcdef'
        create_boot_iso_mock.return_value = '/path/to/boot-iso'
        capability_mock.return_value = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid')
            deploy_info_mock.assert_called_once_with(task.node)
            image_props_mock.assert_called_once_with(
                task.context, 'image-uuid',
                ['boot_iso', 'kernel_id', 'ramdisk_id'])
            boot_object_name_mock.assert_called_once_with(task.node)
            create_boot_iso_mock.assert_called_once_with(
                task.context, 'tmpfile', 'kernel_uuid', 'ramdisk_uuid',
                deploy_iso_href='deploy_iso_uuid',
                root_uuid='root-uuid',
                kernel_params='kernel-params',
                boot_mode='uefi')
            swift_obj_mock.create_object.assert_called_once_with('ilo-cont',
                                                                 'abcdef',
                                                                 'tmpfile')
            boot_iso_expected = 'swift:abcdef'
            self.assertEqual(boot_iso_expected, boot_iso_actual)

    @mock.patch.object(ilo_common, 'copy_image_to_web_server',
                       spec_set=True, autospec=True)
    @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'create_boot_iso',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True,
                       autospec=True)
    @mock.patch.object(driver_utils, 'get_node_capability', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'get_image_properties', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    def test__get_boot_iso_recreate_boot_iso_use_webserver(
            self, deploy_info_mock, image_props_mock,
            capability_mock, boot_object_name_mock,
            create_boot_iso_mock, tempfile_mock,
            copy_file_mock):
        CONF.ilo.swift_ilo_container = 'ilo-cont'
        CONF.ilo.use_web_server_for_images = True
        CONF.deploy.http_url = "http://10.10.1.30/httpboot"
        CONF.deploy.http_root = "/httpboot"
        CONF.pxe.pxe_append_params = 'kernel-params'

        fileobj_mock = mock.MagicMock(spec=io.BytesIO)
        fileobj_mock.name = 'tmpfile'
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = fileobj_mock
        tempfile_mock.return_value = mock_file_handle

        ramdisk_href = "http://10.10.1.30/httpboot/ramdisk"
        kernel_href = "http://10.10.1.30/httpboot/kernel"
        deploy_info_mock.return_value = {'image_source': 'image-uuid',
                                         'ilo_deploy_iso': 'deploy_iso_uuid'}
        image_props_mock.return_value = {'boot_iso': None,
                                         'kernel_id': kernel_href,
                                         'ramdisk_id': ramdisk_href}
        boot_object_name_mock.return_value = 'new_boot_iso'
        create_boot_iso_mock.return_value = '/path/to/boot-iso'
        capability_mock.return_value = 'uefi'
        copy_file_mock.return_value = "http://10.10.1.30/httpboot/new_boot_iso"

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['boot_iso_created_in_web_server'] = True
            instance_info = task.node.instance_info
            old_boot_iso = 'http://10.10.1.30/httpboot/old_boot_iso'
            instance_info['ilo_boot_iso'] = old_boot_iso
            boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid')
            deploy_info_mock.assert_called_once_with(task.node)
            image_props_mock.assert_called_once_with(
                task.context, 'image-uuid',
                ['boot_iso', 'kernel_id', 'ramdisk_id'])
            boot_object_name_mock.assert_called_once_with(task.node)
            create_boot_iso_mock.assert_called_once_with(
                task.context, 'tmpfile', kernel_href, ramdisk_href,
                deploy_iso_href='deploy_iso_uuid',
                root_uuid='root-uuid',
                kernel_params='kernel-params',
                boot_mode='uefi')
            boot_iso_expected = 'http://10.10.1.30/httpboot/new_boot_iso'
            self.assertEqual(boot_iso_expected, boot_iso_actual)
            copy_file_mock.assert_called_once_with(fileobj_mock.name,
                                                   'new_boot_iso')

    @mock.patch.object(ilo_common, 'copy_image_to_web_server',
                       spec_set=True, autospec=True)
    @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'create_boot_iso', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True,
                       autospec=True)
    @mock.patch.object(driver_utils, 'get_node_capability', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'get_image_properties', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    def test__get_boot_iso_create_use_webserver_true_ramdisk_webserver(
            self, deploy_info_mock, image_props_mock,
            capability_mock, boot_object_name_mock,
            create_boot_iso_mock, tempfile_mock,
            copy_file_mock):
        CONF.ilo.swift_ilo_container = 'ilo-cont'
        CONF.ilo.use_web_server_for_images = True
        CONF.deploy.http_url = "http://10.10.1.30/httpboot"
        CONF.deploy.http_root = "/httpboot"
        CONF.pxe.pxe_append_params = 'kernel-params'

        fileobj_mock = mock.MagicMock(spec=io.BytesIO)
        fileobj_mock.name = 'tmpfile'
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = fileobj_mock
        tempfile_mock.return_value = mock_file_handle

        ramdisk_href = "http://10.10.1.30/httpboot/ramdisk"
        kernel_href = "http://10.10.1.30/httpboot/kernel"
        deploy_info_mock.return_value = {'image_source': 'image-uuid',
                                         'ilo_deploy_iso': 'deploy_iso_uuid'}
        image_props_mock.return_value = {'boot_iso': None,
                                         'kernel_id': kernel_href,
                                         'ramdisk_id': ramdisk_href}
        boot_object_name_mock.return_value = 'abcdef'
        create_boot_iso_mock.return_value = '/path/to/boot-iso'
        capability_mock.return_value = 'uefi'
        copy_file_mock.return_value = "http://10.10.1.30/httpboot/abcdef"

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            boot_iso_actual = ilo_boot._get_boot_iso(task, 'root-uuid')
            deploy_info_mock.assert_called_once_with(task.node)
            image_props_mock.assert_called_once_with(
                task.context, 'image-uuid',
                ['boot_iso', 'kernel_id', 'ramdisk_id'])
            boot_object_name_mock.assert_called_once_with(task.node)
            create_boot_iso_mock.assert_called_once_with(
                task.context, 'tmpfile', kernel_href, ramdisk_href,
                deploy_iso_href='deploy_iso_uuid',
                root_uuid='root-uuid',
                kernel_params='kernel-params',
                boot_mode='uefi')
            boot_iso_expected = 'http://10.10.1.30/httpboot/abcdef'
            self.assertEqual(boot_iso_expected, boot_iso_actual)
            copy_file_mock.assert_called_once_with(fileobj_mock.name,
                                                   'abcdef')

    @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True,
                       autospec=True)
    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    def test__clean_up_boot_iso_for_instance(self, swift_mock,
                                             boot_object_name_mock):
        swift_obj_mock = swift_mock.return_value
        CONF.ilo.swift_ilo_container = 'ilo-cont'
        boot_object_name_mock.return_value = 'boot-object'
        i_info = self.node.instance_info
        i_info['ilo_boot_iso'] = 'swift:bootiso'
        self.node.instance_info = i_info
        self.node.save()
        ilo_boot._clean_up_boot_iso_for_instance(self.node)
        swift_obj_mock.delete_object.assert_called_once_with('ilo-cont',
                                                             'boot-object')

    @mock.patch.object(ilo_boot.LOG, 'exception', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True,
                       autospec=True)
    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    def test__clean_up_boot_iso_for_instance_exc(self, swift_mock,
                                                 boot_object_name_mock,
                                                 log_mock):
        swift_obj_mock = swift_mock.return_value
        exc = exception.SwiftObjectNotFoundError('error')
        swift_obj_mock.delete_object.side_effect = exc
        CONF.ilo.swift_ilo_container = 'ilo-cont'
        boot_object_name_mock.return_value = 'boot-object'
        i_info = self.node.instance_info
        i_info['ilo_boot_iso'] = 'swift:bootiso'
        self.node.instance_info = i_info
        self.node.save()
        ilo_boot._clean_up_boot_iso_for_instance(self.node)
        swift_obj_mock.delete_object.assert_called_once_with('ilo-cont',
                                                             'boot-object')
        self.assertTrue(log_mock.called)

    @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True,
                       autospec=True)
    def test__clean_up_boot_iso_for_instance_on_webserver(self, unlink_mock):
        CONF.ilo.use_web_server_for_images = True
        CONF.deploy.http_root = "/webserver"
        i_info = self.node.instance_info
        i_info['ilo_boot_iso'] = 'http://x.y.z.a/webserver/boot-object'
        self.node.instance_info = i_info
        self.node.save()
        boot_iso_path = "/webserver/boot-object"
        ilo_boot._clean_up_boot_iso_for_instance(self.node)
        unlink_mock.assert_called_once_with(boot_iso_path)

    @mock.patch.object(ilo_boot, '_get_boot_iso_object_name', spec_set=True,
                       autospec=True)
    def test__clean_up_boot_iso_for_instance_no_boot_iso(
            self, boot_object_name_mock):
        ilo_boot._clean_up_boot_iso_for_instance(self.node)
        self.assertFalse(boot_object_name_mock.called)

    @mock.patch.object(ilo_boot, 'parse_driver_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(deploy_utils, 'get_image_instance_info',
                       spec_set=True, autospec=True)
    def test__parse_deploy_info(self, instance_info_mock, driver_info_mock):
        instance_info_mock.return_value = {'a': 'b'}
        driver_info_mock.return_value = {'c': 'd'}
        expected_info = {'a': 'b', 'c': 'd'}
        actual_info = ilo_boot._parse_deploy_info(self.node)
        self.assertEqual(expected_info, actual_info)

    @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test__validate_driver_info_MissingParam(self, mock_parse_driver_info):
        with task_manager.acquire(self.context,
                                  self.node.uuid, shared=False) as task:
            self.assertRaisesRegex(exception.MissingParameterValue,
                                   "Missing 'ilo_deploy_iso'",
                                   ilo_boot._validate_driver_info, task)
            mock_parse_driver_info.assert_called_once_with(task.node)

    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test__validate_driver_info_valid_uuid(self, mock_parse_driver_info,
                                              mock_is_glance_image):
        mock_is_glance_image.return_value = True
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            deploy_iso = '8a81759a-f29b-454b-8ab3-161c6ca1882c'
            task.node.driver_info['ilo_deploy_iso'] = deploy_iso
            ilo_boot._validate_driver_info(task)
            mock_parse_driver_info.assert_called_once_with(task.node)
            mock_is_glance_image.assert_called_once_with(deploy_iso)

    @mock.patch.object(image_service.HttpImageService, 'validate_href',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test__validate_driver_info_InvalidParam(self, mock_parse_driver_info,
                                                mock_is_glance_image,
                                                mock_validate_href):
        deploy_iso = 'http://abc.org/image/qcow2'
        mock_validate_href.side_effect = exception.ImageRefValidationFailed(
            image_href='http://abc.org/image/qcow2', reason='fail')
        mock_is_glance_image.return_value = False
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['ilo_deploy_iso'] = deploy_iso
            self.assertRaisesRegex(exception.InvalidParameterValue,
                                   "Virtual media boot accepts",
                                   ilo_boot._validate_driver_info, task)
            mock_parse_driver_info.assert_called_once_with(task.node)
            mock_validate_href.assert_called_once_with(mock.ANY, deploy_iso)

    @mock.patch.object(image_service.HttpImageService, 'validate_href',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test__validate_driver_info_valid_url(self, mock_parse_driver_info,
                                             mock_is_glance_image,
                                             mock_validate_href):
        deploy_iso = 'http://abc.org/image/deploy.iso'
        mock_is_glance_image.return_value = False
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['ilo_deploy_iso'] = deploy_iso
            ilo_boot._validate_driver_info(task)
            mock_parse_driver_info.assert_called_once_with(task.node)
            mock_validate_href.assert_called_once_with(mock.ANY, deploy_iso)

    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    def _test__validate_instance_image_info(self, deploy_info_mock,
                                            validate_prop_mock,
                                            props_expected):
        d_info = {'image_source': 'uuid'}
        deploy_info_mock.return_value = d_info
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_boot._validate_instance_image_info(task)
            deploy_info_mock.assert_called_once_with(task.node)
            validate_prop_mock.assert_called_once_with(
                task.context, d_info, props_expected)

    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    def test__validate_glance_partition_image(self, is_glance_image_mock):
        is_glance_image_mock.return_value = True
        self._test__validate_instance_image_info(props_expected=['kernel_id',
                                                                 'ramdisk_id'])

    def test__validate_whole_disk_image(self):
        self.node.driver_internal_info = {'is_whole_disk_image': True}
        self.node.save()
        self._test__validate_instance_image_info(props_expected=[])

    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    def test__validate_non_glance_partition_image(self, is_glance_image_mock):
        is_glance_image_mock.return_value = False
        self._test__validate_instance_image_info(props_expected=['kernel',
                                                                 'ramdisk'])

    @mock.patch.object(ilo_common, 'set_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'get_secure_boot_mode', spec_set=True,
                       autospec=True)
    def test__disable_secure_boot_false(self,
                                        func_get_secure_boot_mode,
                                        func_set_secure_boot_mode):
        func_get_secure_boot_mode.return_value = False
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            returned_state = ilo_boot._disable_secure_boot(task)
            func_get_secure_boot_mode.assert_called_once_with(task)
            self.assertFalse(func_set_secure_boot_mode.called)
        self.assertFalse(returned_state)

    @mock.patch.object(ilo_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'get_secure_boot_mode', spec_set=True,
                       autospec=True)
    def test__disable_secure_boot_true(self,
                                       func_get_secure_boot_mode,
                                       func_set_secure_boot_mode):
        func_get_secure_boot_mode.return_value = True
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            returned_state = ilo_boot._disable_secure_boot(task)
            func_get_secure_boot_mode.assert_called_once_with(task)
            func_set_secure_boot_mode.assert_called_once_with(task, False)
        self.assertTrue(returned_state)

    @mock.patch.object(ilo_boot, 'exception', spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'get_secure_boot_mode', spec_set=True,
                       autospec=True)
    def test__disable_secure_boot_exception(self,
                                            func_get_secure_boot_mode,
                                            exception_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            exception_mock.IloOperationNotSupported = Exception
            func_get_secure_boot_mode.side_effect = Exception
            returned_state = ilo_boot._disable_secure_boot(task)
            func_get_secure_boot_mode.assert_called_once_with(task)
        self.assertFalse(returned_state)

    @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_disable_secure_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    def test_prepare_node_for_deploy(self,
                                     func_node_power_action,
                                     func_disable_secure_boot,
                                     func_update_boot_mode):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            func_disable_secure_boot.return_value = False
            ilo_boot.prepare_node_for_deploy(task)
            func_node_power_action.assert_called_once_with(task,
                                                           states.POWER_OFF)
            func_disable_secure_boot.assert_called_once_with(task)
            func_update_boot_mode.assert_called_once_with(task)
            bootmode = driver_utils.get_node_capability(task.node, "boot_mode")
            self.assertIsNone(bootmode)

    @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_disable_secure_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    def test_prepare_node_for_deploy_sec_boot_on(self,
                                                 func_node_power_action,
                                                 func_disable_secure_boot,
                                                 func_update_boot_mode):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            func_disable_secure_boot.return_value = True
            ilo_boot.prepare_node_for_deploy(task)
            func_node_power_action.assert_called_once_with(task,
                                                           states.POWER_OFF)
            func_disable_secure_boot.assert_called_once_with(task)
            self.assertFalse(func_update_boot_mode.called)
            ret_boot_mode = task.node.instance_info['deploy_boot_mode']
            self.assertEqual('uefi', ret_boot_mode)
            bootmode = driver_utils.get_node_capability(task.node, "boot_mode")
            self.assertIsNone(bootmode)

    @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_disable_secure_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    def test_prepare_node_for_deploy_inst_info(self,
                                               func_node_power_action,
                                               func_disable_secure_boot,
                                               func_update_boot_mode):
        instance_info = {'capabilities': '{"secure_boot": "true"}'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            func_disable_secure_boot.return_value = False
            task.node.instance_info = instance_info
            ilo_boot.prepare_node_for_deploy(task)
            func_node_power_action.assert_called_once_with(task,
                                                           states.POWER_OFF)
            func_disable_secure_boot.assert_called_once_with(task)
            func_update_boot_mode.assert_called_once_with(task)
            bootmode = driver_utils.get_node_capability(task.node, "boot_mode")
            self.assertIsNone(bootmode)
            self.assertNotIn('deploy_boot_mode', task.node.instance_info)

    @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_disable_secure_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    def test_prepare_node_for_deploy_sec_boot_on_inst_info(
            self, func_node_power_action, func_disable_secure_boot,
            func_update_boot_mode):
        instance_info = {'capabilities': '{"secure_boot": "true"}'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            func_disable_secure_boot.return_value = True
            task.node.instance_info = instance_info
            ilo_boot.prepare_node_for_deploy(task)
            func_node_power_action.assert_called_once_with(task,
                                                           states.POWER_OFF)
            func_disable_secure_boot.assert_called_once_with(task)
            self.assertFalse(func_update_boot_mode.called)
            bootmode = driver_utils.get_node_capability(task.node, "boot_mode")
            self.assertIsNone(bootmode)
            self.assertNotIn('deploy_boot_mode', task.node.instance_info)


class IloVirtualMediaBootTestCase(test_common.BaseIloTest):

    boot_interface = 'ilo-virtual-media'

    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    @mock.patch.object(ilo_boot, '_validate_driver_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_validate_instance_image_info',
                       spec_set=True, autospec=True)
    def test_validate(self, mock_val_instance_image_info,
                      mock_val_driver_info, storage_mock):
        instance_info = self.node.instance_info
        instance_info['ilo_boot_iso'] = 'deploy-iso'
        instance_info['image_source'] = '6b2f0c0c-79e8-4db6-842e-43c9764204af'
        self.node.instance_info = instance_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['ilo_deploy_iso'] = 'deploy-iso'
            storage_mock.return_value = True
            task.driver.boot.validate(task)
            mock_val_instance_image_info.assert_called_once_with(task)
            mock_val_driver_info.assert_called_once_with(task)

    @mock.patch.object(ilo_boot, '_validate_driver_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(image_service.HttpImageService, 'validate_href',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    def test_validate_ramdisk_boot_option_glance(self, is_glance_image_mock,
                                                 validate_href_mock,
                                                 val_driver_info_mock):
        instance_info = self.node.instance_info
        boot_iso = '6b2f0c0c-79e8-4db6-842e-43c9764204af'
        instance_info['ilo_boot_iso'] = boot_iso
        instance_info['capabilities'] = '{"boot_option": "ramdisk"}'
        self.node.instance_info = instance_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            is_glance_image_mock.return_value = True
            task.driver.boot.validate(task)
            is_glance_image_mock.assert_called_once_with(boot_iso)
            self.assertFalse(validate_href_mock.called)
            self.assertFalse(val_driver_info_mock.called)

    @mock.patch.object(ilo_boot, '_validate_driver_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(image_service.HttpImageService, 'validate_href',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    def test_validate_ramdisk_boot_option_webserver(self,
                                                    is_glance_image_mock,
                                                    validate_href_mock,
                                                    val_driver_info_mock):
        instance_info = self.node.instance_info
        boot_iso = 'http://myserver/boot.iso'
        instance_info['ilo_boot_iso'] = boot_iso
        instance_info['capabilities'] = '{"boot_option": "ramdisk"}'
        self.node.instance_info = instance_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            is_glance_image_mock.return_value = False
            task.driver.boot.validate(task)
            is_glance_image_mock.assert_called_once_with(boot_iso)
            validate_href_mock.assert_called_once_with(mock.ANY, boot_iso)
            self.assertFalse(val_driver_info_mock.called)

    @mock.patch.object(ilo_boot.LOG, 'error', spec_set=True, autospec=True)
    @mock.patch.object(ilo_boot, '_validate_driver_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(image_service.HttpImageService, 'validate_href',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    def test_validate_ramdisk_boot_option_webserver_exc(self,
                                                        is_glance_image_mock,
                                                        validate_href_mock,
                                                        val_driver_info_mock,
                                                        log_mock):
        instance_info = self.node.instance_info
        validate_href_mock.side_effect = exception.ImageRefValidationFailed(
            image_href='http://myserver/boot.iso', reason='fail')
        boot_iso = 'http://myserver/boot.iso'
        instance_info['ilo_boot_iso'] = boot_iso
        instance_info['capabilities'] = '{"boot_option": "ramdisk"}'
        self.node.instance_info = instance_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            is_glance_image_mock.return_value = False
            self.assertRaisesRegex(exception.ImageRefValidationFailed,
                                   "Validation of image href "
                                   "http://myserver/boot.iso failed",
                                   task.driver.boot.validate, task)
            is_glance_image_mock.assert_called_once_with(boot_iso)
            validate_href_mock.assert_called_once_with(mock.ANY, boot_iso)
            self.assertFalse(val_driver_info_mock.called)
            self.assertIn("Virtual media deploy with 'ramdisk' boot_option "
                          "accepts only Glance images or HTTP(S) URLs as "
                          "instance_info['ilo_boot_iso'].",
                          log_mock.call_args[0][0])

    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    @mock.patch.object(ilo_boot, '_validate_driver_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_validate_instance_image_info',
                       spec_set=True, autospec=True)
    def test_validate_boot_from_volume(self,
                                       mock_val_instance_image_info,
                                       mock_val_driver_info, storage_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['ilo_deploy_iso'] = 'deploy-iso'
            storage_mock.return_value = False
            task.driver.boot.validate(task)
            mock_val_driver_info.assert_called_once_with(task)
            self.assertFalse(mock_val_instance_image_info.called)

    @mock.patch.object(ilo_boot, '_validate_driver_info', autospec=True)
    def test_validate_inspection(self, mock_val_driver_info):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.driver_info['ilo_deploy_iso'] = 'deploy-iso'
            task.driver.boot.validate_inspection(task)
            mock_val_driver_info.assert_called_once_with(task)

    @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True,
                       autospec=True)
    def test_validate_inspection_missing(self, mock_parse_driver_info):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.UnsupportedDriverExtension,
                              task.driver.boot.validate_inspection, task)

    @mock.patch.object(ilo_boot, 'prepare_node_for_deploy', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'eject_vmedia_devices', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'setup_vmedia', spec_set=True,
                       autospec=True)
    @mock.patch.object(deploy_utils, 'get_single_nic_with_vif_port_id',
                       spec_set=True, autospec=True)
    def _test_prepare_ramdisk(self, get_nic_mock, setup_vmedia_mock,
                              eject_mock, prepare_node_for_deploy_mock,
                              ilo_boot_iso, image_source,
                              ramdisk_params={'a': 'b'},
                              mode='deploy'):
        instance_info = self.node.instance_info
        instance_info['ilo_boot_iso'] = ilo_boot_iso
        instance_info['image_source'] = image_source
        self.node.instance_info = instance_info
        self.node.save()
        iso = 'provisioning-iso'

        get_nic_mock.return_value = '12:34:56:78:90:ab'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            driver_info = task.node.driver_info
            driver_info['ilo_%s_iso' % mode] = iso
            task.node.driver_info = driver_info

            task.driver.boot.prepare_ramdisk(task, ramdisk_params)

            prepare_node_for_deploy_mock.assert_called_once_with(task)
            eject_mock.assert_called_once_with(task)
            expected_ramdisk_opts = {'a': 'b', 'BOOTIF': '12:34:56:78:90:ab',
                                     'ipa-agent-token': mock.ANY}
            get_nic_mock.assert_called_once_with(task)
            setup_vmedia_mock.assert_called_once_with(task, iso,
                                                      expected_ramdisk_opts)

    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    def test_prepare_ramdisk_in_takeover(self, mock_is_image):
        """Ensure deploy ops are blocked when not deploying and not cleaning"""

        for state in states.STABLE_STATES:
            mock_is_image.reset_mock()
            self.node.provision_state = state
            self.node.save()
            with task_manager.acquire(self.context, self.node.uuid,
                                      shared=False) as task:
                self.assertIsNone(
                    task.driver.boot.prepare_ramdisk(task, None))
                self.assertFalse(mock_is_image.called)

    def test_prepare_ramdisk_rescue_glance_image(self):
        self.node.provision_state = states.RESCUING
        self.node.save()
        self._test_prepare_ramdisk(
            ilo_boot_iso='swift:abcdef',
            image_source='6b2f0c0c-79e8-4db6-842e-43c9764204af',
            mode='rescue')
        self.node.refresh()
        self.assertNotIn('ilo_boot_iso', self.node.instance_info)

    def test_prepare_ramdisk_rescue_not_a_glance_image(self):
        self.node.provision_state = states.RESCUING
        self.node.save()
        self._test_prepare_ramdisk(
            ilo_boot_iso='http://mybootiso',
            image_source='http://myimage',
            mode='rescue')
        self.node.refresh()
        self.assertEqual('http://mybootiso',
                         self.node.instance_info['ilo_boot_iso'])

    def test_prepare_ramdisk_glance_image(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self._test_prepare_ramdisk(
            ilo_boot_iso='swift:abcdef',
            image_source='6b2f0c0c-79e8-4db6-842e-43c9764204af')
        self.node.refresh()
        self.assertNotIn('ilo_boot_iso', self.node.instance_info)

    def test_prepare_ramdisk_not_a_glance_image(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self._test_prepare_ramdisk(
            ilo_boot_iso='http://mybootiso',
            image_source='http://myimage')
        self.node.refresh()
        self.assertEqual('http://mybootiso',
                         self.node.instance_info['ilo_boot_iso'])

    def test_prepare_ramdisk_glance_image_cleaning(self):
        self.node.provision_state = states.CLEANING
        self.node.save()
        self._test_prepare_ramdisk(
            ilo_boot_iso='swift:abcdef',
            image_source='6b2f0c0c-79e8-4db6-842e-43c9764204af')
        self.node.refresh()
        self.assertNotIn('ilo_boot_iso', self.node.instance_info)

    def test_prepare_ramdisk_not_a_glance_image_cleaning(self):
        self.node.provision_state = states.CLEANING
        self.node.save()
        self._test_prepare_ramdisk(
            ilo_boot_iso='http://mybootiso',
            image_source='http://myimage')
        self.node.refresh()
        self.assertEqual('http://mybootiso',
                         self.node.instance_info['ilo_boot_iso'])

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_get_boot_iso', spec_set=True,
                       autospec=True)
    def test__configure_vmedia_boot_with_boot_iso(
            self, get_boot_iso_mock, setup_vmedia_mock, set_boot_device_mock):
        root_uuid = {'root uuid': 'root_uuid'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            get_boot_iso_mock.return_value = 'boot.iso'

            task.driver.boot._configure_vmedia_boot(
                task, root_uuid)

            get_boot_iso_mock.assert_called_once_with(
                task, root_uuid)
            setup_vmedia_mock.assert_called_once_with(
                task, 'boot.iso')
            set_boot_device_mock.assert_called_once_with(
                task, boot_devices.CDROM, persistent=True)
            self.assertEqual('boot.iso',
                             task.node.instance_info['ilo_boot_iso'])

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_get_boot_iso', spec_set=True,
                       autospec=True)
    def test__configure_vmedia_boot_without_boot_iso(
            self, get_boot_iso_mock, setup_vmedia_mock,
            set_boot_device_mock):
        root_uuid = {'root uuid': 'root_uuid'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            get_boot_iso_mock.return_value = None

            task.driver.boot._configure_vmedia_boot(
                task, root_uuid)

            get_boot_iso_mock.assert_called_once_with(
                task, root_uuid)
            self.assertFalse(setup_vmedia_mock.called)
            self.assertFalse(set_boot_device_mock.called)

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_clean_up_boot_iso_for_instance',
                       spec_set=True, autospec=True)
    def _test_clean_up_instance(self, cleanup_iso_mock,
                                cleanup_vmedia_mock, node_power_mock,
                                update_secure_boot_mode_mock,
                                is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['boot_iso_created_in_web_server'] = False
            task.node.driver_internal_info = driver_internal_info
            task.node.save()
            is_iscsi_boot_mock.return_value = False
            task.driver.boot.clean_up_instance(task)
            cleanup_iso_mock.assert_called_once_with(task.node)
            cleanup_vmedia_mock.assert_called_once_with(task)
            driver_internal_info = task.node.driver_internal_info
            self.assertNotIn('boot_iso_created_in_web_server',
                             driver_internal_info)
            node_power_mock.assert_called_once_with(task,
                                                    states.POWER_OFF)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)

    def test_clean_up_instance_deleting(self):
        self.node.provisioning_state = states.DELETING
        self._test_clean_up_instance()

    def test_clean_up_instance_rescuing(self):
        self.node.provisioning_state = states.RESCUING
        self._test_clean_up_instance()

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_management.IloManagement, 'clear_iscsi_boot_target',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    def test_clean_up_instance_boot_from_volume(
            self, node_power_mock,
            update_secure_boot_mode_mock, clear_iscsi_boot_target_mock,
            is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['ilo_uefi_iscsi_boot'] = True
            task.node.driver_internal_info = driver_internal_info
            task.node.save()
            is_iscsi_boot_mock.return_value = True
            task.driver.boot.clean_up_instance(task)
            node_power_mock.assert_called_once_with(task, states.POWER_OFF)
            clear_iscsi_boot_target_mock.assert_called_once_with(mock.ANY,
                                                                 task)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)
            self.assertIsNone(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

    @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_boot, '_clean_up_boot_iso_for_instance',
                       spec_set=True, autospec=True)
    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    def test_clean_up_instance_boot_from_volume_bios(
            self, node_power_mock, update_secure_boot_mode_mock,
            is_iscsi_boot_mock, cleanup_iso_mock, cleanup_vmedia_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            is_iscsi_boot_mock.return_value = False
            task.driver.boot.clean_up_instance(task)
            cleanup_iso_mock.assert_called_once_with(task.node)
            cleanup_vmedia_mock.assert_called_once_with(task)
            driver_internal_info = task.node.driver_internal_info
self.assertNotIn('boot_iso_created_in_web_server', driver_internal_info) node_power_mock.assert_called_once_with(task, states.POWER_OFF) update_secure_boot_mode_mock.assert_called_once_with(task, False) @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True, autospec=True) def test_clean_up_ramdisk(self, cleanup_vmedia_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.clean_up_ramdisk(task) cleanup_vmedia_mock.assert_called_once_with(task) @mock.patch.object(deploy_utils, 'is_iscsi_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True, autospec=True) def _test_prepare_instance_whole_disk_image( self, cleanup_vmedia_boot_mock, set_boot_device_mock, update_boot_mode_mock, update_secure_boot_mode_mock, is_iscsi_boot_mock): self.node.driver_internal_info = {'is_whole_disk_image': True} self.node.save() is_iscsi_boot_mock.return_value = False with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_instance(task) cleanup_vmedia_boot_mock.assert_called_once_with(task) set_boot_device_mock.assert_called_once_with(task, boot_devices.DISK, persistent=True) update_boot_mode_mock.assert_called_once_with(task) update_secure_boot_mode_mock.assert_called_once_with(task, True) self.assertIsNone(task.node.driver_internal_info.get( 'ilo_uefi_iscsi_boot')) def test_prepare_instance_whole_disk_image_local(self): self.node.instance_info = {'capabilities': '{"boot_option": "local"}'} self.node.save() self._test_prepare_instance_whole_disk_image() def test_prepare_instance_whole_disk_image(self): self._test_prepare_instance_whole_disk_image() 
@mock.patch.object(deploy_utils, 'is_iscsi_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_boot.IloVirtualMediaBoot, '_configure_vmedia_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True, autospec=True) def test_prepare_instance_partition_image( self, cleanup_vmedia_boot_mock, configure_vmedia_mock, update_boot_mode_mock, update_secure_boot_mode_mock, is_iscsi_boot_mock): self.node.driver_internal_info = {'root_uuid_or_disk_id': ( "12312642-09d3-467f-8e09-12385826a123")} self.node.instance_info = { 'capabilities': {'boot_option': 'netboot'}} self.node.save() is_iscsi_boot_mock.return_value = False with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_instance(task) cleanup_vmedia_boot_mock.assert_called_once_with(task) configure_vmedia_mock.assert_called_once_with( mock.ANY, task, "12312642-09d3-467f-8e09-12385826a123") update_boot_mode_mock.assert_called_once_with(task) update_secure_boot_mode_mock.assert_called_once_with(task, True) self.assertIsNone(task.node.driver_internal_info.get( 'ilo_uefi_iscsi_boot')) @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'is_iscsi_boot', spec_set=True, autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy', spec_set=True, autospec=True) @mock.patch.object(ilo_management.IloManagement, 'set_iscsi_boot_target', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) def test_prepare_instance_boot_from_volume( self, 
update_secure_boot_mode_mock, update_boot_mode_mock, set_boot_device_mock, set_iscsi_boot_target_mock, get_boot_mode_mock, is_iscsi_boot_mock, cleanup_vmedia_boot_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: driver_internal_info = task.node.driver_internal_info driver_internal_info['ilo_uefi_iscsi_boot'] = True task.node.driver_internal_info = driver_internal_info task.node.save() is_iscsi_boot_mock.return_value = True get_boot_mode_mock.return_value = 'uefi' task.driver.boot.prepare_instance(task) cleanup_vmedia_boot_mock.assert_called_once_with(task) set_iscsi_boot_target_mock.assert_called_once_with(mock.ANY, task) set_boot_device_mock.assert_called_once_with( task, boot_devices.ISCSIBOOT, persistent=True) update_boot_mode_mock.assert_called_once_with(task) update_secure_boot_mode_mock.assert_called_once_with(task, True) self.assertTrue(task.node.driver_internal_info.get( 'ilo_uefi_iscsi_boot')) @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'is_iscsi_boot', spec_set=True, autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy', spec_set=True, autospec=True) def test_prepare_instance_boot_from_volume_bios( self, get_boot_mode_mock, is_iscsi_boot_mock, cleanup_vmedia_boot_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: is_iscsi_boot_mock.return_value = True get_boot_mode_mock.return_value = 'bios' self.assertRaisesRegex(exception.InstanceDeployFailure, "Virtual media can not boot volume " "in BIOS boot mode.", task.driver.boot.prepare_instance, task) cleanup_vmedia_boot_mock.assert_called_once_with(task) self.assertIsNone(task.node.driver_internal_info.get( 'ilo_uefi_iscsi_boot')) @mock.patch.object(ilo_common, 'cleanup_vmedia_boot', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'is_iscsi_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 
'setup_vmedia_for_boot', spec_set=True, autospec=True) @mock.patch.object(ilo_boot, '_get_boot_iso', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) def test_prepare_instance_boot_ramdisk(self, update_secure_boot_mode_mock, update_boot_mode_mock, set_boot_device_mock, get_boot_iso_mock, setup_vmedia_mock, is_iscsi_boot_mock, cleanup_vmedia_boot_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: instance_info = task.node.instance_info instance_info['capabilities'] = '{"boot_option": "ramdisk"}' task.node.instance_info = instance_info task.node.save() is_iscsi_boot_mock.return_value = False url = 'http://myserver/boot.iso' get_boot_iso_mock.return_value = url task.driver.boot.prepare_instance(task) cleanup_vmedia_boot_mock.assert_called_once_with(task) get_boot_iso_mock.assert_called_once_with(task, None) setup_vmedia_mock.assert_called_once_with(task, url) set_boot_device_mock.assert_called_once_with( task, boot_devices.CDROM, persistent=True) update_boot_mode_mock.assert_called_once_with(task) update_secure_boot_mode_mock.assert_called_once_with(task, True) def test_validate_rescue(self): driver_info = self.node.driver_info driver_info['ilo_rescue_iso'] = 'rescue.iso' self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot.validate_rescue(task) def test_validate_rescue_no_rescue_ramdisk(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaisesRegex(exception.MissingParameterValue, 'Missing.*ilo_rescue_iso', task.driver.boot.validate_rescue, task) class IloPXEBootTestCase(test_common.BaseIloTest): boot_interface = 'ilo-pxe' @mock.patch.object(ilo_boot, 'prepare_node_for_deploy', 
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True,
                       autospec=True)
    def _test_prepare_ramdisk_needs_node_prep(self, pxe_prepare_ramdisk_mock,
                                              prepare_node_mock, prov_state):
        self.node.provision_state = prov_state
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertIsNone(
                task.driver.boot.prepare_ramdisk(task, None))
            prepare_node_mock.assert_called_once_with(task)
            pxe_prepare_ramdisk_mock.assert_called_once_with(
                mock.ANY, task, None)

    def test_prepare_ramdisk_in_deploying(self):
        self._test_prepare_ramdisk_needs_node_prep(prov_state=states.DEPLOYING)

    def test_prepare_ramdisk_in_rescuing(self):
        self._test_prepare_ramdisk_needs_node_prep(prov_state=states.RESCUING)

    def test_prepare_ramdisk_in_cleaning(self):
        self._test_prepare_ramdisk_needs_node_prep(prov_state=states.CLEANING)

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'clean_up_instance',
                       spec_set=True, autospec=True)
    def test_clean_up_instance(self, pxe_cleanup_mock, node_power_mock,
                               update_secure_boot_mode_mock,
                               is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Configure the mock before exercising clean_up_instance so the
            # non-iSCSI code path is actually taken.
            is_iscsi_boot_mock.return_value = False
            task.driver.boot.clean_up_instance(task)
            node_power_mock.assert_called_once_with(task, states.POWER_OFF)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)
            pxe_cleanup_mock.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'clean_up_instance',
                       spec_set=True, autospec=True)
    def test_clean_up_instance_boot_from_volume_bios(
            self, pxe_cleanup_mock, node_power_mock,
            update_secure_boot_mode_mock, is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Configure the mock before the call so the BIOS (non-UEFI
            # iSCSI) path is exercised.
            is_iscsi_boot_mock.return_value = True
            task.driver.boot.clean_up_instance(task)
            node_power_mock.assert_called_once_with(task, states.POWER_OFF)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)
            pxe_cleanup_mock.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_management.IloManagement,
                       'clear_iscsi_boot_target',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action',
                       spec_set=True, autospec=True)
    def test_clean_up_instance_boot_from_volume(self, node_power_mock,
                                                update_secure_boot_mode_mock,
                                                clear_iscsi_boot_target_mock,
                                                is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['ilo_uefi_iscsi_boot'] = True
            task.node.driver_internal_info = driver_internal_info
            task.node.save()
            is_iscsi_boot_mock.return_value = True
            task.driver.boot.clean_up_instance(task)
            clear_iscsi_boot_target_mock.assert_called_once_with(
                mock.ANY, task)
            node_power_mock.assert_called_once_with(task, states.POWER_OFF)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)
            self.assertIsNone(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance',
                       spec_set=True, autospec=True)
    def test_prepare_instance(self, pxe_prepare_instance_mock,
                              update_boot_mode_mock,
                              update_secure_boot_mode_mock,
                              get_boot_mode_mock,
                              is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Mocks must be configured before prepare_instance runs.
            is_iscsi_boot_mock.return_value = False
            get_boot_mode_mock.return_value = 'uefi'
            task.driver.boot.prepare_instance(task)
            update_boot_mode_mock.assert_called_once_with(task)
            update_secure_boot_mode_mock.assert_called_once_with(task, True)
            pxe_prepare_instance_mock.assert_called_once_with(mock.ANY, task)
            self.assertIsNone(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance',
                       spec_set=True, autospec=True)
    def test_prepare_instance_bios(self, pxe_prepare_instance_mock,
                                   update_boot_mode_mock,
                                   update_secure_boot_mode_mock,
                                   get_boot_mode_mock,
                                   is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Mocks must be configured before prepare_instance runs.
            is_iscsi_boot_mock.return_value = False
            get_boot_mode_mock.return_value = 'bios'
            task.driver.boot.prepare_instance(task)
            update_boot_mode_mock.assert_called_once_with(task)
            update_secure_boot_mode_mock.assert_called_once_with(task, True)
            pxe_prepare_instance_mock.assert_called_once_with(mock.ANY, task)
            self.assertIsNone(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_management.IloManagement, 'set_iscsi_boot_target',
spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_boot_mode', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_secure_boot_mode', spec_set=True, autospec=True) def test_prepare_instance_boot_from_volume( self, update_secure_boot_mode_mock, update_boot_mode_mock, set_boot_device_mock, set_iscsi_boot_target_mock, get_boot_mode_mock, is_iscsi_boot_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: is_iscsi_boot_mock.return_value = True get_boot_mode_mock.return_value = 'uefi' task.driver.boot.prepare_instance(task) set_iscsi_boot_target_mock.assert_called_once_with(mock.ANY, task) set_boot_device_mock.assert_called_once_with( task, boot_devices.ISCSIBOOT, persistent=True) update_boot_mode_mock.assert_called_once_with(task) update_secure_boot_mode_mock.assert_called_once_with(task, True) self.assertTrue(task.node.driver_internal_info.get( 'ilo_uefi_iscsi_boot')) @mock.patch.object(ipxe.iPXEBoot, '__init__', lambda self: None) class IloiPXEBootTestCase(test_common.BaseIloTest): boot_interface = 'ilo-ipxe' def setUp(self): super(IloiPXEBootTestCase, self).setUp() self.config(enabled_boot_interfaces=['ilo-ipxe']) @mock.patch.object(ilo_boot, 'prepare_node_for_deploy', spec_set=True, autospec=True) @mock.patch.object(ipxe.iPXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) def _test_prepare_ramdisk_needs_node_prep(self, pxe_prepare_ramdisk_mock, prepare_node_mock, prov_state): self.node.provision_state = prov_state self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertIsNone( task.driver.boot.prepare_ramdisk(task, None)) prepare_node_mock.assert_called_once_with(task) pxe_prepare_ramdisk_mock.assert_called_once_with( mock.ANY, task, None) def test_prepare_ramdisk_in_deploying(self): self._test_prepare_ramdisk_needs_node_prep(prov_state=states.DEPLOYING) 
    def test_prepare_ramdisk_in_rescuing(self):
        self._test_prepare_ramdisk_needs_node_prep(prov_state=states.RESCUING)

    def test_prepare_ramdisk_in_cleaning(self):
        self._test_prepare_ramdisk_needs_node_prep(prov_state=states.CLEANING)

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action',
                       spec_set=True, autospec=True)
    @mock.patch.object(ipxe.iPXEBoot, 'clean_up_instance',
                       spec_set=True, autospec=True)
    def test_clean_up_instance(self, pxe_cleanup_mock, node_power_mock,
                               update_secure_boot_mode_mock,
                               is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Configure the mock before exercising clean_up_instance so the
            # non-iSCSI code path is actually taken.
            is_iscsi_boot_mock.return_value = False
            task.driver.boot.clean_up_instance(task)
            node_power_mock.assert_called_once_with(task, states.POWER_OFF)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)
            pxe_cleanup_mock.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action',
                       spec_set=True, autospec=True)
    @mock.patch.object(ipxe.iPXEBoot, 'clean_up_instance',
                       spec_set=True, autospec=True)
    def test_clean_up_instance_boot_from_volume_bios(
            self, pxe_cleanup_mock, node_power_mock,
            update_secure_boot_mode_mock, is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Configure the mock before the call so the BIOS (non-UEFI
            # iSCSI) path is exercised.
            is_iscsi_boot_mock.return_value = True
            task.driver.boot.clean_up_instance(task)
            node_power_mock.assert_called_once_with(task, states.POWER_OFF)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)
            pxe_cleanup_mock.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_management.IloManagement,
                       'clear_iscsi_boot_target', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action',
                       spec_set=True, autospec=True)
    def test_clean_up_instance_boot_from_volume(self, node_power_mock,
                                                update_secure_boot_mode_mock,
                                                clear_iscsi_boot_target_mock,
                                                is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['ilo_uefi_iscsi_boot'] = True
            task.node.driver_internal_info = driver_internal_info
            task.node.save()
            is_iscsi_boot_mock.return_value = True
            task.driver.boot.clean_up_instance(task)
            clear_iscsi_boot_target_mock.assert_called_once_with(
                mock.ANY, task)
            node_power_mock.assert_called_once_with(task, states.POWER_OFF)
            update_secure_boot_mode_mock.assert_called_once_with(task, False)
            self.assertIsNone(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ipxe.iPXEBoot, 'prepare_instance',
                       spec_set=True, autospec=True)
    def test_prepare_instance(self, pxe_prepare_instance_mock,
                              update_boot_mode_mock,
                              update_secure_boot_mode_mock,
                              get_boot_mode_mock,
                              is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Mocks must be configured before prepare_instance runs.
            is_iscsi_boot_mock.return_value = False
            get_boot_mode_mock.return_value = 'uefi'
            task.driver.boot.prepare_instance(task)
            update_boot_mode_mock.assert_called_once_with(task)
            update_secure_boot_mode_mock.assert_called_once_with(task, True)
            pxe_prepare_instance_mock.assert_called_once_with(mock.ANY, task)
            self.assertIsNone(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ipxe.iPXEBoot, 'prepare_instance',
                       spec_set=True, autospec=True)
    def test_prepare_instance_bios(self, pxe_prepare_instance_mock,
                                   update_boot_mode_mock,
                                   update_secure_boot_mode_mock,
                                   get_boot_mode_mock,
                                   is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            # Mocks must be configured before prepare_instance runs.
            is_iscsi_boot_mock.return_value = False
            get_boot_mode_mock.return_value = 'bios'
            task.driver.boot.prepare_instance(task)
            update_boot_mode_mock.assert_called_once_with(task)
            update_secure_boot_mode_mock.assert_called_once_with(task, True)
            pxe_prepare_instance_mock.assert_called_once_with(mock.ANY, task)
            self.assertIsNone(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

    @mock.patch.object(deploy_utils, 'is_iscsi_boot',
                       spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_management.IloManagement, 'set_iscsi_boot_target',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_set_boot_device',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_boot_mode',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'update_secure_boot_mode',
                       spec_set=True, autospec=True)
    def test_prepare_instance_boot_from_volume(
            self, update_secure_boot_mode_mock, update_boot_mode_mock,
            set_boot_device_mock, set_iscsi_boot_target_mock,
            get_boot_mode_mock, is_iscsi_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            is_iscsi_boot_mock.return_value = True
            get_boot_mode_mock.return_value = 'uefi'
            task.driver.boot.prepare_instance(task)
            set_iscsi_boot_target_mock.assert_called_once_with(mock.ANY, task)
            set_boot_device_mock.assert_called_once_with(
                task, boot_devices.ISCSIBOOT, persistent=True)
            update_boot_mode_mock.assert_called_once_with(task)
            update_secure_boot_mode_mock.assert_called_once_with(task, True)
            self.assertTrue(task.node.driver_internal_info.get(
                'ilo_uefi_iscsi_boot'))

ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_inspect.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for Management Interface used by iLO modules.""" import mock from ironic.common import exception from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.conductor import utils as conductor_utils from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import inspect as ilo_inspect from ironic.drivers.modules.ilo import power as ilo_power from ironic.drivers.modules import inspect_utils from ironic.tests.unit.drivers.modules.ilo import test_common class IloInspectTestCase(test_common.BaseIloTest): def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: properties = ilo_common.REQUIRED_PROPERTIES.copy() properties.update(ilo_common.SNMP_PROPERTIES) properties.update(ilo_common.SNMP_OPTIONAL_PROPERTIES) self.assertEqual(properties, task.driver.inspect.get_properties()) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, driver_info_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.inspect.validate(task) driver_info_mock.assert_called_once_with(task.node) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(inspect_utils, 'create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_ok(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 
'bb:bb:bb:bb:bb:bb'} capabilities = {} result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.inspect.inspect_hardware(task) self.assertEqual(properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task, macs) @mock.patch.object(ilo_inspect.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(inspect_utils, 'create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_ok_local_gb_zero(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock, log_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': 0, 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} capabilities = {} result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: properties = task.node.properties properties['local_gb'] = 10 task.node.properties = properties task.node.save() expected_properties = {'memory_mb': '512', 'local_gb': 10, 'cpus': '1', 'cpu_arch': 
'x86_64'} task.driver.inspect.inspect_hardware(task) self.assertEqual(expected_properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) self.assertTrue(log_mock.called) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task, macs) @mock.patch.object(ilo_inspect.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(inspect_utils, 'create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_ok_gen8(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock, log_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': 10, 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} capabilities = {'server_model': 'Gen8'} result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_properties = {'memory_mb': '512', 'local_gb': 10, 'cpus': '1', 'cpu_arch': 'x86_64', 'capabilities': 'server_model:Gen8'} task.driver.inspect.inspect_hardware(task) self.assertEqual(expected_properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) self.assertTrue(log_mock.called) get_capabilities_mock.assert_called_once_with(task.node, 
ilo_object_mock) create_port_mock.assert_called_once_with(task, macs) @mock.patch.object(ilo_inspect.LOG, 'warning', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(inspect_utils, 'create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_ok_gen10(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock, log_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': 10, 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'NIC.LOM.1.1': 'aa:aa:aa:aa:aa:aa'} capabilities = {'server_model': 'Gen10'} result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_properties = {'memory_mb': '512', 'local_gb': 10, 'cpus': '1', 'cpu_arch': 'x86_64', 'capabilities': 'server_model:Gen10'} task.driver.inspect.inspect_hardware(task) self.assertEqual(expected_properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) self.assertFalse(log_mock.called) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task, macs) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(inspect_utils, 'create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) 
@mock.patch.object(conductor_utils, 'node_power_action', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_ok_power_off(self, get_ilo_object_mock, power_mock, set_power_mock, get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} capabilities = {} result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.inspect.inspect_hardware(task) self.assertEqual(properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) set_power_mock.assert_any_call(task, states.POWER_ON) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task, macs) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(inspect_utils, 'create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_capabilities_ok(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 
'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} capability_str = 'sriov_enabled:true' capabilities = {'sriov_enabled': 'true'} result = {'properties': properties, 'macs': macs} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.inspect.inspect_hardware(task) expected_properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64', 'capabilities': capability_str} self.assertEqual(expected_properties, task.node.properties) power_mock.assert_called_once_with(mock.ANY, task) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task, macs) @mock.patch.object(ilo_inspect, '_get_capabilities', spec_set=True, autospec=True) @mock.patch.object(inspect_utils, 'create_ports_if_not_exist', spec_set=True, autospec=True) @mock.patch.object(ilo_inspect, '_get_essential_properties', spec_set=True, autospec=True) @mock.patch.object(ilo_power.IloPower, 'get_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inspect_essential_capabilities_exist_ok(self, get_ilo_object_mock, power_mock, get_essential_mock, create_port_mock, get_capabilities_mock): ilo_object_mock = get_ilo_object_mock.return_value properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64', 'somekey': 'somevalue'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} result = {'properties': properties, 'macs': macs} capabilities = {'sriov_enabled': 'true'} get_essential_mock.return_value = result get_capabilities_mock.return_value = capabilities power_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, 
self.node.uuid, shared=False) as task: task.node.properties = {'capabilities': 'boot_mode:uefi'} expected_capabilities = ('sriov_enabled:true,' 'boot_mode:uefi') set1 = set(expected_capabilities.split(',')) task.driver.inspect.inspect_hardware(task) end_capabilities = task.node.properties['capabilities'] set2 = set(end_capabilities.split(',')) self.assertEqual(set1, set2) expected_properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64', 'capabilities': end_capabilities} power_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(task.node.properties, expected_properties) get_essential_mock.assert_called_once_with(task.node, ilo_object_mock) get_capabilities_mock.assert_called_once_with(task.node, ilo_object_mock) create_port_mock.assert_called_once_with(task, macs) class TestInspectPrivateMethods(test_common.BaseIloTest): def test__get_essential_properties_ok(self): ilo_mock = mock.MagicMock(spec=['get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result actual_result = ilo_inspect._get_essential_properties(self.node, ilo_mock) self.assertEqual(result, actual_result) def test__get_essential_properties_fail(self): ilo_mock = mock.MagicMock( spec=['get_additional_capabilities', 'get_essential_properties']) # Missing key: cpu_arch properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'} result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result result = self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) self.assertEqual( str(result), ("Failed to inspect hardware. 
Reason: Server didn't return the " "key(s): cpu_arch")) def test__get_essential_properties_fail_invalid_format(self): ilo_mock = mock.MagicMock( spec=['get_additional_capabilities', 'get_essential_properties']) # Not a dict properties = ['memory_mb', '512', 'local_gb', '10', 'cpus', '1'] macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] capabilities = '' result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result ilo_mock.get_additional_capabilities.return_value = capabilities self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) def test__get_essential_properties_fail_mac_invalid_format(self): ilo_mock = mock.MagicMock(spec=['get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} # Not a dict macs = 'aa:aa:aa:aa:aa:aa' result = {'properties': properties, 'macs': macs} ilo_mock.get_essential_properties.return_value = result self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) def test__get_essential_properties_hardware_port_empty(self): ilo_mock = mock.MagicMock( spec=['get_additional_capabilities', 'get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} # Not a dictionary macs = None result = {'properties': properties, 'macs': macs} capabilities = '' ilo_mock.get_essential_properties.return_value = result ilo_mock.get_additional_capabilities.return_value = capabilities self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) def test__get_essential_properties_hardware_port_not_dict(self): ilo_mock = mock.MagicMock(spec=['get_essential_properties']) properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1', 'cpu_arch': 'x86_64'} # Not a dict macs = 'aa:bb:cc:dd:ee:ff' result = {'properties': properties, 'macs': macs} 
ilo_mock.get_essential_properties.return_value = result result = self.assertRaises( exception.HardwareInspectionFailure, ilo_inspect._get_essential_properties, self.node, ilo_mock) @mock.patch.object(utils, 'get_updated_capabilities', spec_set=True, autospec=True) def test__get_capabilities_ok(self, capability_mock): ilo_mock = mock.MagicMock(spec=['get_server_capabilities']) capabilities = {'ilo_firmware_version': 'xyz'} ilo_mock.get_server_capabilities.return_value = capabilities cap = ilo_inspect._get_capabilities(self.node, ilo_mock) self.assertEqual(cap, capabilities) def test__validate_ok(self): properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '2', 'cpu_arch': 'x86_arch'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa'} data = {'properties': properties, 'macs': macs} valid_keys = ilo_inspect.IloInspect.ESSENTIAL_PROPERTIES ilo_inspect._validate(self.node, data) self.assertEqual(sorted(set(properties)), sorted(valid_keys)) def test__validate_essential_keys_fail_missing_key(self): properties = {'memory_mb': '512', 'local_gb': '10', 'cpus': '1'} macs = {'Port 1': 'aa:aa:aa:aa:aa:aa'} data = {'properties': properties, 'macs': macs} self.assertRaises(exception.HardwareInspectionFailure, ilo_inspect._validate, self.node, data) def test___create_supported_capabilities_dict(self): capabilities = {} expected = {} for key in ilo_inspect.CAPABILITIES_KEYS: capabilities.update({key: 'true'}) expected.update({key: 'true'}) capabilities.update({'unknown_property': 'true'}) cap = ilo_inspect._create_supported_capabilities_dict(capabilities) self.assertEqual(expected, cap) def test___create_supported_capabilities_dict_excluded_capability(self): capabilities = {} expected = {} for key in ilo_inspect.CAPABILITIES_KEYS - {'has_ssd'}: capabilities.update({key: 'true'}) expected.update({key: 'true'}) cap = ilo_inspect._create_supported_capabilities_dict(capabilities) self.assertEqual(expected, cap) 
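The private-method tests above build their iLO client doubles with `mock.MagicMock(spec=[...])`. A minimal standalone sketch of that pattern, using plain `unittest.mock` rather than Ironic code (the method names here are only illustrative):

```python
from unittest import mock

# A spec-restricted mock only exposes the listed attributes; touching
# anything else raises AttributeError instead of silently returning a
# fresh child mock, so a typo'd call in the code under test fails loudly.
ilo_mock = mock.MagicMock(spec=['get_essential_properties'])
ilo_mock.get_essential_properties.return_value = {'cpus': '1'}

print(ilo_mock.get_essential_properties())  # the stubbed value

try:
    ilo_mock.get_server_capabilities()  # not in the spec list
except AttributeError:
    print('rejected: not part of the declared spec')
```

This is why tests such as `test__get_essential_properties_fail` list every method they expect to be called in the `spec` argument: an unexpected call surfaces as an error rather than a passing no-op.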
ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_raid.py

# Copyright 2018 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for Raid Interface used by iLO5."""

import mock
from oslo_utils import importutils

from ironic.common import exception
from ironic.common import raid
from ironic.common import states
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules.ilo import common as ilo_common
from ironic.drivers.modules.ilo import raid as ilo_raid
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

ilo_error = importutils.try_import('proliantutils.exception')

INFO_DICT = db_utils.get_test_ilo_info()


class Ilo5RAIDTestCase(db_base.DbTestCase):

    def setUp(self):
        super(Ilo5RAIDTestCase, self).setUp()
        self.driver = mock.Mock(raid=ilo_raid.Ilo5RAID())
        self.target_raid_config = {
            "logical_disks": [
                {'size_gb': 200, 'raid_level': 0, 'is_root_volume': True},
                {'size_gb': 200, 'raid_level': 5}
            ]}
        n = {
            'driver': 'ilo5',
            'driver_info': INFO_DICT,
            'target_raid_config': self.target_raid_config,
        }
        self.config(enabled_hardware_types=['ilo5'],
                    enabled_boot_interfaces=['ilo-virtual-media'],
                    enabled_console_interfaces=['ilo'],
enabled_deploy_interfaces=['iscsi'], enabled_inspect_interfaces=['ilo'], enabled_management_interfaces=['ilo5'], enabled_power_interfaces=['ilo'], enabled_raid_interfaces=['ilo5']) self.node = obj_utils.create_test_node(self.context, **n) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def _test__prepare_for_read_raid_create_raid( self, mock_reboot, mock_build_opt): with task_manager.acquire(self.context, self.node.uuid) as task: mock_build_opt.return_value = [] task.driver.raid._prepare_for_read_raid(task, 'create_raid') self.assertTrue( task.node.driver_internal_info.get( 'ilo_raid_create_in_progress')) if task.node.clean_step: self.assertTrue( task.node.driver_internal_info.get( 'cleaning_reboot')) self.assertFalse( task.node.driver_internal_info.get( 'skip_current_clean_step')) if task.node.deploy_step: self.assertTrue( task.node.driver_internal_info.get( 'deployment_reboot')) self.assertFalse( task.node.driver_internal_info.get( 'skip_current_deploy_step')) mock_reboot.assert_called_once_with(task, states.REBOOT) def test__prepare_for_read_raid_create_raid_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test__prepare_for_read_raid_create_raid() def test__prepare_for_read_raid_create_raid_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test__prepare_for_read_raid_create_raid() @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def _test__prepare_for_read_raid_delete_raid( self, mock_reboot, mock_build_opt): with task_manager.acquire(self.context, self.node.uuid) as task: mock_build_opt.return_value = [] task.driver.raid._prepare_for_read_raid(task, 'delete_raid') self.assertTrue( task.node.driver_internal_info.get( 'ilo_raid_delete_in_progress')) 
if task.node.clean_step: self.assertTrue( task.node.driver_internal_info.get( 'cleaning_reboot')) self.assertEqual( task.node.driver_internal_info.get( 'skip_current_clean_step'), False) else: self.assertTrue( task.node.driver_internal_info.get( 'deployment_reboot')) self.assertEqual( task.node.driver_internal_info.get( 'skip_current_deploy_step'), False) mock_reboot.assert_called_once_with(task, states.REBOOT) def test__prepare_for_read_raid_delete_raid_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test__prepare_for_read_raid_delete_raid() def test__prepare_for_read_raid_delete_raid_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test__prepare_for_read_raid_delete_raid() @mock.patch.object(ilo_raid.Ilo5RAID, '_prepare_for_read_raid') @mock.patch.object(raid, 'filter_target_raid_config') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_create_configuration( self, ilo_mock, filter_target_raid_config_mock, prepare_raid_mock): ilo_mock_object = ilo_mock.return_value with task_manager.acquire(self.context, self.node.uuid) as task: filter_target_raid_config_mock.return_value = ( self.target_raid_config) result = task.driver.raid.create_configuration(task) prepare_raid_mock.assert_called_once_with(task, 'create_raid') if task.node.clean_step: self.assertEqual(states.CLEANWAIT, result) else: self.assertEqual(states.DEPLOYWAIT, result) (ilo_mock_object.create_raid_configuration. 
assert_called_once_with(self.target_raid_config)) def test_create_configuration_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration() def test_create_configuration_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration() @mock.patch.object(raid, 'update_raid_info', autospec=True) @mock.patch.object(raid, 'filter_target_raid_config') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_create_configuration_with_read_raid( self, ilo_mock, filter_target_raid_config_mock, update_raid_mock): raid_conf = {u'logical_disks': [{u'size_gb': 89, u'physical_disks': [u'5I:1:1'], u'raid_level': u'0', u'root_device_hint': {u'wwn': u'0x600508b1001c7e87'}, u'controller': u'Smart Array P822 in Slot 1', u'volume_name': u'0006EB7BPDVTF0BRH5L0EAEDDA'}] } ilo_mock_object = ilo_mock.return_value driver_internal_info = self.node.driver_internal_info driver_internal_info['ilo_raid_create_in_progress'] = True if self.node.clean_step: driver_internal_info['skip_current_clean_step'] = False driver_internal_info['cleaning_reboot'] = True else: driver_internal_info['skip_current_deploy_step'] = False driver_internal_info['deployment_reboot'] = True self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: filter_target_raid_config_mock.return_value = ( self.target_raid_config) ilo_mock_object.read_raid_configuration.return_value = raid_conf task.driver.raid.create_configuration(task) update_raid_mock.assert_called_once_with(task.node, raid_conf) self.assertNotIn('ilo_raid_create_in_progress', task.node.driver_internal_info) if task.node.clean_step: self.assertNotIn('skip_current_clean_step', task.node.driver_internal_info) def test_create_configuration_with_read_raid_cleaning(self): self.node.clean_step = {'step': 
'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_with_read_raid() def test_create_configuration_with_read_raid_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_with_read_raid() @mock.patch.object(raid, 'filter_target_raid_config') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_create_configuration_with_read_raid_failed( self, ilo_mock, filter_target_raid_config_mock): raid_conf = {u'logical_disks': []} driver_internal_info = self.node.driver_internal_info driver_internal_info['ilo_raid_create_in_progress'] = True driver_internal_info['skip_current_clean_step'] = False self.node.driver_internal_info = driver_internal_info self.node.save() ilo_mock_object = ilo_mock.return_value if self.node.clean_step: exept = exception.NodeCleaningFailure else: exept = exception.InstanceDeployFailure with task_manager.acquire(self.context, self.node.uuid) as task: filter_target_raid_config_mock.return_value = ( self.target_raid_config) ilo_mock_object.read_raid_configuration.return_value = raid_conf self.assertRaises(exept, task.driver.raid.create_configuration, task) self.assertNotIn('ilo_raid_create_in_progress', task.node.driver_internal_info) if task.node.clean_step: self.assertNotIn('skip_current_clean_step', task.node.driver_internal_info) else: self.assertNotIn('skip_current_deploy_step', task.node.driver_internal_info) def test_create_configuration_with_read_raid_failed_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_with_read_raid_failed() def test_create_configuration_with_read_raid_failed_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_with_read_raid_failed() @mock.patch.object(raid, 'filter_target_raid_config') 
@mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_create_configuration_empty_target_raid_config( self, ilo_mock, filter_target_raid_config_mock): self.node.target_raid_config = {} self.node.save() ilo_mock_object = ilo_mock.return_value with task_manager.acquire(self.context, self.node.uuid) as task: msg = "Node %s has no target RAID configuration" % self.node.uuid filter_target_raid_config_mock.side_effect = ( exception.MissingParameterValue(msg)) self.assertRaises(exception.MissingParameterValue, task.driver.raid.create_configuration, task) self.assertFalse(ilo_mock_object.create_raid_configuration.called) def test_create_configuration_empty_target_raid_config_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_empty_target_raid_config() def test_create_configuration_empty_target_raid_config_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_empty_target_raid_config() @mock.patch.object(ilo_raid.Ilo5RAID, '_prepare_for_read_raid') @mock.patch.object(raid, 'filter_target_raid_config') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_create_configuration_skip_root( self, ilo_mock, filter_target_raid_config_mock, prepare_raid_mock): ilo_mock_object = ilo_mock.return_value with task_manager.acquire(self.context, self.node.uuid) as task: exp_target_raid_config = { "logical_disks": [ {'size_gb': 200, 'raid_level': 5} ]} filter_target_raid_config_mock.return_value = ( exp_target_raid_config) result = task.driver.raid.create_configuration( task, create_root_volume=False) (ilo_mock_object.create_raid_configuration. 
assert_called_once_with(exp_target_raid_config)) if task.node.clean_step: self.assertEqual(states.CLEANWAIT, result) else: self.assertEqual(states.DEPLOYWAIT, result) prepare_raid_mock.assert_called_once_with(task, 'create_raid') self.assertEqual( exp_target_raid_config, task.node.driver_internal_info['target_raid_config']) def test_create_configuration_skip_root_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_skip_root() def test_create_configuration_skip_root_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_skip_root() @mock.patch.object(ilo_raid.Ilo5RAID, '_prepare_for_read_raid') @mock.patch.object(raid, 'filter_target_raid_config') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_create_configuration_skip_non_root( self, ilo_mock, filter_target_raid_config_mock, prepare_raid_mock): ilo_mock_object = ilo_mock.return_value with task_manager.acquire(self.context, self.node.uuid) as task: exp_target_raid_config = { "logical_disks": [ {'size_gb': 200, 'raid_level': 0, 'is_root_volume': True} ]} filter_target_raid_config_mock.return_value = ( exp_target_raid_config) result = task.driver.raid.create_configuration( task, create_nonroot_volumes=False) (ilo_mock_object.create_raid_configuration. 
assert_called_once_with(exp_target_raid_config)) prepare_raid_mock.assert_called_once_with(task, 'create_raid') if task.node.clean_step: self.assertEqual(states.CLEANWAIT, result) else: self.assertEqual(states.DEPLOYWAIT, result) self.assertEqual( exp_target_raid_config, task.node.driver_internal_info['target_raid_config']) def test_create_configuration_skip_non_root_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_skip_non_root() def test_create_configuration_skip_non_root_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_skip_non_root() @mock.patch.object(raid, 'filter_target_raid_config') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_create_configuration_skip_root_skip_non_root( self, ilo_mock, filter_target_raid_config_mock): ilo_mock_object = ilo_mock.return_value with task_manager.acquire(self.context, self.node.uuid) as task: msg = "Node %s has no target RAID configuration" % self.node.uuid filter_target_raid_config_mock.side_effect = ( exception.MissingParameterValue(msg)) self.assertRaises( exception.MissingParameterValue, task.driver.raid.create_configuration, task, False, False) self.assertFalse(ilo_mock_object.create_raid_configuration.called) def test_create_configuration_skip_root_skip_non_root_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_skip_root_skip_non_root() def test_create_configuration_skip_root_skip_non_root_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_skip_root_skip_non_root() @mock.patch.object(ilo_raid.Ilo5RAID, '_set_step_failed') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def 
_test_create_configuration_ilo_error(self, ilo_mock, set_step_failed_mock): ilo_mock_object = ilo_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.create_raid_configuration.side_effect = exc with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.raid.create_configuration( task, create_nonroot_volumes=False) set_step_failed_mock.assert_called_once_with( task, 'Failed to create raid configuration ' 'on node %s' % self.node.uuid, exc) self.assertNotIn('ilo_raid_create_in_progress', task.node.driver_internal_info) if task.node.clean_step: self.assertNotIn('skip_current_clean_step', task.node.driver_internal_info) else: self.assertNotIn('skip_current_deploy_step', task.node.driver_internal_info) def test_create_configuration_ilo_error_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_ilo_error() def test_create_configuration_ilo_error_cleaning_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_create_configuration_ilo_error() @mock.patch.object(ilo_raid.Ilo5RAID, '_prepare_for_read_raid') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_delete_configuration(self, ilo_mock, prepare_raid_mock): ilo_mock_object = ilo_mock.return_value with task_manager.acquire(self.context, self.node.uuid) as task: result = task.driver.raid.delete_configuration(task) if task.node.clean_step: self.assertEqual(states.CLEANWAIT, result) else: self.assertEqual(states.DEPLOYWAIT, result) ilo_mock_object.delete_raid_configuration.assert_called_once_with() prepare_raid_mock.assert_called_once_with(task, 'delete_raid') def test_delete_configuration_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration() def test_delete_configuration_deploying(self): self.node.deploy_step = {'step': 
'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration() @mock.patch.object(ilo_raid.LOG, 'info', spec_set=True, autospec=True) @mock.patch.object(ilo_raid.Ilo5RAID, '_prepare_for_read_raid') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_delete_configuration_no_logical_drive( self, ilo_mock, prepare_raid_mock, log_mock): ilo_mock_object = ilo_mock.return_value exc = ilo_error.IloLogicalDriveNotFoundError('No logical drive found') with task_manager.acquire(self.context, self.node.uuid) as task: ilo_mock_object.delete_raid_configuration.side_effect = exc task.driver.raid.delete_configuration(task) self.assertTrue(log_mock.called) def test_delete_configuration_no_logical_drive_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration_no_logical_drive() def test_delete_configuration_no_logical_drive_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration_no_logical_drive() @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_delete_configuration_with_read_raid(self, ilo_mock): raid_conf = {u'logical_disks': []} driver_internal_info = self.node.driver_internal_info driver_internal_info['ilo_raid_delete_in_progress'] = True driver_internal_info['skip_current_clean_step'] = False self.node.driver_internal_info = driver_internal_info self.node.save() ilo_mock_object = ilo_mock.return_value if self.node.clean_step: skip_field_name = 'skip_current_clean_step' else: skip_field_name = 'skip_current_deploy_step' with task_manager.acquire(self.context, self.node.uuid) as task: ilo_mock_object.read_raid_configuration.return_value = raid_conf task.driver.raid.delete_configuration(task) self.assertEqual(self.node.raid_config, {}) self.assertNotIn('ilo_raid_delete_in_progress', task.node.driver_internal_info) 
self.assertNotIn(skip_field_name, task.node.driver_internal_info) def test_delete_configuration_with_read_raid_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration_with_read_raid() def test_delete_configuration_with_read_raid_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration_with_read_raid() @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test_delete_configuration_with_read_raid_failed(self, ilo_mock): raid_conf = {u'logical_disks': [{'size_gb': 200, 'raid_level': 0, 'is_root_volume': True}]} driver_internal_info = self.node.driver_internal_info driver_internal_info['ilo_raid_delete_in_progress'] = True driver_internal_info['skip_current_clean_step'] = False self.node.driver_internal_info = driver_internal_info self.node.save() ilo_mock_object = ilo_mock.return_value if self.node.clean_step: exept = exception.NodeCleaningFailure else: exept = exception.InstanceDeployFailure with task_manager.acquire(self.context, self.node.uuid) as task: ilo_mock_object.read_raid_configuration.return_value = raid_conf self.assertRaises(exept, task.driver.raid.delete_configuration, task) self.assertNotIn('ilo_raid_delete_in_progress', task.node.driver_internal_info) if task.node.clean_step: self.assertNotIn('skip_current_clean_step', task.node.driver_internal_info) else: self.assertNotIn('skip_current_deploy_step', task.node.driver_internal_info) def test_delete_configuration_with_read_raid_failed_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration_with_read_raid_failed() def test_delete_configuration_with_read_raid_failed_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_delete_configuration_with_read_raid_failed() 
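The error-path tests that follow drive the failure branch by assigning an exception to a mock's `side_effect`. A self-contained sketch of the mechanism, with a generic `RuntimeError` standing in for `ilo_error.IloError`:

```python
from unittest import mock

# Assigning an exception instance to side_effect makes the stubbed call
# raise it, which lets a test exercise the error-handling branch while
# still recording the call for later assertions.
client = mock.MagicMock(spec=['delete_raid_configuration'])
client.delete_raid_configuration.side_effect = RuntimeError('error')

try:
    client.delete_raid_configuration()
except RuntimeError as exc:
    print('raised as configured:', exc)

# The call was recorded before the exception propagated.
client.delete_raid_configuration.assert_called_once_with()
```

The call is recorded before the exception propagates, so the test can both catch the failure and verify that the operation was attempted exactly once.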
    @mock.patch.object(ilo_raid.Ilo5RAID, '_set_step_failed')
    @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True)
    def _test_delete_configuration_ilo_error(self, ilo_mock,
                                             set_step_failed_mock):
        ilo_mock_object = ilo_mock.return_value
        exc = ilo_error.IloError('error')
        ilo_mock_object.delete_raid_configuration.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.raid.delete_configuration(task)
            ilo_mock_object.delete_raid_configuration.assert_called_once_with()
            self.assertNotIn('ilo_raid_delete_in_progress',
                             task.node.driver_internal_info)
            self.assertNotIn('cleaning_reboot',
                             task.node.driver_internal_info)
            self.assertNotIn('skip_current_clean_step',
                             task.node.driver_internal_info)
            set_step_failed_mock.assert_called_once_with(
                task,
                'Failed to delete raid configuration '
                'on node %s' % self.node.uuid, exc)

    def test_delete_configuration_ilo_error_cleaning(self):
        self.node.clean_step = {'step': 'create_configuration',
                                'interface': 'raid'}
        self.node.save()
        self._test_delete_configuration_ilo_error()

    def test_delete_configuration_ilo_error_deploying(self):
        self.node.deploy_step = {'step': 'create_configuration',
                                 'interface': 'raid'}
        self.node.save()
        self._test_delete_configuration_ilo_error()

ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_common.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for common methods used by iLO modules."""

import builtins
import hashlib
import io
import os
import shutil
import tempfile

from ironic_lib import utils as ironic_utils
import mock
from oslo_config import cfg
from oslo_utils import importutils
from oslo_utils import uuidutils

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common import images
from ironic.common import swift
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules.ilo import common as ilo_common
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

INFO_DICT = db_utils.get_test_ilo_info()

ilo_client = importutils.try_import('proliantutils.ilo.client')
ilo_error = importutils.try_import('proliantutils.exception')

CONF = cfg.CONF


class BaseIloTest(db_base.DbTestCase):

    boot_interface = None

    def setUp(self):
        super(BaseIloTest, self).setUp()
        self.config(enabled_hardware_types=['ilo', 'fake-hardware'],
                    enabled_boot_interfaces=['ilo-pxe', 'ilo-virtual-media',
                                             'fake'],
                    enabled_bios_interfaces=['ilo', 'no-bios'],
                    enabled_power_interfaces=['ilo', 'fake'],
                    enabled_management_interfaces=['ilo', 'fake'],
                    enabled_inspect_interfaces=['ilo', 'fake', 'no-inspect'],
                    enabled_console_interfaces=['ilo', 'fake', 'no-console'],
                    enabled_vendor_interfaces=['ilo', 'fake', 'no-vendor'])
        self.info = INFO_DICT.copy()
        self.node = obj_utils.create_test_node(
            self.context, uuid=uuidutils.generate_uuid(),
            driver='ilo', boot_interface=self.boot_interface,
            bios_interface='ilo',
            driver_info=self.info)


class IloValidateParametersTestCase(BaseIloTest):

    @mock.patch.object(os.path, 'isfile', return_value=True, autospec=True)
    def _test_parse_driver_info(self, isFile_mock):
        info = ilo_common.parse_driver_info(self.node)

        self.assertEqual(INFO_DICT['ilo_address'], info['ilo_address'])
        self.assertEqual(INFO_DICT['ilo_username'], info['ilo_username'])
        self.assertEqual(INFO_DICT['ilo_password'], info['ilo_password'])
        self.assertEqual(60, info['client_timeout'])
        self.assertEqual(443, info['client_port'])
        self.assertEqual('/home/user/cafile.pem', info['ca_file'])
        self.assertEqual('user', info['snmp_auth_user'])
        self.assertEqual('1234', info['snmp_auth_prot_password'])
        self.assertEqual('4321', info['snmp_auth_priv_password'])
        self.assertEqual('SHA', info['snmp_auth_protocol'])
        self.assertEqual('AES', info['snmp_auth_priv_protocol'])

    @mock.patch.object(os.path, 'isfile', return_value=True,
                       autospec=True)
    def test_parse_driver_info_snmp_inspection_false(self, isFile_mock):
        info = ilo_common.parse_driver_info(self.node)
        self.assertEqual(INFO_DICT['ilo_address'], info['ilo_address'])
        self.assertEqual(INFO_DICT['ilo_username'], info['ilo_username'])
        self.assertEqual(INFO_DICT['ilo_password'], info['ilo_password'])
        self.assertEqual(60, info['client_timeout'])
        self.assertEqual(443, info['client_port'])

    @mock.patch.object(os.path, 'isfile', return_value=True,
                       autospec=True)
    def test_parse_driver_info_snmp_true_no_auth_priv_protocols(
            self, isFile_mock):
        d_info = {'ca_file': '/home/user/cafile.pem',
                  'snmp_auth_prot_password': '1234',
                  'snmp_auth_user': 'user',
                  'snmp_auth_priv_password': '4321'}
        self.node.driver_info.update(d_info)
        info = ilo_common.parse_driver_info(self.node)
        self.assertEqual(INFO_DICT['ilo_address'], info['ilo_address'])
        self.assertEqual(INFO_DICT['ilo_username'], info['ilo_username'])
        self.assertEqual(INFO_DICT['ilo_password'], info['ilo_password'])
        self.assertEqual(60, info['client_timeout'])
        self.assertEqual(443, info['client_port'])
        self.assertEqual('/home/user/cafile.pem', info['ca_file'])
        self.assertEqual('user', info['snmp_auth_user'])
        self.assertEqual('1234', info['snmp_auth_prot_password'])
        self.assertEqual('4321', info['snmp_auth_priv_password'])

    def test_parse_driver_info_ca_file_and_snmp_inspection_true(self):
        d_info = {'ca_file': '/home/user/cafile.pem',
                  'snmp_auth_prot_password': '1234',
                  'snmp_auth_user': 'user',
                  'snmp_auth_priv_password': '4321',
                  'snmp_auth_protocol': 'SHA',
                  'snmp_auth_priv_protocol': 'AES'}
        self.node.driver_info.update(d_info)
        self._test_parse_driver_info()

    def test_parse_driver_info_snmp_true_invalid_auth_protocol(self):
        d_info = {'ca_file': '/home/user/cafile.pem',
                  'snmp_auth_prot_password': '1234',
                  'snmp_auth_user': 'user',
                  'snmp_auth_priv_password': '4321',
                  'snmp_auth_protocol': 'abc',
                  'snmp_auth_priv_protocol': 'AES'}
        self.node.driver_info.update(d_info)
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_snmp_true_invalid_priv_protocol(self):
        d_info = {'ca_file': '/home/user/cafile.pem',
                  'snmp_auth_prot_password': '1234',
                  'snmp_auth_user': 'user',
                  'snmp_auth_priv_password': '4321',
                  'snmp_auth_protocol': 'SHA',
                  'snmp_auth_priv_protocol': 'xyz'}
        self.node.driver_info.update(d_info)
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_snmp_true_integer_auth_protocol(self):
        d_info = {'ca_file': '/home/user/cafile.pem',
                  'snmp_auth_prot_password': '1234',
                  'snmp_auth_user': 'user',
                  'snmp_auth_priv_password': '4321',
                  'snmp_auth_protocol': 12,
                  'snmp_auth_priv_protocol': 'AES'}
        self.node.driver_info.update(d_info)
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_snmp_inspection_true_raises(self):
        self.node.driver_info['snmp_auth_user'] = 'abc'
        self.assertRaises(exception.MissingParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_missing_address(self):
        del self.node.driver_info['ilo_address']
        self.assertRaises(exception.MissingParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_missing_username(self):
        del self.node.driver_info['ilo_username']
        self.assertRaises(exception.MissingParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_missing_password(self):
        del self.node.driver_info['ilo_password']
        self.assertRaises(exception.MissingParameterValue,
                          ilo_common.parse_driver_info, self.node)

    @mock.patch.object(os.path, 'isfile', return_value=False,
                       autospec=True)
    def test_parse_driver_info_invalid_cafile(self, isFile_mock):
        self.node.driver_info['ca_file'] = '/home/missing.pem'
        self.assertRaisesRegex(exception.InvalidParameterValue,
                               'ca_file "/home/missing.pem" is not found.',
                               ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_invalid_timeout(self):
        self.node.driver_info['client_timeout'] = 'qwe'
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_invalid_port(self):
        self.node.driver_info['client_port'] = 'qwe'
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)
        self.node.driver_info['client_port'] = '65536'
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)
        self.node.driver_info['console_port'] = 'invalid'
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)
        self.node.driver_info['console_port'] = '-1'
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_common.parse_driver_info, self.node)

    def test_parse_driver_info_missing_multiple_params(self):
        del self.node.driver_info['ilo_password']
        del self.node.driver_info['ilo_address']
        e = self.assertRaises(exception.MissingParameterValue,
                              ilo_common.parse_driver_info, self.node)
        self.assertIn('ilo_password', str(e))
        self.assertIn('ilo_address', str(e))

    def test_parse_driver_info_invalid_multiple_params(self):
        self.node.driver_info['client_timeout'] = 'qwe'
        e = self.assertRaises(exception.InvalidParameterValue,
                              ilo_common.parse_driver_info, self.node)
        self.assertIn('client_timeout', str(e))


class IloCommonMethodsTestCase(BaseIloTest):

    @mock.patch.object(os.path, 'isfile', return_value=True,
                       autospec=True)
    @mock.patch.object(ilo_client, 'IloClient', spec_set=True,
                       autospec=True)
    def _test_get_ilo_object(self, ilo_client_mock, isFile_mock,
                             ca_file=None):
        self.info['client_timeout'] = 600
        self.info['client_port'] = 4433
        self.info['ca_file'] = ca_file
        self.node.driver_info = self.info
        ilo_client_mock.return_value = 'ilo_object'
        returned_ilo_object = ilo_common.get_ilo_object(self.node)
        ilo_client_mock.assert_called_with(
            self.info['ilo_address'],
            self.info['ilo_username'],
            self.info['ilo_password'],
            self.info['client_timeout'],
            self.info['client_port'],
            cacert=self.info['ca_file'],
            snmp_credentials=None)
        self.assertEqual('ilo_object', returned_ilo_object)

    @mock.patch.object(os.path, 'isfile', return_value=True,
                       autospec=True)
    @mock.patch.object(ilo_client, 'IloClient', spec_set=True,
                       autospec=True)
    def test_get_ilo_object_snmp(self, ilo_client_mock, isFile_mock):
        info = {'auth_user': 'user',
                'auth_prot_pp': '1234',
                'auth_priv_pp': '4321',
                'auth_protocol': 'SHA',
                'priv_protocol': 'AES',
                'snmp_inspection': True}
        d_info = {'client_timeout': 600,
                  'client_port': 4433,
                  'ca_file': 'ca_file',
                  'snmp_auth_user': 'user',
                  'snmp_auth_prot_password': '1234',
                  'snmp_auth_priv_password': '4321',
                  'snmp_auth_protocol': 'SHA',
                  'snmp_auth_priv_protocol': 'AES'}
        self.info.update(d_info)
        self.node.driver_info = self.info
        ilo_client_mock.return_value = 'ilo_object'
        returned_ilo_object = ilo_common.get_ilo_object(self.node)
        ilo_client_mock.assert_called_with(
            self.info['ilo_address'],
            self.info['ilo_username'],
            self.info['ilo_password'],
            self.info['client_timeout'],
            self.info['client_port'],
            cacert=self.info['ca_file'],
            snmp_credentials=info)
        self.assertEqual('ilo_object', returned_ilo_object)

    def test_get_ilo_object_cafile(self):
        self._test_get_ilo_object(ca_file='/home/user/ilo.pem')

    def test_get_ilo_object_no_cafile(self):
        self._test_get_ilo_object()

    def test_update_ipmi_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ipmi_info = {
                "ipmi_address": "1.2.3.4",
                "ipmi_username": "admin",
                "ipmi_password": "fake",
                "ipmi_terminal_port": 60
            }
            self.info['console_port'] = 60
            task.node.driver_info = self.info
            ilo_common.update_ipmi_properties(task)
            actual_info = task.node.driver_info
            expected_info = dict(self.info, **ipmi_info)
            self.assertEqual(expected_info, actual_info)

    def test__get_floppy_image_name(self):
        image_name_expected = 'image-' + self.node.uuid
        image_name_actual = ilo_common._get_floppy_image_name(self.node)
        self.assertEqual(image_name_expected, image_name_actual)

    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    @mock.patch.object(images, 'create_vfat_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True,
                       autospec=True)
    def test__prepare_floppy_image(self, tempfile_mock, fatimage_mock,
                                   swift_api_mock):
        mock_image_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_image_file_obj = mock.MagicMock(spec=io.BytesIO)
        mock_image_file_obj.name = 'image-tmp-file'
        mock_image_file_handle.__enter__.return_value = mock_image_file_obj
        tempfile_mock.return_value = mock_image_file_handle
        swift_obj_mock = swift_api_mock.return_value
        self.config(swift_ilo_container='ilo_cont', group='ilo')
        self.config(swift_object_expiry_timeout=1, group='ilo')
        deploy_args = {'arg1': 'val1', 'arg2': 'val2'}
        swift_obj_mock.get_temp_url.return_value = 'temp-url'
        timeout = CONF.ilo.swift_object_expiry_timeout
        object_headers = {'X-Delete-After': str(timeout)}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            temp_url = ilo_common._prepare_floppy_image(task, deploy_args)
            node_uuid = task.node.uuid
            object_name = 'image-' + node_uuid
            fatimage_mock.assert_called_once_with('image-tmp-file',
                                                  parameters=deploy_args)
            swift_obj_mock.create_object.assert_called_once_with(
                'ilo_cont', object_name, 'image-tmp-file',
                object_headers=object_headers)
            swift_obj_mock.get_temp_url.assert_called_once_with(
                'ilo_cont', object_name, timeout)
            self.assertEqual('temp-url', temp_url)

    @mock.patch.object(ilo_common, 'copy_image_to_web_server',
                       spec_set=True, autospec=True)
    @mock.patch.object(images, 'create_vfat_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True,
                       autospec=True)
    def test__prepare_floppy_image_use_webserver(self, tempfile_mock,
                                                 fatimage_mock,
                                                 copy_mock):
        mock_image_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_image_file_obj = mock.MagicMock(spec=io.BytesIO)
        mock_image_file_obj.name = 'image-tmp-file'
        mock_image_file_handle.__enter__.return_value = mock_image_file_obj
        tempfile_mock.return_value = mock_image_file_handle
        self.config(use_web_server_for_images=True, group='ilo')
        deploy_args = {'arg1': 'val1', 'arg2': 'val2'}
        CONF.deploy.http_url = "http://abc.com/httpboot"
        CONF.deploy.http_root = "/httpboot"

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            node_uuid = task.node.uuid
            object_name = 'image-' + node_uuid
            http_url = CONF.deploy.http_url + '/' + object_name
            copy_mock.return_value = "http://abc.com/httpboot/" + object_name
            temp_url = ilo_common._prepare_floppy_image(task, deploy_args)
            fatimage_mock.assert_called_once_with('image-tmp-file',
                                                  parameters=deploy_args)
            copy_mock.assert_called_once_with('image-tmp-file', object_name)
            self.assertEqual(http_url, temp_url)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_attach_vmedia(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        insert_media_mock = ilo_mock_object.insert_virtual_media
        set_status_mock = ilo_mock_object.set_vm_status

        ilo_common.attach_vmedia(self.node, 'FLOPPY', 'url')
        insert_media_mock.assert_called_once_with('url', device='FLOPPY')
        set_status_mock.assert_called_once_with(
            device='FLOPPY', boot_option='CONNECT', write_protect='YES')

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_attach_vmedia_fails(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        set_status_mock = ilo_mock_object.set_vm_status
        exc = ilo_error.IloError('error')
        set_status_mock.side_effect = exc
        self.assertRaises(exception.IloOperationError,
                          ilo_common.attach_vmedia, self.node,
                          'FLOPPY', 'url')

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_boot_mode(self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        get_pending_boot_mode_mock = ilo_object_mock.get_pending_boot_mode
        set_pending_boot_mode_mock = ilo_object_mock.set_pending_boot_mode
        get_pending_boot_mode_mock.return_value = 'LEGACY'
        ilo_common.set_boot_mode(self.node, 'uefi')
        get_ilo_object_mock.assert_called_once_with(self.node)
        get_pending_boot_mode_mock.assert_called_once_with()
        set_pending_boot_mode_mock.assert_called_once_with('UEFI')

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_boot_mode_without_set_pending_boot_mode(
            self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        get_pending_boot_mode_mock = ilo_object_mock.get_pending_boot_mode
        get_pending_boot_mode_mock.return_value = 'LEGACY'
        ilo_common.set_boot_mode(self.node, 'bios')
        get_ilo_object_mock.assert_called_once_with(self.node)
        get_pending_boot_mode_mock.assert_called_once_with()
        self.assertFalse(ilo_object_mock.set_pending_boot_mode.called)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_boot_mode_with_IloOperationError(
            self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        get_pending_boot_mode_mock = ilo_object_mock.get_pending_boot_mode
        get_pending_boot_mode_mock.return_value = 'UEFI'
        set_pending_boot_mode_mock = ilo_object_mock.set_pending_boot_mode
        exc = ilo_error.IloError('error')
        set_pending_boot_mode_mock.side_effect = exc
        self.assertRaises(exception.IloOperationError,
                          ilo_common.set_boot_mode, self.node, 'bios')
        get_ilo_object_mock.assert_called_once_with(self.node)
        get_pending_boot_mode_mock.assert_called_once_with()

    @mock.patch.object(ilo_common, 'set_boot_mode', spec_set=True,
                       autospec=True)
    def test_update_boot_mode_instance_info_exists(self,
                                                   set_boot_mode_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.instance_info['deploy_boot_mode'] = 'bios'
            ilo_common.update_boot_mode(task)
            set_boot_mode_mock.assert_called_once_with(task.node, 'bios')

    @mock.patch.object(ilo_common, 'set_boot_mode', spec_set=True,
                       autospec=True)
    def test_update_boot_mode_capabilities_exist(self, set_boot_mode_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.properties['capabilities'] = 'boot_mode:bios'
            ilo_common.update_boot_mode(task)
            set_boot_mode_mock.assert_called_once_with(task.node, 'bios')

    @mock.patch.object(ilo_common, 'set_boot_mode', spec_set=True,
                       autospec=True)
    def test_update_boot_mode_use_def_boot_mode(self, set_boot_mode_mock):
        self.config(default_boot_mode='bios', group='ilo')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.update_boot_mode(task)
            set_boot_mode_mock.assert_called_once_with(task.node, 'bios')
            self.assertEqual('bios',
                             task.node.instance_info['deploy_boot_mode'])

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_update_boot_mode(self, get_ilo_object_mock):
        self.config(default_boot_mode="auto", group='ilo')
        ilo_mock_obj = get_ilo_object_mock.return_value
        ilo_mock_obj.get_pending_boot_mode.return_value = 'LEGACY'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.update_boot_mode(task)
            get_ilo_object_mock.assert_called_once_with(task.node)
            ilo_mock_obj.get_pending_boot_mode.assert_called_once_with()
            self.assertEqual('bios',
                             task.node.instance_info['deploy_boot_mode'])

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_update_boot_mode_unknown(self, get_ilo_object_mock):
        ilo_mock_obj = get_ilo_object_mock.return_value
        ilo_mock_obj.get_pending_boot_mode.return_value = 'UNKNOWN'
        set_pending_boot_mode_mock = ilo_mock_obj.set_pending_boot_mode
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.update_boot_mode(task)
            get_ilo_object_mock.assert_called_once_with(task.node)
            ilo_mock_obj.get_pending_boot_mode.assert_called_once_with()
            set_pending_boot_mode_mock.assert_called_once_with('UEFI')
            self.assertEqual('uefi',
                             task.node.instance_info['deploy_boot_mode'])

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_update_boot_mode_unknown_except(self, get_ilo_object_mock):
        ilo_mock_obj = get_ilo_object_mock.return_value
        ilo_mock_obj.get_pending_boot_mode.return_value = 'UNKNOWN'
        set_pending_boot_mode_mock = ilo_mock_obj.set_pending_boot_mode
        exc = ilo_error.IloError('error')
        set_pending_boot_mode_mock.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              ilo_common.update_boot_mode, task)
            get_ilo_object_mock.assert_called_once_with(task.node)
            ilo_mock_obj.get_pending_boot_mode.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_update_boot_mode_legacy(self, get_ilo_object_mock):
        ilo_mock_obj = get_ilo_object_mock.return_value
        exc = ilo_error.IloCommandNotSupportedError('error')
        ilo_mock_obj.get_pending_boot_mode.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.update_boot_mode(task)
            get_ilo_object_mock.assert_called_once_with(task.node)
            ilo_mock_obj.get_pending_boot_mode.assert_called_once_with()
            self.assertEqual('bios',
                             task.node.instance_info['deploy_boot_mode'])
    @mock.patch.object(ilo_common, 'set_boot_mode', spec_set=True,
                       autospec=True)
    def test_update_boot_mode_prop_boot_mode_exist(self,
                                                   set_boot_mode_mock):
        properties = {'capabilities': 'boot_mode:uefi'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.properties = properties
            ilo_common.update_boot_mode(task)
            set_boot_mode_mock.assert_called_once_with(task.node, 'uefi')

    @mock.patch.object(images, 'get_temp_url_for_glance_image',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, '_prepare_floppy_image', spec_set=True,
                       autospec=True)
    def test_setup_vmedia_for_boot_with_parameters(
            self, prepare_image_mock, attach_vmedia_mock, temp_url_mock):
        parameters = {'a': 'b'}
        boot_iso = '733d1c44-a2ea-414b-aca7-69decf20d810'
        prepare_image_mock.return_value = 'floppy_url'
        temp_url_mock.return_value = 'image_url'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.setup_vmedia_for_boot(task, boot_iso, parameters)
            prepare_image_mock.assert_called_once_with(task, parameters)
            attach_vmedia_mock.assert_any_call(task.node, 'FLOPPY',
                                               'floppy_url')
            temp_url_mock.assert_called_once_with(
                task.context, '733d1c44-a2ea-414b-aca7-69decf20d810')
            attach_vmedia_mock.assert_any_call(task.node, 'CDROM',
                                               'image_url')

    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True,
                       autospec=True)
    def test_setup_vmedia_for_boot_with_swift(self, attach_vmedia_mock,
                                              swift_api_mock):
        swift_obj_mock = swift_api_mock.return_value
        boot_iso = 'swift:object-name'
        swift_obj_mock.get_temp_url.return_value = 'image_url'
        CONF.keystone_authtoken.auth_uri = 'http://authurl'
        CONF.ilo.swift_ilo_container = 'ilo_cont'
        CONF.ilo.swift_object_expiry_timeout = 1
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.setup_vmedia_for_boot(task, boot_iso)
            swift_obj_mock.get_temp_url.assert_called_once_with(
                'ilo_cont', 'object-name', 1)
            attach_vmedia_mock.assert_called_once_with(
                task.node, 'CDROM', 'image_url')

    @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True,
                       autospec=True)
    def test_setup_vmedia_for_boot_with_url(self, attach_vmedia_mock):
        boot_iso = 'http://abc.com/img.iso'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.setup_vmedia_for_boot(task, boot_iso)
            attach_vmedia_mock.assert_called_once_with(task.node, 'CDROM',
                                                       boot_iso)

    @mock.patch.object(ilo_common, 'eject_vmedia_devices',
                       spec_set=True, autospec=True)
    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True,
                       autospec=True)
    def test_cleanup_vmedia_boot(self, get_name_mock, swift_api_mock,
                                 eject_mock):
        swift_obj_mock = swift_api_mock.return_value
        CONF.ilo.swift_ilo_container = 'ilo_cont'
        get_name_mock.return_value = 'image-node-uuid'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.cleanup_vmedia_boot(task)
            swift_obj_mock.delete_object.assert_called_once_with(
                'ilo_cont', 'image-node-uuid')
            eject_mock.assert_called_once_with(task)

    @mock.patch.object(ilo_common.LOG, 'exception', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'eject_vmedia_devices',
                       spec_set=True, autospec=True)
    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True,
                       autospec=True)
    def test_cleanup_vmedia_boot_exc(self, get_name_mock, swift_api_mock,
                                     eject_mock, log_mock):
        exc = exception.SwiftOperationError('error')
        swift_obj_mock = swift_api_mock.return_value
        swift_obj_mock.delete_object.side_effect = exc
        CONF.ilo.swift_ilo_container = 'ilo_cont'
        get_name_mock.return_value = 'image-node-uuid'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.cleanup_vmedia_boot(task)
            swift_obj_mock.delete_object.assert_called_once_with(
                'ilo_cont', 'image-node-uuid')
            self.assertTrue(log_mock.called)
            eject_mock.assert_called_once_with(task)

    @mock.patch.object(ilo_common.LOG, 'info', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'eject_vmedia_devices',
                       spec_set=True, autospec=True)
    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True,
                       autospec=True)
    def test_cleanup_vmedia_boot_exc_resource_not_found(self, get_name_mock,
                                                        swift_api_mock,
                                                        eject_mock,
                                                        log_mock):
        exc = exception.SwiftObjectNotFoundError('error')
        swift_obj_mock = swift_api_mock.return_value
        swift_obj_mock.delete_object.side_effect = exc
        CONF.ilo.swift_ilo_container = 'ilo_cont'
        get_name_mock.return_value = 'image-node-uuid'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.cleanup_vmedia_boot(task)
            swift_obj_mock.delete_object.assert_called_once_with(
                'ilo_cont', 'image-node-uuid')
            self.assertTrue(log_mock.called)
            eject_mock.assert_called_once_with(task)

    @mock.patch.object(ilo_common, 'eject_vmedia_devices',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'destroy_floppy_image_from_web_server',
                       spec_set=True, autospec=True)
    def test_cleanup_vmedia_boot_for_webserver(self,
                                               destroy_image_mock,
                                               eject_mock):
        CONF.ilo.use_web_server_for_images = True

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.cleanup_vmedia_boot(task)
            destroy_image_mock.assert_called_once_with(task.node)
            eject_mock.assert_called_once_with(task)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_eject_vmedia_devices(self, get_ilo_object_mock):
        ilo_object_mock = mock.MagicMock(spec=['eject_virtual_media'])
        get_ilo_object_mock.return_value = ilo_object_mock
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.eject_vmedia_devices(task)
            ilo_object_mock.eject_virtual_media.assert_has_calls(
                [mock.call('FLOPPY'), mock.call('CDROM')])

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_eject_vmedia_devices_raises(self, get_ilo_object_mock):
        ilo_object_mock = mock.MagicMock(spec=['eject_virtual_media'])
        get_ilo_object_mock.return_value = ilo_object_mock
        exc = ilo_error.IloError('error')
        ilo_object_mock.eject_virtual_media.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              ilo_common.eject_vmedia_devices,
                              task)
            ilo_object_mock.eject_virtual_media.assert_called_once_with(
                'FLOPPY')

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_secure_boot_mode(self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        ilo_object_mock.get_current_boot_mode.return_value = 'UEFI'
        ilo_object_mock.get_secure_boot_mode.return_value = True

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ret = ilo_common.get_secure_boot_mode(task)
            ilo_object_mock.get_current_boot_mode.assert_called_once_with()
            ilo_object_mock.get_secure_boot_mode.assert_called_once_with()
            self.assertTrue(ret)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_secure_boot_mode_bios(self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        ilo_object_mock.get_current_boot_mode.return_value = 'BIOS'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ret = ilo_common.get_secure_boot_mode(task)
            ilo_object_mock.get_current_boot_mode.assert_called_once_with()
            self.assertFalse(ilo_object_mock.get_secure_boot_mode.called)
            self.assertFalse(ret)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_secure_boot_mode_fail(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        exc = ilo_error.IloError('error')
        ilo_mock_object.get_current_boot_mode.side_effect = exc

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              ilo_common.get_secure_boot_mode,
                              task)
            ilo_mock_object.get_current_boot_mode.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_secure_boot_mode_not_supported(self, ilo_object_mock):
        ilo_mock_object = ilo_object_mock.return_value
        exc = ilo_error.IloCommandNotSupportedError('error')
        ilo_mock_object.get_current_boot_mode.return_value = 'UEFI'
        ilo_mock_object.get_secure_boot_mode.side_effect = exc

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationNotSupported,
                              ilo_common.get_secure_boot_mode,
                              task)
            ilo_mock_object.get_current_boot_mode.assert_called_once_with()
            ilo_mock_object.get_secure_boot_mode.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_secure_boot_mode(self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.set_secure_boot_mode(task, True)
            ilo_object_mock.set_secure_boot_mode.assert_called_once_with(
                True)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_secure_boot_mode_fail(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        exc = ilo_error.IloError('error')
        ilo_mock_object.set_secure_boot_mode.side_effect = exc

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              ilo_common.set_secure_boot_mode,
                              task, False)
            ilo_mock_object.set_secure_boot_mode.assert_called_once_with(
                False)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_set_secure_boot_mode_not_supported(self, ilo_object_mock):
        ilo_mock_object = ilo_object_mock.return_value
        exc = ilo_error.IloCommandNotSupportedError('error')
        ilo_mock_object.set_secure_boot_mode.side_effect = exc

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationNotSupported,
                              ilo_common.set_secure_boot_mode,
                              task, False)
            ilo_mock_object.set_secure_boot_mode.assert_called_once_with(
                False)

    @mock.patch.object(os, 'chmod', spec_set=True, autospec=True)
    @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True)
    def test_copy_image_to_web_server(self, copy_mock, chmod_mock):
        CONF.deploy.http_url = "http://x.y.z.a/webserver/"
        CONF.deploy.http_root = "/webserver"
        expected_url = "http://x.y.z.a/webserver/image-UUID"
        source = 'tmp_image_file'
        destination = "image-UUID"
        image_path = "/webserver/image-UUID"
        actual_url = ilo_common.copy_image_to_web_server(source, destination)
        self.assertEqual(expected_url, actual_url)
        copy_mock.assert_called_once_with(source, image_path)
        chmod_mock.assert_called_once_with(image_path, 0o644)

    @mock.patch.object(os, 'chmod', spec_set=True, autospec=True)
    @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True)
    def test_copy_image_to_web_server_fails(self, copy_mock, chmod_mock):
        CONF.deploy.http_url = "http://x.y.z.a/webserver/"
        CONF.deploy.http_root = "/webserver"
        source = 'tmp_image_file'
        destination = "image-UUID"
        image_path = "/webserver/image-UUID"
        exc = exception.ImageUploadFailed('reason')
        copy_mock.side_effect = exc
        self.assertRaises(exception.ImageUploadFailed,
                          ilo_common.copy_image_to_web_server,
                          source, destination)
        copy_mock.assert_called_once_with(source, image_path)
        self.assertFalse(chmod_mock.called)

    @mock.patch.object(ilo_common, 'ironic_utils', autospec=True)
    def test_remove_image_from_web_server(self, utils_mock):
        # | GIVEN |
        CONF.deploy.http_url = "http://x.y.z.a/webserver/"
        CONF.deploy.http_root = "/webserver"
        object_name = 'tmp_image_file'
        # | WHEN |
        ilo_common.remove_image_from_web_server(object_name)
        # | THEN |
        (utils_mock.unlink_without_raise.
         assert_called_once_with("/webserver/tmp_image_file"))

    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    def test_copy_image_to_swift(self, swift_api_mock):
        # | GIVEN |
        self.config(swift_ilo_container='ilo_container', group='ilo')
        self.config(swift_object_expiry_timeout=1, group='ilo')
        container = CONF.ilo.swift_ilo_container
        timeout = CONF.ilo.swift_object_expiry_timeout
        swift_obj_mock = swift_api_mock.return_value
        destination_object_name = 'destination_object_name'
        source_file_path = 'tmp_image_file'
        object_headers = {'X-Delete-After': str(timeout)}
        # | WHEN |
        ilo_common.copy_image_to_swift(source_file_path,
                                       destination_object_name)
        # | THEN |
        swift_obj_mock.create_object.assert_called_once_with(
            container, destination_object_name, source_file_path,
            object_headers=object_headers)
        swift_obj_mock.get_temp_url.assert_called_once_with(
            container, destination_object_name, timeout)

    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    def test_copy_image_to_swift_throws_error_if_swift_operation_fails(
            self, swift_api_mock):
        # | GIVEN |
        self.config(swift_ilo_container='ilo_container', group='ilo')
        self.config(swift_object_expiry_timeout=1, group='ilo')
        swift_obj_mock = swift_api_mock.return_value
        destination_object_name = 'destination_object_name'
        source_file_path = 'tmp_image_file'
        swift_obj_mock.create_object.side_effect = (
            exception.SwiftOperationError(operation='create_object',
                                          error='failed'))
        # | WHEN | & | THEN |
        self.assertRaises(exception.SwiftOperationError,
                          ilo_common.copy_image_to_swift,
                          source_file_path, destination_object_name)

    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    def test_remove_image_from_swift(self, swift_api_mock):
        # | GIVEN |
        self.config(swift_ilo_container='ilo_container', group='ilo')
        container = CONF.ilo.swift_ilo_container
        swift_obj_mock = swift_api_mock.return_value
        object_name = 'object_name'
        # | WHEN |
        ilo_common.remove_image_from_swift(object_name)
        # | THEN |
        swift_obj_mock.delete_object.assert_called_once_with(
            container, object_name)

    @mock.patch.object(ilo_common, 'LOG', spec_set=True, autospec=True)
    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    def test_remove_image_from_swift_suppresses_notfound_exc(
            self, swift_api_mock, LOG_mock):
        # | GIVEN |
        self.config(swift_ilo_container='ilo_container', group='ilo')
        container = CONF.ilo.swift_ilo_container
        swift_obj_mock = swift_api_mock.return_value
        object_name = 'object_name'
        raised_exc = exception.SwiftObjectNotFoundError(
            operation='delete_object', obj=object_name, container=container)
        swift_obj_mock.delete_object.side_effect = raised_exc
        # | WHEN |
        ilo_common.remove_image_from_swift(object_name)
        # | THEN |
        LOG_mock.info.assert_called_once_with(
            mock.ANY, {'associated_with_msg': "", 'err': raised_exc})

    @mock.patch.object(ilo_common, 'LOG', spec_set=True, autospec=True)
    @mock.patch.object(swift, 'SwiftAPI', spec_set=True, autospec=True)
    def test_remove_image_from_swift_suppresses_operror_exc(
            self, swift_api_mock, LOG_mock):
        # | GIVEN |
        self.config(swift_ilo_container='ilo_container', group='ilo')
        container = CONF.ilo.swift_ilo_container
        swift_obj_mock = swift_api_mock.return_value
        object_name = 'object_name'
        raised_exc = exception.SwiftOperationError(operation='delete_object',
                                                   error='failed')
        swift_obj_mock.delete_object.side_effect = raised_exc
        # | WHEN |
        ilo_common.remove_image_from_swift(object_name,
                                           'alice_in_wonderland')
        # | THEN |
        LOG_mock.exception.assert_called_once_with(
            mock.ANY, {'object_name': object_name, 'container': container,
                       'associated_with_msg': ("associated with "
                                               "alice_in_wonderland"),
                       'err': raised_exc})

    @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, '_get_floppy_image_name', spec_set=True,
                       autospec=True)
    def test_destroy_floppy_image_from_web_server(self,
                                                  get_floppy_name_mock,
                                                  utils_mock):
        get_floppy_name_mock.return_value = 'image-uuid'
        CONF.deploy.http_root = "/webserver/"

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ilo_common.destroy_floppy_image_from_web_server(task.node)
            get_floppy_name_mock.assert_called_once_with(task.node)
            utils_mock.assert_called_once_with('/webserver/image-uuid')

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    def test_setup_vmedia(self, func_setup_vmedia_for_boot,
                          func_set_boot_device):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            parameters = {'a': 'b'}
            iso = '733d1c44-a2ea-414b-aca7-69decf20d810'
            ilo_common.setup_vmedia(task, iso, parameters)
            func_setup_vmedia_for_boot.assert_called_once_with(
                task, iso, parameters)
            func_set_boot_device.assert_called_once_with(
                task, boot_devices.CDROM)

    @mock.patch.object(deploy_utils, 'is_secure_boot_requested',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    def test_update_secure_boot_mode_passed_true(self,
                                                 func_set_secure_boot_mode,
                                                 func_is_secure_boot_req):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            func_is_secure_boot_req.return_value = True
            ilo_common.update_secure_boot_mode(task, True)
            func_set_secure_boot_mode.assert_called_once_with(task, True)

    @mock.patch.object(deploy_utils, 'is_secure_boot_requested',
                       spec_set=True, autospec=True)
    @mock.patch.object(ilo_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    def test_update_secure_boot_mode_passed_false(self,
                                                  func_set_secure_boot_mode,
                                                  func_is_secure_boot_req):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            func_is_secure_boot_req.return_value = False
            ilo_common.update_secure_boot_mode(task, False)
            self.assertFalse(func_set_secure_boot_mode.called)

    @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True,
                       autospec=True)
    def test_remove_single_or_list_of_files_with_file_list(self,
                                                           unlink_mock):
        # | GIVEN |
        file_list = ['/any_path1/any_file1',
                     '/any_path2/any_file2',
                     '/any_path3/any_file3']
        # | WHEN |
        ilo_common.remove_single_or_list_of_files(file_list)
        # | THEN |
        calls = [mock.call('/any_path1/any_file1'),
                 mock.call('/any_path2/any_file2'),
                 mock.call('/any_path3/any_file3')]
        unlink_mock.assert_has_calls(calls)

    @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True,
                       autospec=True)
    def test_remove_single_or_list_of_files_with_file_str(self, unlink_mock):
        # | GIVEN |
        file_path = '/any_path1/any_file'
        # | WHEN |
        ilo_common.remove_single_or_list_of_files(file_path)
        # | THEN |
        unlink_mock.assert_called_once_with('/any_path1/any_file')

    @mock.patch.object(builtins, 'open', autospec=True)
    def test_verify_image_checksum(self, open_mock):
        # | GIVEN |
        data = b'Yankee Doodle went to town riding on a pony;'
        file_like_object = io.BytesIO(data)
        open_mock().__enter__.return_value = file_like_object
        actual_hash = hashlib.md5(data).hexdigest()
        # | WHEN |
        ilo_common.verify_image_checksum(file_like_object, actual_hash)
        # | THEN |
        # no exception thrown

    def test_verify_image_checksum_throws_for_nonexistent_file(self):
        # | GIVEN |
        invalid_file_path = '/some/invalid/file/path'
        # | WHEN | & | THEN |
        self.assertRaises(exception.ImageRefValidationFailed,
                          ilo_common.verify_image_checksum,
                          invalid_file_path, 'hash_xxx')

    @mock.patch.object(builtins, 'open', autospec=True)
    def test_verify_image_checksum_throws_for_failed_validation(
            self, open_mock):
        # | GIVEN |
        data = b'Yankee Doodle went to town riding on a pony;'
        file_like_object = io.BytesIO(data)
        open_mock().__enter__.return_value = file_like_object
        invalid_hash = 'invalid_hash_value'
        # | WHEN | & | THEN |
        self.assertRaises(exception.ImageRefValidationFailed,
                          ilo_common.verify_image_checksum,
                          file_like_object, invalid_hash)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_server_post_state(self, get_ilo_object_mock):
        ilo_object_mock = get_ilo_object_mock.return_value
        post_state = 'FinishedPost'
        ilo_object_mock.get_host_post_state.return_value = post_state
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ret = ilo_common.get_server_post_state(task.node)
            ilo_object_mock.get_host_post_state.assert_called_once_with()
            self.assertEqual(post_state, ret)

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_server_post_state_fail(self, get_ilo_object_mock):
        ilo_mock_object = get_ilo_object_mock.return_value
        exc = ilo_error.IloError('error')
        ilo_mock_object.get_host_post_state.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              ilo_common.get_server_post_state, task.node)
        ilo_mock_object.get_host_post_state.assert_called_once_with()

    @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True,
                       autospec=True)
    def test_get_server_post_state_not_supported(self, ilo_object_mock):
        ilo_mock_object = ilo_object_mock.return_value
        exc = ilo_error.IloCommandNotSupportedError('error')
        ilo_mock_object.get_host_post_state.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationNotSupported,
                              ilo_common.get_server_post_state, task.node)
        ilo_mock_object.get_host_post_state.assert_called_once_with()

ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_vendor.py

# Copyright 2015 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for vendor methods used by iLO modules."""

import mock

from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules.ilo import common as ilo_common
from ironic.drivers.modules.ilo import vendor as ilo_vendor
from ironic.tests.unit.drivers.modules.ilo import test_common


class VendorPassthruTestCase(test_common.BaseIloTest):

    boot_interface = 'ilo-virtual-media'

    @mock.patch.object(manager_utils, 'node_power_action', spec_set=True,
                       autospec=True)
    @mock.patch.object(ilo_common, 'setup_vmedia', spec_set=True,
                       autospec=True)
    def test_boot_into_iso(self, setup_vmedia_mock, power_action_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.vendor.boot_into_iso(task, boot_iso_href='foo')
            setup_vmedia_mock.assert_called_once_with(task, 'foo',
                                                      ramdisk_options=None)
            power_action_mock.assert_called_once_with(task, states.REBOOT)

    @mock.patch.object(ilo_vendor.VendorPassthru, '_validate_boot_into_iso',
                       spec_set=True, autospec=True)
    def test_validate_boot_into_iso(self, validate_boot_into_iso_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            vendor = ilo_vendor.VendorPassthru()
            vendor.validate(task, method='boot_into_iso', foo='bar')
            validate_boot_into_iso_mock.assert_called_once_with(
                vendor, task, {'foo': 'bar'})

    def test__validate_boot_into_iso_invalid_state(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.provision_state = states.AVAILABLE
            self.assertRaises(
                exception.InvalidStateRequested,
                task.driver.vendor._validate_boot_into_iso,
                task, {})

    def test__validate_boot_into_iso_missing_boot_iso_href(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.provision_state = states.MANAGEABLE
            self.assertRaises(
                exception.MissingParameterValue,
                task.driver.vendor._validate_boot_into_iso,
                task, {})

    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       spec_set=True, autospec=True)
    def test__validate_boot_into_iso_manage(self, validate_image_prop_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            info = {'boot_iso_href': 'foo'}
            task.node.provision_state = states.MANAGEABLE
            task.driver.vendor._validate_boot_into_iso(
                task, info)
            validate_image_prop_mock.assert_called_once_with(
                task.context, {'image_source': 'foo'}, [])

    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       spec_set=True, autospec=True)
    def test__validate_boot_into_iso_maintenance(
            self, validate_image_prop_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            info = {'boot_iso_href': 'foo'}
            task.node.maintenance = True
            task.driver.vendor._validate_boot_into_iso(
                task, info)
            validate_image_prop_mock.assert_called_once_with(
                task.context, {'image_source': 'foo'}, [])

ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_firmware_processor.py

# Copyright 2016 Hewlett Packard Enterprise Development Company LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for Firmware Processor used by iLO management interface."""

import builtins
import io
from urllib import parse as urlparse

import mock
from oslo_utils import importutils

from ironic.common import exception
from ironic.drivers.modules.ilo import common as ilo_common
from ironic.drivers.modules.ilo import firmware_processor as ilo_fw_processor
from ironic.tests import base

ilo_error = importutils.try_import('proliantutils.exception')


class FirmwareProcessorTestCase(base.TestCase):

    def setUp(self):
        super(FirmwareProcessorTestCase, self).setUp()
        self.any_url = 'http://netloc/path'
        self.fw_processor_fake = mock.MagicMock(
            parsed_url='set it as required')

    def test_verify_firmware_update_args_throws_for_invalid_update_mode(self):
        # | GIVEN |
        update_firmware_mock = mock.MagicMock()
        firmware_update_args = {'firmware_update_mode': 'invalid_mode',
                                'firmware_images': None}
        # Note(deray): Need to set the __name__ attribute explicitly to keep
        # ``functools.wraps`` happy. Passing it to the `name` argument at the
        # time of creation of the Mock doesn't help.
        update_firmware_mock.__name__ = 'update_firmware_mock'
        wrapped_func = (ilo_fw_processor.
                        verify_firmware_update_args(update_firmware_mock))
        node_fake = mock.MagicMock(uuid='fake_node_uuid')
        task_fake = mock.MagicMock(node=node_fake)
        # | WHEN & THEN |
        self.assertRaises(exception.InvalidParameterValue,
                          wrapped_func,
                          mock.ANY,
                          task_fake,
                          **firmware_update_args)

    def test_verify_firmware_update_args_throws_for_no_firmware_url(self):
        # | GIVEN |
        update_firmware_mock = mock.MagicMock()
        firmware_update_args = {'firmware_update_mode': 'ilo',
                                'firmware_images': []}
        update_firmware_mock.__name__ = 'update_firmware_mock'
        wrapped_func = (ilo_fw_processor.
                        verify_firmware_update_args(update_firmware_mock))
        # | WHEN & THEN |
        self.assertRaises(exception.InvalidParameterValue,
                          wrapped_func,
                          mock.ANY,
                          mock.ANY,
                          **firmware_update_args)

    def test_get_and_validate_firmware_image_info(self):
        # | GIVEN |
        firmware_image_info = {
            'url': self.any_url,
            'checksum': 'b64c8f7799cfbb553d384d34dc43fafe336cc889',
            'component': 'BIOS'
        }
        # | WHEN |
        url, checksum, component = (
            ilo_fw_processor.get_and_validate_firmware_image_info(
                firmware_image_info, 'ilo'))
        # | THEN |
        self.assertEqual(self.any_url, url)
        self.assertEqual('b64c8f7799cfbb553d384d34dc43fafe336cc889', checksum)
        self.assertEqual('bios', component)

    def test_get_and_validate_firmware_image_info_fails_for_missing_parameter(
            self):
        # | GIVEN |
        invalid_firmware_image_info = {
            'url': self.any_url,
            'component': 'bios'
        }
        # | WHEN | & | THEN |
        self.assertRaisesRegex(
            exception.MissingParameterValue, 'checksum',
            ilo_fw_processor.get_and_validate_firmware_image_info,
            invalid_firmware_image_info, 'ilo')

    def test_get_and_validate_firmware_image_info_fails_for_empty_parameter(
            self):
        # | GIVEN |
        invalid_firmware_image_info = {
            'url': self.any_url,
            'checksum': 'valid_checksum',
            'component': ''
        }
        # | WHEN | & | THEN |
        self.assertRaisesRegex(
            exception.MissingParameterValue, 'component',
            ilo_fw_processor.get_and_validate_firmware_image_info,
            invalid_firmware_image_info, 'ilo')

    def test_get_and_validate_firmware_image_info_fails_for_invalid_component(
            self):
        # | GIVEN |
        invalid_firmware_image_info = {
            'url': self.any_url,
            'checksum': 'valid_checksum',
            'component': 'INVALID'
        }
        # | WHEN | & | THEN |
        self.assertRaises(
            exception.InvalidParameterValue,
            ilo_fw_processor.get_and_validate_firmware_image_info,
            invalid_firmware_image_info, 'ilo')

    def test_get_and_validate_firmware_image_info_sum(self):
        # | GIVEN |
        result = None
        firmware_image_info = {
            'url': self.any_url,
            'checksum': 'b64c8f7799cfbb553d384d34dc43fafe336cc889'
        }
        # | WHEN | & | THEN |
        ret_val = ilo_fw_processor.get_and_validate_firmware_image_info(
            firmware_image_info, 'sum')
        self.assertEqual(result, ret_val)

    def test_get_and_validate_firmware_image_info_sum_with_component(self):
        # | GIVEN |
        result = None
        firmware_image_info = {
            'url': self.any_url,
            'checksum': 'b64c8f7799cfbb553d384d34dc43fafe336cc889',
            'components': ['CP02345.exe']
        }
        # | WHEN | & | THEN |
        ret_val = ilo_fw_processor.get_and_validate_firmware_image_info(
            firmware_image_info, 'sum')
        self.assertEqual(result, ret_val)

    def test_get_and_validate_firmware_image_info_sum_invalid_component(
            self):
        # | GIVEN |
        invalid_firmware_image_info = {
            'url': 'any_url',
            'checksum': 'valid_checksum',
            'components': 'INVALID'
        }
        # | WHEN | & | THEN |
        self.assertRaises(
            exception.InvalidParameterValue,
            ilo_fw_processor.get_and_validate_firmware_image_info,
            invalid_firmware_image_info, 'sum')

    def test__validate_sum_components(self):
        result = None
        components = ['CP02345.scexe', 'CP02678.exe']
        ret_val = ilo_fw_processor._validate_sum_components(components)
        self.assertEqual(ret_val, result)

    @mock.patch.object(ilo_fw_processor, 'LOG')
    def test__validate_sum_components_fails(self, LOG_mock):
        components = ['INVALID']
        self.assertRaises(
            exception.InvalidParameterValue,
            ilo_fw_processor._validate_sum_components, components)
        self.assertTrue(LOG_mock.error.called)

    def test_fw_processor_ctor_sets_parsed_url_attrib_of_fw_processor(self):
        # | WHEN |
        fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url)
        # | THEN |
        self.assertEqual(self.any_url, fw_processor.parsed_url.geturl())

    @mock.patch.object(
        ilo_fw_processor, '_download_file_based_fw_to', autospec=True)
    def test__download_file_based_fw_to_gets_invoked_for_file_based_firmware(
            self, _download_file_based_fw_to_mock):
        # | GIVEN |
        some_file_url = 'file:///some_location/some_firmware_file'
        # | WHEN |
        fw_processor = ilo_fw_processor.FirmwareProcessor(some_file_url)
        fw_processor._download_fw_to('some_target_file')
        # | THEN |
        _download_file_based_fw_to_mock.assert_called_once_with(
            fw_processor, 'some_target_file')

    @mock.patch.object(
        ilo_fw_processor, '_download_http_based_fw_to', autospec=True)
    def test__download_http_based_fw_to_gets_invoked_for_http_based_firmware(
            self, _download_http_based_fw_to_mock):
        # | GIVEN |
        for some_http_url in ('http://netloc/path_to_firmware_file',
                              'https://netloc/path_to_firmware_file'):
            # | WHEN |
            fw_processor = ilo_fw_processor.FirmwareProcessor(some_http_url)
            fw_processor._download_fw_to('some_target_file')
            # | THEN |
            _download_http_based_fw_to_mock.assert_called_once_with(
                fw_processor, 'some_target_file')
            _download_http_based_fw_to_mock.reset_mock()

    @mock.patch.object(
        ilo_fw_processor, '_download_swift_based_fw_to', autospec=True)
    def test__download_swift_based_fw_to_gets_invoked_for_swift_based_firmware(
            self, _download_swift_based_fw_to_mock):
        # | GIVEN |
        some_swift_url = 'swift://containername/objectname'
        # | WHEN |
        fw_processor = ilo_fw_processor.FirmwareProcessor(some_swift_url)
        fw_processor._download_fw_to('some_target_file')
        # | THEN |
        _download_swift_based_fw_to_mock.assert_called_once_with(
            fw_processor, 'some_target_file')

    def test_fw_processor_ctor_throws_exception_with_invalid_firmware_url(
            self):
        # | GIVEN |
        any_invalid_firmware_url = 'any_invalid_url'
        # | WHEN | & | THEN |
        self.assertRaises(exception.InvalidParameterValue,
                          ilo_fw_processor.FirmwareProcessor,
                          any_invalid_firmware_url)
    @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'os', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True)
    @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True,
                       autospec=True)
    @mock.patch.object(
        ilo_fw_processor, '_extract_fw_from_file', autospec=True)
    def test_process_fw_on_calls__download_fw_to(
            self, _extract_fw_from_file_mock, verify_checksum_mock,
            shutil_mock, os_mock, tempfile_mock):
        # | GIVEN |
        fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url)
        # Now mock the __download_fw_to method of fw_processor instance
        _download_fw_to_mock = mock.MagicMock()
        fw_processor._download_fw_to = _download_fw_to_mock
        expected_return_location = (ilo_fw_processor.FirmwareImageLocation(
            'some_location/file', 'file'))
        _extract_fw_from_file_mock.return_value = (expected_return_location,
                                                   True)
        node_mock = mock.ANY
        checksum_fake = mock.ANY
        # | WHEN |
        actual_return_location = fw_processor.process_fw_on(node_mock,
                                                            checksum_fake)
        # | THEN |
        _download_fw_to_mock.assert_called_once_with(
            os_mock.path.join.return_value)
        self.assertEqual(expected_return_location.fw_image_location,
                         actual_return_location.fw_image_location)
        self.assertEqual(expected_return_location.fw_image_filename,
                         actual_return_location.fw_image_filename)

    @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'os', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True)
    @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True,
                       autospec=True)
    @mock.patch.object(
        ilo_fw_processor, '_extract_fw_from_file', autospec=True)
    def test_process_fw_on_verifies_checksum_of_downloaded_fw_file(
            self, _extract_fw_from_file_mock, verify_checksum_mock,
            shutil_mock, os_mock, tempfile_mock):
        # | GIVEN |
        fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url)
        # Now mock the __download_fw_to method of fw_processor instance
        _download_fw_to_mock = mock.MagicMock()
        fw_processor._download_fw_to = _download_fw_to_mock
        expected_return_location = (ilo_fw_processor.FirmwareImageLocation(
            'some_location/file', 'file'))
        _extract_fw_from_file_mock.return_value = (expected_return_location,
                                                   True)
        node_mock = mock.ANY
        checksum_fake = mock.ANY
        # | WHEN |
        actual_return_location = fw_processor.process_fw_on(node_mock,
                                                            checksum_fake)
        # | THEN |
        _download_fw_to_mock.assert_called_once_with(
            os_mock.path.join.return_value)
        verify_checksum_mock.assert_called_once_with(
            os_mock.path.join.return_value, checksum_fake)
        self.assertEqual(expected_return_location.fw_image_location,
                         actual_return_location.fw_image_location)
        self.assertEqual(expected_return_location.fw_image_filename,
                         actual_return_location.fw_image_filename)

    @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'os', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True)
    @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True,
                       autospec=True)
    def test_process_fw_on_throws_error_if_checksum_validation_fails(
            self, verify_checksum_mock, shutil_mock, os_mock, tempfile_mock):
        # | GIVEN |
        fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url)
        # Now mock the __download_fw_to method of fw_processor instance
        _download_fw_to_mock = mock.MagicMock()
        fw_processor._download_fw_to = _download_fw_to_mock
        verify_checksum_mock.side_effect = exception.ImageRefValidationFailed(
            image_href='some image',
            reason='checksum verification failed')
        node_mock = mock.ANY
        checksum_fake = mock.ANY
        # | WHEN | & | THEN |
        self.assertRaises(exception.ImageRefValidationFailed,
                          fw_processor.process_fw_on,
                          node_mock, checksum_fake)
        shutil_mock.rmtree.assert_called_once_with(
            tempfile_mock.mkdtemp(), ignore_errors=True)

    @mock.patch.object(ilo_fw_processor, 'tempfile', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'os', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'shutil', autospec=True)
    @mock.patch.object(ilo_common, 'verify_image_checksum', spec_set=True,
                       autospec=True)
    @mock.patch.object(
        ilo_fw_processor, '_extract_fw_from_file', autospec=True)
    def test_process_fw_on_calls__extract_fw_from_file(
            self, _extract_fw_from_file_mock, verify_checksum_mock,
            shutil_mock, os_mock, tempfile_mock):
        # | GIVEN |
        fw_processor = ilo_fw_processor.FirmwareProcessor(self.any_url)
        # Now mock the __download_fw_to method of fw_processor instance
        _download_fw_to_mock = mock.MagicMock()
        fw_processor._download_fw_to = _download_fw_to_mock
        expected_return_location = (ilo_fw_processor.FirmwareImageLocation(
            'some_location/file', 'file'))
        _extract_fw_from_file_mock.return_value = (expected_return_location,
                                                   True)
        node_mock = mock.ANY
        checksum_fake = mock.ANY
        # | WHEN |
        actual_return_location = fw_processor.process_fw_on(node_mock,
                                                            checksum_fake)
        # | THEN |
        _extract_fw_from_file_mock.assert_called_once_with(
            node_mock, os_mock.path.join.return_value)
        self.assertEqual(expected_return_location.fw_image_location,
                         actual_return_location.fw_image_location)
        self.assertEqual(expected_return_location.fw_image_filename,
                         actual_return_location.fw_image_filename)
        shutil_mock.rmtree.assert_called_once_with(
            tempfile_mock.mkdtemp(), ignore_errors=True)

    @mock.patch.object(builtins, 'open', autospec=True)
    @mock.patch.object(
        ilo_fw_processor.image_service, 'FileImageService', autospec=True)
    def test__download_file_based_fw_to_copies_file_to_target(
            self, file_image_service_mock, open_mock):
        # | GIVEN |
        fd_mock = mock.MagicMock(spec=io.BytesIO)
        open_mock.return_value = fd_mock
        fd_mock.__enter__.return_value = fd_mock
        any_file_based_firmware_file = 'file:///tmp/any_file_path'
        firmware_file_path = '/tmp/any_file_path'
        self.fw_processor_fake.parsed_url = urlparse.urlparse(
            any_file_based_firmware_file)
        # | WHEN |
        ilo_fw_processor._download_file_based_fw_to(self.fw_processor_fake,
                                                    'target_file')
        # | THEN |
        file_image_service_mock.return_value.download.assert_called_once_with(
            firmware_file_path, fd_mock)

    @mock.patch.object(builtins, 'open', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'image_service', autospec=True)
    def test__download_http_based_fw_to_downloads_the_fw_file(
            self, image_service_mock, open_mock):
        # | GIVEN |
        fd_mock = mock.MagicMock(spec=io.BytesIO)
        open_mock.return_value = fd_mock
        fd_mock.__enter__.return_value = fd_mock
        any_http_based_firmware_file = 'http://netloc/path_to_firmware_file'
        any_target_file = 'any_target_file'
        self.fw_processor_fake.parsed_url = urlparse.urlparse(
            any_http_based_firmware_file)
        # | WHEN |
        ilo_fw_processor._download_http_based_fw_to(self.fw_processor_fake,
                                                    any_target_file)
        # | THEN |
        image_service_mock.HttpImageService().download.assert_called_once_with(
            any_http_based_firmware_file, fd_mock)

    @mock.patch.object(ilo_fw_processor, 'urlparse', autospec=True)
    @mock.patch.object(
        ilo_fw_processor, '_download_http_based_fw_to', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'swift', autospec=True)
    def test__download_swift_based_fw_to_creates_temp_url(
            self, swift_mock, _download_http_based_fw_to_mock, urlparse_mock):
        # | GIVEN |
        swift_based_firmware_files = [
            'swift://containername/objectname',
            'swift://containername/pseudo-folder/objectname'
        ]
        for swift_firmware_file in swift_based_firmware_files:
            # | WHEN |
            self.fw_processor_fake.parsed_url = (
                urlparse.urlparse(swift_firmware_file))
            ilo_fw_processor._download_swift_based_fw_to(
                self.fw_processor_fake, 'any_target_file')
        # | THEN |
        expected_temp_url_call_args_list = [
            mock.call('containername', 'objectname', mock.ANY),
            mock.call('containername', 'pseudo-folder/objectname', mock.ANY)
        ]
        actual_temp_url_call_args_list = (
            swift_mock.SwiftAPI().get_temp_url.call_args_list)
        self.assertEqual(expected_temp_url_call_args_list,
                         actual_temp_url_call_args_list)

    @mock.patch.object(urlparse, 'urlparse', autospec=True)
    @mock.patch.object(
        ilo_fw_processor, '_download_http_based_fw_to', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'swift', autospec=True)
    def test__download_swift_based_fw_to_calls__download_http_based_fw_to(
            self, swift_mock, _download_http_based_fw_to_mock, urlparse_mock):
        """_download_swift_based_fw_to invokes _download_http_based_fw_to

        _download_swift_based_fw_to makes a call to
        _download_http_based_fw_to in turn with the temp url set as the url
        attribute of the fw_processor instance.
        """
        # | GIVEN |
        any_swift_based_firmware_file = 'swift://containername/objectname'
        any_target_file = 'any_target_file'
        self.fw_processor_fake.parsed_url = urlparse.urlparse(
            any_swift_based_firmware_file)
        urlparse_mock.reset_mock()
        # | WHEN |
        ilo_fw_processor._download_swift_based_fw_to(self.fw_processor_fake,
                                                     any_target_file)
        # | THEN |
        _download_http_based_fw_to_mock.assert_called_once_with(
            self.fw_processor_fake, any_target_file)
        urlparse_mock.assert_called_once_with(
            swift_mock.SwiftAPI().get_temp_url.return_value)
        self.assertEqual(
            urlparse_mock.return_value, self.fw_processor_fake.parsed_url)

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True)
    def test__extract_fw_from_file_calls_process_firmware_image(
            self, utils_mock, ilo_common_mock):
        # | GIVEN |
        node_mock = mock.MagicMock(uuid='fake_node_uuid')
        any_target_file = 'any_target_file'
        ilo_object_mock = ilo_common_mock.get_ilo_object.return_value
        utils_mock.process_firmware_image.return_value = ('some_location',
                                                          True, True)
        # | WHEN |
        ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file)
        # | THEN |
        utils_mock.process_firmware_image.assert_called_once_with(
            any_target_file, ilo_object_mock)

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True)
    def test__extract_fw_from_file_doesnt_upload_firmware(
            self, utils_mock, ilo_common_mock):
        # | GIVEN |
        node_mock = mock.MagicMock(uuid='fake_node_uuid')
        any_target_file = 'any_target_file'
        utils_mock.process_firmware_image.return_value = (
            'some_location/some_fw_file', False, True)
        # | WHEN |
        ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file)
        # | THEN |
        ilo_common_mock.copy_image_to_web_server.assert_not_called()

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True)
    @mock.patch.object(ilo_fw_processor, '_remove_file_based_me',
                       autospec=True)
    def test__extract_fw_from_file_sets_loc_obj_remove_to_file_if_no_upload(
            self, _remove_mock, utils_mock, ilo_common_mock):
        # | GIVEN |
        node_mock = mock.MagicMock(uuid='fake_node_uuid')
        any_target_file = 'any_target_file'
        utils_mock.process_firmware_image.return_value = (
            'some_location/some_fw_file', False, True)
        # | WHEN |
        location_obj, is_different_file = (
            ilo_fw_processor._extract_fw_from_file(node_mock,
                                                   any_target_file))
        location_obj.remove()
        # | THEN |
        _remove_mock.assert_called_once_with(location_obj)

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True)
    def test__extract_fw_from_file_uploads_firmware_to_webserver(
            self, utils_mock, ilo_common_mock):
        # | GIVEN |
        node_mock = mock.MagicMock(uuid='fake_node_uuid')
        any_target_file = 'any_target_file'
        utils_mock.process_firmware_image.return_value = (
            'some_location/some_fw_file', True, True)
        self.config(use_web_server_for_images=True, group='ilo')
        # | WHEN |
        ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file)
        # | THEN |
        ilo_common_mock.copy_image_to_web_server.assert_called_once_with(
            'some_location/some_fw_file', 'some_fw_file')

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True)
    @mock.patch.object(ilo_fw_processor, '_remove_webserver_based_me',
                       autospec=True)
    def test__extract_fw_from_file_sets_loc_obj_remove_to_webserver(
            self, _remove_mock, utils_mock, ilo_common_mock):
        # | GIVEN |
        node_mock = mock.MagicMock(uuid='fake_node_uuid')
        any_target_file = 'any_target_file'
        utils_mock.process_firmware_image.return_value = (
            'some_location/some_fw_file', True, True)
        self.config(use_web_server_for_images=True, group='ilo')
        # | WHEN |
        location_obj, is_different_file = (
            ilo_fw_processor._extract_fw_from_file(node_mock,
                                                   any_target_file))
        location_obj.remove()
        # | THEN |
        _remove_mock.assert_called_once_with(location_obj)

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True)
    def test__extract_fw_from_file_uploads_firmware_to_swift(
            self, utils_mock, ilo_common_mock):
        # | GIVEN |
        node_mock = mock.MagicMock(uuid='fake_node_uuid')
        any_target_file = 'any_target_file'
        utils_mock.process_firmware_image.return_value = (
            'some_location/some_fw_file', True, True)
        self.config(use_web_server_for_images=False, group='ilo')
        # | WHEN |
        ilo_fw_processor._extract_fw_from_file(node_mock, any_target_file)
        # | THEN |
        ilo_common_mock.copy_image_to_swift.assert_called_once_with(
            'some_location/some_fw_file', 'some_fw_file')

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    @mock.patch.object(ilo_fw_processor, 'proliantutils_utils', autospec=True)
    @mock.patch.object(ilo_fw_processor, '_remove_swift_based_me',
                       autospec=True)
    def test__extract_fw_from_file_sets_loc_obj_remove_to_swift(
            self, _remove_mock, utils_mock, ilo_common_mock):
        # | GIVEN |
        node_mock = mock.MagicMock(uuid='fake_node_uuid')
        any_target_file = 'any_target_file'
        utils_mock.process_firmware_image.return_value = (
            'some_location/some_fw_file', True, True)
        self.config(use_web_server_for_images=False, group='ilo')
        # | WHEN |
        location_obj, is_different_file = (
            ilo_fw_processor._extract_fw_from_file(node_mock,
                                                   any_target_file))
        location_obj.remove()
        # | THEN |
        _remove_mock.assert_called_once_with(location_obj)

    def test_fw_img_loc_sets_these_attributes(self):
        # | GIVEN |
        any_loc = 'some_location/some_fw_file'
        any_s_filename = 'some_fw_file'
        # | WHEN |
        location_obj = ilo_fw_processor.FirmwareImageLocation(
            any_loc, any_s_filename)
        # | THEN |
        self.assertEqual(any_loc, location_obj.fw_image_location)
        self.assertEqual(any_s_filename, location_obj.fw_image_filename)

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    def test__remove_file_based_me(
            self, ilo_common_mock):
        # | GIVEN |
        fw_img_location_obj_fake = mock.MagicMock()
        # | WHEN |
        ilo_fw_processor._remove_file_based_me(fw_img_location_obj_fake)
        # | THEN |
        (ilo_common_mock.remove_single_or_list_of_files.
         assert_called_with(fw_img_location_obj_fake.fw_image_location))

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    def test__remove_swift_based_me(self, ilo_common_mock):
        # | GIVEN |
        fw_img_location_obj_fake = mock.MagicMock()
        # | WHEN |
        ilo_fw_processor._remove_swift_based_me(fw_img_location_obj_fake)
        # | THEN |
        (ilo_common_mock.remove_image_from_swift.assert_called_with(
            fw_img_location_obj_fake.fw_image_filename, "firmware update"))

    @mock.patch.object(ilo_fw_processor, 'ilo_common', autospec=True)
    def test__remove_webserver_based_me(self, ilo_common_mock):
        # | GIVEN |
        fw_img_location_obj_fake = mock.MagicMock()
        # | WHEN |
        ilo_fw_processor._remove_webserver_based_me(fw_img_location_obj_fake)
        # | THEN |
        (ilo_common_mock.remove_image_from_web_server.assert_called_with(
            fw_img_location_obj_fake.fw_image_filename))

ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_management.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for Management Interface used by iLO modules.""" import mock from oslo_utils import importutils from oslo_utils import uuidutils from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import agent_base from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import boot as ilo_boot from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import management as ilo_management from ironic.drivers.modules import ipmitool from ironic.drivers import utils as driver_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers.modules.ilo import test_common from ironic.tests.unit.objects import utils as obj_utils ilo_error = importutils.try_import('proliantutils.exception') INFO_DICT = db_utils.get_test_ilo_info() class IloManagementTestCase(test_common.BaseIloTest): def setUp(self): super(IloManagementTestCase, self).setUp() port_1 = obj_utils.create_test_port( self.context, node_id=self.node.id, address='11:22:33:44:55:66', uuid=uuidutils.generate_uuid()) port_2 = obj_utils.create_test_port( self.context, node_id=self.node.id, address='11:22:33:44:55:67', uuid=uuidutils.generate_uuid()) self.ports = [port_1, port_2] def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected = ilo_management.MANAGEMENT_PROPERTIES self.assertEqual(expected, task.driver.management.get_properties()) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, driver_info_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.validate(task) driver_info_mock.assert_called_once_with(task.node) def 
test_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM, boot_devices.ISCSIBOOT] self.assertEqual( sorted(expected), sorted(task.driver.management. get_supported_boot_devices(task))) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_boot_device_next_boot(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.get_one_time_boot.return_value = 'CDROM' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_device = boot_devices.CDROM expected_response = {'boot_device': expected_device, 'persistent': False} self.assertEqual(expected_response, task.driver.management.get_boot_device(task)) ilo_object_mock.get_one_time_boot.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_boot_device_persistent(self, get_ilo_object_mock): ilo_mock = get_ilo_object_mock.return_value ilo_mock.get_one_time_boot.return_value = 'Normal' ilo_mock.get_persistent_boot_device.return_value = 'NETWORK' with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_device = boot_devices.PXE expected_response = {'boot_device': expected_device, 'persistent': True} self.assertEqual(expected_response, task.driver.management.get_boot_device(task)) ilo_mock.get_one_time_boot.assert_called_once_with() ilo_mock.get_persistent_boot_device.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_boot_device_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.get_one_time_boot.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, 
task.driver.management.get_boot_device, task) ilo_mock_object.get_one_time_boot.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_get_boot_device_persistent_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_one_time_boot.return_value = 'Normal' exc = ilo_error.IloError('error') ilo_mock_object.get_persistent_boot_device.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, task.driver.management.get_boot_device, task) ilo_mock_object.get_one_time_boot.assert_called_once_with() ilo_mock_object.get_persistent_boot_device.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_boot_device_ok(self, get_ilo_object_mock): ilo_object_mock = get_ilo_object_mock.return_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.set_boot_device(task, boot_devices.CDROM, False) get_ilo_object_mock.assert_called_once_with(task.node) ilo_object_mock.set_one_time_boot.assert_called_once_with('CDROM') @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_boot_device_persistent_true(self, get_ilo_object_mock): ilo_mock = get_ilo_object_mock.return_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.set_boot_device(task, boot_devices.PXE, True) get_ilo_object_mock.assert_called_once_with(task.node) ilo_mock.update_persistent_boot.assert_called_once_with( ['NETWORK']) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_boot_device_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.set_one_time_boot.side_effect = exc with task_manager.acquire(self.context, 
self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, task.driver.management.set_boot_device, task, boot_devices.PXE) ilo_mock_object.set_one_time_boot.assert_called_once_with('NETWORK') @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_boot_device_persistent_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.update_persistent_boot.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IloOperationError, task.driver.management.set_boot_device, task, boot_devices.PXE, True) ilo_mock_object.update_persistent_boot.assert_called_once_with( ['NETWORK']) def test_set_boot_device_invalid_device(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.set_boot_device, task, 'fake-device') @mock.patch.object(ilo_common, 'update_ipmi_properties', spec_set=True, autospec=True) @mock.patch.object(ipmitool.IPMIManagement, 'get_sensors_data', spec_set=True, autospec=True) def test_get_sensor_data(self, get_sensors_data_mock, update_ipmi_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.get_sensors_data(task) update_ipmi_mock.assert_called_once_with(task) get_sensors_data_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test__execute_ilo_step_ok(self, get_ilo_object_mock): ilo_mock = get_ilo_object_mock.return_value step_mock = getattr(ilo_mock, 'fake-step') ilo_management._execute_ilo_step( self.node, 'fake-step', 'args', kwarg='kwarg') step_mock.assert_called_once_with('args', kwarg='kwarg') @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', 
spec_set=True, autospec=True) def test__execute_ilo_step_not_supported(self, get_ilo_object_mock, log_mock): ilo_mock = get_ilo_object_mock.return_value exc = ilo_error.IloCommandNotSupportedError("error") step_mock = getattr(ilo_mock, 'fake-step') step_mock.side_effect = exc ilo_management._execute_ilo_step( self.node, 'fake-step', 'args', kwarg='kwarg') step_mock.assert_called_once_with('args', kwarg='kwarg') self.assertTrue(log_mock.warning.called) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def _test__execute_ilo_step_fail(self, get_ilo_object_mock): if self.node.clean_step: step = self.node.clean_step step_name = step['step'] exept = exception.NodeCleaningFailure else: step = self.node.deploy_step step_name = step['step'] exept = exception.InstanceDeployFailure ilo_mock = get_ilo_object_mock.return_value exc = ilo_error.IloError("error") step_mock = getattr(ilo_mock, step_name) step_mock.side_effect = exc self.assertRaises(exept, ilo_management._execute_ilo_step, self.node, step_name, 'args', kwarg='kwarg') step_mock.assert_called_once_with('args', kwarg='kwarg') def test__execute_ilo_step_fail_clean(self): self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'fake-step', 'argsinfo': {}} self.node.save() self._test__execute_ilo_step_fail() def test__execute_ilo_step_fail_deploy(self): self.node.deploy_step = {'priority': 100, 'interface': 'management', 'step': 'fake-step', 'argsinfo': {}} self.node.save() self._test__execute_ilo_step_fail() @mock.patch.object(deploy_utils, 'build_agent_options', spec_set=True, autospec=True) @mock.patch.object(ilo_boot.IloVirtualMediaBoot, 'clean_up_ramdisk', spec_set=True, autospec=True) @mock.patch.object(ilo_boot.IloVirtualMediaBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_reset_ilo( self, execute_step_mock, prepare_mock, cleanup_mock, build_mock): 
build_mock.return_value = {'a': 'b'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_ilo(task) execute_step_mock.assert_called_once_with(task.node, 'reset_ilo') cleanup_mock.assert_called_once_with(mock.ANY, task) build_mock.assert_called_once_with(task.node) prepare_mock.assert_called_once_with( mock.ANY, mock.ANY, {'a': 'b'}) @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_reset_ilo_credential_ok(self, step_mock): info = self.node.driver_info info['ilo_change_password'] = "fake-password" self.node.driver_info = info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_ilo_credential(task) step_mock.assert_called_once_with( task.node, 'reset_ilo_credential', 'fake-password') self.assertNotIn('ilo_change_password', task.node.driver_info) self.assertEqual('fake-password', task.node.driver_info['ilo_password']) @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_reset_ilo_credential_pass_as_arg_ok(self, step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_ilo_credential( task, change_password='fake-password') step_mock.assert_called_once_with( task.node, 'reset_ilo_credential', 'fake-password') self.assertNotIn('ilo_change_password', task.node.driver_info) self.assertEqual('fake-password', task.node.driver_info['ilo_password']) @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True) @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_reset_ilo_credential_no_password(self, step_mock, log_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_ilo_credential(task) self.assertFalse(step_mock.called) self.assertTrue(log_mock.info.called) @mock.patch.object(ilo_management, 
'_execute_ilo_step', spec_set=True, autospec=True) def test_reset_bios_to_default(self, step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_bios_to_default(task) step_mock.assert_called_once_with(task.node, 'reset_bios_to_default') @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_reset_secure_boot_keys_to_default(self, step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.reset_secure_boot_keys_to_default(task) step_mock.assert_called_once_with(task.node, 'reset_secure_boot_keys') @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_clear_secure_boot_keys(self, step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.clear_secure_boot_keys(task) step_mock.assert_called_once_with(task.node, 'clear_secure_boot_keys') @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_activate_license(self, step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: activate_license_args = { 'ilo_license_key': 'XXXXX-YYYYY-ZZZZZ-XYZZZ-XXYYZ'} task.driver.management.activate_license(task, **activate_license_args) step_mock.assert_called_once_with( task.node, 'activate_license', 'XXXXX-YYYYY-ZZZZZ-XYZZZ-XXYYZ') @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) def test_activate_license_no_or_invalid_format_license_key( self, step_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: for license_key_value in (None, [], {}): activate_license_args = {'ilo_license_key': license_key_value} self.assertRaises(exception.InvalidParameterValue, task.driver.management.activate_license, task, **activate_license_args) self.assertFalse(step_mock.called) @mock.patch.object(deploy_utils, 
'build_agent_options', spec_set=True, autospec=True) @mock.patch.object(ilo_boot.IloVirtualMediaBoot, 'clean_up_ramdisk', spec_set=True, autospec=True) @mock.patch.object(ilo_boot.IloVirtualMediaBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(ilo_management, 'LOG') @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor, 'FirmwareProcessor', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'remove_single_or_list_of_files', spec_set=True, autospec=True) def _test_update_firmware_calls_step_foreach_url( self, remove_file_mock, FirmwareProcessor_mock, execute_step_mock, LOG_mock, prepare_mock, cleanup_mock, build_mock): if self.node.clean_step: step = self.node.clean_step else: step = self.node.deploy_step build_mock.return_value = {'a': 'b'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: firmware_update_args = step['argsinfo'] FirmwareProcessor_mock.return_value.process_fw_on.side_effect = [ ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_filepath', 'filepath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_httppath', 'httppath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_httpspath', 'httpspath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_swiftpath', 'swiftpath'), ilo_management.firmware_processor.FirmwareImageLocation( 'fw_location_for_another_filepath', 'filepath2') ] task.driver.management.update_firmware(task, **firmware_update_args) calls = [mock.call(task.node, 'update_firmware', 'fw_location_for_filepath', 'ilo'), mock.call(task.node, 'update_firmware', 'fw_location_for_httppath', 'cpld'), mock.call(task.node, 'update_firmware', 'fw_location_for_httpspath', 'power_pic'), mock.call(task.node, 'update_firmware', 'fw_location_for_swiftpath', 'bios'), mock.call(task.node, 'update_firmware', 
'fw_location_for_another_filepath', 'chassis'), ] execute_step_mock.assert_has_calls(calls) self.assertEqual(5, execute_step_mock.call_count) cleanup_mock.assert_called_once_with(mock.ANY, task) build_mock.assert_called_once_with(task.node) prepare_mock.assert_called_once_with( mock.ANY, mock.ANY, {'a': 'b'}) def test_update_firmware_calls_step_foreach_url_clean(self): firmware_images = [ { 'url': 'file:///any_path', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'http://any_url', 'checksum': 'xxxx', 'component': 'cpld' }, { 'url': 'https://any_url', 'checksum': 'xxxx', 'component': 'power_pic' }, { 'url': 'swift://container/object', 'checksum': 'xxxx', 'component': 'bios' }, { 'url': 'file:///any_path', 'checksum': 'xxxx', 'component': 'chassis' } ] firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': firmware_images} self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_calls_step_foreach_url() def test_update_firmware_calls_step_foreach_url_deploy(self): firmware_images = [ { 'url': 'file:///any_path', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'http://any_url', 'checksum': 'xxxx', 'component': 'cpld' }, { 'url': 'https://any_url', 'checksum': 'xxxx', 'component': 'power_pic' }, { 'url': 'swift://container/object', 'checksum': 'xxxx', 'component': 'bios' }, { 'url': 'file:///any_path', 'checksum': 'xxxx', 'component': 'chassis' } ] firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': firmware_images} self.node.deploy_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_calls_step_foreach_url() def _test_update_firmware_invalid_update_mode_provided(self): if self.node.clean_step: step = self.node.clean_step else: step = self.node.deploy_step with task_manager.acquire(self.context, 
self.node.uuid, shared=False) as task: firmware_update_args = step['argsinfo'] firmware_update_args = {'firmware_update_mode': 'invalid_mode', 'firmware_images': None} self.assertRaises(exception.InvalidParameterValue, task.driver.management.update_firmware, task, **firmware_update_args) def test_update_firmware_invalid_update_mode_provided_clean(self): firmware_update_args = {'firmware_update_mode': 'invalid_mode', 'firmware_images': None} self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_invalid_update_mode_provided() def test_update_firmware_invalid_update_mode_provided_deploy(self): firmware_update_args = {'firmware_update_mode': 'invalid_mode', 'firmware_images': None} self.node.deploy_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_invalid_update_mode_provided() def _test_update_firmware_error_for_no_firmware_url(self): if self.node.clean_step: step = self.node.clean_step else: step = self.node.deploy_step with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: firmware_update_args = step['argsinfo'] firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': []} self.assertRaises(exception.InvalidParameterValue, task.driver.management.update_firmware, task, **firmware_update_args) def test_update_firmware_error_for_no_firmware_url_clean(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': []} self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_error_for_no_firmware_url() def test_update_firmware_error_for_no_firmware_url_deploy(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': []} self.node.deploy_step = {'priority': 
100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_error_for_no_firmware_url() def _test_update_firmware_throws_error_for_invalid_component_type(self): if self.node.clean_step: step = self.node.clean_step exept = exception.NodeCleaningFailure else: step = self.node.deploy_step exept = exception.InstanceDeployFailure with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: firmware_update_args = step['argsinfo'] self.assertRaises(exept, task.driver.management.update_firmware, task, **firmware_update_args) def test_update_firmware_error_for_invalid_component_type_clean(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'xyz' } ]} self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_throws_error_for_invalid_component_type() def test_update_firmware_error_for_invalid_component_type_deploy(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'xyz' } ]} self.node.deploy_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_throws_error_for_invalid_component_type() @mock.patch.object(ilo_management, 'LOG') @mock.patch.object(ilo_management.firmware_processor.FirmwareProcessor, 'process_fw_on', spec_set=True, autospec=True) def _test_update_firmware_throws_error_for_checksum_validation_error( self, process_fw_on_mock, LOG_mock): if self.node.clean_step: step = self.node.clean_step exept = exception.NodeCleaningFailure else: step = self.node.deploy_step exept = exception.InstanceDeployFailure with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # 
| GIVEN | firmware_update_args = step['argsinfo'] process_fw_on_mock.side_effect = exception.ImageRefValidationFailed # | WHEN & THEN | self.assertRaises(exept, task.driver.management.update_firmware, task, **firmware_update_args) def test_update_firmware_error_for_checksum_validation_error_clean(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'invalid_checksum', 'component': 'bios' } ]} self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_throws_error_for_checksum_validation_error() def test_update_firmware_error_for_checksum_validation_error_deploy(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'invalid_checksum', 'component': 'bios' } ]} self.node.deploy_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_throws_error_for_checksum_validation_error() @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor, 'FirmwareProcessor', spec_set=True, autospec=True) def _test_update_firmware_doesnt_update_any_if_any_url_fails( self, FirmwareProcessor_mock, clean_step_mock): """update_firmware throws error for failure in processing any url update_firmware doesn't invoke firmware update of proliantutils for any url if processing on any firmware url fails. 
""" if self.node.clean_step: step = self.node.clean_step exept = exception.NodeCleaningFailure else: step = self.node.deploy_step exept = exception.InstanceDeployFailure with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: firmware_update_args = step['argsinfo'] FirmwareProcessor_mock.return_value.process_fw_on.side_effect = [ ilo_management.firmware_processor.FirmwareImageLocation( 'extracted_firmware_url_of_any_valid_url', 'filename'), exception.IronicException ] self.assertRaises(exept, task.driver.management.update_firmware, task, **firmware_update_args) self.assertFalse(clean_step_mock.called) def test_update_firmware_doesnt_update_any_if_any_url_fails_clean(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'any_invalid_url', 'checksum': 'xxxx', 'component': 'bios' }] } self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_doesnt_update_any_if_any_url_fails() def test_update_firmware_doesnt_update_any_if_any_url_fails_deploy(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'any_invalid_url', 'checksum': 'xxxx', 'component': 'bios' }] } self.node.deploy_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_doesnt_update_any_if_any_url_fails() @mock.patch.object(ilo_management, 'LOG') @mock.patch.object(ilo_management, '_execute_ilo_step', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor, 'FirmwareProcessor', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor.FirmwareImageLocation, 'remove', spec_set=True, autospec=True) def 
_test_update_firmware_cleans_all_files_if_exc_thrown( self, remove_mock, FirmwareProcessor_mock, clean_step_mock, LOG_mock): if self.node.clean_step: step = self.node.clean_step exept = exception.NodeCleaningFailure else: step = self.node.deploy_step exept = exception.InstanceDeployFailure with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: firmware_update_args = step['argsinfo'] fw_loc_obj_1 = (ilo_management.firmware_processor. FirmwareImageLocation('extracted_firmware_url_1', 'filename_1')) fw_loc_obj_2 = (ilo_management.firmware_processor. FirmwareImageLocation('extracted_firmware_url_2', 'filename_2')) FirmwareProcessor_mock.return_value.process_fw_on.side_effect = [ fw_loc_obj_1, fw_loc_obj_2 ] clean_step_mock.side_effect = exept( node=self.node.uuid, reason='ilo_exc') self.assertRaises(exept, task.driver.management.update_firmware, task, **firmware_update_args) clean_step_mock.assert_called_once_with( task.node, 'update_firmware', 'extracted_firmware_url_1', 'ilo') self.assertTrue(LOG_mock.error.called) remove_mock.assert_has_calls([mock.call(fw_loc_obj_1), mock.call(fw_loc_obj_2)]) def test_update_firmware_cleans_all_files_if_exc_thrown_clean(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'any_invalid_url', 'checksum': 'xxxx', 'component': 'bios' }] } self.node.clean_step = {'priority': 100, 'interface': 'management', 'step': 'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_cleans_all_files_if_exc_thrown() def test_update_firmware_cleans_all_files_if_exc_thrown_deploy(self): firmware_update_args = {'firmware_update_mode': 'ilo', 'firmware_images': [ { 'url': 'any_valid_url', 'checksum': 'xxxx', 'component': 'ilo' }, { 'url': 'any_invalid_url', 'checksum': 'xxxx', 'component': 'bios' }] } self.node.deploy_step = {'priority': 100, 'interface': 'management', 'step': 
'update_firmware', 'argsinfo': firmware_update_args} self.node.save() self._test_update_firmware_cleans_all_files_if_exc_thrown() @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True, autospec=True) @mock.patch.object(agent_base, 'execute_clean_step', autospec=True) def test_update_firmware_sum_mode_with_component( self, execute_mock, attach_vmedia_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: execute_mock.return_value = states.CLEANWAIT # | GIVEN | firmware_update_args = { 'url': 'http://any_url', 'checksum': 'xxxx', 'component': ['CP02345.scexe', 'CP02567.exe']} clean_step = {'step': 'update_firmware', 'interface': 'management', 'args': firmware_update_args} task.node.clean_step = clean_step # | WHEN | return_value = task.driver.management.update_firmware_sum( task, **firmware_update_args) # | THEN | attach_vmedia_mock.assert_any_call( task.node, 'CDROM', 'http://any_url') self.assertEqual(states.CLEANWAIT, return_value) execute_mock.assert_called_once_with(task, clean_step) @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True, autospec=True) @mock.patch.object(ilo_management.firmware_processor, 'get_swift_url', autospec=True) @mock.patch.object(agent_base, 'execute_clean_step', autospec=True) def test_update_firmware_sum_mode_swift_url( self, execute_mock, swift_url_mock, attach_vmedia_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: swift_url_mock.return_value = "http://path-to-file" execute_mock.return_value = states.CLEANWAIT # | GIVEN | firmware_update_args = { 'url': 'swift://container/object', 'checksum': 'xxxx', 'components': ['CP02345.scexe', 'CP02567.exe']} clean_step = {'step': 'update_firmware', 'interface': 'management', 'args': firmware_update_args} task.node.clean_step = clean_step # | WHEN | return_value = task.driver.management.update_firmware_sum( task, **firmware_update_args) # | THEN | attach_vmedia_mock.assert_any_call( task.node, 'CDROM', 
'http://path-to-file') self.assertEqual(states.CLEANWAIT, return_value) self.assertEqual(task.node.clean_step['args']['url'], "http://path-to-file") @mock.patch.object(ilo_common, 'attach_vmedia', spec_set=True, autospec=True) @mock.patch.object(agent_base, 'execute_clean_step', autospec=True) def test_update_firmware_sum_mode_without_component( self, execute_mock, attach_vmedia_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: execute_mock.return_value = states.CLEANWAIT # | GIVEN | firmware_update_args = { 'url': 'any_valid_url', 'checksum': 'xxxx'} clean_step = {'step': 'update_firmware', 'interface': 'management', 'args': firmware_update_args} task.node.clean_step = clean_step # | WHEN | return_value = task.driver.management.update_firmware_sum( task, **firmware_update_args) # | THEN | attach_vmedia_mock.assert_any_call( task.node, 'CDROM', 'any_valid_url') self.assertEqual(states.CLEANWAIT, return_value) execute_mock.assert_called_once_with(task, clean_step) def test_update_firmware_sum_mode_invalid_component(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # | GIVEN | firmware_update_args = { 'url': 'any_valid_url', 'checksum': 'xxxx', 'components': ['CP02345']} # | WHEN & THEN | self.assertRaises(exception.InvalidParameterValue, task.driver.management.update_firmware_sum, task, **firmware_update_args) @mock.patch.object(driver_utils, 'store_ramdisk_logs') def test__update_firmware_sum_final_with_logs(self, store_mock): self.config(deploy_logs_collect='always', group='agent') command = {'command_status': 'SUCCEEDED', 'command_result': { 'clean_result': {'Log Data': 'aaaabbbbcccdddd'}} } with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management._update_firmware_sum_final( task, command) store_mock.assert_called_once_with(task.node, 'aaaabbbbcccdddd', label='update_firmware_sum') @mock.patch.object(driver_utils, 'store_ramdisk_logs') def 
test__update_firmware_sum_final_without_logs(self, store_mock): self.config(deploy_logs_collect='on_failure', group='agent') command = {'command_status': 'SUCCEEDED', 'command_result': { 'clean_result': {'Log Data': 'aaaabbbbcccdddd'}} } with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management._update_firmware_sum_final( task, command) self.assertFalse(store_mock.called) @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True) @mock.patch.object(driver_utils, 'store_ramdisk_logs') def test__update_firmware_sum_final_swift_error(self, store_mock, log_mock): self.config(deploy_logs_collect='always', group='agent') command = {'command_status': 'SUCCEEDED', 'command_result': { 'clean_result': {'Log Data': 'aaaabbbbcccdddd'}} } store_mock.side_effect = exception.SwiftOperationError('Error') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management._update_firmware_sum_final( task, command) self.assertTrue(log_mock.error.called) @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True) @mock.patch.object(driver_utils, 'store_ramdisk_logs') def test__update_firmware_sum_final_environment_error(self, store_mock, log_mock): self.config(deploy_logs_collect='always', group='agent') command = {'command_status': 'SUCCEEDED', 'command_result': { 'clean_result': {'Log Data': 'aaaabbbbcccdddd'}} } store_mock.side_effect = EnvironmentError('Error') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management._update_firmware_sum_final( task, command) self.assertTrue(log_mock.exception.called) @mock.patch.object(ilo_management, 'LOG', spec_set=True, autospec=True) @mock.patch.object(driver_utils, 'store_ramdisk_logs') def test__update_firmware_sum_final_unknown_exception(self, store_mock, log_mock): self.config(deploy_logs_collect='always', group='agent') command = {'command_status': 'SUCCEEDED', 'command_result': { 
'clean_result': {'Log Data': 'aaaabbbbcccdddd'}} } store_mock.side_effect = Exception('Error') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management._update_firmware_sum_final( task, command) self.assertTrue(log_mock.exception.called) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_iscsi_boot_target_with_auth(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': 0, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id task.node.driver_internal_info = driver_internal_info task.node.save() ilo_object_mock = get_ilo_object_mock.return_value task.driver.management.set_iscsi_boot_target(task) ilo_object_mock.set_iscsi_info.assert_called_once_with( 'fake_iqn', 0, 'fake_host', '3260', auth_method='CHAP', username='fake_username', password='fake_password', macs=['11:22:33:44:55:66', '11:22:33:44:55:67']) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_iscsi_boot_target_without_auth(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': 0, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn'}) driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id task.node.driver_internal_info = driver_internal_info 
task.node.save() ilo_object_mock = get_ilo_object_mock.return_value task.driver.management.set_iscsi_boot_target(task) ilo_object_mock.set_iscsi_info.assert_called_once_with( 'fake_iqn', 0, 'fake_host', '3260', auth_method=None, password=None, username=None, macs=['11:22:33:44:55:66', '11:22:33:44:55:67']) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_set_iscsi_boot_target_failed(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': 0, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id task.node.driver_internal_info = driver_internal_info task.node.save() ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.set_iscsi_info.side_effect = ( ilo_error.IloError('error')) self.assertRaises(exception.IloOperationError, task.driver.management.set_iscsi_boot_target, task) def test_set_iscsi_boot_target_missed_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_iqn': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id task.node.driver_internal_info = driver_internal_info task.node.save() self.assertRaises(exception.MissingParameterValue, task.driver.management.set_iscsi_boot_target, task) @mock.patch.object(ilo_common, 
'get_ilo_object', spec_set=True, autospec=True) def test_set_iscsi_boot_target_in_bios(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': 0, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) driver_internal_info = task.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id task.node.driver_internal_info = driver_internal_info task.node.save() ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.set_iscsi_info.side_effect = ( ilo_error.IloCommandNotSupportedInBiosError('error')) self.assertRaises(exception.IloOperationNotSupported, task.driver.management.set_iscsi_boot_target, task) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_clear_iscsi_boot_target(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_object_mock = get_ilo_object_mock.return_value task.driver.management.clear_iscsi_boot_target(task) ilo_object_mock.unset_iscsi_info.assert_called_once_with( macs=['11:22:33:44:55:66', '11:22:33:44:55:67']) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_clear_iscsi_boot_target_failed(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.unset_iscsi_info.side_effect = ( ilo_error.IloError('error')) self.assertRaises(exception.IloOperationError, task.driver.management.clear_iscsi_boot_target, task) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_clear_iscsi_boot_target_in_bios(self, get_ilo_object_mock): with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.unset_iscsi_info.side_effect = ( ilo_error.IloCommandNotSupportedInBiosError('error')) self.assertRaises(exception.IloOperationNotSupported, task.driver.management.clear_iscsi_boot_target, task) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inject_nmi(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_object_mock = get_ilo_object_mock.return_value task.driver.management.inject_nmi(task) ilo_object_mock.inject_nmi.assert_called_once() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inject_nmi_failed(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.inject_nmi.side_effect = ( ilo_error.IloError('error')) self.assertRaises(exception.IloOperationError, task.driver.management.inject_nmi, task) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test_inject_nmi_not_supported(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.inject_nmi.side_effect = ( ilo_error.IloCommandNotSupportedError('error')) self.assertRaises(exception.IloOperationNotSupported, task.driver.management.inject_nmi, task) class Ilo5ManagementTestCase(db_base.DbTestCase): def setUp(self): super(Ilo5ManagementTestCase, self).setUp() self.driver = mock.Mock(management=ilo_management.Ilo5Management()) self.clean_step = {'step': 'erase_devices', 'interface': 'management'} n = { 'driver': 'ilo5', 'driver_info': INFO_DICT, 'clean_step': self.clean_step, } self.config(enabled_hardware_types=['ilo5'], enabled_boot_interfaces=['ilo-virtual-media'], 
enabled_console_interfaces=['ilo'], enabled_deploy_interfaces=['iscsi'], enabled_inspect_interfaces=['ilo'], enabled_management_interfaces=['ilo5'], enabled_power_interfaces=['ilo'], enabled_raid_interfaces=['ilo5']) self.node = obj_utils.create_test_node(self.context, **n) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_erase_devices_hdd(self, mock_power, ilo_mock, build_agent_mock): ilo_mock_object = ilo_mock.return_value ilo_mock_object.get_available_disk_types.return_value = ['HDD'] build_agent_mock.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: result = task.driver.management.erase_devices(task) self.assertTrue( task.node.driver_internal_info.get( 'ilo_disk_erase_hdd_check')) self.assertTrue( task.node.driver_internal_info.get( 'cleaning_reboot')) self.assertFalse( task.node.driver_internal_info.get( 'skip_current_clean_step')) ilo_mock_object.do_disk_erase.assert_called_once_with( 'HDD', 'overwrite') self.assertEqual(states.CLEANWAIT, result) mock_power.assert_called_once_with(task, states.REBOOT) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_erase_devices_ssd(self, mock_power, ilo_mock, build_agent_mock): ilo_mock_object = ilo_mock.return_value ilo_mock_object.get_available_disk_types.return_value = ['SSD'] build_agent_mock.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: result = task.driver.management.erase_devices(task) self.assertTrue( task.node.driver_internal_info.get( 'ilo_disk_erase_ssd_check')) self.assertTrue( task.node.driver_internal_info.get( 'ilo_disk_erase_hdd_check')) self.assertTrue( 
task.node.driver_internal_info.get( 'cleaning_reboot')) self.assertFalse( task.node.driver_internal_info.get( 'skip_current_clean_step')) ilo_mock_object.do_disk_erase.assert_called_once_with( 'SSD', 'block') self.assertEqual(states.CLEANWAIT, result) mock_power.assert_called_once_with(task, states.REBOOT) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_erase_devices_ssd_when_hdd_done(self, mock_power, ilo_mock, build_agent_mock): build_agent_mock.return_value = [] ilo_mock_object = ilo_mock.return_value ilo_mock_object.get_available_disk_types.return_value = ['HDD', 'SSD'] self.node.driver_internal_info = {'ilo_disk_erase_hdd_check': True} self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: result = task.driver.management.erase_devices(task) self.assertTrue( task.node.driver_internal_info.get( 'ilo_disk_erase_hdd_check')) self.assertTrue( task.node.driver_internal_info.get( 'ilo_disk_erase_ssd_check')) self.assertTrue( task.node.driver_internal_info.get( 'cleaning_reboot')) self.assertFalse( task.node.driver_internal_info.get( 'skip_current_clean_step')) ilo_mock_object.do_disk_erase.assert_called_once_with( 'SSD', 'block') self.assertEqual(states.CLEANWAIT, result) mock_power.assert_called_once_with(task, states.REBOOT) @mock.patch.object(ilo_management.LOG, 'info') @mock.patch.object(ilo_management.Ilo5Management, '_wait_for_disk_erase_status', autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def test_erase_devices_completed(self, ilo_mock, disk_status_mock, log_mock): ilo_mock_object = ilo_mock.return_value ilo_mock_object.get_available_disk_types.return_value = ['HDD', 'SSD'] disk_status_mock.return_value = True self.node.driver_internal_info = {'ilo_disk_erase_hdd_check': True, 'ilo_disk_erase_ssd_check': True} 
self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.erase_devices(task) self.assertFalse( task.node.driver_internal_info.get( 'ilo_disk_erase_hdd_check')) self.assertFalse( task.node.driver_internal_info.get( 'ilo_disk_erase_ssd_check')) self.assertTrue(log_mock.called) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_erase_devices_hdd_with_erase_pattern_zero( self, mock_power, ilo_mock, build_agent_mock): ilo_mock_object = ilo_mock.return_value ilo_mock_object.get_available_disk_types.return_value = ['HDD'] build_agent_mock.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: result = task.driver.management.erase_devices( task, erase_pattern={'hdd': 'zero', 'ssd': 'zero'}) self.assertTrue( task.node.driver_internal_info.get( 'ilo_disk_erase_hdd_check')) self.assertTrue( task.node.driver_internal_info.get( 'cleaning_reboot')) self.assertFalse( task.node.driver_internal_info.get( 'skip_current_clean_step')) ilo_mock_object.do_disk_erase.assert_called_once_with( 'HDD', 'zero') self.assertEqual(states.CLEANWAIT, result) mock_power.assert_called_once_with(task, states.REBOOT) @mock.patch.object(ilo_management.LOG, 'info') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def test_erase_devices_when_no_drive_available( self, ilo_mock, log_mock): ilo_mock_object = ilo_mock.return_value ilo_mock_object.get_available_disk_types.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.erase_devices(task) self.assertTrue(log_mock.called) def test_erase_devices_hdd_with_invalid_format_erase_pattern( self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: 
self.assertRaises(exception.InvalidParameterValue, task.driver.management.erase_devices, task, erase_pattern=123) def test_erase_devices_hdd_with_invalid_device_type_erase_pattern( self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.erase_devices, task, erase_pattern={'xyz': 'block'}) def test_erase_devices_hdd_with_invalid_erase_pattern( self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.erase_devices, task, erase_pattern={'ssd': 'xyz'}) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) @mock.patch.object(ilo_management.Ilo5Management, '_set_clean_failed') def test_erase_devices_hdd_ilo_error(self, set_clean_failed_mock, ilo_mock): ilo_mock_object = ilo_mock.return_value ilo_mock_object.get_available_disk_types.return_value = ['HDD'] exc = ilo_error.IloError('error') ilo_mock_object.do_disk_erase.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.erase_devices(task) ilo_mock_object.do_disk_erase.assert_called_once_with( 'HDD', 'overwrite') self.assertNotIn('ilo_disk_erase_hdd_check', task.node.driver_internal_info) self.assertNotIn('ilo_disk_erase_ssd_check', task.node.driver_internal_info) self.assertNotIn('cleaning_reboot', task.node.driver_internal_info) self.assertNotIn('skip_current_clean_step', task.node.driver_internal_info) set_clean_failed_mock.assert_called_once_with( task, exc)

ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_bios.py

# Copyright 2018 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for IloPower module.""" import mock from oslo_config import cfg from oslo_utils import importutils from ironic.common import exception from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.ilo import bios as ilo_bios from ironic.drivers.modules.ilo import boot as ilo_boot from ironic.drivers.modules.ilo import common as ilo_common from ironic import objects from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers.modules.ilo import test_common ilo_error = importutils.try_import('proliantutils.exception') INFO_DICT = db_utils.get_test_ilo_info() CONF = cfg.CONF class IloBiosTestCase(test_common.BaseIloTest): def test_get_properties(self): expected = ilo_common.REQUIRED_PROPERTIES with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.bios.get_properties()) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.bios.validate(task) mock_drvinfo.assert_called_once_with(task.node) def _test_ilo_error(self, exc_cls, test_methods_not_called, test_methods_called, method_details, exception_mock, operation='cleaning'): exception_mock.side_effect = exc_cls('error') method = method_details.get("name") args = method_details.get("args") if self.node.clean_step: self.assertRaises(exception.NodeCleaningFailure, 
method, *args) else: self.assertRaises(exception.InstanceDeployFailure, method, *args) for test_method in test_methods_not_called: test_method.assert_not_called() for called_method in test_methods_called: called_method["name"].assert_called_once_with( *called_method["args"]) @mock.patch.object(ilo_bios.IloBIOS, 'cache_bios_settings', autospec=True) @mock.patch.object(ilo_bios.IloBIOS, '_execute_post_boot_bios_step', autospec=True) @mock.patch.object(ilo_bios.IloBIOS, '_execute_pre_boot_bios_step', autospec=True) def test_apply_configuration_pre_boot(self, exe_pre_boot_mock, exe_post_boot_mock, cache_settings_mock): settings = [ { "name": "SET_A", "value": "VAL_A", }, { "name": "SET_B", "value": "VAL_B", }, { "name": "SET_C", "value": "VAL_C", }, { "name": "SET_D", "value": "VAL_D", } ] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_internal_info = task.node.driver_internal_info driver_internal_info.pop('apply_bios', None) task.node.driver_internal_info = driver_internal_info task.node.save() actual_settings = {'SET_A': 'VAL_A', 'SET_B': 'VAL_B', 'SET_C': 'VAL_C', 'SET_D': 'VAL_D'} task.driver.bios.apply_configuration(task, settings) exe_pre_boot_mock.assert_called_once_with( task.driver.bios, task, 'apply_configuration', actual_settings) self.assertFalse(exe_post_boot_mock.called) cache_settings_mock.assert_called_once_with(task.driver.bios, task) @mock.patch.object(ilo_bios.IloBIOS, 'cache_bios_settings', autospec=True) @mock.patch.object(ilo_bios.IloBIOS, '_execute_post_boot_bios_step', autospec=True) @mock.patch.object(ilo_bios.IloBIOS, '_execute_pre_boot_bios_step', autospec=True) def test_apply_configuration_post_boot(self, exe_pre_boot_mock, exe_post_boot_mock, cache_settings_mock): settings = [ { "name": "SET_A", "value": "VAL_A", }, { "name": "SET_B", "value": "VAL_B", }, { "name": "SET_C", "value": "VAL_C", }, { "name": "SET_D", "value": "VAL_D", } ] with task_manager.acquire(self.context, self.node.uuid, shared=True) 
as task: driver_internal_info = task.node.driver_internal_info driver_internal_info['apply_bios'] = True task.node.driver_internal_info = driver_internal_info task.node.save() task.driver.bios.apply_configuration(task, settings) exe_post_boot_mock.assert_called_once_with( task.driver.bios, task, 'apply_configuration') self.assertFalse(exe_pre_boot_mock.called) cache_settings_mock.assert_called_once_with(task.driver.bios, task) @mock.patch.object(ilo_boot.IloVirtualMediaBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test__execute_pre_boot_bios_step_apply_configuration( self, get_ilo_object_mock, build_agent_mock, node_power_mock, prepare_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_object_mock = get_ilo_object_mock.return_value data = { "SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D" } step = 'apply_configuration' task.driver.bios._execute_pre_boot_bios_step(task, step, data) driver_info = task.node.driver_internal_info self.assertTrue( all(x in driver_info for x in ( 'apply_bios', 'deployment_reboot', 'skip_current_deploy_step'))) ilo_object_mock.set_bios_settings.assert_called_once_with(data) self.assertFalse(ilo_object_mock.reset_bios_to_default.called) build_agent_mock.assert_called_once_with(task.node) self.assertTrue(prepare_mock.called) self.assertTrue(node_power_mock.called) @mock.patch.object(ilo_boot.IloVirtualMediaBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def 
_test__execute_pre_boot_bios_step( self, get_ilo_mock, build_agent_mock, node_power_mock, prepare_mock): if self.node.clean_step: step_data = self.node.clean_step check_fields = ['cleaning_reboot', 'skip_current_clean_step'] else: step_data = self.node.deploy_step check_fields = ['deployment_reboot', 'skip_current_deploy_step'] data = step_data['argsinfo'].get('settings', None) step = step_data['step'] if step == 'factory_reset': check_fields.append('reset_bios') elif step == 'apply_configuration': check_fields.append('apply_bios') with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_mock = get_ilo_mock.return_value task.driver.bios._execute_pre_boot_bios_step(task, step, data) drv_internal_info = task.node.driver_internal_info self.assertTrue( all(x in drv_internal_info for x in check_fields)) if step == 'factory_reset': ilo_mock.reset_bios_to_default.assert_called_once_with() elif step == 'apply_configuration': ilo_mock.set_bios_settings.assert_called_once_with(data) build_agent_mock.assert_called_once_with(task.node) self.assertTrue(prepare_mock.called) self.assertTrue(node_power_mock.called) def test__execute_pre_boot_bios_step_apply_conf_cleaning(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step() def test__execute_pre_boot_bios_step_apply_conf_deploying(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step() def test__execute_pre_boot_bios_step_factory_reset_cleaning(self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() 
self._test__execute_pre_boot_bios_step() def test__execute_pre_boot_bios_step_factory_reset_deploying(self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test__execute_pre_boot_bios_step() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def _test__execute_pre_boot_bios_step_invalid( self, get_ilo_object_mock): if self.node.clean_step: step_data = self.node.clean_step exept = exception.NodeCleaningFailure else: step_data = self.node.deploy_step exept = exception.InstanceDeployFailure data = step_data['argsinfo'].get('settings', None) step = step_data['step'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.set_bios_settings.side_effect = ilo_error.IloError( 'err') if task.node.clean_step: exept = exception.NodeCleaningFailure else: exept = exception.InstanceDeployFailure self.assertRaises(exept, task.driver.bios._execute_pre_boot_bios_step, task, step, data) def test__execute_pre_boot_bios_step_invalid_cleaning(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'invalid_step', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step_invalid() def test__execute_pre_boot_bios_step_invalid_deploying(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'invalid_step', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step_invalid() @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def _test__execute_pre_boot_bios_step_ilo_fail(self, get_ilo_mock): if self.node.clean_step: step_data = self.node.clean_step exept = exception.NodeCleaningFailure else: step_data = self.node.deploy_step 
exept = exception.InstanceDeployFailure data = step_data['argsinfo'].get('settings', None) step = step_data['step'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_ilo_mock.side_effect = exception.MissingParameterValue('err') self.assertRaises(exept, task.driver.bios._execute_pre_boot_bios_step, task, step, data) def test__execute_pre_boot_bios_step_iloobj_failed_cleaning(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step_ilo_fail() def test__execute_pre_boot_bios_step_iloobj_failed_deploying(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step_ilo_fail() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def _test__execute_pre_boot_bios_step_set_bios_failed( self, get_ilo_object_mock): if self.node.clean_step: step_data = self.node.clean_step exept = exception.NodeCleaningFailure else: step_data = self.node.deploy_step exept = exception.InstanceDeployFailure data = step_data['argsinfo'].get('settings', None) step = step_data['step'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.set_bios_settings.side_effect = ilo_error.IloError( 'err') if task.node.clean_step: exept = exception.NodeCleaningFailure else: exept = exception.InstanceDeployFailure self.assertRaises(exept, task.driver.bios._execute_pre_boot_bios_step, task, step, data) def test__execute_pre_boot_bios_step_set_bios_failed_cleaning(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} 
self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step_set_bios_failed() def test__execute_pre_boot_bios_step_set_bios_failed_deploying(self): data = {"SET_A": "VAL_A", "SET_B": "VAL_B", "SET_C": "VAL_C", "SET_D": "VAL_D"} self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test__execute_pre_boot_bios_step_set_bios_failed() def test__execute_pre_boot_bios_step_reset_bios_failed_cleaning(self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test__execute_post_boot_bios_get_settings_failed() def test__execute_pre_boot_bios_step_reset_bios_failed_deploying(self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test__execute_post_boot_bios_get_settings_failed() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def test__execute_post_boot_bios_step_apply_configuration( self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_info = task.node.driver_internal_info driver_info.update({'apply_bios': True}) task.node.driver_internal_info = driver_info task.node.save() ilo_object_mock = get_ilo_object_mock.return_value step = 'apply_configuration' task.driver.bios._execute_post_boot_bios_step(task, step) driver_info = task.node.driver_internal_info self.assertTrue('apply_bios' not in driver_info) ilo_object_mock.get_bios_settings_result.assert_called_once_with() 
task.node.driver_internal_info driver_info.update({'reset_bios': True}) task.node.driver_internal_info = driver_info task.node.save() ilo_object_mock = get_ilo_object_mock.return_value step = 'factory_reset' task.driver.bios._execute_post_boot_bios_step(task, step) driver_info = task.node.driver_internal_info self.assertTrue('reset_bios' not in driver_info) ilo_object_mock.get_bios_settings_result.assert_called_once_with() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def _test__execute_post_boot_bios_step_invalid( self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_info = task.node.driver_internal_info driver_info.update({'apply_bios': True}) task.node.driver_internal_info = driver_info task.node.save() step = 'invalid_step' if self.node.clean_step: exept = exception.NodeCleaningFailure else: exept = exception.InstanceDeployFailure self.assertRaises(exept, task.driver.bios._execute_post_boot_bios_step, task, step) self.assertTrue( 'apply_bios' not in task.node.driver_internal_info) def test__execute_post_boot_bios_step_invalid_cleaning(self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': u'apply_configuration', 'argsinfo': {'settings': {'a': 1, 'b': 2}}} self.node.save() self._test__execute_post_boot_bios_step_invalid() def test__execute_post_boot_bios_step_invalid_deploy(self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': u'apply_configuration', 'argsinfo': {'settings': {'a': 1, 'b': 2}}} self.node.save() self._test__execute_post_boot_bios_step_invalid() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def _test__execute_post_boot_bios_step_iloobj_failed( self, get_ilo_object_mock): if self.node.clean_step: step = self.node.clean_step['step'] exept = exception.NodeCleaningFailure if self.node.deploy_step: step = self.node.deploy_step['step'] exept = exception.InstanceDeployFailure 
driver_internal_info = self.node.driver_internal_info driver_internal_info['apply_bios'] = True self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_ilo_object_mock.side_effect = exception.MissingParameterValue( 'err') step = 'apply_configuration' self.assertRaises(exept, task.driver.bios._execute_post_boot_bios_step, task, step) self.assertTrue( 'apply_bios' not in task.node.driver_internal_info) def test__execute_post_boot_bios_step_iloobj_failed_cleaning(self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': u'apply_configuration', 'argsinfo': {'settings': {'a': 1, 'b': 2}}} self.node.save() self._test__execute_post_boot_bios_step_iloobj_failed() def test__execute_post_boot_bios_step_iloobj_failed_deploy(self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': u'apply_configuration', 'argsinfo': {'settings': {'a': 1, 'b': 2}}} self.node.save() self._test__execute_post_boot_bios_step_iloobj_failed() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def _test__execute_post_boot_bios_get_settings_error( self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_info = task.node.driver_internal_info driver_info.update({'apply_bios': True}) task.node.driver_internal_info = driver_info task.node.save() ilo_object_mock = get_ilo_object_mock.return_value step = 'apply_configuration' mdobj = { "name": task.driver.bios._execute_post_boot_bios_step, "args": (task, step,) } self._test_ilo_error(ilo_error.IloCommandNotSupportedError, [], [], mdobj, ilo_object_mock.get_bios_settings_result) self.assertTrue( 'apply_bios' not in task.node.driver_internal_info) def test__execute_post_boot_bios_get_settings_error_cleaning( self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': u'apply_configuration', 'argsinfo': {'settings': {'a': 1, 
'b': 2}}} self.node.save() self._test__execute_post_boot_bios_get_settings_error() def test__execute_post_boot_bios_get_settings_error_deploying( self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': {'a': 1, 'b': 2}}} self.node.save() self._test__execute_post_boot_bios_get_settings_error() @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) def _test__execute_post_boot_bios_get_settings_failed( self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: driver_info = task.node.driver_internal_info driver_info.update({'reset_bios': True}) task.node.driver_internal_info = driver_info task.node.save() ilo_object_mock = get_ilo_object_mock.return_value ilo_object_mock.get_bios_settings_result.return_value = ( {'status': 'failed', 'message': 'Some data'}) step = 'factory_reset' if task.node.clean_step: exept = exception.NodeCleaningFailure else: exept = exception.InstanceDeployFailure self.assertRaises(exept, task.driver.bios._execute_post_boot_bios_step, task, step) self.assertTrue( 'reset_bios' not in task.node.driver_internal_info) def test__execute_post_boot_bios_get_settings_failed_cleaning( self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test__execute_post_boot_bios_get_settings_failed() def test__execute_post_boot_bios_get_settings_failed_deploying( self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test__execute_post_boot_bios_get_settings_failed() @mock.patch.object(objects.BIOSSettingList, 'create') @mock.patch.object(objects.BIOSSettingList, 'save') @mock.patch.object(objects.BIOSSettingList, 'delete') @mock.patch.object(objects.BIOSSettingList, 'sync_node_setting') @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def
test_cache_bios_settings(self, get_ilo_object_mock, sync_node_mock, delete_mock, save_mock, create_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_object_mock = get_ilo_object_mock.return_value settings = { "SET_A": True, "SET_B": True, "SET_C": True, "SET_D": True } ilo_object_mock.get_current_bios_settings.return_value = settings expected_bios_settings = [ {"name": "SET_A", "value": True}, {"name": "SET_B", "value": True}, {"name": "SET_C", "value": True}, {"name": "SET_D", "value": True} ] sync_node_mock.return_value = ([], [], [], []) all_settings = ( [ {"name": "C_1", "value": "C_1_VAL"}, {"name": "C_2", "value": "C_2_VAL"} ], [ {"name": "U_1", "value": "U_1_VAL"}, {"name": "U_2", "value": "U_2_VAL"} ], [ {"name": "D_1", "value": "D_1_VAL"}, {"name": "D_2", "value": "D_2_VAL"} ], [] ) sync_node_mock.return_value = all_settings task.driver.bios.cache_bios_settings(task) ilo_object_mock.get_current_bios_settings.assert_called_once_with() actual_arg = sorted(sync_node_mock.call_args[0][2], key=lambda x: x.get("name")) expected_arg = sorted(expected_bios_settings, key=lambda x: x.get("name")) self.assertEqual(actual_arg, expected_arg) create_mock.assert_called_once_with( self.context, task.node.id, all_settings[0]) save_mock.assert_called_once_with( self.context, task.node.id, all_settings[1]) del_names = [setting.get("name") for setting in all_settings[2]] delete_mock.assert_called_once_with( self.context, task.node.id, del_names) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def test_cache_bios_settings_missing_parameter(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mdobj = { "name": task.driver.bios.cache_bios_settings, "args": (task,) } self._test_ilo_error(exception.MissingParameterValue, [], [], mdobj, get_ilo_object_mock) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def test_cache_bios_settings_invalid_parameter(self, 
get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mdobj = { "name": task.driver.bios.cache_bios_settings, "args": (task,) } self._test_ilo_error(exception.InvalidParameterValue, [], [], mdobj, get_ilo_object_mock) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def test_cache_bios_settings_with_ilo_error(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_object_mock = get_ilo_object_mock.return_value mdobj = { "name": task.driver.bios.cache_bios_settings, "args": (task,) } self._test_ilo_error(ilo_error.IloError, [], [], mdobj, ilo_object_mock.get_current_bios_settings) @mock.patch.object(ilo_common, 'get_ilo_object', autospec=True) def test_cache_bios_settings_with_unknown_error(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_object_mock = get_ilo_object_mock.return_value mdobj = { "name": task.driver.bios.cache_bios_settings, "args": (task,) } self._test_ilo_error(ilo_error.IloCommandNotSupportedError, [], [], mdobj, ilo_object_mock.get_current_bios_settings) ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_console.py0000664000175000017500000000463513652514273025512 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Test class for common methods used by iLO modules.""" import mock from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules import ipmitool from ironic.tests.unit.drivers.modules.ilo import test_common class IloConsoleInterfaceTestCase(test_common.BaseIloTest): boot_interface = 'ilo-virtual-media' @mock.patch.object(ipmitool.IPMIShellinaboxConsole, 'validate', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_ipmi_properties', spec_set=True, autospec=True) def test_validate(self, update_ipmi_mock, ipmi_validate_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_info['console_port'] = 60 task.driver.console.validate(task) update_ipmi_mock.assert_called_once_with(task) ipmi_validate_mock.assert_called_once_with(mock.ANY, task) @mock.patch.object(ipmitool.IPMIShellinaboxConsole, 'validate', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'update_ipmi_properties', spec_set=True, autospec=True) def test_validate_exc(self, update_ipmi_mock, ipmi_validate_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, task.driver.console.validate, task) self.assertEqual(0, update_ipmi_mock.call_count) self.assertEqual(0, ipmi_validate_mock.call_count) ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/test_power.py0000664000175000017500000005310713652514273025202 0ustar zuulzuul00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for IloPower module.""" import mock from oslo_config import cfg from oslo_utils import importutils from oslo_utils import uuidutils from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules.ilo import common as ilo_common from ironic.drivers.modules.ilo import power as ilo_power from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers.modules.ilo import test_common from ironic.tests.unit.objects import utils as obj_utils ilo_error = importutils.try_import('proliantutils.exception') INFO_DICT = db_utils.get_test_ilo_info() CONF = cfg.CONF @mock.patch.object(ilo_common, 'get_ilo_object', spec_set=True, autospec=True) class IloPowerInternalMethodsTestCase(test_common.BaseIloTest): def setUp(self): super(IloPowerInternalMethodsTestCase, self).setUp() self.node = obj_utils.create_test_node( self.context, driver='ilo', driver_info=INFO_DICT, instance_uuid=uuidutils.generate_uuid()) CONF.set_override('power_wait', 1, 'ilo') CONF.set_override('soft_power_off_timeout', 1, 'conductor') def test__get_power_state(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' self.assertEqual( states.POWER_ON, ilo_power._get_power_state(self.node)) ilo_mock_object.get_host_power_status.return_value = 'OFF' self.assertEqual( states.POWER_OFF, ilo_power._get_power_state(self.node)) 
ilo_mock_object.get_host_power_status.return_value = 'ERROR' self.assertEqual(states.ERROR, ilo_power._get_power_state(self.node)) def test__get_power_state_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.get_host_power_status.side_effect = exc self.assertRaises(exception.IloOperationError, ilo_power._get_power_state, self.node) ilo_mock_object.get_host_power_status.assert_called_once_with() def test__set_power_state_invalid_state(self, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, ilo_power._set_power_state, task, states.ERROR) def test__set_power_state_reboot_fail(self, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value exc = ilo_error.IloError('error') ilo_mock_object.reset_server.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IloOperationError, ilo_power._set_power_state, task, states.REBOOT) ilo_mock_object.reset_server.assert_called_once_with() @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_reboot_ok(self, get_post_mock, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value get_post_mock.side_effect = (['FinishedPost', 'PowerOff', 'InPost']) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_power._set_power_state(task, states.REBOOT) get_post_mock.assert_called_with(task.node) ilo_mock_object.reset_server.assert_called_once_with() @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_off_fail(self, get_post_mock, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value get_post_mock.side_effect = (['FinishedPost', 'FinishedPost', 'FinishedPost', 'FinishedPost', 'FinishedPost']) with 
task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.PowerStateFailure, ilo_power._set_power_state, task, states.POWER_OFF) get_post_mock.assert_called_with(task.node) ilo_mock_object.hold_pwr_btn.assert_called_once_with() @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_on_ok(self, get_post_mock, get_ilo_object_mock): ilo_mock_object = get_ilo_object_mock.return_value get_post_mock.side_effect = ['PowerOff', 'PowerOff', 'InPost'] target_state = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_power._set_power_state(task, target_state) get_post_mock.assert_called_with(task.node) ilo_mock_object.set_host_power.assert_called_once_with('ON') @mock.patch.object(ilo_power.LOG, 'info') @mock.patch.object(ilo_power, '_attach_boot_iso_if_needed', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_soft_reboot_ok( self, get_post_mock, attach_boot_iso_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' get_post_mock.side_effect = ( ['FinishedPost', 'FinishedPost', 'PowerOff', 'PowerOff', 'InPost', 'FinishedPost']) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_power._set_power_state(task, states.SOFT_REBOOT, timeout=3) get_post_mock.assert_called_with(task.node) ilo_mock_object.press_pwr_btn.assert_called_once_with() attach_boot_iso_mock.assert_called_once_with(task) ilo_mock_object.set_host_power.assert_called_once_with('ON') log_mock.assert_called_once_with( "The node %(node_id)s operation of '%(state)s' " "is completed in %(time_consumed)s seconds.", {'state': 'soft rebooting', 'node_id': task.node.uuid, 'time_consumed': 2}) @mock.patch.object(ilo_power.LOG, 
'info') @mock.patch.object(ilo_power, '_attach_boot_iso_if_needed', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_soft_reboot_ok_initial_power_off( self, get_post_mock, attach_boot_iso_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'OFF' get_post_mock.side_effect = ['FinishedPost', 'PowerOff', 'FinishedPost'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_power._set_power_state(task, states.SOFT_REBOOT, timeout=3) get_post_mock.assert_called_with(task.node) attach_boot_iso_mock.assert_called_once_with(task) ilo_mock_object.set_host_power.assert_called_once_with('ON') log_mock.assert_called_once_with( "The node %(node_id)s operation of '%(state)s' " "is completed in %(time_consumed)s seconds.", {'state': 'power on', 'node_id': task.node.uuid, 'time_consumed': 1}) @mock.patch.object(ilo_power.LOG, 'info') @mock.patch.object(ilo_power, '_attach_boot_iso_if_needed', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_soft_reboot_fail_to_off( self, get_post_mock, attach_boot_iso_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') exc = ilo_error.IloError('error') ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' ilo_mock_object.press_pwr_btn.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IloOperationError, ilo_power._set_power_state, task, states.SOFT_REBOOT, timeout=3) ilo_mock_object.press_pwr_btn.assert_called_once_with() self.assertFalse(get_post_mock.called) self.assertFalse(attach_boot_iso_mock.called) self.assertFalse(log_mock.called) 
@mock.patch.object(ilo_power.LOG, 'info') @mock.patch.object(ilo_power, '_attach_boot_iso_if_needed', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_soft_reboot_fail_to_on( self, get_post_mock, attach_boot_iso_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') exc = ilo_error.IloError('error') ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' get_post_mock.side_effect = ( ['FinishedPost', 'PowerOff', 'PowerOff', 'InPost', 'InPost', 'InPost', 'InPost', 'InPost']) ilo_mock_object.press_pwr_btn.side_effect = [None, exc] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.PowerStateFailure, ilo_power._set_power_state, task, states.SOFT_REBOOT, timeout=3) get_post_mock.assert_called_with(task.node) ilo_mock_object.press_pwr_btn.assert_called_once_with() ilo_mock_object.set_host_power.assert_called_once_with('ON') attach_boot_iso_mock.assert_called_once_with(task) self.assertFalse(log_mock.called) @mock.patch.object(ilo_power.LOG, 'info') @mock.patch.object(ilo_power, '_attach_boot_iso_if_needed', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_soft_reboot_timeout( self, get_post_mock, attach_boot_iso_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' get_post_mock.side_effect = ['FinishedPost', 'FinishedPost', 'PowerOff', 'InPost', 'InPost', 'InPost', 'InPost', 'InPost'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.PowerStateFailure, ilo_power._set_power_state, task, states.SOFT_REBOOT, timeout=2) get_post_mock.assert_called_with(task.node)
ilo_mock_object.press_pwr_btn.assert_called_once_with() ilo_mock_object.set_host_power.assert_called_once_with('ON') attach_boot_iso_mock.assert_called_once_with(task) self.assertFalse(log_mock.called) @mock.patch.object(ilo_power.LOG, 'info') @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_soft_power_off_ok( self, get_post_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') ilo_mock_object = get_ilo_object_mock.return_value get_post_mock.side_effect = ['FinishedPost', 'FinishedPost', 'PowerOff', 'PowerOff', 'PowerOff'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: ilo_power._set_power_state(task, states.SOFT_POWER_OFF, timeout=3) get_post_mock.assert_called_with(task.node) ilo_mock_object.press_pwr_btn.assert_called_once_with() log_mock.assert_called_once_with( "The node %(node_id)s operation of '%(state)s' " "is completed in %(time_consumed)s seconds.", {'state': 'soft power off', 'node_id': task.node.uuid, 'time_consumed': 2}) @mock.patch.object(ilo_power.LOG, 'info') @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def test__set_power_state_soft_power_off_fail( self, get_post_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') exc = ilo_error.IloError('error') ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' ilo_mock_object.press_pwr_btn.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IloOperationError, ilo_power._set_power_state, task, states.SOFT_POWER_OFF, timeout=2) ilo_mock_object.press_pwr_btn.assert_called_once_with() self.assertFalse(get_post_mock.called) self.assertFalse(log_mock.called) @mock.patch.object(ilo_power.LOG, 'info') @mock.patch.object(ilo_common, 'get_server_post_state', spec_set=True, autospec=True) def
test__set_power_state_soft_power_off_timeout( self, get_post_mock, log_mock, get_ilo_object_mock): CONF.set_override('power_wait', 1, 'ilo') ilo_mock_object = get_ilo_object_mock.return_value ilo_mock_object.get_host_power_status.return_value = 'ON' get_post_mock.side_effect = ['FinishedPost', 'InPost', 'InPost', 'InPost', 'InPost'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.PowerStateFailure, ilo_power._set_power_state, task, states.SOFT_POWER_OFF, timeout=2) get_post_mock.assert_called_with(task.node) ilo_mock_object.press_pwr_btn.assert_called_with() self.assertFalse(log_mock.called) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True, autospec=True) def test__attach_boot_iso_if_needed( self, setup_vmedia_mock, set_boot_device_mock, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.ACTIVE task.node.instance_info['ilo_boot_iso'] = 'boot-iso' ilo_power._attach_boot_iso_if_needed(task) setup_vmedia_mock.assert_called_once_with(task, 'boot-iso') set_boot_device_mock.assert_called_once_with(task, boot_devices.CDROM) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(ilo_common, 'setup_vmedia_for_boot', spec_set=True, autospec=True) def test__attach_boot_iso_if_needed_on_rebuild( self, setup_vmedia_mock, set_boot_device_mock, get_ilo_object_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYING task.node.instance_info['ilo_boot_iso'] = 'boot-iso' ilo_power._attach_boot_iso_if_needed(task) self.assertFalse(setup_vmedia_mock.called) self.assertFalse(set_boot_device_mock.called) class IloPowerTestCase(test_common.BaseIloTest): def test_get_properties(self): expected = 
ilo_common.COMMON_PROPERTIES with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(expected, task.driver.power.get_properties()) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(ilo_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, mock_drvinfo): side_effect = exception.InvalidParameterValue("Invalid Input") mock_drvinfo.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) @mock.patch.object(ilo_power, '_get_power_state', spec_set=True, autospec=True) def test_get_power_state(self, mock_get_power): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_get_power.return_value = states.POWER_ON self.assertEqual(states.POWER_ON, task.driver.power.get_power_state(task)) mock_get_power.assert_called_once_with(task.node) @mock.patch.object(ilo_power, '_set_power_state', spec_set=True, autospec=True) def _test_set_power_state(self, mock_set_power, timeout=None): mock_set_power.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_ON, timeout=timeout) mock_set_power.assert_called_once_with(task, states.POWER_ON, timeout=timeout) def test_set_power_state_no_timeout(self): self._test_set_power_state(timeout=None) def test_set_power_state_timeout(self): self._test_set_power_state(timeout=13) @mock.patch.object(ilo_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(ilo_power, '_get_power_state', spec_set=True, autospec=True) def _test_reboot( self, 
mock_get_power, mock_set_power, timeout=None): mock_get_power.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task, timeout=timeout) mock_get_power.assert_called_once_with(task.node) mock_set_power.assert_called_once_with( task, states.REBOOT, timeout=timeout) def test_reboot_no_timeout(self): self._test_reboot(timeout=None) def test_reboot_with_timeout(self): self._test_reboot(timeout=100) def test_get_supported_power_states(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected = [states.POWER_OFF, states.POWER_ON, states.REBOOT, states.SOFT_POWER_OFF, states.SOFT_REBOOT] self.assertEqual( sorted(expected), sorted(task.driver.power. get_supported_power_states(task))) ironic-15.0.0/ironic/tests/unit/drivers/modules/ilo/__init__.py0000664000175000017500000000000013652514273024526 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/0000775000175000017500000000000013652514443023267 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/test_utils.py0000664000175000017500000003630613652514273026051 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import copy import os import mock from oslo_config import cfg from oslo_utils import importutils import requests from ironic.common import exception from ironic.drivers.modules.redfish import utils as redfish_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils sushy = importutils.try_import('sushy') INFO_DICT = db_utils.get_test_redfish_info() class RedfishUtilsTestCase(db_base.DbTestCase): def setUp(self): super(RedfishUtilsTestCase, self).setUp() # Default configurations self.config(enabled_hardware_types=['redfish'], enabled_power_interfaces=['redfish'], enabled_boot_interfaces=['redfish-virtual-media'], enabled_management_interfaces=['redfish']) # Redfish specific configurations self.config(connection_attempts=1, group='redfish') self.node = obj_utils.create_test_node( self.context, driver='redfish', driver_info=INFO_DICT) self.parsed_driver_info = { 'address': 'https://example.com', 'system_id': '/redfish/v1/Systems/FAKESYSTEM', 'username': 'username', 'password': 'password', 'verify_ca': True, 'auth_type': 'auto', 'node_uuid': self.node.uuid } def test_parse_driver_info(self): response = redfish_utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_default_scheme(self): self.node.driver_info['redfish_address'] = 'example.com' response = redfish_utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_default_scheme_with_port(self): self.node.driver_info['redfish_address'] = 'example.com:42' self.parsed_driver_info['address'] = 'https://example.com:42' response = redfish_utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_missing_info(self): for prop in redfish_utils.REQUIRED_PROPERTIES: self.node.driver_info = INFO_DICT.copy() 
self.node.driver_info.pop(prop) self.assertRaises(exception.MissingParameterValue, redfish_utils.parse_driver_info, self.node) def test_parse_driver_info_invalid_address(self): for value in ['/banana!', 42]: self.node.driver_info['redfish_address'] = value self.assertRaisesRegex(exception.InvalidParameterValue, 'Invalid Redfish address', redfish_utils.parse_driver_info, self.node) @mock.patch.object(os.path, 'isdir', autospec=True) def test_parse_driver_info_path_verify_ca(self, mock_isdir): mock_isdir.return_value = True fake_path = '/path/to/a/valid/CA' self.node.driver_info['redfish_verify_ca'] = fake_path self.parsed_driver_info['verify_ca'] = fake_path response = redfish_utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) mock_isdir.assert_called_once_with(fake_path) @mock.patch.object(os.path, 'isfile', autospec=True) def test_parse_driver_info_valid_capath(self, mock_isfile): mock_isfile.return_value = True fake_path = '/path/to/a/valid/CA.pem' self.node.driver_info['redfish_verify_ca'] = fake_path self.parsed_driver_info['verify_ca'] = fake_path response = redfish_utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) mock_isfile.assert_called_once_with(fake_path) def test_parse_driver_info_invalid_value_verify_ca(self): # Integers are not supported self.node.driver_info['redfish_verify_ca'] = 123456 self.assertRaisesRegex(exception.InvalidParameterValue, 'Invalid value type', redfish_utils.parse_driver_info, self.node) def test_parse_driver_info_invalid_system_id(self): # Integers are not supported self.node.driver_info['redfish_system_id'] = 123 self.assertRaisesRegex(exception.InvalidParameterValue, 'The value should be a path', redfish_utils.parse_driver_info, self.node) def test_parse_driver_info_missing_system_id(self): self.node.driver_info.pop('redfish_system_id') redfish_utils.parse_driver_info(self.node) def test_parse_driver_info_valid_string_value_verify_ca(self): for value in 
('0', 'f', 'false', 'off', 'n', 'no'): self.node.driver_info['redfish_verify_ca'] = value response = redfish_utils.parse_driver_info(self.node) parsed_driver_info = copy.deepcopy(self.parsed_driver_info) parsed_driver_info['verify_ca'] = False self.assertEqual(parsed_driver_info, response) for value in ('1', 't', 'true', 'on', 'y', 'yes'): self.node.driver_info['redfish_verify_ca'] = value response = redfish_utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_invalid_string_value_verify_ca(self): for value in ('xyz', '*', '!123', '123'): self.node.driver_info['redfish_verify_ca'] = value self.assertRaisesRegex(exception.InvalidParameterValue, 'The value should be a Boolean', redfish_utils.parse_driver_info, self.node) def test_parse_driver_info_valid_auth_type(self): for value in 'basic', 'session', 'auto': self.node.driver_info['redfish_auth_type'] = value response = redfish_utils.parse_driver_info(self.node) self.parsed_driver_info['auth_type'] = value self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_invalid_auth_type(self): for value in 'BasiC', 'SESSION', 'Auto': self.node.driver_info['redfish_auth_type'] = value self.assertRaisesRegex(exception.InvalidParameterValue, 'The value should be one of ', redfish_utils.parse_driver_info, self.node) def test_parse_driver_info_with_root_prefix(self): test_redfish_address = 'https://example.com/test/redfish/v0/' self.node.driver_info['redfish_address'] = test_redfish_address self.parsed_driver_info['root_prefix'] = '/test/redfish/v0/' response = redfish_utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 
'SessionCache._sessions', {}) def test_get_system(self, mock_sushy): fake_conn = mock_sushy.return_value fake_system = fake_conn.get_system.return_value response = redfish_utils.get_system(self.node) self.assertEqual(fake_system, response) fake_conn.get_system.assert_called_once_with( '/redfish/v1/Systems/FAKESYSTEM') @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache._sessions', {}) def test_get_system_resource_not_found(self, mock_sushy): fake_conn = mock_sushy.return_value fake_conn.get_system.side_effect = ( sushy.exceptions.ResourceNotFoundError('GET', '/', requests.Response())) self.assertRaises(exception.RedfishError, redfish_utils.get_system, self.node) fake_conn.get_system.assert_called_once_with( '/redfish/v1/Systems/FAKESYSTEM') @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache._sessions', {}) def test_get_system_multiple_systems(self, mock_sushy): self.node.driver_info.pop('redfish_system_id') fake_conn = mock_sushy.return_value redfish_utils.get_system(self.node) fake_conn.get_system.assert_called_once_with(None) @mock.patch('time.sleep', autospec=True) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 
'SessionCache._sessions', {}) def test_get_system_resource_connection_error_retry(self, mock_sushy, mock_sleep): # Redfish specific configurations self.config(connection_attempts=3, group='redfish') fake_conn = mock_sushy.return_value fake_conn.get_system.side_effect = sushy.exceptions.ConnectionError() self.assertRaises(exception.RedfishConnectionError, redfish_utils.get_system, self.node) expected_get_system_calls = [ mock.call(self.parsed_driver_info['system_id']), mock.call(self.parsed_driver_info['system_id']), mock.call(self.parsed_driver_info['system_id']), ] fake_conn.get_system.assert_has_calls(expected_get_system_calls) mock_sleep.assert_called_with( redfish_utils.CONF.redfish.connection_retry_interval) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache._sessions', {}) def test_ensure_session_reuse(self, mock_sushy): redfish_utils.get_system(self.node) redfish_utils.get_system(self.node) self.assertEqual(1, mock_sushy.call_count) @mock.patch.object(sushy, 'Sushy', autospec=True) def test_ensure_new_session_address(self, mock_sushy): self.node.driver_info['redfish_address'] = 'http://bmc.foo' redfish_utils.get_system(self.node) self.node.driver_info['redfish_address'] = 'http://bmc.bar' redfish_utils.get_system(self.node) self.assertEqual(2, mock_sushy.call_count) @mock.patch.object(sushy, 'Sushy', autospec=True) def test_ensure_new_session_username(self, mock_sushy): self.node.driver_info['redfish_username'] = 'foo' redfish_utils.get_system(self.node) self.node.driver_info['redfish_username'] = 'bar' redfish_utils.get_system(self.node) self.assertEqual(2, mock_sushy.call_count) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 
'SessionCache.AUTH_CLASSES', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.SessionCache._sessions', collections.OrderedDict()) def test_ensure_basic_session_caching(self, mock_auth, mock_sushy): self.node.driver_info['redfish_auth_type'] = 'basic' mock_session_or_basic_auth = mock_auth['auto'] redfish_utils.get_system(self.node) mock_sushy.assert_called_with( mock.ANY, verify=mock.ANY, auth=mock_session_or_basic_auth.return_value, ) self.assertEqual(len(redfish_utils.SessionCache._sessions), 1) @mock.patch.object(sushy, 'Sushy', autospec=True) def test_expire_old_sessions(self, mock_sushy): cfg.CONF.set_override('connection_cache_size', 10, 'redfish') for num in range(20): self.node.driver_info['redfish_username'] = 'foo-%d' % num redfish_utils.get_system(self.node) self.assertEqual(mock_sushy.call_count, 20) self.assertEqual(len(redfish_utils.SessionCache._sessions), 10) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache._sessions', {}) def test_disabled_sessions_cache(self, mock_sushy): cfg.CONF.set_override('connection_cache_size', 0, 'redfish') for num in range(2): self.node.driver_info['redfish_username'] = 'foo-%d' % num redfish_utils.get_system(self.node) self.assertEqual(mock_sushy.call_count, 2) self.assertEqual(len(redfish_utils.SessionCache._sessions), 0) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache.AUTH_CLASSES', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 
'SessionCache._sessions', {}) def test_auth_auto(self, mock_auth, mock_sushy): redfish_utils.get_system(self.node) mock_session_or_basic_auth = mock_auth['auto'] mock_session_or_basic_auth.assert_called_with( username=self.parsed_driver_info['username'], password=self.parsed_driver_info['password'] ) mock_sushy.assert_called_with( self.parsed_driver_info['address'], auth=mock_session_or_basic_auth.return_value, verify=True) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache.AUTH_CLASSES', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache._sessions', {}) def test_auth_session(self, mock_auth, mock_sushy): self.node.driver_info['redfish_auth_type'] = 'session' mock_session_auth = mock_auth['session'] redfish_utils.get_system(self.node) mock_session_auth.assert_called_with( username=self.parsed_driver_info['username'], password=self.parsed_driver_info['password'] ) mock_sushy.assert_called_with( mock.ANY, verify=mock.ANY, auth=mock_session_auth.return_value ) @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache.AUTH_CLASSES', autospec=True) @mock.patch('ironic.drivers.modules.redfish.utils.' 'SessionCache._sessions', {}) def test_auth_basic(self, mock_auth, mock_sushy): self.node.driver_info['redfish_auth_type'] = 'basic' mock_basic_auth = mock_auth['basic'] redfish_utils.get_system(self.node) mock_basic_auth.assert_called_with( username=self.parsed_driver_info['username'], password=self.parsed_driver_info['password'] ) sushy.Sushy.assert_called_with( mock.ANY, verify=mock.ANY, auth=mock_basic_auth.return_value ) ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/test_boot.py0000664000175000017500000013177413652514273025661 0ustar zuulzuul00000000000000# Copyright 2019 Red Hat, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import mock from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.common import images from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import boot_mode_utils from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.redfish import boot as redfish_boot from ironic.drivers.modules.redfish import utils as redfish_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils sushy = importutils.try_import('sushy') INFO_DICT = db_utils.get_test_redfish_info() @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) class RedfishVirtualMediaBootTestCase(db_base.DbTestCase): def setUp(self): super(RedfishVirtualMediaBootTestCase, self).setUp() self.config(enabled_hardware_types=['redfish'], enabled_power_interfaces=['redfish'], enabled_boot_interfaces=['redfish-virtual-media'], enabled_management_interfaces=['redfish'], enabled_inspect_interfaces=['redfish'], enabled_bios_interfaces=['redfish']) self.node = obj_utils.create_test_node( self.context, driver='redfish', driver_info=INFO_DICT) @mock.patch.object(redfish_boot, 'sushy', None) def test_loading_error(self): self.assertRaisesRegex( exception.DriverLoadError, 'Unable to import the sushy library', 
redfish_boot.RedfishVirtualMediaBoot) def test_parse_driver_info_deploy(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info.update( {'deploy_kernel': 'kernel', 'deploy_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) actual_driver_info = task.driver.boot._parse_driver_info(task.node) self.assertIn('kernel', actual_driver_info['deploy_kernel']) self.assertIn('ramdisk', actual_driver_info['deploy_ramdisk']) self.assertIn('bootloader', actual_driver_info['bootloader']) def test_parse_driver_info_rescue(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.RESCUING task.node.driver_info.update( {'rescue_kernel': 'kernel', 'rescue_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) actual_driver_info = task.driver.boot._parse_driver_info(task.node) self.assertIn('kernel', actual_driver_info['rescue_kernel']) self.assertIn('ramdisk', actual_driver_info['rescue_ramdisk']) self.assertIn('bootloader', actual_driver_info['bootloader']) def test_parse_driver_info_exc(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.MissingParameterValue, task.driver.boot._parse_driver_info, task.node) def _test_parse_driver_info_from_conf(self, mode='deploy'): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: if mode == 'rescue': task.node.provision_state = states.RESCUING expected = { '%s_ramdisk' % mode: 'glance://%s_ramdisk_uuid' % mode, '%s_kernel' % mode: 'glance://%s_kernel_uuid' % mode } self.config(group='conductor', **expected) image_info = task.driver.boot._parse_driver_info(task.node) for key, value in expected.items(): self.assertEqual(value, image_info[key]) def test_parse_driver_info_from_conf_deploy(self): self._test_parse_driver_info_from_conf() def test_parse_driver_info_from_conf_rescue(self): self._test_parse_driver_info_from_conf(mode='rescue') def 
_test_parse_driver_info_mixed_source(self, mode='deploy'): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: if mode == 'rescue': task.node.provision_state = states.RESCUING kernel_config = { '%s_kernel' % mode: 'glance://%s_kernel_uuid' % mode } ramdisk_config = { '%s_ramdisk' % mode: 'glance://%s_ramdisk_uuid' % mode, } self.config(group='conductor', **kernel_config) task.node.driver_info.update(ramdisk_config) self.assertRaises(exception.MissingParameterValue, task.driver.boot._parse_driver_info, task.node) def test_parse_driver_info_mixed_source_deploy(self): self._test_parse_driver_info_mixed_source() def test_parse_driver_info_mixed_source_rescue(self): self._test_parse_driver_info_mixed_source(mode='rescue') def test_parse_deploy_info(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info.update( {'deploy_kernel': 'kernel', 'deploy_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) task.node.instance_info.update( {'image_source': 'http://boot/iso', 'kernel': 'http://kernel/img', 'ramdisk': 'http://ramdisk/img'}) actual_instance_info = task.driver.boot._parse_deploy_info( task.node) self.assertEqual( 'http://boot/iso', actual_instance_info['image_source']) self.assertEqual( 'http://kernel/img', actual_instance_info['kernel']) self.assertEqual( 'http://ramdisk/img', actual_instance_info['ramdisk']) def test_parse_deploy_info_exc(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.MissingParameterValue, task.driver.boot._parse_deploy_info, task.node) def test__append_filename_param_without_qs(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: res = task.driver.boot._append_filename_param( 'http://a.b/c', 'b.img') expected = 'http://a.b/c?filename=b.img' self.assertEqual(expected, res) def test__append_filename_param_with_qs(self): with task_manager.acquire(self.context, self.node.uuid, 
shared=True) as task: res = task.driver.boot._append_filename_param( 'http://a.b/c?d=e&f=g', 'b.img') expected = 'http://a.b/c?d=e&f=g&filename=b.img' self.assertEqual(expected, res) def test__append_filename_param_with_filename(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: res = task.driver.boot._append_filename_param( 'http://a.b/c?filename=bootme.img', 'b.img') expected = 'http://a.b/c?filename=bootme.img' self.assertEqual(expected, res) @mock.patch.object(redfish_boot, 'swift', autospec=True) def test__publish_image_swift(self, mock_swift): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: mock_swift_api = mock_swift.SwiftAPI.return_value mock_swift_api.get_temp_url.return_value = 'https://a.b/c.f?e=f' url = task.driver.boot._publish_image('file.iso', 'boot.iso') self.assertEqual( 'https://a.b/c.f?e=f&filename=file.iso', url) mock_swift.SwiftAPI.assert_called_once_with() mock_swift_api.create_object.assert_called_once_with( mock.ANY, mock.ANY, mock.ANY, mock.ANY) mock_swift_api.get_temp_url.assert_called_once_with( mock.ANY, mock.ANY, mock.ANY) @mock.patch.object(redfish_boot, 'swift', autospec=True) def test__unpublish_image_swift(self, mock_swift): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: object_name = 'image-%s' % task.node.uuid task.driver.boot._unpublish_image(object_name) mock_swift.SwiftAPI.assert_called_once_with() mock_swift_api = mock_swift.SwiftAPI.return_value mock_swift_api.delete_object.assert_called_once_with( 'ironic_redfish_container', object_name) @mock.patch.object(redfish_boot, 'shutil', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(os, 'mkdir', autospec=True) def test__publish_image_local_link( self, mock_mkdir, mock_link, mock_shutil): self.config(use_swift=False, group='redfish') self.config(http_url='http://localhost', group='deploy') with task_manager.acquire(self.context, self.node.uuid, shared=True) 
as task: url = task.driver.boot._publish_image('file.iso', 'boot.iso') self.assertEqual( 'http://localhost/redfish/boot.iso?filename=file.iso', url) mock_mkdir.assert_called_once_with('/httpboot/redfish', 0x755) mock_link.assert_called_once_with( 'file.iso', '/httpboot/redfish/boot.iso') @mock.patch.object(redfish_boot, 'shutil', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(os, 'mkdir', autospec=True) def test__publish_image_local_copy( self, mock_mkdir, mock_link, mock_shutil): self.config(use_swift=False, group='redfish') self.config(http_url='http://localhost', group='deploy') mock_link.side_effect = OSError() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: url = task.driver.boot._publish_image('file.iso', 'boot.iso') self.assertEqual( 'http://localhost/redfish/boot.iso?filename=file.iso', url) mock_mkdir.assert_called_once_with('/httpboot/redfish', 0x755) mock_shutil.copyfile.assert_called_once_with( 'file.iso', '/httpboot/redfish/boot.iso') @mock.patch.object(redfish_boot, 'ironic_utils', autospec=True) def test__unpublish_image_local(self, mock_ironic_utils): self.config(use_swift=False, group='redfish') with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: object_name = 'image-%s' % task.node.uuid expected_file = '/httpboot/redfish/' + object_name task.driver.boot._unpublish_image(object_name) mock_ironic_utils.unlink_without_raise.assert_called_once_with( expected_file) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_unpublish_image', autospec=True) def test__cleanup_floppy_image(self, mock_unpublish): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.boot._cleanup_floppy_image(task) object_name = 'image-%s' % task.node.uuid mock_unpublish.assert_called_once_with(object_name) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_publish_image', autospec=True) @mock.patch.object(images, 'create_vfat_image', 
autospec=True) def test__prepare_floppy_image( self, mock_create_vfat_image, mock__publish_image): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: expected_url = 'https://a.b/c.f?e=f' mock__publish_image.return_value = expected_url url = task.driver.boot._prepare_floppy_image(task) object_name = 'image-%s' % task.node.uuid mock__publish_image.assert_called_once_with( mock.ANY, object_name) mock_create_vfat_image.assert_called_once_with( mock.ANY, parameters=mock.ANY) self.assertEqual(expected_url, url) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_unpublish_image', autospec=True) def test__cleanup_iso_image(self, mock_unpublish): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.boot._cleanup_iso_image(task) object_name = 'boot-%s' % task.node.uuid mock_unpublish.assert_called_once_with(object_name) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_publish_image', autospec=True) @mock.patch.object(images, 'create_boot_iso', autospec=True) def test__prepare_iso_image_uefi( self, mock_create_boot_iso, mock__publish_image): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.instance_info.update(deploy_boot_mode='uefi') expected_url = 'https://a.b/c.f?e=f' mock__publish_image.return_value = expected_url url = task.driver.boot._prepare_iso_image( task, 'http://kernel/img', 'http://ramdisk/img', 'http://bootloader/img', root_uuid=task.node.uuid) object_name = 'boot-%s' % task.node.uuid mock__publish_image.assert_called_once_with( mock.ANY, object_name) mock_create_boot_iso.assert_called_once_with( mock.ANY, mock.ANY, 'http://kernel/img', 'http://ramdisk/img', boot_mode='uefi', esp_image_href='http://bootloader/img', configdrive_href=mock.ANY, kernel_params='nofb nomodeset vga=normal', root_uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c123') self.assertEqual(expected_url, url) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_publish_image', 
autospec=True) @mock.patch.object(images, 'create_boot_iso', autospec=True) def test__prepare_iso_image_bios( self, mock_create_boot_iso, mock__publish_image): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: expected_url = 'https://a.b/c.f?e=f' mock__publish_image.return_value = expected_url url = task.driver.boot._prepare_iso_image( task, 'http://kernel/img', 'http://ramdisk/img', bootloader_href=None, root_uuid=task.node.uuid) object_name = 'boot-%s' % task.node.uuid mock__publish_image.assert_called_once_with( mock.ANY, object_name) mock_create_boot_iso.assert_called_once_with( mock.ANY, mock.ANY, 'http://kernel/img', 'http://ramdisk/img', boot_mode=None, esp_image_href=None, configdrive_href=mock.ANY, kernel_params='nofb nomodeset vga=normal', root_uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c123') self.assertEqual(expected_url, url) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_publish_image', autospec=True) @mock.patch.object(images, 'create_boot_iso', autospec=True) def test__prepare_iso_image_kernel_params( self, mock_create_boot_iso, mock__publish_image): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: kernel_params = 'network-config=base64-cloudinit-blob' task.node.instance_info.update(kernel_append_params=kernel_params) task.driver.boot._prepare_iso_image( task, 'http://kernel/img', 'http://ramdisk/img', bootloader_href=None, root_uuid=task.node.uuid) mock_create_boot_iso.assert_called_once_with( mock.ANY, mock.ANY, 'http://kernel/img', 'http://ramdisk/img', boot_mode=None, esp_image_href=None, configdrive_href=mock.ANY, kernel_params=kernel_params, root_uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c123') @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_iso_image', autospec=True) def test__prepare_deploy_iso(self, mock__prepare_iso_image): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info.update( {'deploy_kernel': 'kernel', 
'deploy_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) task.node.instance_info.update(deploy_boot_mode='uefi') task.driver.boot._prepare_deploy_iso(task, {}, 'deploy') mock__prepare_iso_image.assert_called_once_with( mock.ANY, 'kernel', 'ramdisk', 'bootloader', params={}) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_iso_image', autospec=True) @mock.patch.object(images, 'create_boot_iso', autospec=True) def test__prepare_boot_iso(self, mock_create_boot_iso, mock__prepare_iso_image): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info.update( {'deploy_kernel': 'kernel', 'deploy_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) task.node.instance_info.update( {'image_source': 'http://boot/iso', 'kernel': 'http://kernel/img', 'ramdisk': 'http://ramdisk/img'}) task.driver.boot._prepare_boot_iso( task, root_uuid=task.node.uuid) mock__prepare_iso_image.assert_called_once_with( mock.ANY, 'http://kernel/img', 'http://ramdisk/img', 'bootloader', root_uuid=task.node.uuid) @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True) @mock.patch.object(deploy_utils, 'validate_image_properties', autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy', autospec=True) def test_validate_uefi_boot(self, mock_get_boot_mode, mock_validate_image_properties, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.instance_info.update( {'kernel': 'kernel', 'ramdisk': 'ramdisk', 'image_source': 'http://image/source'} ) task.node.driver_info.update( {'deploy_kernel': 'kernel', 'deploy_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) mock_get_boot_mode.return_value = 'uefi' task.driver.boot.validate(task) mock_validate_image_properties.assert_called_once_with( mock.ANY, mock.ANY, mock.ANY) @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True) @mock.patch.object(deploy_utils, 'validate_image_properties', 
autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy', autospec=True) def test_validate_bios_boot(self, mock_get_boot_mode, mock_validate_image_properties, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.instance_info.update( {'kernel': 'kernel', 'ramdisk': 'ramdisk', 'image_source': 'http://image/source'} ) task.node.driver_info.update( {'deploy_kernel': 'kernel', 'deploy_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) mock_get_boot_mode.return_value = 'bios' task.driver.boot.validate(task) mock_validate_image_properties.assert_called_once_with( mock.ANY, mock.ANY, mock.ANY) @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True) @mock.patch.object(deploy_utils, 'validate_image_properties', autospec=True) def test_validate_missing(self, mock_validate_image_properties, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.MissingParameterValue, task.driver.boot.validate, task) @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True) def test_validate_inspection(self, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info.update( {'deploy_kernel': 'kernel', 'deploy_ramdisk': 'ramdisk', 'bootloader': 'bootloader'} ) task.driver.boot.validate_inspection(task) mock_parse_driver_info.assert_called_once_with(task.node) @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True) def test_validate_inspection_missing(self, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.UnsupportedDriverExtension, task.driver.boot.validate_inspection, task) @mock.patch.object(redfish_boot.manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_deploy_iso', autospec=True) 
@mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_eject_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_insert_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_parse_driver_info', autospec=True) @mock.patch.object(redfish_boot.manager_utils, 'node_power_action', autospec=True) @mock.patch.object(redfish_boot, 'boot_mode_utils', autospec=True) def test_prepare_ramdisk_with_params( self, mock_boot_mode_utils, mock_node_power_action, mock__parse_driver_info, mock__insert_vmedia, mock__eject_vmedia, mock__prepare_deploy_iso, mock_node_set_boot_device): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.DEPLOYING mock__parse_driver_info.return_value = {} mock__prepare_deploy_iso.return_value = 'image-url' task.driver.boot.prepare_ramdisk(task, {}) mock_node_power_action.assert_called_once_with( task, states.POWER_OFF) mock__eject_vmedia.assert_called_once_with( task, sushy.VIRTUAL_MEDIA_CD) mock__insert_vmedia.assert_called_once_with( task, 'image-url', sushy.VIRTUAL_MEDIA_CD) expected_params = { 'BOOTIF': None, 'ipa-agent-token': mock.ANY, 'ipa-debug': '1', } mock__prepare_deploy_iso.assert_called_once_with( task, expected_params, 'deploy') mock_node_set_boot_device.assert_called_once_with( task, boot_devices.CDROM, False) mock_boot_mode_utils.sync_boot_mode.assert_called_once_with(task) @mock.patch.object(redfish_boot.manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_deploy_iso', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_eject_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_insert_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_parse_driver_info', autospec=True) @mock.patch.object(redfish_boot.manager_utils, 'node_power_action', autospec=True) 
@mock.patch.object(redfish_boot, 'boot_mode_utils', autospec=True) def test_prepare_ramdisk_no_debug( self, mock_boot_mode_utils, mock_node_power_action, mock__parse_driver_info, mock__insert_vmedia, mock__eject_vmedia, mock__prepare_deploy_iso, mock_node_set_boot_device): self.config(debug=False) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYING mock__parse_driver_info.return_value = {} mock__prepare_deploy_iso.return_value = 'image-url' task.driver.boot.prepare_ramdisk(task, {}) mock_node_power_action.assert_called_once_with( task, states.POWER_OFF) mock__eject_vmedia.assert_called_once_with( task, sushy.VIRTUAL_MEDIA_CD) mock__insert_vmedia.assert_called_once_with( task, 'image-url', sushy.VIRTUAL_MEDIA_CD) expected_params = { 'BOOTIF': None, 'ipa-agent-token': mock.ANY, } mock__prepare_deploy_iso.assert_called_once_with( task, expected_params, 'deploy') mock_node_set_boot_device.assert_called_once_with( task, boot_devices.CDROM, False) mock_boot_mode_utils.sync_boot_mode.assert_called_once_with(task) @mock.patch.object(redfish_boot.manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_floppy_image', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_deploy_iso', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_has_vmedia_device', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_eject_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_insert_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_parse_driver_info', autospec=True) @mock.patch.object(redfish_boot.manager_utils, 'node_power_action', autospec=True) @mock.patch.object(redfish_boot, 'boot_mode_utils', autospec=True) def test_prepare_ramdisk_with_floppy( self, mock_boot_mode_utils, mock_node_power_action, 
mock__parse_driver_info, mock__insert_vmedia, mock__eject_vmedia, mock__has_vmedia_device, mock__prepare_deploy_iso, mock__prepare_floppy_image, mock_node_set_boot_device): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.provision_state = states.DEPLOYING mock__parse_driver_info.return_value = { 'config_via_floppy': True } mock__has_vmedia_device.return_value = True mock__prepare_floppy_image.return_value = 'floppy-image-url' mock__prepare_deploy_iso.return_value = 'cd-image-url' task.driver.boot.prepare_ramdisk(task, {}) mock_node_power_action.assert_called_once_with( task, states.POWER_OFF) mock__has_vmedia_device.assert_called_once_with( task, sushy.VIRTUAL_MEDIA_FLOPPY) eject_calls = [ mock.call(task, sushy.VIRTUAL_MEDIA_FLOPPY), mock.call(task, sushy.VIRTUAL_MEDIA_CD) ] mock__eject_vmedia.assert_has_calls(eject_calls) insert_calls = [ mock.call(task, 'floppy-image-url', sushy.VIRTUAL_MEDIA_FLOPPY), mock.call(task, 'cd-image-url', sushy.VIRTUAL_MEDIA_CD), ] mock__insert_vmedia.assert_has_calls(insert_calls) expected_params = { 'BOOTIF': None, 'boot_method': 'vmedia', 'ipa-debug': '1', 'ipa-agent-token': mock.ANY, } mock__prepare_deploy_iso.assert_called_once_with( task, expected_params, 'deploy') mock_node_set_boot_device.assert_called_once_with( task, boot_devices.CDROM, False) mock_boot_mode_utils.sync_boot_mode.assert_called_once_with(task) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_has_vmedia_device', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_eject_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_cleanup_iso_image', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_cleanup_floppy_image', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_parse_driver_info', autospec=True) def test_clean_up_ramdisk( self, mock__parse_driver_info, mock__cleanup_floppy_image, mock__cleanup_iso_image, 
mock__eject_vmedia, mock__has_vmedia_device): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYING mock__parse_driver_info.return_value = {'config_via_floppy': True} mock__has_vmedia_device.return_value = True task.driver.boot.clean_up_ramdisk(task) mock__cleanup_iso_image.assert_called_once_with(task) mock__cleanup_floppy_image.assert_called_once_with(task) mock__has_vmedia_device.assert_called_once_with( task, sushy.VIRTUAL_MEDIA_FLOPPY) eject_calls = [ mock.call(task, sushy.VIRTUAL_MEDIA_CD), mock.call(task, sushy.VIRTUAL_MEDIA_FLOPPY) ] mock__eject_vmedia.assert_has_calls(eject_calls) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, 'clean_up_instance', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_boot_iso', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_eject_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_insert_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_parse_driver_info', autospec=True) @mock.patch.object(redfish_boot, 'manager_utils', autospec=True) @mock.patch.object(redfish_boot, 'deploy_utils', autospec=True) @mock.patch.object(redfish_boot, 'boot_mode_utils', autospec=True) def test_prepare_instance_normal_boot( self, mock_boot_mode_utils, mock_deploy_utils, mock_manager_utils, mock__parse_driver_info, mock__insert_vmedia, mock__eject_vmedia, mock__prepare_boot_iso, mock_clean_up_instance): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYING task.node.driver_internal_info[ 'root_uuid_or_disk_id'] = self.node.uuid mock_deploy_utils.get_boot_option.return_value = 'net' mock__parse_driver_info.return_value = {} mock__prepare_boot_iso.return_value = 'image-url' task.driver.boot.prepare_instance(task) expected_params = { 'root_uuid': self.node.uuid } 
mock__prepare_boot_iso.assert_called_once_with( task, **expected_params) mock__eject_vmedia.assert_called_once_with( task, sushy.VIRTUAL_MEDIA_CD) mock__insert_vmedia.assert_called_once_with( task, 'image-url', sushy.VIRTUAL_MEDIA_CD) mock_manager_utils.node_set_boot_device.assert_called_once_with( task, boot_devices.CDROM, persistent=True) mock_boot_mode_utils.sync_boot_mode.assert_called_once_with(task) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, 'clean_up_instance', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_prepare_boot_iso', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_eject_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_insert_vmedia', autospec=True) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, '_parse_driver_info', autospec=True) @mock.patch.object(redfish_boot, 'manager_utils', autospec=True) @mock.patch.object(redfish_boot, 'deploy_utils', autospec=True) @mock.patch.object(redfish_boot, 'boot_mode_utils', autospec=True) def test_prepare_instance_ramdisk_boot( self, mock_boot_mode_utils, mock_deploy_utils, mock_manager_utils, mock__parse_driver_info, mock__insert_vmedia, mock__eject_vmedia, mock__prepare_boot_iso, mock_clean_up_instance): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = states.DEPLOYING task.node.driver_internal_info[ 'root_uuid_or_disk_id'] = self.node.uuid mock_deploy_utils.get_boot_option.return_value = 'ramdisk' mock__prepare_boot_iso.return_value = 'image-url' task.driver.boot.prepare_instance(task) mock__prepare_boot_iso.assert_called_once_with(task) mock__eject_vmedia.assert_called_once_with( task, sushy.VIRTUAL_MEDIA_CD) mock__insert_vmedia.assert_called_once_with( task, 'image-url', sushy.VIRTUAL_MEDIA_CD) mock_manager_utils.node_set_boot_device.assert_called_once_with( task, boot_devices.CDROM, persistent=True) 
            mock_boot_mode_utils.sync_boot_mode.assert_called_once_with(task)

    @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot,
                       '_eject_vmedia', autospec=True)
    @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot,
                       '_cleanup_iso_image', autospec=True)
    @mock.patch.object(redfish_boot, 'manager_utils', autospec=True)
    def _test_prepare_instance_local_boot(
            self, mock_manager_utils, mock__cleanup_iso_image,
            mock__eject_vmedia):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.provision_state = states.DEPLOYING
            task.node.driver_internal_info[
                'root_uuid_or_disk_id'] = self.node.uuid

            task.driver.boot.prepare_instance(task)

            mock_manager_utils.node_set_boot_device.assert_called_once_with(
                task, boot_devices.DISK, persistent=True)
            mock__cleanup_iso_image.assert_called_once_with(task)
            mock__eject_vmedia.assert_called_once_with(
                task, sushy.VIRTUAL_MEDIA_CD)

    def test_prepare_instance_local_whole_disk_image(self):
        self.node.driver_internal_info = {'is_whole_disk_image': True}
        self.node.save()
        self._test_prepare_instance_local_boot()

    def test_prepare_instance_local_boot_option(self):
        instance_info = self.node.instance_info
        instance_info['capabilities'] = '{"boot_option": "local"}'
        self.node.instance_info = instance_info
        self.node.save()
        self._test_prepare_instance_local_boot()

    @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot,
                       '_eject_vmedia', autospec=True)
    @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot,
                       '_cleanup_iso_image', autospec=True)
    def _test_clean_up_instance(self, mock__cleanup_iso_image,
                                mock__eject_vmedia):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot.clean_up_instance(task)

            mock__cleanup_iso_image.assert_called_once_with(task)

            eject_calls = [mock.call(task, sushy.VIRTUAL_MEDIA_CD)]
            if task.node.driver_info.get('config_via_floppy'):
                eject_calls.append(mock.call(task, sushy.VIRTUAL_MEDIA_FLOPPY))

            mock__eject_vmedia.assert_has_calls(eject_calls)

    def test_clean_up_instance_only_cdrom(self):
        self._test_clean_up_instance()

    def test_clean_up_instance_cdrom_and_floppy(self):
        driver_info = self.node.driver_info
        driver_info['config_via_floppy'] = True
        self.node.driver_info = driver_info
        self.node.save()
        self._test_clean_up_instance()

    @mock.patch.object(redfish_boot, 'redfish_utils', autospec=True)
    def test__insert_vmedia_anew(self, mock_redfish_utils):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_vmedia_cd = mock.MagicMock(
                inserted=False,
                media_types=[sushy.VIRTUAL_MEDIA_CD])
            mock_vmedia_floppy = mock.MagicMock(
                inserted=False,
                media_types=[sushy.VIRTUAL_MEDIA_FLOPPY])

            mock_manager = mock.MagicMock()

            mock_manager.virtual_media.get_members.return_value = [
                mock_vmedia_cd, mock_vmedia_floppy]

            mock_redfish_utils.get_system.return_value.managers = [
                mock_manager]

            task.driver.boot._insert_vmedia(
                task, 'img-url', sushy.VIRTUAL_MEDIA_CD)

            mock_vmedia_cd.insert_media.assert_called_once_with(
                'img-url', inserted=True, write_protected=True)

            self.assertFalse(mock_vmedia_floppy.insert_media.call_count)

    @mock.patch.object(redfish_boot, 'redfish_utils', autospec=True)
    def test__insert_vmedia_already_inserted(self, mock_redfish_utils):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_vmedia_cd = mock.MagicMock(
                inserted=True,
                image='img-url',
                media_types=[sushy.VIRTUAL_MEDIA_CD])

            mock_manager = mock.MagicMock()

            mock_manager.virtual_media.get_members.return_value = [
                mock_vmedia_cd]

            mock_redfish_utils.get_system.return_value.managers = [
                mock_manager]

            task.driver.boot._insert_vmedia(
                task, 'img-url', sushy.VIRTUAL_MEDIA_CD)

            self.assertFalse(mock_vmedia_cd.insert_media.call_count)

    @mock.patch.object(redfish_boot, 'redfish_utils', autospec=True)
    def test__insert_vmedia_bad_device(self, mock_redfish_utils):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_vmedia_floppy = mock.MagicMock(
                inserted=False,
                media_types=[sushy.VIRTUAL_MEDIA_FLOPPY])

            mock_manager = mock.MagicMock()

            mock_manager.virtual_media.get_members.return_value = [
                mock_vmedia_floppy]

            mock_redfish_utils.get_system.return_value.managers = [
                mock_manager]

            self.assertRaises(
                exception.InvalidParameterValue,
                task.driver.boot._insert_vmedia,
                task, 'img-url', sushy.VIRTUAL_MEDIA_CD)

    @mock.patch.object(redfish_boot, 'redfish_utils', autospec=True)
    def test__eject_vmedia_everything(self, mock_redfish_utils):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_vmedia_cd = mock.MagicMock(
                inserted=True,
                media_types=[sushy.VIRTUAL_MEDIA_CD])
            mock_vmedia_floppy = mock.MagicMock(
                inserted=True,
                media_types=[sushy.VIRTUAL_MEDIA_FLOPPY])

            mock_manager = mock.MagicMock()

            mock_manager.virtual_media.get_members.return_value = [
                mock_vmedia_cd, mock_vmedia_floppy]

            mock_redfish_utils.get_system.return_value.managers = [
                mock_manager]

            task.driver.boot._eject_vmedia(task)

            mock_vmedia_cd.eject_media.assert_called_once_with()
            mock_vmedia_floppy.eject_media.assert_called_once_with()

    @mock.patch.object(redfish_boot, 'redfish_utils', autospec=True)
    def test__eject_vmedia_specific(self, mock_redfish_utils):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_vmedia_cd = mock.MagicMock(
                inserted=True,
                media_types=[sushy.VIRTUAL_MEDIA_CD])
            mock_vmedia_floppy = mock.MagicMock(
                inserted=True,
                media_types=[sushy.VIRTUAL_MEDIA_FLOPPY])

            mock_manager = mock.MagicMock()

            mock_manager.virtual_media.get_members.return_value = [
                mock_vmedia_cd, mock_vmedia_floppy]

            mock_redfish_utils.get_system.return_value.managers = [
                mock_manager]

            task.driver.boot._eject_vmedia(task, sushy.VIRTUAL_MEDIA_CD)

            mock_vmedia_cd.eject_media.assert_called_once_with()
            self.assertFalse(mock_vmedia_floppy.eject_media.call_count)

    @mock.patch.object(redfish_boot, 'redfish_utils', autospec=True)
    def test__eject_vmedia_not_inserted(self, mock_redfish_utils):
        with task_manager.acquire(self.context,
                                  self.node.uuid, shared=True) as task:
            mock_vmedia_cd = mock.MagicMock(
                inserted=False,
                media_types=[sushy.VIRTUAL_MEDIA_CD])
            mock_vmedia_floppy = mock.MagicMock(
                inserted=False,
                media_types=[sushy.VIRTUAL_MEDIA_FLOPPY])

            mock_manager = mock.MagicMock()

            mock_manager.virtual_media.get_members.return_value = [
                mock_vmedia_cd, mock_vmedia_floppy]

            mock_redfish_utils.get_system.return_value.managers = [
                mock_manager]

            task.driver.boot._eject_vmedia(task)

            self.assertFalse(mock_vmedia_cd.eject_media.call_count)
            self.assertFalse(mock_vmedia_floppy.eject_media.call_count)

    @mock.patch.object(redfish_boot, 'redfish_utils', autospec=True)
    def test__eject_vmedia_unknown(self, mock_redfish_utils):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            mock_vmedia_cd = mock.MagicMock(
                inserted=False,
                media_types=[sushy.VIRTUAL_MEDIA_CD])

            mock_manager = mock.MagicMock()

            mock_manager.virtual_media.get_members.return_value = [
                mock_vmedia_cd]

            mock_redfish_utils.get_system.return_value.managers = [
                mock_manager]

            task.driver.boot._eject_vmedia(task)

            self.assertFalse(mock_vmedia_cd.eject_media.call_count)

ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/test_inspect.py

# Copyright 2017 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_utils import importutils
from oslo_utils import units

from ironic.common import exception
from ironic.conductor import task_manager
from ironic.drivers.modules import inspect_utils
from ironic.drivers.modules.redfish import utils as redfish_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

sushy = importutils.try_import('sushy')

INFO_DICT = db_utils.get_test_redfish_info()


class MockedSushyError(Exception):
    pass


class RedfishInspectTestCase(db_base.DbTestCase):

    def setUp(self):
        super(RedfishInspectTestCase, self).setUp()
        self.config(enabled_hardware_types=['redfish'],
                    enabled_power_interfaces=['redfish'],
                    enabled_boot_interfaces=['redfish-virtual-media'],
                    enabled_management_interfaces=['redfish'],
                    enabled_inspect_interfaces=['redfish'])
        self.node = obj_utils.create_test_node(
            self.context, driver='redfish', driver_info=INFO_DICT)

    def init_system_mock(self, system_mock, **properties):
        system_mock.reset()
        system_mock.boot.mode = 'uefi'
        system_mock.memory_summary.size_gib = 2

        system_mock.processors.summary = '8', 'MIPS'

        system_mock.simple_storage.disks_sizes_bytes = (
            1 * units.Gi, units.Gi * 3, units.Gi * 5)
        system_mock.storage.volumes_sizes_bytes = (
            2 * units.Gi, units.Gi * 4, units.Gi * 6)

        system_mock.ethernet_interfaces.summary = {
            '00:11:22:33:44:55': sushy.STATE_ENABLED,
            '66:77:88:99:AA:BB': sushy.STATE_DISABLED,
        }
        return system_mock

    def test_get_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            properties = task.driver.get_properties()
            for prop in redfish_utils.COMMON_PROPERTIES:
                self.assertIn(prop, properties)

    @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True)
    def test_validate(self, mock_parse_driver_info):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.management.validate(task)
            mock_parse_driver_info.assert_called_once_with(task.node)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    @mock.patch.object(inspect_utils, 'create_ports_if_not_exist',
                       autospec=True)
    def test_inspect_hardware_ok(self, mock_create_ports_if_not_exist,
                                 mock_get_system):
        expected_properties = {
            'capabilities': 'boot_mode:uefi',
            'cpu_arch': 'mips', 'cpus': '8',
            'local_gb': '3', 'memory_mb': '2048'
        }
        self.init_system_mock(mock_get_system.return_value)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.inspect.inspect_hardware(task)
            self.assertEqual(1, mock_create_ports_if_not_exist.call_count)
            mock_get_system.assert_called_once_with(task.node)
            self.assertEqual(expected_properties, task.node.properties)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inspect_hardware_fail_missing_cpu(self, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.processors.summary = None, None
        system_mock.boot.mode = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties.pop('cpu_arch')
            self.assertRaises(exception.HardwareInspectionFailure,
                              task.driver.inspect.inspect_hardware, task)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inspect_hardware_ignore_missing_cpu(self, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.processors.summary = None, None
        system_mock.boot.mode = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            expected_properties = {
                'capabilities': 'boot_mode:uefi',
                'cpu_arch': 'x86_64', 'cpus': '8',
                'local_gb': '3', 'memory_mb': '2048'
            }
            task.driver.inspect.inspect_hardware(task)
            self.assertEqual(expected_properties, task.node.properties)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inspect_hardware_ignore_missing_local_gb(self, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.simple_storage.disks_sizes_bytes = None
        system_mock.storage.volumes_sizes_bytes = None
        system_mock.boot.mode = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            expected_properties = {
                'capabilities': 'boot_mode:uefi',
                'cpu_arch': 'mips', 'cpus': '8',
                'local_gb': '0', 'memory_mb': '2048'
            }
            task.driver.inspect.inspect_hardware(task)
            self.assertEqual(expected_properties, task.node.properties)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inspect_hardware_fail_missing_memory_mb(self, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.memory_summary.size_gib = None
        system_mock.boot.mode = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties.pop('memory_mb')
            self.assertRaises(exception.HardwareInspectionFailure,
                              task.driver.inspect.inspect_hardware, task)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inspect_hardware_ignore_missing_memory_mb(self, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.memory_summary.size_gib = None
        system_mock.boot.mode = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            expected_properties = {
                'capabilities': 'boot_mode:uefi',
                'cpu_arch': 'mips', 'cpus': '8',
                'local_gb': '3', 'memory_mb': '4096'
            }
            task.driver.inspect.inspect_hardware(task)
            self.assertEqual(expected_properties, task.node.properties)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    @mock.patch.object(inspect_utils, 'create_ports_if_not_exist',
                       autospec=True)
    def test_inspect_hardware_ignore_missing_nics(
            self, mock_create_ports_if_not_exist, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.ethernet_interfaces.summary = None
        system_mock.boot.mode = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.inspect.inspect_hardware(task)
            self.assertFalse(mock_create_ports_if_not_exist.called)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inspect_hardware_preserve_boot_mode(self, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.boot.mode = 'uefi'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties = {
                'capabilities': 'boot_mode:bios'
            }
            expected_properties = {
                'capabilities': 'boot_mode:bios',
                'cpu_arch': 'mips', 'cpus': '8',
                'local_gb': '3', 'memory_mb': '2048'
            }
            task.driver.inspect.inspect_hardware(task)
            self.assertEqual(expected_properties, task.node.properties)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inspect_hardware_ignore_missing_boot_mode(self, mock_get_system):
        system_mock = self.init_system_mock(mock_get_system.return_value)
        system_mock.boot.mode = None

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            expected_properties = {
                'cpu_arch': 'mips', 'cpus': '8',
                'local_gb': '3', 'memory_mb': '2048'
            }
            task.driver.inspect.inspect_hardware(task)
            self.assertEqual(expected_properties, task.node.properties)

ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/test_management.py

# Copyright 2017 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_utils import importutils

from ironic.common import boot_devices
from ironic.common import boot_modes
from ironic.common import components
from ironic.common import exception
from ironic.common import indicator_states
from ironic.conductor import task_manager
from ironic.drivers.modules.redfish import management as redfish_mgmt
from ironic.drivers.modules.redfish import utils as redfish_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

sushy = importutils.try_import('sushy')

INFO_DICT = db_utils.get_test_redfish_info()


class RedfishManagementTestCase(db_base.DbTestCase):

    def setUp(self):
        super(RedfishManagementTestCase, self).setUp()
        self.config(enabled_hardware_types=['redfish'],
                    enabled_power_interfaces=['redfish'],
                    enabled_boot_interfaces=['redfish-virtual-media'],
                    enabled_management_interfaces=['redfish'],
                    enabled_inspect_interfaces=['redfish'],
                    enabled_bios_interfaces=['redfish'])
        self.node = obj_utils.create_test_node(
            self.context, driver='redfish', driver_info=INFO_DICT)
        self.system_uuid = 'ZZZ--XXX-YYY'
        self.chassis_uuid = 'XXX-YYY-ZZZ'
        self.drive_uuid = 'ZZZ-YYY-XXX'

    @mock.patch.object(redfish_mgmt, 'sushy', None)
    def test_loading_error(self):
        self.assertRaisesRegex(
            exception.DriverLoadError,
            'Unable to import the sushy library',
            redfish_mgmt.RedfishManagement)

    def test_get_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            properties = task.driver.get_properties()
            for prop in redfish_utils.COMMON_PROPERTIES:
                self.assertIn(prop, properties)

    @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True)
    def test_validate(self, mock_parse_driver_info):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.management.validate(task)
            mock_parse_driver_info.assert_called_once_with(task.node)

    def test_get_supported_boot_devices(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            supported_boot_devices = (
                task.driver.management.get_supported_boot_devices(task))
            self.assertEqual(list(redfish_mgmt.BOOT_DEVICE_MAP_REV),
                             supported_boot_devices)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_set_boot_device(self, mock_get_system):
        fake_system = mock.Mock()
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            expected_values = [
                (boot_devices.PXE, sushy.BOOT_SOURCE_TARGET_PXE),
                (boot_devices.DISK, sushy.BOOT_SOURCE_TARGET_HDD),
                (boot_devices.CDROM, sushy.BOOT_SOURCE_TARGET_CD),
                (boot_devices.BIOS, sushy.BOOT_SOURCE_TARGET_BIOS_SETUP)
            ]

            for target, expected in expected_values:
                task.driver.management.set_boot_device(task, target)

                # Asserts
                fake_system.set_system_boot_options.assert_called_once_with(
                    expected, enabled=sushy.BOOT_SOURCE_ENABLED_ONCE)
                mock_get_system.assert_called_once_with(task.node)

                # Reset mocks
                fake_system.set_system_boot_options.reset_mock()
                mock_get_system.reset_mock()

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_set_boot_device_persistency(self, mock_get_system):
        fake_system = mock.Mock()
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            expected_values = [
                (True, sushy.BOOT_SOURCE_ENABLED_CONTINUOUS),
                (False, sushy.BOOT_SOURCE_ENABLED_ONCE)
            ]

            for target, expected in expected_values:
                task.driver.management.set_boot_device(
                    task, boot_devices.PXE, persistent=target)

                fake_system.set_system_boot_options.assert_called_once_with(
                    sushy.BOOT_SOURCE_TARGET_PXE, enabled=expected)
                mock_get_system.assert_called_once_with(task.node)

                # Reset mocks
                fake_system.set_system_boot_options.reset_mock()
                mock_get_system.reset_mock()

    @mock.patch.object(redfish_utils, 'get_system',
                       autospec=True)
    def test_set_boot_device_persistency_no_change(self, mock_get_system):
        fake_system = mock.Mock()
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            expected_values = [
                (True, sushy.BOOT_SOURCE_ENABLED_CONTINUOUS),
                (False, sushy.BOOT_SOURCE_ENABLED_ONCE)
            ]

            for target, expected in expected_values:
                fake_system.boot.get.return_value = expected

                task.driver.management.set_boot_device(
                    task, boot_devices.PXE, persistent=target)

                fake_system.set_system_boot_options.assert_called_once_with(
                    sushy.BOOT_SOURCE_TARGET_PXE, enabled=None)
                mock_get_system.assert_called_once_with(task.node)

                # Reset mocks
                fake_system.set_system_boot_options.reset_mock()
                mock_get_system.reset_mock()

    @mock.patch.object(sushy, 'Sushy', autospec=True)
    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_set_boot_device_fail(self, mock_get_system, mock_sushy):
        fake_system = mock.Mock()
        fake_system.set_system_boot_options.side_effect = (
            sushy.exceptions.SushyError()
        )
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaisesRegex(
                exception.RedfishError, 'Redfish set boot device',
                task.driver.management.set_boot_device, task,
                boot_devices.PXE)
            fake_system.set_system_boot_options.assert_called_once_with(
                sushy.BOOT_SOURCE_TARGET_PXE,
                enabled=sushy.BOOT_SOURCE_ENABLED_ONCE)
            mock_get_system.assert_called_once_with(task.node)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_get_boot_device(self, mock_get_system):
        boot_attribute = {
            'target': sushy.BOOT_SOURCE_TARGET_PXE,
            'enabled': sushy.BOOT_SOURCE_ENABLED_CONTINUOUS
        }
        fake_system = mock.Mock(boot=boot_attribute)
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            response = task.driver.management.get_boot_device(task)
            expected = {'boot_device': boot_devices.PXE,
                        'persistent': True}
            self.assertEqual(expected, response)

    def test_get_supported_boot_modes(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            supported_boot_modes = (
                task.driver.management.get_supported_boot_modes(task))
            self.assertEqual(list(redfish_mgmt.BOOT_MODE_MAP_REV),
                             supported_boot_modes)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_set_boot_mode(self, mock_get_system):
        fake_system = mock.Mock()
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            expected_values = [
                (boot_modes.LEGACY_BIOS, sushy.BOOT_SOURCE_MODE_BIOS),
                (boot_modes.UEFI, sushy.BOOT_SOURCE_MODE_UEFI)
            ]

            for mode, expected in expected_values:
                task.driver.management.set_boot_mode(task, mode=mode)

                # Asserts
                fake_system.set_system_boot_options.assert_called_once_with(
                    mode=mode)
                mock_get_system.assert_called_once_with(task.node)

                # Reset mocks
                fake_system.set_system_boot_options.reset_mock()
                mock_get_system.reset_mock()

    @mock.patch.object(sushy, 'Sushy', autospec=True)
    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_set_boot_mode_fail(self, mock_get_system, mock_sushy):
        fake_system = mock.Mock()
        fake_system.set_system_boot_options.side_effect = (
            sushy.exceptions.SushyError)
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaisesRegex(
                exception.RedfishError, 'Setting boot mode',
                task.driver.management.set_boot_mode, task, boot_modes.UEFI)
            fake_system.set_system_boot_options.assert_called_once_with(
                mode=boot_modes.UEFI)
            mock_get_system.assert_called_once_with(task.node)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_get_boot_mode(self, mock_get_system):
        boot_attribute = {
            'target': sushy.BOOT_SOURCE_TARGET_PXE,
            'enabled': sushy.BOOT_SOURCE_ENABLED_CONTINUOUS,
            'mode': sushy.BOOT_SOURCE_MODE_BIOS,
        }
        fake_system = mock.Mock(boot=boot_attribute)
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            response = task.driver.management.get_boot_mode(task)
            expected = boot_modes.LEGACY_BIOS
            self.assertEqual(expected, response)

    def test__get_sensors_fan(self):
        attributes = {
            "identity": "XXX-YYY-ZZZ",
            "name": "CPU Fan",
            "status": {
                "state": "enabled",
                "health": "OK"
            },
            "reading": 6000,
            "reading_units": "RPM",
            "lower_threshold_fatal": 2000,
            "min_reading_range": 0,
            "max_reading_range": 10000,
            "serial_number": "SN010203040506",
            "physical_context": "CPU"
        }

        mock_chassis = mock.MagicMock(identity='ZZZ-YYY-XXX')

        mock_fan = mock.MagicMock(**attributes)
        mock_fan.name = attributes['name']
        mock_fan.status = mock.MagicMock(**attributes['status'])
        mock_chassis.thermal.fans = [mock_fan]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            sensors = task.driver.management._get_sensors_fan(mock_chassis)

        expected = {
            'XXX-YYY-ZZZ@ZZZ-YYY-XXX': {
                'identity': 'XXX-YYY-ZZZ',
                'max_reading_range': 10000,
                'min_reading_range': 0,
                'physical_context': 'CPU',
                'reading': 6000,
                'reading_units': 'RPM',
                'serial_number': 'SN010203040506',
                'health': 'OK',
                'state': 'enabled'
            }
        }

        self.assertEqual(expected, sensors)

    def test__get_sensors_temperatures(self):
        attributes = {
            "identity": "XXX-YYY-ZZZ",
            "name": "CPU Temp",
            "status": {
                "state": "enabled",
                "health": "OK"
            },
            "reading_celsius": 62,
            "upper_threshold_non_critical": 75,
            "upper_threshold_critical": 90,
            "upperThresholdFatal": 95,
            "min_reading_range_temp": 0,
            "max_reading_range_temp": 120,
            "physical_context": "CPU",
            "sensor_number": 1
        }

        mock_chassis = mock.MagicMock(identity='ZZZ-YYY-XXX')

        mock_temperature = mock.MagicMock(**attributes)
        mock_temperature.name = attributes['name']
        mock_temperature.status = mock.MagicMock(**attributes['status'])
        mock_chassis.thermal.temperatures = [mock_temperature]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            sensors = task.driver.management._get_sensors_temperatures(
                mock_chassis)

        expected = {
            'XXX-YYY-ZZZ@ZZZ-YYY-XXX': {
                'identity': 'XXX-YYY-ZZZ',
                'max_reading_range_temp': 120,
                'min_reading_range_temp': 0,
                'physical_context': 'CPU',
                'reading_celsius': 62,
                'sensor_number': 1,
                'health': 'OK',
                'state': 'enabled'
            }
        }

        self.assertEqual(expected, sensors)

    def test__get_sensors_power(self):
        attributes = {
            'identity': 0,
            'name': 'Power Supply 0',
            'power_capacity_watts': 1450,
            'last_power_output_watts': 650,
            'line_input_voltage': 220,
            'input_ranges': {
                'minimum_voltage': 185,
                'maximum_voltage': 250,
                'minimum_frequency_hz': 47,
                'maximum_frequency_hz': 63,
                'output_wattage': 1450
            },
            'serial_number': 'SN010203040506',
            "status": {
                "state": "enabled",
                "health": "OK"
            }
        }

        mock_chassis = mock.MagicMock(identity='ZZZ-YYY-XXX')
        mock_power = mock_chassis.power
        mock_power.identity = 'Power'

        mock_psu = mock.MagicMock(**attributes)
        mock_psu.name = attributes['name']
        mock_psu.status = mock.MagicMock(**attributes['status'])
        mock_psu.input_ranges = mock.MagicMock(**attributes['input_ranges'])
        mock_power.power_supplies = [mock_psu]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            sensors = task.driver.management._get_sensors_power(mock_chassis)

        expected = {
            '0:Power@ZZZ-YYY-XXX': {
                'health': 'OK',
                'last_power_output_watts': 650,
                'line_input_voltage': 220,
                'maximum_frequency_hz': 63,
                'maximum_voltage': 250,
                'minimum_frequency_hz': 47,
                'minimum_voltage': 185,
                'output_wattage': 1450,
                'power_capacity_watts': 1450,
                'serial_number': 'SN010203040506',
                'state': 'enabled'
            }
        }

        self.assertEqual(expected, sensors)

    def test__get_sensors_data_drive(self):
        attributes = {
            'name': '32ADF365C6C1B7BD',
            'manufacturer': 'IBM',
            'model': 'IBM 350A',
            'capacity_bytes': 3750000000,
            'status': {
                'health': 'OK',
                'state': 'enabled'
            }
        }

        mock_system = mock.MagicMock(identity='ZZZ-YYY-XXX')

        mock_drive = mock.MagicMock(**attributes)
        mock_drive.name = attributes['name']
        mock_drive.status = mock.MagicMock(**attributes['status'])

        mock_storage = mock.MagicMock()
        mock_storage.devices = [mock_drive]
        mock_storage.identity = 'XXX-YYY-ZZZ'

        mock_system.simple_storage.get_members.return_value = [mock_storage]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            sensors = task.driver.management._get_sensors_drive(mock_system)

        expected = {
            '32ADF365C6C1B7BD:XXX-YYY-ZZZ@ZZZ-YYY-XXX': {
                'capacity_bytes': 3750000000,
                'health': 'OK',
                'name': '32ADF365C6C1B7BD',
                'model': 'IBM 350A',
                'state': 'enabled'
            }
        }

        self.assertEqual(expected, sensors)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_get_sensors_data(self, mock_system):
        mock_chassis = mock.MagicMock()
        mock_system.return_value.chassis = [mock_chassis]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            sensors = task.driver.management.get_sensors_data(task)

        expected = {
            'Fan': {},
            'Temperature': {},
            'Power': {},
            'Drive': {}
        }

        self.assertEqual(expected, sensors)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inject_nmi(self, mock_get_system):
        fake_system = mock.Mock()
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.inject_nmi(task)
            fake_system.reset_system.assert_called_once_with(sushy.RESET_NMI)
            mock_get_system.assert_called_once_with(task.node)

    @mock.patch.object(sushy, 'Sushy', autospec=True)
    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_inject_nmi_fail(self, mock_get_system, mock_sushy):
        fake_system = mock.Mock()
        fake_system.reset_system.side_effect = (
            sushy.exceptions.SushyError)
        mock_get_system.return_value = fake_system
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaisesRegex(
                exception.RedfishError, 'Redfish inject NMI',
                task.driver.management.inject_nmi, task)
            fake_system.reset_system.assert_called_once_with(
                sushy.RESET_NMI)
            mock_get_system.assert_called_once_with(task.node)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_get_supported_indicators(self, mock_get_system):
        fake_chassis = mock.Mock(
            uuid=self.chassis_uuid,
            indicator_led=sushy.INDICATOR_LED_LIT)
        fake_drive = mock.Mock(
            uuid=self.drive_uuid,
            indicator_led=sushy.INDICATOR_LED_LIT)
        fake_system = mock.Mock(
            uuid=self.system_uuid,
            chassis=[fake_chassis],
            simple_storage=mock.MagicMock(drives=[fake_drive]),
            indicator_led=sushy.INDICATOR_LED_LIT)

        mock_get_system.return_value = fake_system

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            supported_indicators = (
                task.driver.management.get_supported_indicators(task))

            expected = {
                components.CHASSIS: {
                    'XXX-YYY-ZZZ': {
                        "readonly": False,
                        "states": [
                            indicator_states.BLINKING,
                            indicator_states.OFF,
                            indicator_states.ON
                        ]
                    }
                },
                components.SYSTEM: {
                    'ZZZ--XXX-YYY': {
                        "readonly": False,
                        "states": [
                            indicator_states.BLINKING,
                            indicator_states.OFF,
                            indicator_states.ON
                        ]
                    }
                },
                components.DISK: {
                    'ZZZ-YYY-XXX': {
                        "readonly": False,
                        "states": [
                            indicator_states.BLINKING,
                            indicator_states.OFF,
                            indicator_states.ON
                        ]
                    }
                }
            }

            self.assertEqual(expected, supported_indicators)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_set_indicator_state(self, mock_get_system):
        fake_chassis = mock.Mock(
            uuid=self.chassis_uuid,
            indicator_led=sushy.INDICATOR_LED_LIT)
        fake_drive = mock.Mock(
            uuid=self.drive_uuid,
            indicator_led=sushy.INDICATOR_LED_LIT)
        fake_system = mock.Mock(
            uuid=self.system_uuid,
            chassis=[fake_chassis],
            simple_storage=mock.MagicMock(drives=[fake_drive]),
            indicator_led=sushy.INDICATOR_LED_LIT)

        mock_get_system.return_value = fake_system

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.management.set_indicator_state(
                task, components.SYSTEM, self.system_uuid, indicator_states.ON)

            fake_system.set_indicator_led.assert_called_once_with(
                sushy.INDICATOR_LED_LIT)
            mock_get_system.assert_called_once_with(task.node)

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_get_indicator_state(self, mock_get_system):
        fake_chassis = mock.Mock(
            uuid=self.chassis_uuid,
            indicator_led=sushy.INDICATOR_LED_LIT)
        fake_drive = mock.Mock(
            uuid=self.drive_uuid,
            indicator_led=sushy.INDICATOR_LED_LIT)
        fake_system = mock.Mock(
            uuid=self.system_uuid,
            chassis=[fake_chassis],
            simple_storage=mock.MagicMock(drives=[fake_drive]),
            indicator_led=sushy.INDICATOR_LED_LIT)

        mock_get_system.return_value = fake_system

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            state = task.driver.management.get_indicator_state(
                task, components.SYSTEM, self.system_uuid)

            mock_get_system.assert_called_once_with(task.node)

            self.assertEqual(indicator_states.ON, state)

ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/test_bios.py

# Copyright 2018 DMTF. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from oslo_utils import importutils from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.redfish import bios as redfish_bios from ironic.drivers.modules.redfish import boot as redfish_boot from ironic.drivers.modules.redfish import utils as redfish_utils from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils sushy = importutils.try_import('sushy') INFO_DICT = db_utils.get_test_redfish_info() class NoBiosSystem(object): identity = '/redfish/v1/Systems/1234' @property def bios(self): raise sushy.exceptions.MissingAttributeError(attribute='Bios', resource=self) @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) class RedfishBiosTestCase(db_base.DbTestCase): def setUp(self): super(RedfishBiosTestCase, self).setUp() self.config(enabled_bios_interfaces=['redfish'], enabled_hardware_types=['redfish'], enabled_power_interfaces=['redfish'], enabled_boot_interfaces=['redfish-virtual-media'], enabled_management_interfaces=['redfish']) self.node = obj_utils.create_test_node( self.context, driver='redfish', driver_info=INFO_DICT) @mock.patch.object(redfish_bios, 'sushy', None) def test_loading_error(self): self.assertRaisesRegex( exception.DriverLoadError, 'Unable to import the sushy library', redfish_bios.RedfishBIOS) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in redfish_utils.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True) def test_validate(self, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, 
shared=True) as task: task.driver.bios.validate(task) mock_parse_driver_info.assert_called_once_with(task.node) @mock.patch.object(redfish_utils, 'get_system', autospec=True) @mock.patch.object(objects, 'BIOSSettingList', autospec=True) def test_cache_bios_settings_noop(self, mock_setting_list, mock_get_system): create_list = [] update_list = [] delete_list = [] nochange_list = [{'name': 'EmbeddedSata', 'value': 'Raid'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] mock_setting_list.sync_node_setting.return_value = ( create_list, update_list, delete_list, nochange_list ) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: attributes = mock_get_system(task.node).bios.attributes settings = [{'name': k, 'value': v} for k, v in attributes.items()] mock_get_system.reset_mock() task.driver.bios.cache_bios_settings(task) mock_get_system.assert_called_once_with(task.node) mock_setting_list.sync_node_setting.assert_called_once_with( task.context, task.node.id, settings) mock_setting_list.create.assert_not_called() mock_setting_list.save.assert_not_called() mock_setting_list.delete.assert_not_called() @mock.patch.object(redfish_utils, 'get_system', autospec=True) @mock.patch.object(objects, 'BIOSSettingList', autospec=True) def test_cache_bios_settings_no_bios(self, mock_setting_list, mock_get_system): create_list = [] update_list = [] delete_list = [] nochange_list = [{'name': 'EmbeddedSata', 'value': 'Raid'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] mock_setting_list.sync_node_setting.return_value = ( create_list, update_list, delete_list, nochange_list ) mock_get_system.return_value = NoBiosSystem() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaisesRegex(exception.UnsupportedDriverExtension, 'BIOS settings are not supported', task.driver.bios.cache_bios_settings, task) mock_get_system.assert_called_once_with(task.node) mock_setting_list.sync_node_setting.assert_not_called() 
mock_setting_list.create.assert_not_called() mock_setting_list.save.assert_not_called() mock_setting_list.delete.assert_not_called() @mock.patch.object(redfish_utils, 'get_system', autospec=True) @mock.patch.object(objects, 'BIOSSettingList', autospec=True) def test_cache_bios_settings(self, mock_setting_list, mock_get_system): create_list = [{'name': 'DebugMode', 'value': 'enabled'}] update_list = [{'name': 'BootMode', 'value': 'Uefi'}, {'name': 'NicBoot2', 'value': 'NetworkBoot'}] delete_list = [{'name': 'AdminPhone', 'value': '555-867-5309'}] nochange_list = [{'name': 'EmbeddedSata', 'value': 'Raid'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] delete_names = [] for setting in delete_list: delete_names.append(setting.get('name')) mock_setting_list.sync_node_setting.return_value = ( create_list, update_list, delete_list, nochange_list ) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: attributes = mock_get_system(task.node).bios.attributes settings = [{'name': k, 'value': v} for k, v in attributes.items()] mock_get_system.reset_mock() task.driver.bios.cache_bios_settings(task) mock_get_system.assert_called_once_with(task.node) mock_setting_list.sync_node_setting.assert_called_once_with( task.context, task.node.id, settings) mock_setting_list.create.assert_called_once_with( task.context, task.node.id, create_list) mock_setting_list.save.assert_called_once_with( task.context, task.node.id, update_list) mock_setting_list.delete.assert_called_once_with( task.context, task.node.id, delete_names) @mock.patch.object(redfish_boot.RedfishVirtualMediaBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(redfish_utils, 'get_system', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def _test_step_pre_reboot(self, mock_power_action, mock_get_system, mock_build_agent_options, mock_prepare): if self.node.clean_step: 
step_data = self.node.clean_step check_fields = ['cleaning_reboot', 'skip_current_clean_step'] expected_ret = states.CLEANWAIT else: step_data = self.node.deploy_step check_fields = ['deployment_reboot', 'skip_current_deploy_step'] expected_ret = states.DEPLOYWAIT data = step_data['argsinfo'].get('settings', None) step = step_data['step'] if step == 'factory_reset': check_fields.append('post_factory_reset_reboot_requested') elif step == 'apply_configuration': check_fields.append('post_config_reboot_requested') attributes = {s['name']: s['value'] for s in data} mock_build_agent_options.return_value = {'a': 'b'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: if step == 'factory_reset': ret = task.driver.bios.factory_reset(task) if step == 'apply_configuration': ret = task.driver.bios.apply_configuration(task, data) mock_get_system.assert_called_with(task.node) mock_power_action.assert_called_once_with(task, states.REBOOT) bios = mock_get_system(task.node).bios if step == 'factory_reset': bios.reset_bios.assert_called_once() if step == 'apply_configuration': bios.set_attributes.assert_called_once_with(attributes) mock_build_agent_options.assert_called_once_with(task.node) mock_prepare.assert_called_once_with(mock.ANY, task, {'a': 'b'}) info = task.node.driver_internal_info self.assertTrue(all(x in info for x in check_fields)) self.assertEqual(expected_ret, ret) def test_factory_reset_step_pre_reboot_cleaning(self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test_step_pre_reboot() def test_factory_reset_step_pre_reboot_deploying(self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test_step_pre_reboot() def test_apply_conf_step_pre_reboot_cleaning(self): data = [{'name': 'ProcTurboMode', 'value': 'Disabled'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] self.node.clean_step = 
{'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test_step_pre_reboot() def test_apply_conf_step_pre_reboot_deploying(self): data = [{'name': 'ProcTurboMode', 'value': 'Disabled'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} self.node.save() self._test_step_pre_reboot() @mock.patch.object(redfish_utils, 'get_system', autospec=True) def _test_step_post_reboot(self, mock_get_system): if self.node.deploy_step: step_data = self.node.deploy_step else: step_data = self.node.clean_step data = step_data['argsinfo'].get('settings', None) step = step_data['step'] if step == 'factory_reset': check_fields = ['post_factory_reset_reboot_requested'] if step == 'apply_configuration': check_fields = ['post_config_reboot_requested', 'requested_bios_attrs'] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: if step == 'factory_reset': task.driver.bios.factory_reset(task) if step == 'apply_configuration': task.driver.bios.apply_configuration(task, data) mock_get_system.assert_called_with(task.node) info = task.node.driver_internal_info for field in check_fields: self.assertNotIn(field, info) def test_factory_reset_post_reboot_cleaning(self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} node = self.node driver_internal_info = node.driver_internal_info driver_internal_info['post_factory_reset_reboot_requested'] = True node.driver_internal_info = driver_internal_info node.save() self._test_step_post_reboot() def test_factory_reset_post_reboot_deploying(self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} node = self.node driver_internal_info = node.driver_internal_info driver_internal_info['post_factory_reset_reboot_requested'] = True 
node.driver_internal_info = driver_internal_info node.save() self._test_step_post_reboot() def test_apply_conf_post_reboot_cleaning(self): data = [{'name': 'ProcTurboMode', 'value': 'Disabled'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} requested_attrs = {'ProcTurboMode': 'Enabled'} node = self.node driver_internal_info = node.driver_internal_info driver_internal_info['post_config_reboot_requested'] = True driver_internal_info['requested_bios_attrs'] = requested_attrs self.node.driver_internal_info = driver_internal_info self.node.save() self._test_step_post_reboot() def test_apply_conf_post_reboot_deploying(self): data = [{'name': 'ProcTurboMode', 'value': 'Disabled'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': data}} requested_attrs = {'ProcTurboMode': 'Enabled'} node = self.node driver_internal_info = node.driver_internal_info driver_internal_info['post_config_reboot_requested'] = True driver_internal_info['requested_bios_attrs'] = requested_attrs self.node.driver_internal_info = driver_internal_info self.node.save() self._test_step_post_reboot() @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_factory_reset_fail(self, mock_get_system): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: bios = mock_get_system(task.node).bios bios.reset_bios.side_effect = sushy.exceptions.SushyError self.assertRaisesRegex( exception.RedfishError, 'BIOS factory reset failed', task.driver.bios.factory_reset, task) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_factory_reset_not_supported(self, mock_get_system): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_system.return_value = NoBiosSystem() self.assertRaisesRegex( 
exception.RedfishError, 'BIOS factory reset failed', task.driver.bios.factory_reset, task) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_apply_configuration_not_supported(self, mock_get_system): settings = [{'name': 'ProcTurboMode', 'value': 'Disabled'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_system.return_value = NoBiosSystem() self.assertRaisesRegex(exception.RedfishError, 'BIOS settings are not supported', task.driver.bios.apply_configuration, task, settings) mock_get_system.assert_called_once_with(task.node) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_check_bios_attrs(self, mock_get_system): settings = [{'name': 'ProcTurboMode', 'value': 'Disabled'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] requested_attrs = {'ProcTurboMode': 'Enabled'} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: attributes = mock_get_system(task.node).bios.attributes task.node.driver_internal_info[ 'post_config_reboot_requested'] = True task.node.driver_internal_info[ 'requested_bios_attrs'] = requested_attrs task.driver.bios._check_bios_attrs = mock.MagicMock() task.driver.bios.apply_configuration(task, settings) task.driver.bios._check_bios_attrs \ .assert_called_once_with(task, attributes, requested_attrs) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_apply_configuration_fail(self, mock_get_system): settings = [{'name': 'ProcTurboMode', 'value': 'Disabled'}, {'name': 'NicBoot1', 'value': 'NetworkBoot'}] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: bios = mock_get_system(task.node).bios bios.set_attributes.side_effect = sushy.exceptions.SushyError self.assertRaisesRegex( exception.RedfishError, 'BIOS apply configuration failed', task.driver.bios.apply_configuration, task, settings) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def 
test_post_configuration(self, mock_get_system):
        settings = [{'name': 'ProcTurboMode', 'value': 'Disabled'},
                    {'name': 'NicBoot1', 'value': 'NetworkBoot'}]
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.bios.post_configuration = mock.MagicMock()
            task.driver.bios.apply_configuration(task, settings)
            task.driver.bios.post_configuration\
                .assert_called_once_with(task, settings)

ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/test_power.py

# Copyright 2017 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from oslo_utils import importutils from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.redfish import power as redfish_power from ironic.drivers.modules.redfish import utils as redfish_utils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils sushy = importutils.try_import('sushy') INFO_DICT = db_utils.get_test_redfish_info() @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) class RedfishPowerTestCase(db_base.DbTestCase): def setUp(self): super(RedfishPowerTestCase, self).setUp() self.config(enabled_hardware_types=['redfish'], enabled_power_interfaces=['redfish'], enabled_boot_interfaces=['redfish-virtual-media'], enabled_management_interfaces=['redfish'], enabled_inspect_interfaces=['redfish'], enabled_bios_interfaces=['redfish']) self.node = obj_utils.create_test_node( self.context, driver='redfish', driver_info=INFO_DICT) @mock.patch.object(redfish_power, 'sushy', None) def test_loading_error(self): self.assertRaisesRegex( exception.DriverLoadError, 'Unable to import the sushy library', redfish_power.RedfishPower) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in redfish_utils.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(redfish_utils, 'parse_driver_info', autospec=True) def test_validate(self, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) mock_parse_driver_info.assert_called_once_with(task.node) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_get_power_state(self, mock_get_system): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: expected_values = 
[ (sushy.SYSTEM_POWER_STATE_ON, states.POWER_ON), (sushy.SYSTEM_POWER_STATE_POWERING_ON, states.POWER_ON), (sushy.SYSTEM_POWER_STATE_OFF, states.POWER_OFF), (sushy.SYSTEM_POWER_STATE_POWERING_OFF, states.POWER_OFF) ] for current, expected in expected_values: mock_get_system.return_value = mock.Mock(power_state=current) self.assertEqual(expected, task.driver.power.get_power_state(task)) mock_get_system.assert_called_once_with(task.node) mock_get_system.reset_mock() @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_set_power_state(self, mock_get_system): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_values = [ (states.POWER_ON, sushy.RESET_ON), (states.POWER_OFF, sushy.RESET_FORCE_OFF), (states.REBOOT, sushy.RESET_FORCE_RESTART), (states.SOFT_REBOOT, sushy.RESET_GRACEFUL_RESTART), (states.SOFT_POWER_OFF, sushy.RESET_GRACEFUL_SHUTDOWN) ] for target, expected in expected_values: if target in (states.POWER_OFF, states.SOFT_POWER_OFF): final = sushy.SYSTEM_POWER_STATE_OFF transient = sushy.SYSTEM_POWER_STATE_ON else: final = sushy.SYSTEM_POWER_STATE_ON transient = sushy.SYSTEM_POWER_STATE_OFF system_result = [ mock.Mock(power_state=transient) ] * 3 + [mock.Mock(power_state=final)] mock_get_system.side_effect = system_result task.driver.power.set_power_state(task, target) # Asserts system_result[0].reset_system.assert_called_once_with(expected) mock_get_system.assert_called_with(task.node) self.assertEqual(4, mock_get_system.call_count) # Reset mocks mock_get_system.reset_mock() @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_set_power_state_not_reached(self, mock_get_system): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.config(power_state_change_timeout=2, group='conductor') expected_values = [ (states.POWER_ON, sushy.RESET_ON), (states.POWER_OFF, sushy.RESET_FORCE_OFF), (states.REBOOT, sushy.RESET_FORCE_RESTART), (states.SOFT_REBOOT, 
sushy.RESET_GRACEFUL_RESTART), (states.SOFT_POWER_OFF, sushy.RESET_GRACEFUL_SHUTDOWN) ] for target, expected in expected_values: fake_system = mock_get_system.return_value if target in (states.POWER_OFF, states.SOFT_POWER_OFF): fake_system.power_state = sushy.SYSTEM_POWER_STATE_ON else: fake_system.power_state = sushy.SYSTEM_POWER_STATE_OFF self.assertRaises(exception.PowerStateFailure, task.driver.power.set_power_state, task, target) # Asserts fake_system.reset_system.assert_called_once_with(expected) mock_get_system.assert_called_with(task.node) # Reset mocks mock_get_system.reset_mock() @mock.patch.object(sushy, 'Sushy', autospec=True) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_set_power_state_fail(self, mock_get_system, mock_sushy): fake_system = mock_get_system.return_value fake_system.reset_system.side_effect = ( sushy.exceptions.SushyError()) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaisesRegex( exception.RedfishError, 'Redfish set power state', task.driver.power.set_power_state, task, states.POWER_ON) fake_system.reset_system.assert_called_once_with( sushy.RESET_ON) mock_get_system.assert_called_once_with(task.node) @mock.patch.object(redfish_utils, 'get_system', autospec=True) def test_reboot(self, mock_get_system): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_values = [ (sushy.SYSTEM_POWER_STATE_ON, sushy.RESET_FORCE_RESTART), (sushy.SYSTEM_POWER_STATE_OFF, sushy.RESET_ON) ] for current, expected in expected_values: system_result = [ # Initial state mock.Mock(power_state=current), # Transient state - powering off mock.Mock(power_state=sushy.SYSTEM_POWER_STATE_OFF), # Final state - down powering off mock.Mock(power_state=sushy.SYSTEM_POWER_STATE_ON) ] mock_get_system.side_effect = system_result task.driver.power.reboot(task) # Asserts system_result[0].reset_system.assert_called_once_with(expected) 
                mock_get_system.assert_called_with(task.node)
                self.assertEqual(3, mock_get_system.call_count)

                # Reset mocks
                mock_get_system.reset_mock()

    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_reboot_not_reached(self, mock_get_system):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            fake_system = mock_get_system.return_value
            fake_system.power_state = sushy.SYSTEM_POWER_STATE_OFF

            self.assertRaises(exception.PowerStateFailure,
                              task.driver.power.reboot, task)

            # Asserts
            fake_system.reset_system.assert_called_once_with(sushy.RESET_ON)
            mock_get_system.assert_called_with(task.node)

    @mock.patch.object(sushy, 'Sushy', autospec=True)
    @mock.patch.object(redfish_utils, 'get_system', autospec=True)
    def test_reboot_fail(self, mock_get_system, mock_sushy):
        fake_system = mock_get_system.return_value
        fake_system.reset_system.side_effect = (
            sushy.exceptions.SushyError())
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            fake_system.power_state = sushy.SYSTEM_POWER_STATE_ON
            self.assertRaisesRegex(
                exception.RedfishError, 'Redfish reboot failed',
                task.driver.power.reboot, task)
            fake_system.reset_system.assert_called_once_with(
                sushy.RESET_FORCE_RESTART)
            mock_get_system.assert_called_once_with(task.node)

    def test_get_supported_power_states(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            supported_power_states = (
                task.driver.power.get_supported_power_states(task))
            self.assertEqual(list(redfish_power.SET_POWER_STATE_MAP),
                             supported_power_states)

ironic-15.0.0/ironic/tests/unit/drivers/modules/redfish/__init__.py

ironic-15.0.0/ironic/tests/unit/drivers/modules/test_console_utils.py

# coding=utf-8
# Copyright 2014 International Business Machines Corporation
# All Rights Reserved.
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test class for console_utils driver module.""" import errno import fcntl import os import random import signal import string import subprocess import tempfile import time from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg from oslo_service import loopingcall from oslo_utils import netutils import psutil from ironic.common import exception from ironic.drivers.modules import console_utils from ironic.drivers.modules import ipmitool as ipmi from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF INFO_DICT = db_utils.get_test_ipmi_info() class ConsoleUtilsTestCase(db_base.DbTestCase): def setUp(self): super(ConsoleUtilsTestCase, self).setUp() self.node = obj_utils.get_test_node( self.context, driver_info=INFO_DICT) self.info = ipmi._parse_driver_info(self.node) def test__get_console_pid_dir(self): pid_dir = '/tmp/pid_dir' self.config(terminal_pid_dir=pid_dir, group='console') dir = console_utils._get_console_pid_dir() self.assertEqual(pid_dir, dir) def test__get_console_pid_dir_tempdir(self): self.config(tempdir='/tmp/fake_dir') dir = console_utils._get_console_pid_dir() self.assertEqual(CONF.tempdir, dir) @mock.patch.object(os, 'makedirs', autospec=True) @mock.patch.object(os.path, 'exists', autospec=True) def test__ensure_console_pid_dir_exists(self, mock_path_exists, mock_makedirs): 
mock_path_exists.return_value = True mock_makedirs.side_effect = OSError pid_dir = console_utils._get_console_pid_dir() console_utils._ensure_console_pid_dir_exists() mock_path_exists.assert_called_once_with(pid_dir) self.assertFalse(mock_makedirs.called) @mock.patch.object(os, 'makedirs', autospec=True) @mock.patch.object(os.path, 'exists', autospec=True) def test__ensure_console_pid_dir_exists_fail(self, mock_path_exists, mock_makedirs): mock_path_exists.return_value = False mock_makedirs.side_effect = OSError pid_dir = console_utils._get_console_pid_dir() self.assertRaises(exception.ConsoleError, console_utils._ensure_console_pid_dir_exists) mock_path_exists.assert_called_once_with(pid_dir) mock_makedirs.assert_called_once_with(pid_dir) @mock.patch.object(console_utils, '_get_console_pid_dir', autospec=True) def test__get_console_pid_file(self, mock_dir): mock_dir.return_value = tempfile.gettempdir() expected_path = '%(tempdir)s/%(uuid)s.pid' % { 'tempdir': mock_dir.return_value, 'uuid': self.info['uuid']} path = console_utils._get_console_pid_file(self.info['uuid']) self.assertEqual(expected_path, path) mock_dir.assert_called_once_with() @mock.patch.object(console_utils, 'open', mock.mock_open(read_data='12345\n')) @mock.patch.object(console_utils, '_get_console_pid_file', autospec=True) def test__get_console_pid(self, mock_pid_file): tmp_file_handle = tempfile.NamedTemporaryFile() tmp_file = tmp_file_handle.name mock_pid_file.return_value = tmp_file pid = console_utils._get_console_pid(self.info['uuid']) mock_pid_file.assert_called_once_with(self.info['uuid']) self.assertEqual(pid, 12345) @mock.patch.object(console_utils, 'open', mock.mock_open(read_data='Hello World\n')) @mock.patch.object(console_utils, '_get_console_pid_file', autospec=True) def test__get_console_pid_not_a_num(self, mock_pid_file): tmp_file_handle = tempfile.NamedTemporaryFile() tmp_file = tmp_file_handle.name mock_pid_file.return_value = tmp_file self.assertRaises(exception.NoConsolePid, 
console_utils._get_console_pid, self.info['uuid']) mock_pid_file.assert_called_once_with(self.info['uuid']) def test__get_console_pid_file_not_found(self): self.assertRaises(exception.NoConsolePid, console_utils._get_console_pid, self.info['uuid']) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(os, 'kill', autospec=True) @mock.patch.object(console_utils, '_get_console_pid', autospec=True) def test__stop_console(self, mock_pid, mock_kill, mock_unlink): pid_file = console_utils._get_console_pid_file(self.info['uuid']) mock_pid.return_value = 12345 console_utils._stop_console(self.info['uuid']) mock_pid.assert_called_once_with(self.info['uuid']) # a check if process still exist (signal 0) in a loop mock_kill.assert_any_call(mock_pid.return_value, signal.SIG_DFL) # and that it receives the SIGTERM mock_kill.assert_any_call(mock_pid.return_value, signal.SIGTERM) mock_unlink.assert_called_once_with(pid_file) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(os, 'kill', autospec=True) @mock.patch.object(psutil, 'pid_exists', autospec=True, return_value=True) @mock.patch.object(console_utils, '_get_console_pid', autospec=True) def test__stop_console_forced_kill(self, mock_pid, mock_psutil, mock_kill, mock_unlink): pid_file = console_utils._get_console_pid_file(self.info['uuid']) mock_pid.return_value = 12345 console_utils._stop_console(self.info['uuid']) mock_pid.assert_called_once_with(self.info['uuid']) # Make sure console process receives hard SIGKILL mock_kill.assert_any_call(mock_pid.return_value, signal.SIGKILL) mock_unlink.assert_called_once_with(pid_file) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(os, 'kill', autospec=True) @mock.patch.object(console_utils, '_get_console_pid', autospec=True) def test__stop_console_nopid(self, mock_pid, mock_kill, mock_unlink): pid_file = console_utils._get_console_pid_file(self.info['uuid']) 
mock_pid.side_effect = exception.NoConsolePid(pid_path="/tmp/blah") self.assertRaises(exception.NoConsolePid, console_utils._stop_console, self.info['uuid']) mock_pid.assert_called_once_with(self.info['uuid']) self.assertFalse(mock_kill.called) mock_unlink.assert_called_once_with(pid_file) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(os, 'kill', autospec=True) @mock.patch.object(console_utils, '_get_console_pid', autospec=True) def test__stop_console_shellinabox_not_running(self, mock_pid, mock_kill, mock_unlink): pid_file = console_utils._get_console_pid_file(self.info['uuid']) mock_pid.return_value = 12345 mock_kill.side_effect = OSError(errno.ESRCH, 'message') console_utils._stop_console(self.info['uuid']) mock_pid.assert_called_once_with(self.info['uuid']) mock_kill.assert_called_once_with(mock_pid.return_value, signal.SIGTERM) mock_unlink.assert_called_once_with(pid_file) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(os, 'kill', autospec=True) @mock.patch.object(console_utils, '_get_console_pid', autospec=True) def test__stop_console_exception(self, mock_pid, mock_kill, mock_unlink): pid_file = console_utils._get_console_pid_file(self.info['uuid']) mock_pid.return_value = 12345 mock_kill.side_effect = OSError(2, 'message') self.assertRaises(exception.ConsoleError, console_utils._stop_console, self.info['uuid']) mock_pid.assert_called_once_with(self.info['uuid']) mock_kill.assert_called_once_with(mock_pid.return_value, signal.SIGTERM) mock_unlink.assert_called_once_with(pid_file) def _get_shellinabox_console(self, scheme): generated_url = ( console_utils.get_shellinabox_console_url(self.info['port'])) console_host = CONF.my_ip if netutils.is_valid_ipv6(console_host): console_host = '[%s]' % console_host http_url = "%s://%s:%s" % (scheme, console_host, self.info['port']) self.assertEqual(http_url, generated_url) def test_get_shellinabox_console_url(self): 
self._get_shellinabox_console('http') def test_get_shellinabox_console_https_url(self): # specify terminal_cert_dir in /etc/ironic/ironic.conf self.config(terminal_cert_dir='/tmp', group='console') # use https self._get_shellinabox_console('https') def test_make_persistent_password_file(self): filepath = '%(tempdir)s/%(node_uuid)s' % { 'tempdir': tempfile.gettempdir(), 'node_uuid': self.info['uuid']} password = ''.join([random.choice(string.ascii_letters) for n in range(16)]) console_utils.make_persistent_password_file(filepath, password) # make sure file exists self.assertTrue(os.path.exists(filepath)) # make sure the content is correct with open(filepath) as file: content = file.read() self.assertEqual(password, content) # delete the file os.unlink(filepath) @mock.patch.object(os, 'chmod', autospec=True) def test_make_persistent_password_file_fail(self, mock_chmod): mock_chmod.side_effect = IOError() filepath = '%(tempdir)s/%(node_uuid)s' % { 'tempdir': tempfile.gettempdir(), 'node_uuid': self.info['uuid']} self.assertRaises(exception.PasswordFileFailedToCreate, console_utils.make_persistent_password_file, filepath, 'password') @mock.patch.object(fcntl, 'fcntl', autospec=True) @mock.patch.object(console_utils, 'open', mock.mock_open(read_data='12345\n')) @mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(subprocess, 'Popen') @mock.patch.object(psutil, 'pid_exists', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_shellinabox_console(self, mock_stop, mock_dir_exists, mock_pid_exists, mock_popen, mock_path_exists, mock_fcntl): mock_popen.return_value.poll.return_value = 0 mock_popen.return_value.stdout.return_value.fileno.return_value = 0 mock_popen.return_value.stderr.return_value.fileno.return_value = 1 mock_pid_exists.return_value = True mock_path_exists.return_value = True 
console_utils.start_shellinabox_console(self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() mock_pid_exists.assert_called_once_with(12345) mock_popen.assert_called_once_with(mock.ANY, stdout=subprocess.PIPE, stderr=subprocess.PIPE) mock_popen.return_value.poll.assert_called_once_with() @mock.patch.object(fcntl, 'fcntl', autospec=True) @mock.patch.object(console_utils, 'open', mock.mock_open(read_data='12345\n')) @mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(subprocess, 'Popen') @mock.patch.object(psutil, 'pid_exists', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_shellinabox_console_nopid(self, mock_stop, mock_dir_exists, mock_pid_exists, mock_popen, mock_path_exists, mock_fcntl): # no existing PID file before starting mock_stop.side_effect = exception.NoConsolePid('/tmp/blah') mock_popen.return_value.poll.return_value = 0 mock_popen.return_value.stdout.return_value.fileno.return_value = 0 mock_popen.return_value.stderr.return_value.fileno.return_value = 1 mock_pid_exists.return_value = True mock_path_exists.return_value = True console_utils.start_shellinabox_console(self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() mock_pid_exists.assert_called_once_with(12345) mock_popen.assert_called_once_with(mock.ANY, stdout=subprocess.PIPE, stderr=subprocess.PIPE) mock_popen.return_value.poll.assert_called_once_with() @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(os, 'read', autospec=True) @mock.patch.object(fcntl, 'fcntl', autospec=True) @mock.patch.object(subprocess, 'Popen') @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) 
def test_start_shellinabox_console_fail( self, mock_stop, mock_dir_exists, mock_popen, mock_fcntl, mock_os_read, mock_sleep): mock_popen.return_value.poll.return_value = 1 stdout = mock_popen.return_value.stdout stderr = mock_popen.return_value.stderr stdout.return_value.fileno.return_value = 0 stderr.return_value.fileno.return_value = 1 err_output = b'error output' mock_os_read.side_effect = [err_output] * 2 + [OSError] * 2 mock_fcntl.side_effect = [1, mock.Mock()] * 2 self.assertRaisesRegex( exception.ConsoleSubprocessFailed, "Stdout: %r" % err_output, console_utils.start_shellinabox_console, self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_sleep.assert_has_calls([mock.call(1), mock.call(1)]) mock_dir_exists.assert_called_once_with() for obj in (stdout, stderr): mock_fcntl.assert_has_calls([ mock.call(obj, fcntl.F_GETFL), mock.call(obj, fcntl.F_SETFL, 1 | os.O_NONBLOCK)]) mock_popen.assert_called_once_with(mock.ANY, stdout=subprocess.PIPE, stderr=subprocess.PIPE) mock_popen.return_value.poll.assert_called_with() @mock.patch.object(fcntl, 'fcntl', autospec=True) @mock.patch.object(subprocess, 'Popen') @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_shellinabox_console_timeout( self, mock_stop, mock_dir_exists, mock_popen, mock_fcntl): self.config(subprocess_timeout=0, group='console') self.config(subprocess_checking_interval=0, group='console') mock_popen.return_value.poll.return_value = None mock_popen.return_value.stdout.return_value.fileno.return_value = 0 mock_popen.return_value.stderr.return_value.fileno.return_value = 1 self.assertRaisesRegex( exception.ConsoleSubprocessFailed, 'Timeout or error', console_utils.start_shellinabox_console, self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() 
mock_popen.assert_called_once_with(mock.ANY, stdout=subprocess.PIPE, stderr=subprocess.PIPE) mock_popen.return_value.poll.assert_called_with() self.assertEqual(0, mock_popen.return_value.communicate.call_count) @mock.patch.object(time, 'sleep', autospec=True) @mock.patch.object(os, 'read', autospec=True) @mock.patch.object(fcntl, 'fcntl', autospec=True) @mock.patch.object(console_utils, 'open', mock.mock_open(read_data='12345\n')) @mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(subprocess, 'Popen') @mock.patch.object(psutil, 'pid_exists', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_shellinabox_console_fail_no_pid( self, mock_stop, mock_dir_exists, mock_pid_exists, mock_popen, mock_path_exists, mock_fcntl, mock_os_read, mock_sleep): mock_popen.return_value.poll.return_value = 0 stdout = mock_popen.return_value.stdout stderr = mock_popen.return_value.stderr stdout.return_value.fileno.return_value = 0 stderr.return_value.fileno.return_value = 1 mock_pid_exists.return_value = False mock_os_read.side_effect = [b'error output'] * 2 + [OSError] * 2 mock_fcntl.side_effect = [1, mock.Mock()] * 2 mock_path_exists.return_value = True self.assertRaises(exception.ConsoleSubprocessFailed, console_utils.start_shellinabox_console, self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_sleep.assert_has_calls([mock.call(1), mock.call(1)]) mock_dir_exists.assert_called_once_with() for obj in (stdout, stderr): mock_fcntl.assert_has_calls([ mock.call(obj, fcntl.F_GETFL), mock.call(obj, fcntl.F_SETFL, 1 | os.O_NONBLOCK)]) mock_pid_exists.assert_called_with(12345) mock_popen.assert_called_once_with(mock.ANY, stdout=subprocess.PIPE, stderr=subprocess.PIPE) mock_popen.return_value.poll.assert_called_with() @mock.patch.object(subprocess, 'Popen', autospec=True) 
@mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_shellinabox_console_fail_nopiddir(self, mock_stop, mock_dir_exists, mock_popen): mock_dir_exists.side_effect = exception.ConsoleError(message='fail') mock_popen.return_value.poll.return_value = 0 self.assertRaises(exception.ConsoleError, console_utils.start_shellinabox_console, self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() self.assertFalse(mock_popen.called) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_stop_shellinabox_console(self, mock_stop): console_utils.stop_shellinabox_console(self.info['uuid']) mock_stop.assert_called_once_with(self.info['uuid']) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_stop_shellinabox_console_fail_nopid(self, mock_stop): mock_stop.side_effect = exception.NoConsolePid('/tmp/blah') console_utils.stop_shellinabox_console(self.info['uuid']) mock_stop.assert_called_once_with(self.info['uuid']) def test_get_socat_console_url_tcp(self): self.config(my_ip="10.0.0.1") url = console_utils.get_socat_console_url(self.info['port']) self.assertEqual("tcp://10.0.0.1:%s" % self.info['port'], url) def test_get_socat_console_url_tcp6(self): self.config(my_ip='::1') url = console_utils.get_socat_console_url(self.info['port']) self.assertEqual("tcp://[::1]:%s" % self.info['port'], url) def test_get_socat_console_url_tcp_with_address_conf(self): self.config(socat_address="10.0.0.1", group='console') url = console_utils.get_socat_console_url(self.info['port']) self.assertEqual("tcp://10.0.0.1:%s" % self.info['port'], url) @mock.patch.object(subprocess, 'Popen', autospec=True) @mock.patch.object(console_utils, '_get_console_pid_file', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) 
@mock.patch.object(console_utils, '_stop_console', autospec=True) @mock.patch.object(loopingcall.FixedIntervalLoopingCall, 'start', autospec=True) def _test_start_socat_console_check_arg(self, mock_timer_start, mock_stop, mock_dir_exists, mock_get_pid, mock_popen): mock_timer_start.return_value = mock.Mock() mock_get_pid.return_value = '/tmp/%s.pid' % self.info['uuid'] console_utils.start_socat_console(self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() mock_get_pid.assert_called_once_with(self.info['uuid']) mock_timer_start.assert_called_once_with(mock.ANY, interval=mock.ANY) mock_popen.assert_called_once_with(mock.ANY, stderr=subprocess.PIPE) return mock_popen.call_args[0][0] def test_start_socat_console_check_arg_default_timeout(self): args = self._test_start_socat_console_check_arg() self.assertIn('-T600', args) def test_start_socat_console_check_arg_timeout(self): self.config(terminal_timeout=1, group='console') args = self._test_start_socat_console_check_arg() self.assertIn('-T1', args) def test_start_socat_console_check_arg_timeout_disabled(self): self.config(terminal_timeout=0, group='console') args = self._test_start_socat_console_check_arg() self.assertNotIn('-T0', args) def test_start_socat_console_check_arg_bind_addr_default_ipv4(self): self.config(my_ip='10.0.0.1') args = self._test_start_socat_console_check_arg() self.assertIn('TCP4-LISTEN:%s,bind=10.0.0.1,reuseaddr,fork,' 'max-children=1' % self.info['port'], args) def test_start_socat_console_check_arg_bind_addr_ipv4(self): self.config(socat_address='10.0.0.1', group='console') args = self._test_start_socat_console_check_arg() self.assertIn('TCP4-LISTEN:%s,bind=10.0.0.1,reuseaddr,fork,' 'max-children=1' % self.info['port'], args) @mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(subprocess, 'Popen', autospec=True) @mock.patch.object(psutil, 'pid_exists', autospec=True) 
@mock.patch.object(console_utils, '_get_console_pid', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_socat_console(self, mock_stop, mock_dir_exists, mock_get_pid, mock_pid_exists, mock_popen, mock_path_exists): mock_popen.return_value.pid = 23456 mock_popen.return_value.poll.return_value = None mock_popen.return_value.communicate.return_value = (None, None) mock_get_pid.return_value = 23456 mock_path_exists.return_value = True console_utils.start_socat_console(self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() mock_get_pid.assert_called_with(self.info['uuid']) mock_path_exists.assert_called_with(mock.ANY) mock_popen.assert_called_once_with(mock.ANY, stderr=subprocess.PIPE) @mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(subprocess, 'Popen', autospec=True) @mock.patch.object(psutil, 'pid_exists', autospec=True) @mock.patch.object(console_utils, '_get_console_pid', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_socat_console_nopid(self, mock_stop, mock_dir_exists, mock_get_pid, mock_pid_exists, mock_popen, mock_path_exists): # no existing PID file before starting mock_stop.side_effect = exception.NoConsolePid('/tmp/blah') mock_popen.return_value.pid = 23456 mock_popen.return_value.poll.return_value = None mock_popen.return_value.communicate.return_value = (None, None) mock_get_pid.return_value = 23456 mock_path_exists.return_value = True console_utils.start_socat_console(self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() mock_get_pid.assert_called_with(self.info['uuid']) 
mock_path_exists.assert_called_with(mock.ANY) mock_popen.assert_called_once_with(mock.ANY, stderr=subprocess.PIPE) @mock.patch.object(subprocess, 'Popen', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_socat_console_fail(self, mock_stop, mock_dir_exists, mock_popen): mock_popen.side_effect = OSError() mock_popen.return_value.pid = 23456 mock_popen.return_value.poll.return_value = 1 mock_popen.return_value.communicate.return_value = (None, 'error') self.assertRaises(exception.ConsoleSubprocessFailed, console_utils.start_socat_console, self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() mock_popen.assert_called_once_with(mock.ANY, stderr=subprocess.PIPE) @mock.patch.object(subprocess, 'Popen', autospec=True) @mock.patch.object(console_utils, '_ensure_console_pid_dir_exists', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_start_socat_console_fail_nopiddir(self, mock_stop, mock_dir_exists, mock_popen): mock_dir_exists.side_effect = exception.ConsoleError(message='fail') self.assertRaises(exception.ConsoleError, console_utils.start_socat_console, self.info['uuid'], self.info['port'], 'ls&') mock_stop.assert_called_once_with(self.info['uuid']) mock_dir_exists.assert_called_once_with() self.assertEqual(0, mock_popen.call_count) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_stop_socat_console(self, mock_stop): console_utils.stop_socat_console(self.info['uuid']) mock_stop.assert_called_once_with(self.info['uuid']) @mock.patch.object(console_utils.LOG, 'warning', autospec=True) @mock.patch.object(console_utils, '_stop_console', autospec=True) def test_stop_socat_console_fail_nopid(self, mock_stop, mock_log_warning): mock_stop.side_effect = exception.NoConsolePid('/tmp/blah') 
        console_utils.stop_socat_console(self.info['uuid'])
        mock_stop.assert_called_once_with(self.info['uuid'])
        # LOG.warning() is called when _stop_console() raises NoConsolePid
        self.assertTrue(mock_log_warning.called)

    def test_valid_console_port_range(self):
        self.config(port_range='10000:20000', group='console')
        start, stop = console_utils._get_port_range()
        self.assertEqual((start, stop), (10000, 20000))

    def test_invalid_console_port_range(self):
        self.config(port_range='20000:10000', group='console')
        self.assertRaises(exception.InvalidParameterValue,
                          console_utils._get_port_range)

    @mock.patch.object(console_utils, 'ALLOCATED_PORTS', autospec=True)
    @mock.patch.object(console_utils, '_verify_port', autospec=True)
    def test_allocate_port_success(self, mock_verify, mock_ports):
        self.config(port_range='10000:10001', group='console')
        port = console_utils.acquire_port()
        mock_verify.assert_called_once_with(10000)
        self.assertEqual(port, 10000)
        mock_ports.add.assert_called_once_with(10000)

    @mock.patch.object(console_utils, 'ALLOCATED_PORTS', autospec=True)
    @mock.patch.object(console_utils, '_verify_port', autospec=True)
    def test_allocate_port_range_retry(self, mock_verify, mock_ports):
        self.config(port_range='10000:10003', group='console')
        mock_verify.side_effect = (exception.Conflict, exception.Conflict,
                                   None)
        port = console_utils.acquire_port()
        verify_calls = [mock.call(10000), mock.call(10001),
                        mock.call(10002)]
        mock_verify.assert_has_calls(verify_calls)
        self.assertEqual(port, 10002)
        mock_ports.add.assert_called_once_with(10002)

    @mock.patch.object(console_utils, 'ALLOCATED_PORTS', autospec=True)
    @mock.patch.object(console_utils, '_verify_port', autospec=True)
    def test_allocate_port_no_free_ports(self, mock_verify, mock_ports):
        self.config(port_range='10000:10005', group='console')
        mock_verify.side_effect = exception.Conflict
        self.assertRaises(exception.NoFreeIPMITerminalPorts,
                          console_utils.acquire_port)
        verify_calls = [mock.call(p) for p in range(10000, 10005)]
        mock_verify.assert_has_calls(verify_calls)
ironic-15.0.0/ironic/tests/unit/drivers/modules/test_iscsi_deploy.py
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for iSCSI deploy mechanism."""

import os
import tempfile
import time
import types

from ironic_lib import disk_utils
from ironic_lib import utils as ironic_utils
import mock
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import fileutils
import testtools

from ironic.common import boot_devices
from ironic.common import dhcp_factory
from ironic.common import exception
from ironic.common import pxe_utils
from ironic.common import states
from ironic.common import utils
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import agent_base
from ironic.drivers.modules import agent_client
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules import fake
from ironic.drivers.modules import iscsi_deploy
from ironic.drivers.modules.network import flat as flat_network
from ironic.drivers.modules import pxe
from ironic.drivers.modules.storage import noop as noop_storage
from ironic.drivers import utils as driver_utils
from ironic.tests import base as tests_base
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

CONF = cfg.CONF

INST_INFO_DICT = db_utils.get_test_pxe_instance_info()
DRV_INFO_DICT = db_utils.get_test_pxe_driver_info()
DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info()


class IscsiDeployPrivateMethodsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IscsiDeployPrivateMethodsTestCase, self).setUp()
        n = {
            'boot_interface': 'pxe',
            'deploy_interface': 'iscsi',
            'instance_info': INST_INFO_DICT,
            'driver_info': DRV_INFO_DICT,
            'driver_internal_info': DRV_INTERNAL_INFO_DICT,
        }
        self.node = obj_utils.create_test_node(self.context, **n)

    def test__save_disk_layout(self):
        info = dict(INST_INFO_DICT)
        info['ephemeral_gb'] = 10
        info['swap_mb'] = 0
        info['root_gb'] = 10
        info['preserve_ephemeral'] = False
        self.node.instance_info = info
        iscsi_deploy._save_disk_layout(self.node, info)
        self.node.refresh()
        for param in ('ephemeral_gb', 'swap_mb', 'root_gb'):
            self.assertEqual(
                info[param],
                self.node.driver_internal_info['instance'][param]
            )

    def test__get_image_dir_path(self):
        self.assertEqual(os.path.join(CONF.pxe.images_path, self.node.uuid),
                         deploy_utils._get_image_dir_path(self.node.uuid))

    def test__get_image_file_path(self):
        self.assertEqual(os.path.join(CONF.pxe.images_path, self.node.uuid,
                                      'disk'),
                         deploy_utils._get_image_file_path(self.node.uuid))


class IscsiDeployMethodsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IscsiDeployMethodsTestCase, self).setUp()
        instance_info = dict(INST_INFO_DICT)
        instance_info['deploy_key'] = 'fake-56789'
        n = {
            'boot_interface': 'pxe',
            'deploy_interface': 'iscsi',
            'instance_info': instance_info,
            'driver_info': DRV_INFO_DICT,
            'driver_internal_info': DRV_INTERNAL_INFO_DICT,
        }
        self.node = obj_utils.create_test_node(self.context, **n)

    @mock.patch.object(disk_utils, 'get_image_mb', autospec=True)
    def test_check_image_size(self, get_image_mb_mock):
        get_image_mb_mock.return_value = 1000
        with task_manager.acquire(self.context, self.node.uuid,
shared=False) as task: task.node.instance_info['root_gb'] = 1 iscsi_deploy.check_image_size(task) get_image_mb_mock.assert_called_once_with( deploy_utils._get_image_file_path(task.node.uuid)) @mock.patch.object(disk_utils, 'get_image_mb', autospec=True) def test_check_image_size_whole_disk_image(self, get_image_mb_mock): get_image_mb_mock.return_value = 1025 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['root_gb'] = 1 task.node.driver_internal_info['is_whole_disk_image'] = True # No error for whole disk images iscsi_deploy.check_image_size(task) self.assertFalse(get_image_mb_mock.called) @mock.patch.object(disk_utils, 'get_image_mb', autospec=True) def test_check_image_size_whole_disk_image_no_root(self, get_image_mb_mock): get_image_mb_mock.return_value = 1025 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: del task.node.instance_info['root_gb'] task.node.driver_internal_info['is_whole_disk_image'] = True # No error for whole disk images iscsi_deploy.check_image_size(task) self.assertFalse(get_image_mb_mock.called) @mock.patch.object(disk_utils, 'get_image_mb', autospec=True) def test_check_image_size_fails(self, get_image_mb_mock): get_image_mb_mock.return_value = 1025 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.instance_info['root_gb'] = 1 self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.check_image_size, task) get_image_mb_mock.assert_called_once_with( deploy_utils._get_image_file_path(task.node.uuid)) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test_cache_instance_images_master_path(self, mock_fetch_image): temp_dir = tempfile.mkdtemp() self.config(images_path=temp_dir, group='pxe') self.config(instance_master_path=os.path.join(temp_dir, 'instance_master_path'), group='pxe') fileutils.ensure_tree(CONF.pxe.instance_master_path) (uuid, image_path) = deploy_utils.cache_instance_image(None, 
self.node) mock_fetch_image.assert_called_once_with(None, mock.ANY, [(uuid, image_path)], True) self.assertEqual('glance://image_uuid', uuid) self.assertEqual(os.path.join(temp_dir, self.node.uuid, 'disk'), image_path) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(utils, 'rmtree_without_raise', autospec=True) @mock.patch.object(deploy_utils, 'InstanceImageCache', autospec=True) def test_destroy_images(self, mock_cache, mock_rmtree, mock_unlink): self.config(images_path='/path', group='pxe') deploy_utils.destroy_images('uuid') mock_cache.return_value.clean_up.assert_called_once_with() mock_unlink.assert_called_once_with('/path/uuid/disk') mock_rmtree.assert_called_once_with('/path/uuid') @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(deploy_utils, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'deploy_partition_image', autospec=True) def test_continue_deploy_fail( self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout, mock_collect_logs): kwargs = {'address': '123456', 'iqn': 'aaa-bbb', 'conv_flags': None} deploy_mock.side_effect = exception.InstanceDeployFailure( "test deploy error") self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: params = iscsi_deploy.get_deploy_info(task.node, **kwargs) # Ironic exceptions are preserved as they are self.assertRaisesRegex(exception.InstanceDeployFailure, '^test deploy error$', iscsi_deploy.continue_deploy, task, **kwargs) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNotNone(task.node.last_error) 
deploy_mock.assert_called_once_with(**params) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(mock_disk_layout.called) mock_collect_logs.assert_called_once_with(task.node) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(deploy_utils, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'deploy_partition_image', autospec=True) def test_continue_deploy_unexpected_fail( self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout, mock_collect_logs): kwargs = {'address': '123456', 'iqn': 'aaa-bbb'} deploy_mock.side_effect = KeyError('boom') self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: params = iscsi_deploy.get_deploy_info(task.node, **kwargs) self.assertRaisesRegex(exception.InstanceDeployFailure, "Deploy failed.*Error: 'boom'", iscsi_deploy.continue_deploy, task, **kwargs) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNotNone(task.node.last_error) deploy_mock.assert_called_once_with(**params) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(mock_disk_layout.called) mock_collect_logs.assert_called_once_with(task.node) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(deploy_utils, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 
'node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'deploy_partition_image', autospec=True) def test_continue_deploy_fail_no_root_uuid_or_disk_id( self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout, mock_collect_logs): kwargs = {'address': '123456', 'iqn': 'aaa-bbb'} deploy_mock.return_value = {} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: params = iscsi_deploy.get_deploy_info(task.node, **kwargs) self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.continue_deploy, task, **kwargs) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNotNone(task.node.last_error) deploy_mock.assert_called_once_with(**params) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(mock_disk_layout.called) mock_collect_logs.assert_called_once_with(task.node) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(deploy_utils, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'deploy_partition_image', autospec=True) def test_continue_deploy_fail_empty_root_uuid( self, deploy_mock, power_mock, mock_image_cache, mock_disk_layout, mock_collect_logs): kwargs = {'address': '123456', 'iqn': 'aaa-bbb'} deploy_mock.return_value = {'root uuid': ''} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: params = iscsi_deploy.get_deploy_info(task.node, **kwargs) 
self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.continue_deploy, task, **kwargs) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNotNone(task.node.last_error) deploy_mock.assert_called_once_with(**params) power_mock.assert_called_once_with(task, states.POWER_OFF) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertFalse(mock_disk_layout.called) mock_collect_logs.assert_called_once_with(task.node) @mock.patch.object(iscsi_deploy, '_save_disk_layout', autospec=True) @mock.patch.object(iscsi_deploy, 'LOG', autospec=True) @mock.patch.object(iscsi_deploy, 'get_deploy_info', autospec=True) @mock.patch.object(deploy_utils, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'deploy_partition_image', autospec=True) def test_continue_deploy(self, deploy_mock, power_mock, mock_image_cache, mock_deploy_info, mock_log, mock_disk_layout): kwargs = {'address': '123456', 'iqn': 'aaa-bbb'} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() mock_deploy_info.return_value = { 'address': '123456', 'boot_option': 'netboot', 'configdrive': "I've got the power", 'ephemeral_format': None, 'ephemeral_mb': 0, 'image_path': (u'/var/lib/ironic/images/1be26c0b-03f2-4d2e-ae87-' u'c02d7f33c123/disk'), 'iqn': 'aaa-bbb', 'lun': '1', 'node_uuid': u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123', 'port': '3260', 'preserve_ephemeral': True, 'root_mb': 102400, 'swap_mb': 0, } log_params = mock_deploy_info.return_value.copy() # Make sure we don't log the full content of the configdrive log_params['configdrive'] = '***' expected_dict = { 'node': self.node.uuid, 'params': log_params, } uuid_dict_returned = {'root uuid': '12345678-87654321'} deploy_mock.return_value = uuid_dict_returned with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_log.isEnabledFor.return_value = True retval = iscsi_deploy.continue_deploy(task, **kwargs) mock_log.debug.assert_called_once_with( mock.ANY, expected_dict) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNone(task.node.last_error) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertEqual(uuid_dict_returned, retval) mock_disk_layout.assert_called_once_with(task.node, mock.ANY) @mock.patch.object(iscsi_deploy, 'LOG', autospec=True) @mock.patch.object(iscsi_deploy, 'get_deploy_info', autospec=True) @mock.patch.object(deploy_utils, 'InstanceImageCache', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(iscsi_deploy, 'deploy_disk_image', autospec=True) def test_continue_deploy_whole_disk_image( self, deploy_mock, power_mock, mock_image_cache, mock_deploy_info, mock_log): kwargs = {'address': '123456', 'iqn': 'aaa-bbb'} self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() mock_deploy_info.return_value = { 'address': '123456', 'image_path': (u'/var/lib/ironic/images/1be26c0b-03f2-4d2e-ae87-' u'c02d7f33c123/disk'), 'iqn': 'aaa-bbb', 'lun': '1', 'node_uuid': u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123', 'port': '3260', } log_params = mock_deploy_info.return_value.copy() expected_dict = { 'node': self.node.uuid, 'params': log_params, } uuid_dict_returned = {'disk identifier': '87654321'} deploy_mock.return_value = uuid_dict_returned with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = True mock_log.isEnabledFor.return_value = True retval = iscsi_deploy.continue_deploy(task, **kwargs) mock_log.debug.assert_called_once_with( mock.ANY, expected_dict) 
self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertIsNone(task.node.last_error) mock_image_cache.assert_called_once_with() mock_image_cache.return_value.clean_up.assert_called_once_with() self.assertEqual(uuid_dict_returned, retval) def _test_get_deploy_info(self, extra_instance_info=None): if extra_instance_info is None: extra_instance_info = {} instance_info = self.node.instance_info instance_info.update(extra_instance_info) self.node.instance_info = instance_info kwargs = {'address': '1.1.1.1', 'iqn': 'target-iqn'} ret_val = iscsi_deploy.get_deploy_info(self.node, **kwargs) self.assertEqual('1.1.1.1', ret_val['address']) self.assertEqual('target-iqn', ret_val['iqn']) return ret_val def test_get_deploy_info_boot_option_default(self): ret_val = self._test_get_deploy_info() self.assertEqual('local', ret_val['boot_option']) def test_get_deploy_info_netboot_specified(self): capabilities = {'capabilities': {'boot_option': 'netboot'}} ret_val = self._test_get_deploy_info(extra_instance_info=capabilities) self.assertEqual('netboot', ret_val['boot_option']) def test_get_deploy_info_localboot(self): capabilities = {'capabilities': {'boot_option': 'local'}} ret_val = self._test_get_deploy_info(extra_instance_info=capabilities) self.assertEqual('local', ret_val['boot_option']) def test_get_deploy_info_cpu_arch(self): ret_val = self._test_get_deploy_info() self.assertEqual('x86_64', ret_val['cpu_arch']) def test_get_deploy_info_cpu_arch_none(self): self.node.properties['cpu_arch'] = None ret_val = self._test_get_deploy_info() self.assertNotIn('cpu_arch', ret_val) def test_get_deploy_info_disk_label(self): capabilities = {'capabilities': {'disk_label': 'msdos'}} ret_val = self._test_get_deploy_info(extra_instance_info=capabilities) self.assertEqual('msdos', ret_val['disk_label']) def test_get_deploy_info_not_specified(self): ret_val = self._test_get_deploy_info() 
        self.assertNotIn('disk_label', ret_val)

    def test_get_deploy_info_portal_port(self):
        self.config(portal_port=3266, group='iscsi')
        ret_val = self._test_get_deploy_info()
        self.assertEqual(3266, ret_val['port'])

    def test_get_deploy_info_whole_disk_image(self):
        instance_info = self.node.instance_info
        instance_info['configdrive'] = 'My configdrive'
        self.node.instance_info = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = True
        kwargs = {'address': '1.1.1.1', 'iqn': 'target-iqn'}
        ret_val = iscsi_deploy.get_deploy_info(self.node, **kwargs)
        self.assertEqual('1.1.1.1', ret_val['address'])
        self.assertEqual('target-iqn', ret_val['iqn'])
        self.assertEqual('My configdrive', ret_val['configdrive'])

    def test_get_deploy_info_whole_disk_image_no_root(self):
        instance_info = self.node.instance_info
        instance_info['configdrive'] = 'My configdrive'
        del instance_info['root_gb']
        self.node.instance_info = instance_info
        self.node.driver_internal_info['is_whole_disk_image'] = True
        kwargs = {'address': '1.1.1.1', 'iqn': 'target-iqn'}
        ret_val = iscsi_deploy.get_deploy_info(self.node, **kwargs)
        self.assertEqual('1.1.1.1', ret_val['address'])
        self.assertEqual('target-iqn', ret_val['iqn'])
        self.assertEqual('My configdrive', ret_val['configdrive'])

    @mock.patch.object(iscsi_deploy, 'continue_deploy', autospec=True)
    def test_do_agent_iscsi_deploy_okay(self, continue_deploy_mock):
        agent_client_mock = mock.MagicMock(spec_set=agent_client.AgentClient)
        agent_client_mock.start_iscsi_target.return_value = {
            'command_status': 'SUCCESS', 'command_error': None}
        driver_internal_info = {'agent_url': 'http://1.2.3.4:1234'}
        self.node.driver_internal_info = driver_internal_info
        self.node.save()
        uuid_dict_returned = {'root uuid': 'some-root-uuid'}
        continue_deploy_mock.return_value = uuid_dict_returned
        expected_iqn = 'iqn.2008-10.org.openstack:%s' % self.node.uuid
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ret_val = iscsi_deploy.do_agent_iscsi_deploy(
                task, agent_client_mock)
            agent_client_mock.start_iscsi_target.assert_called_once_with(
                task.node, expected_iqn, 3260, wipe_disk_metadata=True)
            continue_deploy_mock.assert_called_once_with(
                task, iqn=expected_iqn, address='1.2.3.4', conv_flags=None)
            self.assertEqual(
                'some-root-uuid',
                task.node.driver_internal_info['root_uuid_or_disk_id'])
            self.assertEqual(ret_val, uuid_dict_returned)

    @mock.patch.object(iscsi_deploy, 'continue_deploy', autospec=True)
    def test_do_agent_iscsi_deploy_preserve_ephemeral(
            self, continue_deploy_mock):
        """Ensure the disk is not wiped if preserve_ephemeral is True."""
        agent_client_mock = mock.MagicMock(spec_set=agent_client.AgentClient)
        agent_client_mock.start_iscsi_target.return_value = {
            'command_status': 'SUCCESS', 'command_error': None}
        driver_internal_info = {
            'agent_url': 'http://1.2.3.4:1234'}
        self.node.driver_internal_info = driver_internal_info
        self.node.save()
        uuid_dict_returned = {'root uuid': 'some-root-uuid'}
        continue_deploy_mock.return_value = uuid_dict_returned
        expected_iqn = 'iqn.2008-10.org.openstack:%s' % self.node.uuid
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.instance_info['preserve_ephemeral'] = True
            iscsi_deploy.do_agent_iscsi_deploy(
                task, agent_client_mock)
            agent_client_mock.start_iscsi_target.assert_called_once_with(
                task.node, expected_iqn, 3260, wipe_disk_metadata=False)

    @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True)
    def test_do_agent_iscsi_deploy_start_iscsi_failure(
            self, mock_collect_logs):
        agent_client_mock = mock.MagicMock(spec_set=agent_client.AgentClient)
        agent_client_mock.start_iscsi_target.return_value = {
            'command_status': 'FAILED', 'command_error': 'booom'}
        self.node.provision_state = states.DEPLOYING
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        expected_iqn = 'iqn.2008-10.org.openstack:%s' % self.node.uuid
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.InstanceDeployFailure,
                              iscsi_deploy.do_agent_iscsi_deploy,
                              task, agent_client_mock)
            agent_client_mock.start_iscsi_target.assert_called_once_with(
                task.node, expected_iqn, 3260, wipe_disk_metadata=True)
        self.node.refresh()
        self.assertEqual(states.DEPLOYFAIL, self.node.provision_state)
        self.assertEqual(states.ACTIVE, self.node.target_provision_state)
        self.assertIsNotNone(self.node.last_error)
        mock_collect_logs.assert_called_once_with(task.node)

    @mock.patch('ironic.drivers.modules.deploy_utils.get_ironic_api_url')
    def test_validate_good_api_url(self, mock_get_url):
        mock_get_url.return_value = 'http://127.0.0.1:1234'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            iscsi_deploy.validate(task)
        mock_get_url.assert_called_once_with()

    @mock.patch('ironic.drivers.modules.deploy_utils.get_ironic_api_url')
    def test_validate_fail_no_api_url(self, mock_get_url):
        mock_get_url.side_effect = exception.InvalidParameterValue('Ham!')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              iscsi_deploy.validate, task)
        mock_get_url.assert_called_once_with()

    @mock.patch('ironic.drivers.modules.deploy_utils.get_ironic_api_url')
    def test_validate_invalid_root_device_hints(self, mock_get_url):
        mock_get_url.return_value = 'http://spam.ham/baremetal'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties['root_device'] = {'size': 'not-int'}
            self.assertRaises(exception.InvalidParameterValue,
                              iscsi_deploy.validate, task)

    @mock.patch('ironic.drivers.modules.deploy_utils.get_ironic_api_url')
    def test_validate_invalid_root_device_hints_iinfo(self, mock_get_url):
        mock_get_url.return_value = 'http://spam.ham/baremetal'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties['root_device'] = {'size': 42}
            task.node.instance_info['root_device'] = {'size': 'not-int'}
            self.assertRaises(exception.InvalidParameterValue,
                              iscsi_deploy.validate, task)


class ISCSIDeployTestCase(db_base.DbTestCase):

    def setUp(self):
        super(ISCSIDeployTestCase, self).setUp()
        # NOTE(TheJulia): We explicitly set the noop storage interface as the
        # default below for deployment tests in order to raise any change
        # in the default which could be a breaking behavior change
        # as the storage interface is explicitly an "opt-in" interface.
        self.node = obj_utils.create_test_node(
            self.context, boot_interface='pxe', deploy_interface='iscsi',
            instance_info=INST_INFO_DICT,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
            storage_interface='noop',
        )
        self.node.driver_internal_info['agent_url'] = 'http://1.2.3.4:1234'
        dhcp_factory.DHCPFactory._dhcp_provider = None

    def test_get_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            props = task.driver.deploy.get_properties()
            self.assertEqual(['deploy_forces_oob_reboot'], list(props))

    @mock.patch.object(iscsi_deploy, 'validate', autospec=True)
    @mock.patch.object(deploy_utils, 'validate_capabilities', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True)
    def test_validate(self, pxe_validate_mock,
                      validate_capabilities_mock, validate_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.deploy.validate(task)
            pxe_validate_mock.assert_called_once_with(task.driver.boot, task)
            validate_capabilities_mock.assert_called_once_with(task.node)
            validate_mock.assert_called_once_with(task)

    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    @mock.patch.object(iscsi_deploy, 'validate', autospec=True)
    @mock.patch.object(deploy_utils, 'validate_capabilities', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True)
    def test_validate_storage_should_write_image_false(
            self, pxe_validate_mock, validate_capabilities_mock,
            validate_mock, should_write_image_mock):
        should_write_image_mock.return_value = False
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.deploy.validate(task)
            pxe_validate_mock.assert_called_once_with(task.driver.boot, task)
            validate_capabilities_mock.assert_called_once_with(task.node)
            self.assertFalse(validate_mock.called)
            should_write_image_mock.assert_called_once_with(
                task.driver.storage, task)

    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    def test_prepare_node_active(self, prepare_instance_mock,
                                 add_provisioning_net_mock,
                                 storage_driver_info_mock,
                                 storage_attach_volumes_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.provision_state = states.ACTIVE
            task.driver.deploy.prepare(task)
            prepare_instance_mock.assert_called_once_with(
                task.driver.boot, task)
            self.assertEqual(0, add_provisioning_net_mock.call_count)
            storage_driver_info_mock.assert_called_once_with(task)
            self.assertFalse(storage_attach_volumes_mock.called)

    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    def test_prepare_node_adopting(self, prepare_instance_mock,
                                   add_provisioning_net_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.provision_state = states.ADOPTING
            task.driver.deploy.prepare(task)
            prepare_instance_mock.assert_called_once_with(
                task.driver.boot, task)
            self.assertEqual(0, add_provisioning_net_mock.call_count)

    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    def test_prepare_node_deploying(
            self, unconfigure_tenant_net_mock, add_provisioning_net_mock,
            mock_prepare_ramdisk, mock_agent_options,
            storage_driver_info_mock, storage_attach_volumes_mock):
        mock_agent_options.return_value = {'c': 'd'}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.provision_state = states.DEPLOYING
            task.driver.deploy.prepare(task)
            mock_agent_options.assert_called_once_with(task.node)
            mock_prepare_ramdisk.assert_called_once_with(
                task.driver.boot, task, {'c': 'd'})
            add_provisioning_net_mock.assert_called_once_with(mock.ANY, task)
            unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY,
                                                                task)
            storage_driver_info_mock.assert_called_once_with(task)
            storage_attach_volumes_mock.assert_called_once_with(
                task.driver.storage, task)

    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    def test_prepare_node_deploying_storage_should_write_false(
            self, unconfigure_tenant_net_mock, add_provisioning_net_mock,
            mock_prepare_ramdisk, mock_agent_options,
            storage_driver_info_mock, storage_attach_volumes_mock,
            storage_should_write_mock):
        storage_should_write_mock.return_value = False
        mock_agent_options.return_value = {'c': 'd'}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.provision_state = states.DEPLOYING
            task.driver.deploy.prepare(task)
            self.assertFalse(mock_agent_options.called)
            self.assertFalse(mock_prepare_ramdisk.called)
            self.assertFalse(add_provisioning_net_mock.called)
            self.assertFalse(unconfigure_tenant_net_mock.called)
            storage_driver_info_mock.assert_called_once_with(task)
            storage_attach_volumes_mock.assert_called_once_with(
                task.driver.storage, task)
            self.assertEqual(2, storage_should_write_mock.call_count)

    @mock.patch('ironic.conductor.utils.is_fast_track', autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info')
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk')
    @mock.patch.object(deploy_utils, 'build_agent_options')
    @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy')
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'validate',
                       spec_set=True, autospec=True)
    def test_prepare_fast_track(
            self, validate_net_mock,
            unconfigure_tenant_net_mock, add_provisioning_net_mock,
            build_instance_info_mock, build_options_mock,
            pxe_prepare_ramdisk_mock, storage_driver_info_mock,
            storage_attach_volumes_mock, is_fast_track_mock):
        # TODO(TheJulia): We should revisit this test. Smartnic
        # support didn't wire in tightly on testing for power in
        # these tests, and largely fast_track impacts power operations.
        node = self.node
        node.network_interface = 'flat'
        node.save()
        is_fast_track_mock.return_value = True
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = states.DEPLOYING
            build_options_mock.return_value = {'a': 'b'}
            task.driver.deploy.prepare(task)
            storage_driver_info_mock.assert_called_once_with(task)
            # NOTE: Validate is the primary difference between agent/iscsi
            self.assertFalse(validate_net_mock.called)
            add_provisioning_net_mock.assert_called_once_with(mock.ANY, task)
            unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY,
                                                                task)
            self.assertTrue(storage_attach_volumes_mock.called)
            self.assertFalse(build_instance_info_mock.called)
            # TODO(TheJulia): We should likely consider executing the
            # next two methods at some point in order to facilitate
            # continuity. While not explicitly required for this feature
            # to work, reboots as part of deployment would need the ramdisk
            # present and ready.
            self.assertFalse(build_options_mock.called)
            self.assertFalse(pxe_prepare_ramdisk_mock.called)

    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(iscsi_deploy, 'check_image_size', autospec=True)
    @mock.patch.object(deploy_utils, 'cache_instance_image', autospec=True)
    def test_deploy(self, mock_cache_instance_image,
                    mock_check_image_size, mock_node_power_action):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            state = task.driver.deploy.deploy(task)
            self.assertEqual(state, states.DEPLOYWAIT)
            mock_cache_instance_image.assert_called_once_with(
                self.context, task.node)
            mock_check_image_size.assert_called_once_with(task)
            mock_node_power_action.assert_called_once_with(task,
                                                           states.REBOOT)

    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(iscsi_deploy, 'check_image_size', autospec=True)
    @mock.patch.object(deploy_utils, 'cache_instance_image', autospec=True)
    def test_deploy_with_deployment_reboot(self, mock_cache_instance_image,
                                           mock_check_image_size,
                                           mock_node_power_action):
        driver_internal_info = self.node.driver_internal_info
        driver_internal_info['deployment_reboot'] = True
        self.node.driver_internal_info = driver_internal_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            state = task.driver.deploy.deploy(task)
            self.assertEqual(state, states.DEPLOYWAIT)
            mock_cache_instance_image.assert_called_once_with(
                self.context, task.node)
            mock_check_image_size.assert_called_once_with(task)
            self.assertFalse(mock_node_power_action.called)
            self.assertNotIn(
                'deployment_reboot', task.node.driver_internal_info)

    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'configure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'remove_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(iscsi_deploy, 'check_image_size', autospec=True)
    @mock.patch.object(deploy_utils, 'cache_instance_image', autospec=True)
    def test_deploy_storage_check_write_image_false(
            self, mock_cache_instance_image, mock_check_image_size,
            mock_node_power_action, mock_prepare_instance,
            mock_remove_network, mock_tenant_network, mock_write):
        mock_write.return_value = False
        self.node.provision_state = states.DEPLOYING
        self.node.deploy_step = {
            'step': 'deploy', 'priority': 50, 'interface': 'deploy'}
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            ret = task.driver.deploy.deploy(task)
            self.assertIsNone(ret)
            self.assertFalse(mock_cache_instance_image.called)
            self.assertFalse(mock_check_image_size.called)
            mock_remove_network.assert_called_once_with(mock.ANY, task)
            mock_tenant_network.assert_called_once_with(mock.ANY, task)
            mock_prepare_instance.assert_called_once_with(mock.ANY, task)
            self.assertEqual(2, mock_node_power_action.call_count)
            self.assertEqual(states.DEPLOYING, task.node.provision_state)

    @mock.patch.object(iscsi_deploy, 'check_image_size', autospec=True)
    @mock.patch.object(deploy_utils, 'cache_instance_image', autospec=True)
    @mock.patch.object(iscsi_deploy.ISCSIDeploy, 'continue_deploy',
                       autospec=True)
    @mock.patch('ironic.conductor.utils.is_fast_track', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    @mock.patch('ironic.conductor.utils.node_power_action', autospec=True)
    def test_deploy_fast_track(self, power_mock, mock_pxe_instance,
                               mock_is_fast_track, continue_deploy_mock,
                               cache_image_mock, check_image_size_mock):
        mock_is_fast_track.return_value = True
        self.node.target_provision_state = states.ACTIVE
        self.node.provision_state = states.DEPLOYING
        i_info = self.node.driver_internal_info
        i_info['agent_url'] = 'http://1.2.3.4:1234'
        self.node.driver_internal_info = i_info
        self.node.save()
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.driver.deploy.deploy(task)
            self.assertFalse(power_mock.called)
            self.assertFalse(mock_pxe_instance.called)
            task.node.refresh()
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)
            cache_image_mock.assert_called_with(mock.ANY, task.node)
            check_image_size_mock.assert_called_with(task)
            continue_deploy_mock.assert_called_with(mock.ANY, task)

    @mock.patch.object(noop_storage.NoopStorage, 'detach_volumes',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'remove_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    def test_tear_down(self, node_power_action_mock,
                       unconfigure_tenant_nets_mock,
                       remove_provisioning_net_mock,
                       storage_detach_volumes_mock):
        obj_utils.create_test_volume_target(
            self.context, node_id=self.node.id)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            state = task.driver.deploy.tear_down(task)
            self.assertEqual(state, states.DELETED)
            node_power_action_mock.assert_called_once_with(task,
                                                           states.POWER_OFF)
            unconfigure_tenant_nets_mock.assert_called_once_with(mock.ANY,
                                                                 task)
            remove_provisioning_net_mock.assert_called_once_with(mock.ANY,
                                                                 task)
            storage_detach_volumes_mock.assert_called_once_with(
                task.driver.storage, task)
        # Verify no volumes exist for new task instances.
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertEqual(0, len(task.volume_targets))

    @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider')
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp')
    @mock.patch.object(pxe.PXEBoot, 'clean_up_instance', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True)
    @mock.patch.object(deploy_utils, 'destroy_images', autospec=True)
    def test_clean_up(self, destroy_images_mock, clean_up_ramdisk_mock,
                      clean_up_instance_mock, clean_dhcp_mock,
                      set_dhcp_provider_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.deploy.clean_up(task)
            destroy_images_mock.assert_called_once_with(task.node.uuid)
            clean_up_ramdisk_mock.assert_called_once_with(
                task.driver.boot, task)
            clean_up_instance_mock.assert_called_once_with(
                task.driver.boot, task)
            set_dhcp_provider_mock.assert_called_once_with()
            clean_dhcp_mock.assert_called_once_with(task)

    @mock.patch.object(deploy_utils, 'prepare_inband_cleaning',
                       autospec=True)
    def test_prepare_cleaning(self, prepare_inband_cleaning_mock):
        prepare_inband_cleaning_mock.return_value = states.CLEANWAIT
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertEqual(
                states.CLEANWAIT, task.driver.deploy.prepare_cleaning(task))
            prepare_inband_cleaning_mock.assert_called_once_with(
                task, manage_boot=True)

    @mock.patch.object(deploy_utils, 'tear_down_inband_cleaning',
                       autospec=True)
    def test_tear_down_cleaning(self, tear_down_cleaning_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.deploy.tear_down_cleaning(task)
            tear_down_cleaning_mock.assert_called_once_with(
                task, manage_boot=True)

    @mock.patch.object(agent_base, 'get_steps', autospec=True)
    def test_get_clean_steps(self, mock_get_clean_steps):
        # Test getting clean steps
        self.config(group='deploy', erase_devices_priority=10)
        self.config(group='deploy', erase_devices_metadata_priority=5)
        mock_steps = [{'priority': 10, 'interface': 'deploy',
                       'step': 'erase_devices'}]
        self.node.driver_internal_info = {'agent_url': 'foo'}
        self.node.save()
        mock_get_clean_steps.return_value = mock_steps
        with task_manager.acquire(self.context, self.node.uuid) as task:
            steps = task.driver.deploy.get_clean_steps(task)
            mock_get_clean_steps.assert_called_once_with(
                task, 'clean', interface='deploy',
                override_priorities={
                    'erase_devices': 10, 'erase_devices_metadata': 5})
        self.assertEqual(mock_steps, steps)

    @mock.patch.object(agent_base, 'execute_step', autospec=True)
    def test_execute_clean_step(self, agent_execute_clean_step_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.deploy.execute_clean_step(
                task, {'some-step': 'step-info'})
            agent_execute_clean_step_mock.assert_called_once_with(
                task, {'some-step': 'step-info'}, 'clean')

    @mock.patch.object(agent_base.AgentDeployMixin,
                       'reboot_and_finish_deploy', autospec=True)
    @mock.patch.object(iscsi_deploy, 'do_agent_iscsi_deploy', autospec=True)
    def test_continue_deploy_netboot(self, do_agent_iscsi_deploy_mock,
                                     reboot_and_finish_deploy_mock):
        self.node.instance_info = {
            'capabilities': {'boot_option': 'netboot'}}
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        uuid_dict_returned = {'root uuid': 'some-root-uuid'}
        do_agent_iscsi_deploy_mock.return_value = uuid_dict_returned
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            with mock.patch.object(
                    task.driver.boot, 'prepare_instance') as m_prep_instance:
                task.driver.deploy.continue_deploy(task)
                do_agent_iscsi_deploy_mock.assert_called_once_with(
                    task, task.driver.deploy._client)
                reboot_and_finish_deploy_mock.assert_called_once_with(
                    mock.ANY, task)
                m_prep_instance.assert_called_once_with(task)

    @mock.patch.object(fake.FakeManagement, 'set_boot_device',
                       autospec=True)
    @mock.patch.object(agent_base.AgentDeployMixin,
                       'reboot_and_finish_deploy', autospec=True)
    @mock.patch.object(agent_base.AgentDeployMixin,
                       'configure_local_boot', autospec=True)
    @mock.patch.object(iscsi_deploy, 'do_agent_iscsi_deploy', autospec=True)
    def test_continue_deploy_localboot(self, do_agent_iscsi_deploy_mock,
                                       configure_local_boot_mock,
                                       reboot_and_finish_deploy_mock,
                                       set_boot_device_mock):
        self.node.instance_info = {
            'capabilities': {'boot_option': 'local'}}
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        uuid_dict_returned = {'root uuid': 'some-root-uuid'}
        do_agent_iscsi_deploy_mock.return_value = uuid_dict_returned
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.deploy.continue_deploy(task)
            do_agent_iscsi_deploy_mock.assert_called_once_with(
                task, task.driver.deploy._client)
            configure_local_boot_mock.assert_called_once_with(
                task.driver.deploy, task, root_uuid='some-root-uuid',
                efi_system_part_uuid=None, prep_boot_part_uuid=None)
            reboot_and_finish_deploy_mock.assert_called_once_with(
                task.driver.deploy, task)
            set_boot_device_mock.assert_called_once_with(
                mock.ANY, task, device=boot_devices.DISK, persistent=True)

    @mock.patch.object(fake.FakeManagement, 'set_boot_device',
                       autospec=True)
    @mock.patch.object(agent_base.AgentDeployMixin,
                       'reboot_and_finish_deploy', autospec=True)
    @mock.patch.object(agent_base.AgentDeployMixin,
                       'configure_local_boot', autospec=True)
    @mock.patch.object(iscsi_deploy, 'do_agent_iscsi_deploy', autospec=True)
    def test_continue_deploy_localboot_uefi(self, do_agent_iscsi_deploy_mock,
                                            configure_local_boot_mock,
                                            reboot_and_finish_deploy_mock,
                                            set_boot_device_mock):
        self.node.instance_info = {
            'capabilities': {'boot_option': 'local'}}
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        uuid_dict_returned = {'root uuid': 'some-root-uuid',
                              'efi system partition uuid': 'efi-part-uuid'}
        do_agent_iscsi_deploy_mock.return_value = uuid_dict_returned
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.deploy.continue_deploy(task)
            do_agent_iscsi_deploy_mock.assert_called_once_with(
                task, task.driver.deploy._client)
            configure_local_boot_mock.assert_called_once_with(
                task.driver.deploy, task, root_uuid='some-root-uuid',
                efi_system_part_uuid='efi-part-uuid',
                prep_boot_part_uuid=None)
            reboot_and_finish_deploy_mock.assert_called_once_with(
                task.driver.deploy, task)
            set_boot_device_mock.assert_called_once_with(
                mock.ANY, task, device=boot_devices.DISK, persistent=True)

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    def test_prepare_node_deploying_with_smartnic_port(
            self, unconfigure_tenant_net_mock, add_provisioning_net_mock,
            mock_prepare_ramdisk, mock_agent_options,
            storage_driver_info_mock, storage_attach_volumes_mock,
            power_on_node_if_needed_mock, restore_power_state_mock):
        mock_agent_options.return_value = {'c': 'd'}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.provision_state = states.DEPLOYING
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            task.driver.deploy.prepare(task)
            mock_agent_options.assert_called_once_with(task.node)
            mock_prepare_ramdisk.assert_called_once_with(
                task.driver.boot, task, {'c': 'd'})
            add_provisioning_net_mock.assert_called_once_with(mock.ANY, task)
            unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY,
                                                                task)
            storage_driver_info_mock.assert_called_once_with(task)
            storage_attach_volumes_mock.assert_called_once_with(
                task.driver.storage, task)
            power_on_node_if_needed_mock.assert_called_once_with(task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'detach_volumes',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'remove_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    def test_tear_down_with_smartnic_port(
            self, node_power_action_mock, unconfigure_tenant_nets_mock,
            remove_provisioning_net_mock, storage_detach_volumes_mock,
            power_on_node_if_needed_mock, restore_power_state_mock):
        obj_utils.create_test_volume_target(
            self.context, node_id=self.node.id)
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            state = task.driver.deploy.tear_down(task)
            self.assertEqual(state, states.DELETED)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_OFF)
            unconfigure_tenant_nets_mock.assert_called_once_with(
                mock.ANY, task)
            remove_provisioning_net_mock.assert_called_once_with(
                mock.ANY, task)
            storage_detach_volumes_mock.assert_called_once_with(
                task.driver.storage, task)
            power_on_node_if_needed_mock.assert_called_once_with(task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)
        # Verify no volumes exist for new task instances.
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertEqual(0, len(task.volume_targets))

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'configure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'remove_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance',
                       spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(iscsi_deploy, 'check_image_size', autospec=True)
    @mock.patch.object(deploy_utils, 'cache_instance_image', autospec=True)
    def test_deploy_storage_check_write_image_false_with_smartnic_port(
            self, mock_cache_instance_image, mock_check_image_size,
            mock_node_power_action, mock_prepare_instance,
            mock_remove_network, mock_tenant_network, mock_write,
            power_on_node_if_needed_mock, restore_power_state_mock):
        mock_write.return_value = False
        self.node.provision_state = states.DEPLOYING
        self.node.deploy_step = {
            'step': 'deploy', 'priority': 50, 'interface': 'deploy'}
        self.node.save()
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            ret = task.driver.deploy.deploy(task)
            self.assertIsNone(ret)
            self.assertFalse(mock_cache_instance_image.called)
            self.assertFalse(mock_check_image_size.called)
            mock_remove_network.assert_called_once_with(mock.ANY, task)
            mock_tenant_network.assert_called_once_with(mock.ANY, task)
            mock_prepare_instance.assert_called_once_with(mock.ANY, task)
            self.assertEqual(2, mock_node_power_action.call_count)
            self.assertEqual(states.DEPLOYING, task.node.provision_state)
            power_on_node_if_needed_mock.assert_called_once_with(task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)


# Cleanup of iscsi_deploy with pxe boot interface
class CleanUpFullFlowTestCase(db_base.DbTestCase):

    def setUp(self):
        super(CleanUpFullFlowTestCase, self).setUp()
        self.config(image_cache_size=0, group='pxe')

        # Configure node
        instance_info = INST_INFO_DICT
        instance_info['deploy_key'] = 'fake-56789'
        self.node = obj_utils.create_test_node(
            self.context, boot_interface='pxe', deploy_interface='iscsi',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.port = obj_utils.create_test_port(self.context,
                                               node_id=self.node.id)

        # Configure temporary directories
        pxe_temp_dir = tempfile.mkdtemp()
        self.config(tftp_root=pxe_temp_dir, group='pxe')
        tftp_master_dir = os.path.join(CONF.pxe.tftp_root, 'tftp_master')
        self.config(tftp_master_path=tftp_master_dir, group='pxe')
        os.makedirs(tftp_master_dir)

        instance_temp_dir = tempfile.mkdtemp()
        self.config(images_path=instance_temp_dir, group='pxe')
        instance_master_dir = os.path.join(CONF.pxe.images_path,
                                           'instance_master')
        self.config(instance_master_path=instance_master_dir, group='pxe')
        os.makedirs(instance_master_dir)
        self.pxe_config_dir = os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')
        os.makedirs(self.pxe_config_dir)

        # Populate some file names
        self.master_kernel_path = os.path.join(CONF.pxe.tftp_master_path,
                                               'kernel')
        self.master_instance_path = os.path.join(
            CONF.pxe.instance_master_path, 'image_uuid')
        self.node_tftp_dir = os.path.join(CONF.pxe.tftp_root,
                                          self.node.uuid)
        os.makedirs(self.node_tftp_dir)
        self.kernel_path = os.path.join(self.node_tftp_dir, 'kernel')
        self.node_image_dir = deploy_utils._get_image_dir_path(self.node.uuid)
        os.makedirs(self.node_image_dir)
        self.image_path = deploy_utils._get_image_file_path(self.node.uuid)
        self.config_path = pxe_utils.get_pxe_config_file_path(self.node.uuid)
        self.mac_path = pxe_utils._get_pxe_mac_path(self.port.address)

        # Create files
        self.files = [self.config_path, self.master_kernel_path,
                      self.master_instance_path]
        for fname in self.files:
            # NOTE(dtantsur): files with 0 size won't be cleaned up
            with open(fname, 'w') as fp:
                fp.write('test')

        os.link(self.config_path, self.mac_path)
        os.link(self.master_kernel_path, self.kernel_path)
        os.link(self.master_instance_path, self.image_path)
        dhcp_factory.DHCPFactory._dhcp_provider = None

    @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider')
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp')
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    @mock.patch.object(pxe_utils, 'get_image_info', autospec=True)
    def test_clean_up_with_master(self, mock_get_deploy_image_info,
                                  mock_get_instance_image_info,
                                  clean_dhcp_mock, set_dhcp_provider_mock):
        image_info = {'kernel': ('kernel_uuid', self.kernel_path)}
        mock_get_instance_image_info.return_value = image_info
        mock_get_deploy_image_info.return_value = {}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.deploy.clean_up(task)
            mock_get_instance_image_info.assert_called_with(
                task, ipxe_enabled=False)
            mock_get_deploy_image_info.assert_called_with(
                task.node, mode='deploy', ipxe_enabled=False)
            set_dhcp_provider_mock.assert_called_once_with()
            clean_dhcp_mock.assert_called_once_with(task)
        for path in ([self.kernel_path, self.image_path, self.config_path]
                     + self.files):
            self.assertFalse(os.path.exists(path),
                             '%s is not expected to exist' % path)

    @mock.patch.object(time, 'sleep',
lambda seconds: None) class PhysicalWorkTestCase(tests_base.TestCase): def setUp(self): super(PhysicalWorkTestCase, self).setUp() self.address = '127.0.0.1' self.port = 3306 self.iqn = 'iqn.xyz' self.lun = 1 self.image_path = '/tmp/xyz/image' self.node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" self.dev = ("/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-%s" % (self.address, self.port, self.iqn, self.lun)) def _mock_calls(self, name_list, module): patch_list = [mock.patch.object(module, name, spec_set=types.FunctionType) for name in name_list] mock_list = [patcher.start() for patcher in patch_list] for patcher in patch_list: self.addCleanup(patcher.stop) parent_mock = mock.MagicMock(spec=[]) for mocker, name in zip(mock_list, name_list): parent_mock.attach_mock(mocker, name) return parent_mock @mock.patch.object(disk_utils, 'work_on_disk', autospec=True) @mock.patch.object(disk_utils, 'is_block_device', autospec=True) @mock.patch.object(disk_utils, 'get_image_mb', autospec=True) @mock.patch.object(iscsi_deploy, 'logout_iscsi', autospec=True) @mock.patch.object(iscsi_deploy, 'login_iscsi', autospec=True) @mock.patch.object(iscsi_deploy, 'discovery', autospec=True) @mock.patch.object(iscsi_deploy, 'delete_iscsi', autospec=True) def _test_deploy_partition_image(self, mock_delete_iscsi, mock_discovery, mock_login_iscsi, mock_logout_iscsi, mock_get_image_mb, mock_is_block_device, mock_work_on_disk, **kwargs): # Below are the only values we allow callers to modify for testing. # Check that values other than this aren't passed in. deploy_args = { 'boot_mode': None, 'boot_option': None, 'configdrive': None, 'cpu_arch': None, 'disk_label': None, 'ephemeral_format': None, 'ephemeral_mb': None, 'image_mb': 1, 'preserve_ephemeral': False, 'root_mb': 128, 'swap_mb': 64 } disallowed_values = set(kwargs) - set(deploy_args) if disallowed_values: raise ValueError("Only the following kwargs are allowed in " "_test_deploy_partition_image: %(allowed)s. 
" "Disallowed values: %(disallowed)s." % {"allowed": ", ".join(deploy_args), "disallowed": ", ".join(disallowed_values)}) deploy_args.update(kwargs) root_uuid = '12345678-1234-1234-12345678-12345678abcdef' mock_is_block_device.return_value = True mock_get_image_mb.return_value = deploy_args['image_mb'] mock_work_on_disk.return_value = { 'root uuid': root_uuid, 'efi system partition uuid': None } deploy_kwargs = { 'boot_mode': deploy_args['boot_mode'], 'boot_option': deploy_args['boot_option'], 'configdrive': deploy_args['configdrive'], 'disk_label': deploy_args['disk_label'], 'cpu_arch': deploy_args['cpu_arch'] or '', 'preserve_ephemeral': deploy_args['preserve_ephemeral'] } iscsi_deploy.deploy_partition_image( self.address, self.port, self.iqn, self.lun, self.image_path, deploy_args['root_mb'], deploy_args['swap_mb'], deploy_args['ephemeral_mb'], deploy_args['ephemeral_format'], self.node_uuid, **deploy_kwargs) mock_discovery.assert_called_once_with(self.address, self.port) mock_login_iscsi.assert_called_once_with(self.address, self.port, self.iqn) mock_logout_iscsi.assert_called_once_with(self.address, self.port, self.iqn) mock_delete_iscsi.assert_called_once_with(self.address, self.port, self.iqn) mock_get_image_mb.assert_called_once_with(self.image_path) mock_is_block_device.assert_called_once_with(self.dev) work_on_disk_kwargs = { 'preserve_ephemeral': deploy_args['preserve_ephemeral'], 'configdrive': deploy_args['configdrive'], # boot_option defaults to 'netboot' if # not set 'boot_option': deploy_args['boot_option'] or 'local', 'boot_mode': deploy_args['boot_mode'], 'disk_label': deploy_args['disk_label'], 'cpu_arch': deploy_args['cpu_arch'] or '' } mock_work_on_disk.assert_called_once_with( self.dev, deploy_args['root_mb'], deploy_args['swap_mb'], deploy_args['ephemeral_mb'], deploy_args['ephemeral_format'], self.image_path, self.node_uuid, **work_on_disk_kwargs) def test_deploy_partition_image_without_boot_option(self): self._test_deploy_partition_image() 
def test_deploy_partition_image_netboot(self): self._test_deploy_partition_image(boot_option="netboot") def test_deploy_partition_image_localboot(self): self._test_deploy_partition_image(boot_option="local") def test_deploy_partition_image_wo_boot_option_and_wo_boot_mode(self): self._test_deploy_partition_image() def test_deploy_partition_image_netboot_bios(self): self._test_deploy_partition_image(boot_option="netboot", boot_mode="bios") def test_deploy_partition_image_localboot_bios(self): self._test_deploy_partition_image(boot_option="local", boot_mode="bios") def test_deploy_partition_image_netboot_uefi(self): self._test_deploy_partition_image(boot_option="netboot", boot_mode="uefi") def test_deploy_partition_image_disk_label(self): self._test_deploy_partition_image(disk_label='gpt') def test_deploy_partition_image_image_exceeds_root_partition(self): self.assertRaises(exception.InstanceDeployFailure, self._test_deploy_partition_image, image_mb=129, root_mb=128) def test_deploy_partition_image_localboot_uefi(self): self._test_deploy_partition_image(boot_option="local", boot_mode="uefi") def test_deploy_partition_image_without_swap(self): self._test_deploy_partition_image(swap_mb=0) def test_deploy_partition_image_with_ephemeral(self): self._test_deploy_partition_image(ephemeral_format='exttest', ephemeral_mb=256) def test_deploy_partition_image_preserve_ephemeral(self): self._test_deploy_partition_image(ephemeral_format='exttest', ephemeral_mb=256, preserve_ephemeral=True) def test_deploy_partition_image_with_configdrive(self): self._test_deploy_partition_image(configdrive='http://1.2.3.4/cd') def test_deploy_partition_image_with_cpu_arch(self): self._test_deploy_partition_image(cpu_arch='generic') @mock.patch.object(disk_utils, 'create_config_drive_partition', autospec=True) @mock.patch.object(disk_utils, 'get_disk_identifier', autospec=True) def test_deploy_whole_disk_image(self, mock_gdi, create_config_drive_mock): """Check loosely all functions are called 
with right args.""" name_list = ['discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi'] disk_utils_name_list = ['is_block_device', 'populate_image'] iscsi_mock = self._mock_calls(name_list, iscsi_deploy) disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.is_block_device.return_value = True mock_gdi.return_value = '0x12345678' utils_calls_expected = [mock.call.discovery(self.address, self.port), mock.call.login_iscsi(self.address, self.port, self.iqn), mock.call.logout_iscsi(self.address, self.port, self.iqn), mock.call.delete_iscsi(self.address, self.port, self.iqn)] disk_utils_calls_expected = [mock.call.is_block_device(self.dev), mock.call.populate_image(self.image_path, self.dev, conv_flags=None)] uuid_dict_returned = iscsi_deploy.deploy_disk_image( self.address, self.port, self.iqn, self.lun, self.image_path, self.node_uuid) self.assertEqual(utils_calls_expected, iscsi_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) self.assertFalse(create_config_drive_mock.called) self.assertEqual('0x12345678', uuid_dict_returned['disk identifier']) @mock.patch.object(disk_utils, 'create_config_drive_partition', autospec=True) @mock.patch.object(disk_utils, 'get_disk_identifier', autospec=True) def test_deploy_whole_disk_image_with_config_drive(self, mock_gdi, create_partition_mock): """Check loosely all functions are called with right args.""" config_url = 'http://1.2.3.4/cd' iscsi_list = ['discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi'] disk_utils_list = ['is_block_device', 'populate_image'] iscsi_mock = self._mock_calls(iscsi_list, iscsi_deploy) disk_utils_mock = self._mock_calls(disk_utils_list, disk_utils) disk_utils_mock.is_block_device.return_value = True mock_gdi.return_value = '0x12345678' utils_calls_expected = [mock.call.discovery(self.address, self.port), mock.call.login_iscsi(self.address, self.port, self.iqn), mock.call.logout_iscsi(self.address, self.port, self.iqn), 
mock.call.delete_iscsi(self.address, self.port, self.iqn)] disk_utils_calls_expected = [mock.call.is_block_device(self.dev), mock.call.populate_image(self.image_path, self.dev, conv_flags=None)] uuid_dict_returned = iscsi_deploy.deploy_disk_image( self.address, self.port, self.iqn, self.lun, self.image_path, self.node_uuid, configdrive=config_url) iscsi_mock.assert_has_calls(utils_calls_expected) disk_utils_mock.assert_has_calls(disk_utils_calls_expected) create_partition_mock.assert_called_once_with(self.node_uuid, self.dev, config_url) self.assertEqual('0x12345678', uuid_dict_returned['disk identifier']) @mock.patch.object(disk_utils, 'create_config_drive_partition', autospec=True) @mock.patch.object(disk_utils, 'get_disk_identifier', autospec=True) def test_deploy_whole_disk_image_sparse(self, mock_gdi, create_config_drive_mock): """Check loosely all functions are called with right args.""" iscsi_name_list = ['discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi'] disk_utils_name_list = ['is_block_device', 'populate_image'] iscsi_mock = self._mock_calls(iscsi_name_list, iscsi_deploy) disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.is_block_device.return_value = True mock_gdi.return_value = '0x12345678' utils_calls_expected = [mock.call.discovery(self.address, self.port), mock.call.login_iscsi(self.address, self.port, self.iqn), mock.call.logout_iscsi(self.address, self.port, self.iqn), mock.call.delete_iscsi(self.address, self.port, self.iqn)] disk_utils_calls_expected = [mock.call.is_block_device(self.dev), mock.call.populate_image( self.image_path, self.dev, conv_flags='sparse')] uuid_dict_returned = iscsi_deploy.deploy_disk_image( self.address, self.port, self.iqn, self.lun, self.image_path, self.node_uuid, configdrive=None, conv_flags='sparse') self.assertEqual(utils_calls_expected, iscsi_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) 
self.assertFalse(create_config_drive_mock.called) self.assertEqual('0x12345678', uuid_dict_returned['disk identifier']) @mock.patch.object(utils, 'execute', autospec=True) def test_verify_iscsi_connection_raises(self, mock_exec): iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.abc', ''] self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.verify_iscsi_connection, iqn) self.assertEqual(3, mock_exec.call_count) @mock.patch.object(utils, 'execute', autospec=True) def test_verify_iscsi_connection_override_attempts(self, mock_exec): utils.CONF.set_override('verify_attempts', 2, group='iscsi') iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.abc', ''] self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.verify_iscsi_connection, iqn) self.assertEqual(2, mock_exec.call_count) @mock.patch.object(os.path, 'exists', autospec=True) def test_check_file_system_for_iscsi_device_raises(self, mock_os): iqn = 'iqn.xyz' ip = "127.0.0.1" port = "22" mock_os.return_value = False self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.check_file_system_for_iscsi_device, ip, port, iqn) self.assertEqual(3, mock_os.call_count) @mock.patch.object(os.path, 'exists', autospec=True) def test_check_file_system_for_iscsi_device(self, mock_os): iqn = 'iqn.xyz' ip = "127.0.0.1" port = "22" check_dir = "/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-1" % (ip, port, iqn) mock_os.return_value = True iscsi_deploy.check_file_system_for_iscsi_device(ip, port, iqn) mock_os.assert_called_once_with(check_dir) @mock.patch.object(utils, 'execute', autospec=True) def test_verify_iscsi_connection(self, mock_exec): iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.xyz', ''] iscsi_deploy.verify_iscsi_connection(iqn) mock_exec.assert_called_once_with( 'iscsiadm', '-m', 'node', '-S', run_as_root=True, check_exit_code=[0]) @mock.patch.object(utils, 'execute', autospec=True) def test_force_iscsi_lun_update(self, mock_exec): iqn = 'iqn.xyz' iscsi_deploy.force_iscsi_lun_update(iqn) 
mock_exec.assert_called_once_with( 'iscsiadm', '-m', 'node', '-T', iqn, '-R', run_as_root=True, check_exit_code=[0]) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(iscsi_deploy, 'verify_iscsi_connection', autospec=True) @mock.patch.object(iscsi_deploy, 'force_iscsi_lun_update', autospec=True) @mock.patch.object(iscsi_deploy, 'check_file_system_for_iscsi_device', autospec=True) def test_login_iscsi_calls_verify_and_update(self, mock_check_dev, mock_update, mock_verify, mock_exec): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.xyz', ''] iscsi_deploy.login_iscsi(address, port, iqn) mock_exec.assert_called_once_with( 'iscsiadm', '-m', 'node', '-p', '%s:%s' % (address, port), '-T', iqn, '--login', run_as_root=True, check_exit_code=[0], attempts=5, delay_on_retry=True) mock_verify.assert_called_once_with(iqn) mock_update.assert_called_once_with(iqn) mock_check_dev.assert_called_once_with(address, port, iqn) @mock.patch.object(iscsi_deploy, 'LOG', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(iscsi_deploy, 'verify_iscsi_connection', autospec=True) @mock.patch.object(iscsi_deploy, 'force_iscsi_lun_update', autospec=True) @mock.patch.object(iscsi_deploy, 'check_file_system_for_iscsi_device', autospec=True) @mock.patch.object(iscsi_deploy, 'delete_iscsi', autospec=True) @mock.patch.object(iscsi_deploy, 'logout_iscsi', autospec=True) def test_login_iscsi_calls_raises( self, mock_loiscsi, mock_discsi, mock_check_dev, mock_update, mock_verify, mock_exec, mock_log): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.xyz', ''] mock_check_dev.side_effect = exception.InstanceDeployFailure('boom') self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.login_iscsi, address, port, iqn) mock_verify.assert_called_once_with(iqn) mock_update.assert_called_once_with(iqn) mock_loiscsi.assert_called_once_with(address, port, iqn) 
mock_discsi.assert_called_once_with(address, port, iqn) self.assertIsInstance(mock_log.error.call_args[0][1], exception.InstanceDeployFailure) @mock.patch.object(iscsi_deploy, 'LOG', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(iscsi_deploy, 'verify_iscsi_connection', autospec=True) @mock.patch.object(iscsi_deploy, 'force_iscsi_lun_update', autospec=True) @mock.patch.object(iscsi_deploy, 'check_file_system_for_iscsi_device', autospec=True) @mock.patch.object(iscsi_deploy, 'delete_iscsi', autospec=True) @mock.patch.object(iscsi_deploy, 'logout_iscsi', autospec=True) def test_login_iscsi_calls_raises_during_cleanup( self, mock_loiscsi, mock_discsi, mock_check_dev, mock_update, mock_verify, mock_exec, mock_log): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.xyz', ''] mock_check_dev.side_effect = exception.InstanceDeployFailure('boom') mock_discsi.side_effect = processutils.ProcessExecutionError('boom') self.assertRaises(exception.InstanceDeployFailure, iscsi_deploy.login_iscsi, address, port, iqn) mock_verify.assert_called_once_with(iqn) mock_update.assert_called_once_with(iqn) mock_loiscsi.assert_called_once_with(address, port, iqn) mock_discsi.assert_called_once_with(address, port, iqn) self.assertIsInstance(mock_log.error.call_args[0][1], exception.InstanceDeployFailure) self.assertIsInstance(mock_log.warning.call_args[0][1], processutils.ProcessExecutionError) @mock.patch.object(disk_utils, 'is_block_device', lambda d: True) def test_always_logout_and_delete_iscsi(self): """Check if logout_iscsi() and delete_iscsi() are called. Make sure that logout_iscsi() and delete_iscsi() are called once login_iscsi() is invoked. 
""" address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 image_path = '/tmp/xyz/image' root_mb = 128 swap_mb = 64 ephemeral_mb = 256 ephemeral_format = 'exttest' node_uuid = "12345678-1234-1234-1234-1234567890abcxyz" class TestException(Exception): pass iscsi_name_list = ['discovery', 'login_iscsi', 'logout_iscsi', 'delete_iscsi'] disk_utils_name_list = ['get_image_mb', 'work_on_disk'] iscsi_mock = self._mock_calls(iscsi_name_list, iscsi_deploy) disk_utils_mock = self._mock_calls(disk_utils_name_list, disk_utils) disk_utils_mock.get_image_mb.return_value = 1 disk_utils_mock.work_on_disk.side_effect = TestException utils_calls_expected = [mock.call.discovery(address, port), mock.call.login_iscsi(address, port, iqn), mock.call.logout_iscsi(address, port, iqn), mock.call.delete_iscsi(address, port, iqn)] disk_utils_calls_expected = [mock.call.get_image_mb(image_path), mock.call.work_on_disk( self.dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format, image_path, node_uuid, configdrive=None, preserve_ephemeral=False, boot_option="local", boot_mode="bios", disk_label=None, cpu_arch="")] self.assertRaises(TestException, iscsi_deploy.deploy_partition_image, address, port, iqn, lun, image_path, root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid) self.assertEqual(utils_calls_expected, iscsi_mock.mock_calls) self.assertEqual(disk_utils_calls_expected, disk_utils_mock.mock_calls) @mock.patch.object(utils, 'execute', autospec=True) @mock.patch.object(iscsi_deploy, 'verify_iscsi_connection', autospec=True) @mock.patch.object(iscsi_deploy, 'force_iscsi_lun_update', autospec=True) @mock.patch.object(iscsi_deploy, 'check_file_system_for_iscsi_device', autospec=True) def test_ipv6_address_wrapped(self, mock_check_dev, mock_update, mock_verify, mock_exec): address = '2001:DB8::1111' port = 3306 iqn = 'iqn.xyz' mock_exec.return_value = ['iqn.xyz', ''] iscsi_deploy.login_iscsi(address, port, iqn) mock_exec.assert_called_once_with( 'iscsiadm', '-m', 'node', '-p', 
'[%s]:%s' % (address, port), '-T', iqn, '--login', run_as_root=True, check_exit_code=[0], attempts=5, delay_on_retry=True) @mock.patch.object(disk_utils, 'is_block_device', autospec=True) @mock.patch.object(iscsi_deploy, 'login_iscsi', lambda *_: None) @mock.patch.object(iscsi_deploy, 'discovery', lambda *_: None) @mock.patch.object(iscsi_deploy, 'logout_iscsi', lambda *_: None) @mock.patch.object(iscsi_deploy, 'delete_iscsi', lambda *_: None) class ISCSISetupAndHandleErrorsTestCase(tests_base.TestCase): def test_no_parent_device(self, mock_ibd): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 mock_ibd.return_value = False expected_dev = ("/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-%s" % (address, port, iqn, lun)) with testtools.ExpectedException(exception.InstanceDeployFailure): with iscsi_deploy._iscsi_setup_and_handle_errors( address, port, iqn, lun) as dev: self.assertEqual(expected_dev, dev) mock_ibd.assert_called_once_with(expected_dev) def test_parent_device_yield(self, mock_ibd): address = '127.0.0.1' port = 3306 iqn = 'iqn.xyz' lun = 1 expected_dev = ("/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-%s" % (address, port, iqn, lun)) mock_ibd.return_value = True with iscsi_deploy._iscsi_setup_and_handle_errors( address, port, iqn, lun) as dev: self.assertEqual(expected_dev, dev) mock_ibd.assert_called_once_with(expected_dev) ironic-15.0.0/ironic/tests/unit/drivers/modules/ansible/test_deploy.py # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json from ironic_lib import utils as irlib_utils import mock from oslo_concurrency import processutils from ironic.common import exception from ironic.common import states from ironic.common import utils as com_utils from ironic.conductor import steps from ironic.conductor import task_manager from ironic.conductor import utils from ironic.drivers.modules.ansible import deploy as ansible_deploy from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import fake from ironic.drivers.modules.network import flat as flat_network from ironic.drivers.modules import pxe from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as object_utils INSTANCE_INFO = { 'image_source': 'fake-image', 'image_url': 'http://image', 'image_checksum': 'checksum', 'image_disk_format': 'qcow2', 'root_mb': 5120, 'swap_mb': 0, 'ephemeral_mb': 0 } DRIVER_INFO = { 'deploy_kernel': 'glance://deploy_kernel_uuid', 'deploy_ramdisk': 'glance://deploy_ramdisk_uuid', 'ansible_username': 'test', 'ansible_key_file': '/path/key', 'ipmi_address': '127.0.0.1', } DRIVER_INTERNAL_INFO = { 'is_whole_disk_image': True, 'clean_steps': [] } class AnsibleDeployTestCaseBase(db_base.DbTestCase): def setUp(self): super(AnsibleDeployTestCaseBase, self).setUp() self.config(enabled_hardware_types=['manual-management'], enabled_deploy_interfaces=['ansible'], enabled_power_interfaces=['fake'], enabled_management_interfaces=['fake']) node = { 'driver': 'manual-management', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': 
DRIVER_INTERNAL_INFO, } self.node = object_utils.create_test_node(self.context, **node) class TestAnsibleMethods(AnsibleDeployTestCaseBase): def test__parse_ansible_driver_info(self): self.node.driver_info['ansible_deploy_playbook'] = 'spam.yaml' playbook, user, key = ansible_deploy._parse_ansible_driver_info( self.node, 'deploy') self.assertEqual('spam.yaml', playbook) self.assertEqual('test', user) self.assertEqual('/path/key', key) def test__parse_ansible_driver_info_defaults(self): self.node.driver_info.pop('ansible_username') self.node.driver_info.pop('ansible_key_file') self.config(group='ansible', default_username='spam', default_key_file='/ham/eggs', default_deploy_playbook='parrot.yaml') playbook, user, key = ansible_deploy._parse_ansible_driver_info( self.node, 'deploy') # testing absolute path to the playbook self.assertEqual('parrot.yaml', playbook) self.assertEqual('spam', user) self.assertEqual('/ham/eggs', key) def test__parse_ansible_driver_info_no_playbook(self): self.assertRaises(exception.IronicException, ansible_deploy._parse_ansible_driver_info, self.node, 'test') def test__get_node_ip(self): di_info = self.node.driver_internal_info di_info['agent_url'] = 'http://1.2.3.4:5678' self.node.driver_internal_info = di_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual('1.2.3.4', ansible_deploy._get_node_ip(task)) @mock.patch.object(com_utils, 'execute', return_value=('out', 'err'), autospec=True) def test__run_playbook(self, execute_mock): self.config(group='ansible', playbooks_path='/path/to/playbooks') self.config(group='ansible', config_file_path='/path/to/config') self.config(group='ansible', verbosity=3) self.config(group='ansible', ansible_extra_args='--timeout=100') extra_vars = {'foo': 'bar'} ansible_deploy._run_playbook(self.node, 'deploy', extra_vars, '/path/to/key', tags=['spam'], notags=['ham']) execute_mock.assert_called_once_with( 'env', 'ANSIBLE_CONFIG=/path/to/config', 
'ansible-playbook', '/path/to/playbooks/deploy', '-i', '/path/to/playbooks/inventory', '-e', '{"ironic": {"foo": "bar"}}', '--tags=spam', '--skip-tags=ham', '--private-key=/path/to/key', '-vvv', '--timeout=100') @mock.patch.object(com_utils, 'execute', return_value=('out', 'err'), autospec=True) def test__run_playbook_default_verbosity_nodebug(self, execute_mock): self.config(group='ansible', playbooks_path='/path/to/playbooks') self.config(group='ansible', config_file_path='/path/to/config') self.config(debug=False) extra_vars = {'foo': 'bar'} ansible_deploy._run_playbook(self.node, 'deploy', extra_vars, '/path/to/key') execute_mock.assert_called_once_with( 'env', 'ANSIBLE_CONFIG=/path/to/config', 'ansible-playbook', '/path/to/playbooks/deploy', '-i', '/path/to/playbooks/inventory', '-e', '{"ironic": {"foo": "bar"}}', '--private-key=/path/to/key') @mock.patch.object(com_utils, 'execute', return_value=('out', 'err'), autospec=True) def test__run_playbook_default_verbosity_debug(self, execute_mock): self.config(group='ansible', playbooks_path='/path/to/playbooks') self.config(group='ansible', config_file_path='/path/to/config') self.config(debug=True) extra_vars = {'foo': 'bar'} ansible_deploy._run_playbook(self.node, 'deploy', extra_vars, '/path/to/key') execute_mock.assert_called_once_with( 'env', 'ANSIBLE_CONFIG=/path/to/config', 'ansible-playbook', '/path/to/playbooks/deploy', '-i', '/path/to/playbooks/inventory', '-e', '{"ironic": {"foo": "bar"}}', '--private-key=/path/to/key', '-vvvv') @mock.patch.object(com_utils, 'execute', return_value=('out', 'err'), autospec=True) def test__run_playbook_ansible_interpreter_python3(self, execute_mock): self.config(group='ansible', playbooks_path='/path/to/playbooks') self.config(group='ansible', config_file_path='/path/to/config') self.config(group='ansible', verbosity=3) self.config(group='ansible', default_python_interpreter='/usr/bin/python3') self.config(group='ansible', ansible_extra_args='--timeout=100') extra_vars = 
{'foo': 'bar'} ansible_deploy._run_playbook(self.node, 'deploy', extra_vars, '/path/to/key', tags=['spam'], notags=['ham']) execute_mock.assert_called_once_with( 'env', 'ANSIBLE_CONFIG=/path/to/config', 'ansible-playbook', '/path/to/playbooks/deploy', '-i', '/path/to/playbooks/inventory', '-e', mock.ANY, '--tags=spam', '--skip-tags=ham', '--private-key=/path/to/key', '-vvv', '--timeout=100') all_vars = execute_mock.call_args[0][7] self.assertEqual({"ansible_python_interpreter": "/usr/bin/python3", "ironic": {"foo": "bar"}}, json.loads(all_vars)) @mock.patch.object(com_utils, 'execute', return_value=('out', 'err'), autospec=True) def test__run_playbook_ansible_interpreter_override(self, execute_mock): self.config(group='ansible', playbooks_path='/path/to/playbooks') self.config(group='ansible', config_file_path='/path/to/config') self.config(group='ansible', verbosity=3) self.config(group='ansible', default_python_interpreter='/usr/bin/python3') self.config(group='ansible', ansible_extra_args='--timeout=100') self.node.driver_info['ansible_python_interpreter'] = ( '/usr/bin/python4') extra_vars = {'foo': 'bar'} ansible_deploy._run_playbook(self.node, 'deploy', extra_vars, '/path/to/key', tags=['spam'], notags=['ham']) execute_mock.assert_called_once_with( 'env', 'ANSIBLE_CONFIG=/path/to/config', 'ansible-playbook', '/path/to/playbooks/deploy', '-i', '/path/to/playbooks/inventory', '-e', mock.ANY, '--tags=spam', '--skip-tags=ham', '--private-key=/path/to/key', '-vvv', '--timeout=100') all_vars = execute_mock.call_args[0][7] self.assertEqual({"ansible_python_interpreter": "/usr/bin/python4", "ironic": {"foo": "bar"}}, json.loads(all_vars)) @mock.patch.object(com_utils, 'execute', side_effect=processutils.ProcessExecutionError( description='VIKINGS!'), autospec=True) def test__run_playbook_fail(self, execute_mock): self.config(group='ansible', playbooks_path='/path/to/playbooks') self.config(group='ansible', config_file_path='/path/to/config') self.config(debug=False) 
extra_vars = {'foo': 'bar'} exc = self.assertRaises(exception.InstanceDeployFailure, ansible_deploy._run_playbook, self.node, 'deploy', extra_vars, '/path/to/key') self.assertIn('VIKINGS!', str(exc)) execute_mock.assert_called_once_with( 'env', 'ANSIBLE_CONFIG=/path/to/config', 'ansible-playbook', '/path/to/playbooks/deploy', '-i', '/path/to/playbooks/inventory', '-e', '{"ironic": {"foo": "bar"}}', '--private-key=/path/to/key') def test__parse_partitioning_info_root_msdos(self): expected_info = { 'partition_info': { 'label': 'msdos', 'partitions': { 'root': {'number': 1, 'part_start': '1MiB', 'part_end': '5121MiB', 'flags': ['boot']} }}} i_info = ansible_deploy._parse_partitioning_info(self.node) self.assertEqual(expected_info, i_info) def test__parse_partitioning_info_all_gpt(self): in_info = dict(INSTANCE_INFO) in_info['swap_mb'] = 128 in_info['ephemeral_mb'] = 256 in_info['ephemeral_format'] = 'ext4' in_info['preserve_ephemeral'] = True in_info['configdrive'] = 'some-fake-user-data' in_info['capabilities'] = {'disk_label': 'gpt'} self.node.instance_info = in_info self.node.save() expected_info = { 'partition_info': { 'label': 'gpt', 'ephemeral_format': 'ext4', 'preserve_ephemeral': 'yes', 'partitions': { 'bios': {'number': 1, 'name': 'bios', 'part_start': '1MiB', 'part_end': '2MiB', 'flags': ['bios_grub']}, 'ephemeral': {'number': 2, 'part_start': '2MiB', 'part_end': '258MiB', 'name': 'ephemeral'}, 'swap': {'number': 3, 'part_start': '258MiB', 'part_end': '386MiB', 'name': 'swap'}, 'configdrive': {'number': 4, 'part_start': '386MiB', 'part_end': '450MiB', 'name': 'configdrive'}, 'root': {'number': 5, 'part_start': '450MiB', 'part_end': '5570MiB', 'name': 'root'} }}} i_info = ansible_deploy._parse_partitioning_info(self.node) self.assertEqual(expected_info, i_info) @mock.patch.object(ansible_deploy.images, 'download_size', autospec=True) def test__calculate_memory_req(self, image_mock): self.config(group='ansible', extra_memory=1) image_mock.return_value = 
2000000 # < 2MiB with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(2, ansible_deploy._calculate_memory_req(task)) image_mock.assert_called_once_with(task.context, 'fake-image') def test__get_python_interpreter(self): self.config(group='ansible', default_python_interpreter='/usr/bin/python3') self.node.driver_info['ansible_python_interpreter'] = ( '/usr/bin/python4') python_interpreter = ansible_deploy._get_python_interpreter(self.node) self.assertEqual('/usr/bin/python4', python_interpreter) def test__get_configdrive_path(self): self.config(tempdir='/path/to/tmpdir') self.assertEqual('/path/to/tmpdir/spam.cndrive', ansible_deploy._get_configdrive_path('spam')) def test__prepare_extra_vars(self): host_list = [('fake-uuid', '1.2.3.4', 'spam', 'ham'), ('other-uuid', '5.6.7.8', 'eggs', 'vikings')] ansible_vars = {"foo": "bar"} self.assertEqual( {"nodes": [ {"name": "fake-uuid", "ip": '1.2.3.4', "user": "spam", "extra": "ham"}, {"name": "other-uuid", "ip": '5.6.7.8', "user": "eggs", "extra": "vikings"}], "foo": "bar"}, ansible_deploy._prepare_extra_vars(host_list, ansible_vars)) def test__parse_root_device_hints(self): hints = {"wwn": "fake wwn", "size": "12345", "rotational": True, "serial": "HELLO"} expected = {"wwn": "fake wwn", "size": 12345, "rotational": True, "serial": "hello"} props = self.node.properties props['root_device'] = hints self.node.properties = props self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual( expected, ansible_deploy._parse_root_device_hints(task.node)) def test__parse_root_device_hints_iinfo(self): hints = {"wwn": "fake wwn", "size": "12345", "rotational": True, "serial": "HELLO"} expected = {"wwn": "fake wwn", "size": 12345, "rotational": True, "serial": "hello"} iinfo = self.node.instance_info iinfo['root_device'] = hints self.node.instance_info = iinfo self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual( 
expected, ansible_deploy._parse_root_device_hints(task.node)) def test__parse_root_device_hints_override(self): hints = {"wwn": "fake wwn", "size": "12345", "rotational": True, "serial": "HELLO"} expected = {"wwn": "fake wwn", "size": 12345, "rotational": True, "serial": "hello"} props = self.node.properties props['root_device'] = {'size': 'no idea'} self.node.properties = props iinfo = self.node.instance_info iinfo['root_device'] = hints self.node.instance_info = iinfo self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual( expected, ansible_deploy._parse_root_device_hints(task.node)) def test__parse_root_device_hints_fail_advanced(self): hints = {"wwn": "s!= fake wwn", "size": ">= 12345", "name": " spam ham", "rotational": True} expected = {"wwn": "s!= fake%20wwn", "name": " spam ham", "size": ">= 12345"} props = self.node.properties props['root_device'] = hints self.node.properties = props self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: exc = self.assertRaises( exception.InvalidParameterValue, ansible_deploy._parse_root_device_hints, task.node) for key, value in expected.items(): self.assertIn(str(key), str(exc)) self.assertIn(str(value), str(exc)) def test__prepare_variables(self): i_info = self.node.instance_info i_info['image_mem_req'] = 3000 i_info['image_whatever'] = 'hello' self.node.instance_info = i_info self.node.save() expected = {"image": {"url": "http://image", "validate_certs": "yes", "source": "fake-image", "mem_req": 3000, "disk_format": "qcow2", "checksum": "md5:checksum", "whatever": "hello"}} with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected, ansible_deploy._prepare_variables(task)) def test__prepare_variables_root_device_hints(self): props = self.node.properties props['root_device'] = {"wwn": "fake-wwn"} self.node.properties = props self.node.save() expected = {"image": {"url": "http://image", "validate_certs": "yes", 
"source": "fake-image", "disk_format": "qcow2", "checksum": "md5:checksum"}, "root_device_hints": {"wwn": "fake-wwn"}} with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected, ansible_deploy._prepare_variables(task)) def test__prepare_variables_insecure_activated(self): self.config(image_store_insecure=True, group='ansible') i_info = self.node.instance_info i_info['image_checksum'] = 'sha256:checksum' self.node.instance_info = i_info self.node.save() expected = {"image": {"url": "http://image", "validate_certs": "no", "source": "fake-image", "disk_format": "qcow2", "checksum": "sha256:checksum"}} with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected, ansible_deploy._prepare_variables(task)) def test__prepare_variables_configdrive_url(self): i_info = self.node.instance_info i_info['configdrive'] = 'http://configdrive_url' self.node.instance_info = i_info self.node.save() expected = {"image": {"url": "http://image", "validate_certs": "yes", "source": "fake-image", "disk_format": "qcow2", "checksum": "md5:checksum"}, 'configdrive': {'type': 'url', 'location': 'http://configdrive_url'}} with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected, ansible_deploy._prepare_variables(task)) def test__prepare_variables_configdrive_file(self): i_info = self.node.instance_info i_info['configdrive'] = 'fake-content' self.node.instance_info = i_info self.node.save() configdrive_path = ('%(tempdir)s/%(node)s.cndrive' % {'tempdir': ansible_deploy.CONF.tempdir, 'node': self.node.uuid}) expected = {"image": {"url": "http://image", "validate_certs": "yes", "source": "fake-image", "disk_format": "qcow2", "checksum": "md5:checksum"}, 'configdrive': {'type': 'file', 'location': configdrive_path}} with mock.patch.object(ansible_deploy, 'open', mock.mock_open(), create=True) as open_mock: with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected, 
ansible_deploy._prepare_variables(task)) open_mock.assert_has_calls(( mock.call(configdrive_path, 'w'), mock.call().__enter__(), mock.call().write('fake-content'), mock.call().__exit__(None, None, None))) def test__validate_clean_steps(self): steps = [{"interface": "deploy", "name": "foo", "args": {"spam": {"required": True, "value": "ham"}}}, {"name": "bar", "interface": "deploy"}] self.assertIsNone(ansible_deploy._validate_clean_steps( steps, self.node.uuid)) def test__validate_clean_steps_missing(self): steps = [{"name": "foo", "interface": "deploy", "args": {"spam": {"value": "ham"}, "ham": {"required": True}}}, {"name": "bar"}, {"interface": "deploy"}] exc = self.assertRaises(exception.NodeCleaningFailure, ansible_deploy._validate_clean_steps, steps, self.node.uuid) self.assertIn("name foo, field ham.value", str(exc)) self.assertIn("name bar, field interface", str(exc)) self.assertIn("name undefined, field name", str(exc)) def test__validate_clean_steps_names_not_unique(self): steps = [{"name": "foo", "interface": "deploy"}, {"name": "foo", "interface": "deploy"}] exc = self.assertRaises(exception.NodeCleaningFailure, ansible_deploy._validate_clean_steps, steps, self.node.uuid) self.assertIn("unique names", str(exc)) @mock.patch.object(ansible_deploy.yaml, 'safe_load', autospec=True) def test__get_clean_steps(self, load_mock): steps = [{"interface": "deploy", "name": "foo", "args": {"spam": {"required": True, "value": "ham"}}}, {"name": "bar", "interface": "deploy", "priority": 100}] load_mock.return_value = steps expected = [{"interface": "deploy", "step": "foo", "priority": 10, "abortable": False, "argsinfo": {"spam": {"required": True}}, "args": {"spam": "ham"}}, {"interface": "deploy", "step": "bar", "priority": 100, "abortable": False, "argsinfo": {}, "args": {}}] d_info = self.node.driver_info d_info['ansible_clean_steps_config'] = 'custom_clean' self.node.driver_info = d_info self.node.save() self.config(group='ansible', 
playbooks_path='/path/to/playbooks') with mock.patch.object(ansible_deploy, 'open', mock.mock_open(), create=True) as open_mock: self.assertEqual( expected, ansible_deploy._get_clean_steps( self.node, interface="deploy", override_priorities={"foo": 10})) open_mock.assert_has_calls(( mock.call('/path/to/playbooks/custom_clean'),)) load_mock.assert_called_once_with( open_mock().__enter__.return_value) class TestAnsibleDeploy(AnsibleDeployTestCaseBase): def setUp(self): super(TestAnsibleDeploy, self).setUp() self.driver = ansible_deploy.AnsibleDeploy() def test_get_properties(self): self.assertEqual( set(list(ansible_deploy.COMMON_PROPERTIES) + ['deploy_forces_oob_reboot']), set(self.driver.get_properties())) @mock.patch.object(deploy_utils, 'check_for_missing_params', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate(self, pxe_boot_validate_mock, check_params_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.validate(task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) check_params_mock.assert_called_once_with( {'instance_info.image_source': INSTANCE_INFO['image_source']}, mock.ANY) @mock.patch.object(deploy_utils, 'get_boot_option', return_value='netboot', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_not_iwdi_netboot(self, pxe_boot_validate_mock, get_boot_mock): driver_internal_info = dict(DRIVER_INTERNAL_INFO) driver_internal_info['is_whole_disk_image'] = False self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.assertRaises(exception.InvalidParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) get_boot_mock.assert_called_once_with(task.node) @mock.patch.object(ansible_deploy, '_calculate_memory_req', autospec=True, return_value=2000) 
@mock.patch.object(utils, 'node_power_action', autospec=True) def test_deploy(self, power_mock, mem_req_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.deploy(task) self.assertEqual(driver_return, states.DEPLOYWAIT) power_mock.assert_called_once_with(task, states.REBOOT) mem_req_mock.assert_called_once_with(task) i_info = task.node.instance_info self.assertEqual(i_info['image_mem_req'], 2000) @mock.patch.object(utils, 'node_power_action', autospec=True) def test_tear_down(self, power_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.tear_down(task) power_mock.assert_called_once_with(task, states.POWER_OFF) self.assertEqual(driver_return, states.DELETED) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) @mock.patch('ironic.drivers.modules.deploy_utils.build_agent_options', return_value={'op1': 'test1'}, autospec=True) @mock.patch('ironic.drivers.modules.deploy_utils.' 
'build_instance_info_for_deploy', return_value={'test': 'test'}, autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') def test_prepare(self, pxe_prepare_ramdisk_mock, build_instance_info_mock, build_options_mock, power_action_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING with mock.patch.object(task.driver.network, 'add_provisioning_network', autospec=True) as net_mock: self.driver.prepare(task) net_mock.assert_called_once_with(task) power_action_mock.assert_called_once_with(task, states.POWER_OFF) build_instance_info_mock.assert_called_once_with(task) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'op1': 'test1'}) self.node.refresh() self.assertEqual('test', self.node.instance_info['test']) @mock.patch.object(ansible_deploy, '_get_configdrive_path', return_value='/path/test', autospec=True) @mock.patch.object(irlib_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk') def test_clean_up(self, pxe_clean_up_mock, unlink_mock, get_cfdrive_path_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.clean_up(task) pxe_clean_up_mock.assert_called_once_with(task) get_cfdrive_path_mock.assert_called_once_with(self.node['uuid']) unlink_mock.assert_called_once_with('/path/test') @mock.patch.object(ansible_deploy, '_get_clean_steps', autospec=True) def test_get_clean_steps(self, get_clean_steps_mock): mock_steps = [{'priority': 10, 'interface': 'deploy', 'step': 'erase_devices'}, {'priority': 99, 'interface': 'deploy', 'step': 'erase_devices_metadata'}, ] get_clean_steps_mock.return_value = mock_steps with task_manager.acquire(self.context, self.node.uuid) as task: steps = self.driver.get_clean_steps(task) get_clean_steps_mock.assert_called_once_with( task.node, interface='deploy', override_priorities={ 'erase_devices': 
None, 'erase_devices_metadata': None}) self.assertEqual(mock_steps, steps) @mock.patch.object(ansible_deploy, '_get_clean_steps', autospec=True) def test_get_clean_steps_priority(self, mock_get_clean_steps): self.config(erase_devices_priority=9, group='deploy') self.config(erase_devices_metadata_priority=98, group='deploy') mock_steps = [{'priority': 9, 'interface': 'deploy', 'step': 'erase_devices'}, {'priority': 98, 'interface': 'deploy', 'step': 'erase_devices_metadata'}, ] mock_get_clean_steps.return_value = mock_steps with task_manager.acquire(self.context, self.node.uuid) as task: steps = self.driver.get_clean_steps(task) mock_get_clean_steps.assert_called_once_with( task.node, interface='deploy', override_priorities={'erase_devices': 9, 'erase_devices_metadata': 98}) self.assertEqual(mock_steps, steps) @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True) @mock.patch.object(ansible_deploy, '_prepare_extra_vars', autospec=True) @mock.patch.object(ansible_deploy, '_parse_ansible_driver_info', return_value=('test_pl', 'test_u', 'test_k'), autospec=True) def test_execute_clean_step(self, parse_driver_info_mock, prepare_extra_mock, run_playbook_mock): step = {'priority': 10, 'interface': 'deploy', 'step': 'erase_devices', 'args': {'tags': ['clean']}} ironic_nodes = { 'ironic_nodes': [(self.node['uuid'], '127.0.0.1', 'test_u', {})]} prepare_extra_mock.return_value = ironic_nodes di_info = self.node.driver_internal_info di_info['agent_url'] = 'http://127.0.0.1' self.node.driver_internal_info = di_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.driver.execute_clean_step(task, step) parse_driver_info_mock.assert_called_once_with( task.node, action='clean') prepare_extra_mock.assert_called_once_with( ironic_nodes['ironic_nodes']) run_playbook_mock.assert_called_once_with( task.node, 'test_pl', ironic_nodes, 'test_k', tags=['clean']) @mock.patch.object(ansible_deploy, '_parse_ansible_driver_info', 
                       return_value=('test_pl', 'test_u', 'test_k'),
                       autospec=True)
    @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True)
    @mock.patch.object(ansible_deploy, 'LOG', autospec=True)
    def test_execute_clean_step_no_success_log(
            self, log_mock, run_mock, parse_driver_info_mock):
        run_mock.side_effect = exception.InstanceDeployFailure('Boom')
        step = {'priority': 10, 'interface': 'deploy',
                'step': 'erase_devices', 'args': {'tags': ['clean']}}
        di_info = self.node.driver_internal_info
        di_info['agent_url'] = 'http://127.0.0.1'
        self.node.driver_internal_info = di_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InstanceDeployFailure,
                              self.driver.execute_clean_step,
                              task, step)
        self.assertFalse(log_mock.info.called)

    @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True)
    @mock.patch.object(steps, 'set_node_cleaning_steps', autospec=True)
    @mock.patch.object(utils, 'node_power_action', autospec=True)
    @mock.patch('ironic.drivers.modules.deploy_utils.build_agent_options',
                return_value={'op1': 'test1'}, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk')
    def test_prepare_cleaning(
            self, prepare_ramdisk_mock, build_options_mock,
            power_action_mock, set_node_cleaning_steps, run_playbook_mock):
        step = {'priority': 10, 'interface': 'deploy',
                'step': 'erase_devices', 'tags': ['clean']}
        driver_internal_info = dict(DRIVER_INTERNAL_INFO)
        driver_internal_info['clean_steps'] = [step]
        self.node.driver_internal_info = driver_internal_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.network.add_cleaning_network = mock.Mock()
            state = self.driver.prepare_cleaning(task)
            set_node_cleaning_steps.assert_called_once_with(task)
            task.driver.network.add_cleaning_network.assert_called_once_with(
                task)
            build_options_mock.assert_called_once_with(task.node)
            prepare_ramdisk_mock.assert_called_once_with(
                task, {'op1': 'test1'})
power_action_mock.assert_called_once_with(task, states.REBOOT) self.assertFalse(run_playbook_mock.called) self.assertEqual(states.CLEANWAIT, state) @mock.patch.object(steps, 'set_node_cleaning_steps', autospec=True) def test_prepare_cleaning_callback_no_steps(self, set_node_cleaning_steps): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.network.add_cleaning_network = mock.Mock() self.driver.prepare_cleaning(task) set_node_cleaning_steps.assert_called_once_with(task) self.assertFalse(task.driver.network.add_cleaning_network.called) @mock.patch.object(utils, 'node_power_action', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk') def test_tear_down_cleaning(self, clean_ramdisk_mock, power_action_mock): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.network.remove_cleaning_network = mock.Mock() self.driver.tear_down_cleaning(task) power_action_mock.assert_called_once_with(task, states.POWER_OFF) clean_ramdisk_mock.assert_called_once_with(task) (task.driver.network.remove_cleaning_network .assert_called_once_with(task)) @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True) @mock.patch.object(ansible_deploy, '_prepare_extra_vars', autospec=True) @mock.patch.object(ansible_deploy, '_parse_ansible_driver_info', return_value=('test_pl', 'test_u', 'test_k'), autospec=True) @mock.patch.object(ansible_deploy, '_parse_partitioning_info', autospec=True) @mock.patch.object(ansible_deploy, '_prepare_variables', autospec=True) def test__ansible_deploy(self, prepare_vars_mock, parse_part_info_mock, parse_dr_info_mock, prepare_extra_mock, run_playbook_mock): ironic_nodes = { 'ironic_nodes': [(self.node['uuid'], '127.0.0.1', 'test_u')]} prepare_extra_mock.return_value = ironic_nodes _vars = { 'url': 'image_url', 'checksum': 'aa'} prepare_vars_mock.return_value = _vars driver_internal_info = dict(DRIVER_INTERNAL_INFO) driver_internal_info['is_whole_disk_image'] = False 
self.node.driver_internal_info = driver_internal_info self.node.extra = {'ham': 'spam'} self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.driver._ansible_deploy(task, '127.0.0.1') prepare_vars_mock.assert_called_once_with(task) parse_part_info_mock.assert_called_once_with(task.node) parse_dr_info_mock.assert_called_once_with(task.node) prepare_extra_mock.assert_called_once_with( [(self.node['uuid'], '127.0.0.1', 'test_u', {'ham': 'spam'})], variables=_vars) run_playbook_mock.assert_called_once_with( task.node, 'test_pl', ironic_nodes, 'test_k') @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True) @mock.patch.object(ansible_deploy, '_prepare_extra_vars', autospec=True) @mock.patch.object(ansible_deploy, '_parse_ansible_driver_info', return_value=('test_pl', 'test_u', 'test_k'), autospec=True) @mock.patch.object(ansible_deploy, '_parse_partitioning_info', autospec=True) @mock.patch.object(ansible_deploy, '_prepare_variables', autospec=True) def test__ansible_deploy_iwdi(self, prepare_vars_mock, parse_part_info_mock, parse_dr_info_mock, prepare_extra_mock, run_playbook_mock): ironic_nodes = { 'ironic_nodes': [(self.node['uuid'], '127.0.0.1', 'test_u')]} prepare_extra_mock.return_value = ironic_nodes _vars = { 'url': 'image_url', 'checksum': 'aa'} prepare_vars_mock.return_value = _vars driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = True instance_info = self.node.instance_info del instance_info['root_mb'] self.node.driver_internal_info = driver_internal_info self.node.instance_info = instance_info self.node.extra = {'ham': 'spam'} self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.driver._ansible_deploy(task, '127.0.0.1') prepare_vars_mock.assert_called_once_with(task) self.assertFalse(parse_part_info_mock.called) parse_dr_info_mock.assert_called_once_with(task.node) prepare_extra_mock.assert_called_once_with( [(self.node['uuid'], 
'127.0.0.1', 'test_u', {'ham': 'spam'})], variables=_vars) run_playbook_mock.assert_called_once_with( task.node, 'test_pl', ironic_nodes, 'test_k') @mock.patch.object(utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', return_value=states.POWER_OFF) @mock.patch.object(utils, 'node_power_action', autospec=True) def test_reboot_and_finish_deploy_force_reboot( self, power_action_mock, get_pow_state_mock, power_on_node_if_needed_mock): d_info = self.node.driver_info d_info['deploy_forces_oob_reboot'] = True self.node.driver_info = d_info self.node.save() self.config(group='ansible', post_deploy_get_power_state_retry_interval=0) self.node.provision_state = states.DEPLOYING self.node.save() power_on_node_if_needed_mock.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: with mock.patch.object(task.driver, 'network') as net_mock: self.driver.reboot_and_finish_deploy(task) net_mock.remove_provisioning_network.assert_called_once_with( task) net_mock.configure_tenant_networks.assert_called_once_with( task) expected_power_calls = [((task, states.POWER_OFF),), ((task, states.POWER_ON),)] self.assertEqual(expected_power_calls, power_action_mock.call_args_list) get_pow_state_mock.assert_not_called() @mock.patch.object(utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True) @mock.patch.object(utils, 'node_power_action', autospec=True) def test_reboot_and_finish_deploy_soft_poweroff_retry( self, power_action_mock, run_playbook_mock, power_on_node_if_needed_mock): self.config(group='ansible', post_deploy_get_power_state_retry_interval=0) self.config(group='ansible', post_deploy_get_power_state_retries=1) self.node.provision_state = states.DEPLOYING di_info = self.node.driver_internal_info di_info['agent_url'] = 'http://127.0.0.1' self.node.driver_internal_info = di_info self.node.save() power_on_node_if_needed_mock.return_value = None with 
task_manager.acquire(self.context, self.node.uuid) as task: with mock.patch.object(task.driver, 'network') as net_mock: with mock.patch.object(task.driver.power, 'get_power_state', return_value=states.POWER_ON) as p_mock: self.driver.reboot_and_finish_deploy(task) p_mock.assert_called_with(task) self.assertEqual(2, len(p_mock.mock_calls)) net_mock.remove_provisioning_network.assert_called_once_with( task) net_mock.configure_tenant_networks.assert_called_once_with( task) power_action_mock.assert_has_calls( [mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) expected_power_calls = [((task, states.POWER_OFF),), ((task, states.POWER_ON),)] self.assertEqual(expected_power_calls, power_action_mock.call_args_list) run_playbook_mock.assert_called_once_with( task.node, 'shutdown.yaml', mock.ANY, mock.ANY) @mock.patch.object(ansible_deploy, '_get_node_ip', autospec=True, return_value='1.2.3.4') def test_continue_deploy(self, getip_mock): self.node.provision_state = states.DEPLOYWAIT self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: with mock.patch.multiple(self.driver, autospec=True, _ansible_deploy=mock.DEFAULT, reboot_to_instance=mock.DEFAULT): self.driver.continue_deploy(task) getip_mock.assert_called_once_with(task) self.driver._ansible_deploy.assert_called_once_with( task, '1.2.3.4') self.driver.reboot_to_instance.assert_called_once_with(task) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertEqual(states.DEPLOYING, task.node.provision_state) @mock.patch.object(utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(utils, 'node_set_boot_device', autospec=True) def test_reboot_to_instance(self, bootdev_mock, resume_mock): self.node.provision_state = states.DEPLOYING self.node.deploy_step = { 'step': 'deploy', 'priority': 100, 'interface': 'deploy'} self.node.save() with task_manager.acquire(self.context, self.node.uuid) as 
task: with mock.patch.object(self.driver, 'reboot_and_finish_deploy', autospec=True): task.driver.boot = mock.Mock() self.driver.reboot_to_instance(task) bootdev_mock.assert_called_once_with(task, 'disk', persistent=True) resume_mock.assert_called_once_with(task) self.driver.reboot_and_finish_deploy.assert_called_once_with( task) task.driver.boot.clean_up_ramdisk.assert_called_once_with( task) @mock.patch.object(utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(utils, 'power_on_node_if_needed') @mock.patch.object(utils, 'node_power_action', autospec=True) def test_tear_down_with_smartnic_port( self, power_mock, power_on_node_if_needed_mock, restore_power_state_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: power_on_node_if_needed_mock.return_value = states.POWER_OFF driver_return = self.driver.tear_down(task) power_mock.assert_called_once_with(task, states.POWER_OFF) self.assertEqual(driver_return, states.DELETED) power_on_node_if_needed_mock.assert_called_once_with(task) restore_power_state_mock.assert_called_once_with( task, states.POWER_OFF) @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network', autospec=True) @mock.patch.object(utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') def test_prepare_with_smartnic_port( self, pxe_prepare_ramdisk_mock, build_instance_info_mock, build_options_mock, power_action_mock, power_on_node_if_needed_mock, restore_power_state_mock, net_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_instance_info_mock.return_value = 
{'test': 'test'} build_options_mock.return_value = {'op1': 'test1'} power_on_node_if_needed_mock.return_value = states.POWER_OFF self.driver.prepare(task) power_action_mock.assert_called_once_with( task, states.POWER_OFF) build_instance_info_mock.assert_called_once_with(task) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'op1': 'test1'}) power_on_node_if_needed_mock.assert_called_once_with(task) restore_power_state_mock.assert_called_once_with( task, states.POWER_OFF) self.node.refresh() self.assertEqual('test', self.node.instance_info['test']) @mock.patch.object(utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True) @mock.patch.object(steps, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(utils, 'node_power_action', autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') def test_prepare_cleaning_with_smartnic_port( self, prepare_ramdisk_mock, build_options_mock, power_action_mock, set_node_cleaning_steps, run_playbook_mock, power_on_node_if_needed_mock, restore_power_state_mock): step = {'priority': 10, 'interface': 'deploy', 'step': 'erase_devices', 'tags': ['clean']} driver_internal_info = dict(DRIVER_INTERNAL_INFO) driver_internal_info['clean_steps'] = [step] self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.network.add_cleaning_network = mock.Mock() build_options_mock.return_value = {'op1': 'test1'} power_on_node_if_needed_mock.return_value = states.POWER_OFF state = self.driver.prepare_cleaning(task) set_node_cleaning_steps.assert_called_once_with(task) task.driver.network.add_cleaning_network.assert_called_once_with( task) 
build_options_mock.assert_called_once_with(task.node) prepare_ramdisk_mock.assert_called_once_with( task, {'op1': 'test1'}) power_action_mock.assert_called_once_with(task, states.REBOOT) self.assertFalse(run_playbook_mock.called) self.assertEqual(states.CLEANWAIT, state) power_on_node_if_needed_mock.assert_called_once_with(task) restore_power_state_mock.assert_called_once_with( task, states.POWER_OFF) @mock.patch.object(utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(utils, 'node_power_action', autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk') def test_tear_down_cleaning_with_smartnic_port( self, clean_ramdisk_mock, power_action_mock, power_on_node_if_needed_mock, restore_power_state_mock): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.network.remove_cleaning_network = mock.Mock() power_on_node_if_needed_mock.return_value = states.POWER_OFF self.driver.tear_down_cleaning(task) power_action_mock.assert_called_once_with(task, states.POWER_OFF) clean_ramdisk_mock.assert_called_once_with(task) (task.driver.network.remove_cleaning_network .assert_called_once_with(task)) power_on_node_if_needed_mock.assert_called_once_with(task) restore_power_state_mock.assert_called_once_with( task, states.POWER_OFF) @mock.patch.object(flat_network.FlatNetwork, 'remove_provisioning_network', autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'configure_tenant_networks', autospec=True) @mock.patch.object(utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', return_value=states.POWER_OFF) @mock.patch.object(utils, 'node_power_action', autospec=True) def test_reboot_and_finish_deploy_with_smartnic_port( self, power_action_mock, get_pow_state_mock, power_on_node_if_needed_mock, restore_power_state_mock, 
            configure_tenant_networks_mock,
            remove_provisioning_network_mock):
        d_info = self.node.driver_info
        d_info['deploy_forces_oob_reboot'] = True
        self.node.driver_info = d_info
        self.node.save()
        self.config(group='ansible',
                    post_deploy_get_power_state_retry_interval=0)
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        power_on_node_if_needed_mock.return_value = states.POWER_OFF
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.driver.reboot_and_finish_deploy(task)
            expected_power_calls = [((task, states.POWER_OFF),),
                                    ((task, states.POWER_ON),)]
            self.assertEqual(expected_power_calls,
                             power_action_mock.call_args_list)
            power_on_node_if_needed_mock.assert_called_once_with(task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)
        get_pow_state_mock.assert_not_called()

ironic-15.0.0/ironic/tests/unit/drivers/modules/ansible/__init__.py

ironic-15.0.0/ironic/tests/unit/drivers/modules/test_agent.py

# Copyright 2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import types

import mock
from oslo_config import cfg

from ironic.common import dhcp_factory
from ironic.common import exception
from ironic.common import image_service
from ironic.common import images
from ironic.common import raid
from ironic.common import states
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers import base as drivers_base
from ironic.drivers.modules import agent
from ironic.drivers.modules import agent_base
from ironic.drivers.modules import agent_client
from ironic.drivers.modules import boot_mode_utils
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules import fake
from ironic.drivers.modules.network import flat as flat_network
from ironic.drivers.modules.network import neutron as neutron_network
from ironic.drivers.modules import pxe
from ironic.drivers.modules.storage import noop as noop_storage
from ironic.drivers import utils as driver_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as object_utils


INSTANCE_INFO = db_utils.get_test_agent_instance_info()
DRIVER_INFO = db_utils.get_test_agent_driver_info()
DRIVER_INTERNAL_INFO = db_utils.get_test_agent_driver_internal_info()

CONF = cfg.CONF


class TestAgentMethods(db_base.DbTestCase):
    def setUp(self):
        super(TestAgentMethods, self).setUp()
        self.node = object_utils.create_test_node(self.context,
                                                  boot_interface='pxe',
                                                  deploy_interface='direct')
        dhcp_factory.DHCPFactory._dhcp_provider = None

    @mock.patch.object(images, 'image_show', autospec=True)
    def test_check_image_size(self, show_mock):
        show_mock.return_value = {
            'size': 10 * 1024 * 1024,
            'disk_format': 'qcow2',
        }
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.properties['memory_mb'] = 10
            agent.check_image_size(task, 'fake-image')
            show_mock.assert_called_once_with(self.context, 'fake-image')
@mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_without_memory_mb(self, show_mock): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties.pop('memory_mb', None) agent.check_image_size(task, 'fake-image') self.assertFalse(show_mock.called) @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_fail(self, show_mock): show_mock.return_value = { 'size': 11 * 1024 * 1024, 'disk_format': 'qcow2', } with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 self.assertRaises(exception.InvalidParameterValue, agent.check_image_size, task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_fail_by_agent_consumed_memory(self, show_mock): self.config(memory_consumed_by_agent=2, group='agent') show_mock.return_value = { 'size': 9 * 1024 * 1024, 'disk_format': 'qcow2', } with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 self.assertRaises(exception.InvalidParameterValue, agent.check_image_size, task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_raw_stream_enabled(self, show_mock): CONF.set_override('stream_raw_images', True, 'agent') # Image is bigger than memory but it's raw and will be streamed # so the test should pass show_mock.return_value = { 'size': 15 * 1024 * 1024, 'disk_format': 'raw', } with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 agent.check_image_size(task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(images, 'image_show', autospec=True) def test_check_image_size_raw_stream_disabled(self, show_mock): 
CONF.set_override('stream_raw_images', False, 'agent') show_mock.return_value = { 'size': 15 * 1024 * 1024, 'disk_format': 'raw', } with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.node.properties['memory_mb'] = 10 # Image is raw but stream is disabled, so test should fail since # the image is bigger than the RAM size self.assertRaises(exception.InvalidParameterValue, agent.check_image_size, task, 'fake-image') show_mock.assert_called_once_with(self.context, 'fake-image') @mock.patch.object(deploy_utils, 'check_for_missing_params') def test_validate_http_provisioning_not_glance(self, utils_mock): agent.validate_http_provisioning_configuration(self.node) utils_mock.assert_not_called() @mock.patch.object(deploy_utils, 'check_for_missing_params') def test_validate_http_provisioning_not_http(self, utils_mock): i_info = self.node.instance_info i_info['image_source'] = '0448fa34-4db1-407b-a051-6357d5f86c59' self.node.instance_info = i_info agent.validate_http_provisioning_configuration(self.node) utils_mock.assert_not_called() def test_validate_http_provisioning_missing_args(self): CONF.set_override('image_download_source', 'http', group='agent') CONF.set_override('http_url', None, group='deploy') i_info = self.node.instance_info i_info['image_source'] = '0448fa34-4db1-407b-a051-6357d5f86c59' self.node.instance_info = i_info self.assertRaisesRegex(exception.MissingParameterValue, 'failed to validate http provisoning', agent.validate_http_provisioning_configuration, self.node) class TestAgentDeploy(db_base.DbTestCase): def setUp(self): super(TestAgentDeploy, self).setUp() self.driver = agent.AgentDeploy() # NOTE(TheJulia): We explicitly set the noop storage interface as the # default below for deployment tests in order to raise any change # in the default which could be a breaking behavior change # as the storage interface is explicitly an "opt-in" interface. 
n = { 'boot_interface': 'pxe', 'deploy_interface': 'direct', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': DRIVER_INTERNAL_INFO, 'storage_interface': 'noop', 'network_interface': 'noop' } self.node = object_utils.create_test_node(self.context, **n) self.ports = [ object_utils.create_test_port(self.context, node_id=self.node.id)] dhcp_factory.DHCPFactory._dhcp_provider = None def test_get_properties(self): expected = agent.COMMON_PROPERTIES self.assertEqual(expected, self.driver.get_properties()) @mock.patch.object(agent, 'validate_http_provisioning_configuration', autospec=True) @mock.patch.object(deploy_utils, 'validate_capabilities', spec_set=True, autospec=True) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate(self, pxe_boot_validate_mock, show_mock, validate_capability_mock, validate_http_mock): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.validate(task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'fake-image') validate_capability_mock.assert_called_once_with(task.node) validate_http_mock.assert_called_once_with(task.node) @mock.patch.object(agent, 'validate_http_provisioning_configuration', autospec=True) @mock.patch.object(deploy_utils, 'validate_capabilities', spec_set=True, autospec=True) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_driver_info_manage_agent_boot_false( self, pxe_boot_validate_mock, show_mock, validate_capability_mock, validate_http_mock): self.config(manage_agent_boot=False, group='agent') self.node.driver_info = {} self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.validate(task) self.assertFalse(pxe_boot_validate_mock.called) 
show_mock.assert_called_once_with(self.context, 'fake-image') validate_capability_mock.assert_called_once_with(task.node) validate_http_mock.assert_called_once_with(task.node) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_instance_info_missing_params( self, pxe_boot_validate_mock): self.node.instance_info = {} self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: e = self.assertRaises(exception.MissingParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) self.assertIn('instance_info.image_source', str(e)) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_nonglance_image_no_checksum( self, pxe_boot_validate_mock): i_info = self.node.instance_info i_info['image_source'] = 'http://image-ref' del i_info['image_checksum'] self.node.instance_info = i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_nonglance_image_no_checksum_os_algo( self, pxe_boot_validate_mock): i_info = self.node.instance_info i_info['image_source'] = 'http://image-ref' i_info['image_os_hash_value'] = 'az' del i_info['image_checksum'] self.node.instance_info = i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_nonglance_image_no_os_image_hash( self, pxe_boot_validate_mock): i_info = self.node.instance_info i_info['image_source'] = 'http://image-ref' 
i_info['image_os_hash_algo'] = 'magicalgo' self.node.instance_info = i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_nonglance_image_no_os_algo( self, pxe_boot_validate_mock): i_info = self.node.instance_info i_info['image_source'] = 'http://image-ref' i_info['image_os_hash_value'] = 'az' self.node.instance_info = i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.MissingParameterValue, self.driver.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_nonglance_image_no_os_checksum( self, pxe_boot_validate_mock, show_mock): i_info = self.node.instance_info i_info['image_source'] = 'http://image-ref' del i_info['image_checksum'] i_info['image_os_hash_algo'] = 'whacky-algo-1' i_info['image_os_hash_value'] = '1234567890' self.node.instance_info = i_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.driver.validate(task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'http://image-ref') @mock.patch.object(agent, 'validate_http_provisioning_configuration', autospec=True) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_invalid_root_device_hints( self, pxe_boot_validate_mock, show_mock, validate_http_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.properties['root_device'] = {'size': 'not-int'} 
self.assertRaises(exception.InvalidParameterValue, task.driver.deploy.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'fake-image') validate_http_mock.assert_called_once_with(task.node) @mock.patch.object(agent, 'validate_http_provisioning_configuration', autospec=True) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_invalid_root_device_hints_iinfo( self, pxe_boot_validate_mock, show_mock, validate_http_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.properties['root_device'] = {'size': 42} task.node.instance_info['root_device'] = {'size': 'not-int'} self.assertRaises(exception.InvalidParameterValue, task.driver.deploy.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'fake-image') validate_http_mock.assert_called_once_with(task.node) @mock.patch.object(agent, 'validate_http_provisioning_configuration', autospec=True) @mock.patch.object(images, 'image_show', autospec=True) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) def test_validate_invalid_proxies(self, pxe_boot_validate_mock, show_mock, validate_http_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_info.update({ 'image_https_proxy': 'git://spam.ni', 'image_http_proxy': 'http://spam.ni', 'image_no_proxy': '1' * 500}) self.assertRaisesRegex(exception.InvalidParameterValue, 'image_https_proxy.*image_no_proxy', task.driver.deploy.validate, task) pxe_boot_validate_mock.assert_called_once_with( task.driver.boot, task) show_mock.assert_called_once_with(self.context, 'fake-image') validate_http_mock.assert_called_once_with(task.node) @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True) @mock.patch.object(deploy_utils, 'check_for_missing_params', 
autospec=True) @mock.patch.object(deploy_utils, 'validate_capabilities', autospec=True) @mock.patch.object(noop_storage.NoopStorage, 'should_write_image', autospec=True) def test_validate_storage_should_write_image_false(self, mock_write, mock_capabilities, mock_params, mock_pxe_validate): mock_write.return_value = False with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.driver.validate(task) mock_capabilities.assert_called_once_with(task.node) self.assertFalse(mock_params.called) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) def test_deploy(self, power_mock, mock_pxe_instance): with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.deploy(task) self.assertEqual(driver_return, states.DEPLOYWAIT) power_mock.assert_called_once_with(task, states.REBOOT) self.assertFalse(mock_pxe_instance.called) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) def test_deploy_with_deployment_reboot(self, power_mock, mock_pxe_instance): driver_internal_info = self.node.driver_internal_info driver_internal_info['deployment_reboot'] = True self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.deploy(task) self.assertEqual(driver_return, states.DEPLOYWAIT) self.assertFalse(power_mock.called) self.assertFalse(mock_pxe_instance.called) self.assertNotIn( 'deployment_reboot', task.node.driver_internal_info) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(noop_storage.NoopStorage, 'should_write_image', autospec=True) def test_deploy_storage_should_write_image_false(self, mock_write, mock_pxe_instance): mock_write.return_value = False self.node.provision_state = 
states.DEPLOYING self.node.deploy_step = { 'step': 'deploy', 'priority': 50, 'interface': 'deploy'} self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.deploy(task) self.assertIsNone(driver_return) self.assertTrue(mock_pxe_instance.called) @mock.patch.object(agent_client.AgentClient, 'prepare_image', autospec=True) @mock.patch('ironic.conductor.utils.is_fast_track', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) def test_deploy_fast_track(self, power_mock, mock_pxe_instance, mock_is_fast_track, prepare_image_mock): mock_is_fast_track.return_value = True self.node.target_provision_state = states.ACTIVE self.node.provision_state = states.DEPLOYING test_temp_url = 'http://image' expected_image_info = { 'urls': [test_temp_url], 'id': 'fake-image', 'node_uuid': self.node.uuid, 'checksum': 'checksum', 'disk_format': 'qcow2', 'container_format': 'bare', 'stream_raw_images': CONF.agent.stream_raw_images, } self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.driver.deploy(task) self.assertFalse(power_mock.called) self.assertFalse(mock_pxe_instance.called) task.node.refresh() prepare_image_mock.assert_called_with(mock.ANY, task.node, expected_image_info) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(noop_storage.NoopStorage, 'detach_volumes', autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'remove_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch('ironic.conductor.utils.node_power_action', autospec=True) def test_tear_down(self, power_mock, unconfigure_tenant_nets_mock, remove_provisioning_net_mock, 
storage_detach_volumes_mock): object_utils.create_test_volume_target( self.context, node_id=self.node.id) node = self.node node.network_interface = 'flat' node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: driver_return = self.driver.tear_down(task) power_mock.assert_called_once_with(task, states.POWER_OFF) self.assertEqual(driver_return, states.DELETED) unconfigure_tenant_nets_mock.assert_called_once_with(mock.ANY, task) remove_provisioning_net_mock.assert_called_once_with(mock.ANY, task) storage_detach_volumes_mock.assert_called_once_with( task.driver.storage, task) # Verify no volumes exist for new task instances. with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.assertEqual(0, len(task.volume_targets)) @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes', autospec=True) @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info') @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy') @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'validate', spec_set=True, autospec=True) def test_prepare( self, validate_net_mock, unconfigure_tenant_net_mock, add_provisioning_net_mock, build_instance_info_mock, build_options_mock, pxe_prepare_ramdisk_mock, storage_driver_info_mock, storage_attach_volumes_mock): node = self.node node.network_interface = 'flat' node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_instance_info_mock.return_value = {'foo': 'bar'} build_options_mock.return_value = {'a': 'b'} self.driver.prepare(task) 
storage_driver_info_mock.assert_called_once_with(task) validate_net_mock.assert_called_once_with(mock.ANY, task) add_provisioning_net_mock.assert_called_once_with(mock.ANY, task) unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY, task) storage_attach_volumes_mock.assert_called_once_with( task.driver.storage, task) build_instance_info_mock.assert_called_once_with(task) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'a': 'b'}) self.node.refresh() self.assertEqual('bar', self.node.instance_info['foo']) @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes', autospec=True) @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info') @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy') @mock.patch.object(neutron_network.NeutronNetwork, 'add_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'validate', spec_set=True, autospec=True) def test_prepare_with_neutron_net( self, validate_net_mock, unconfigure_tenant_net_mock, add_provisioning_net_mock, build_instance_info_mock, build_options_mock, pxe_prepare_ramdisk_mock, storage_driver_info_mock, storage_attach_volumes_mock): node = self.node node.network_interface = 'neutron' node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_instance_info_mock.return_value = {'foo': 'bar'} build_options_mock.return_value = {'a': 'b'} self.driver.prepare(task) storage_driver_info_mock.assert_called_once_with(task) validate_net_mock.assert_called_once_with(mock.ANY, task) add_provisioning_net_mock.assert_called_once_with(mock.ANY, task) 
unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY, task) storage_attach_volumes_mock.assert_called_once_with( task.driver.storage, task) build_instance_info_mock.assert_called_once_with(task) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'a': 'b'}) self.node.refresh() self.assertEqual('bar', self.node.instance_info['foo']) @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes', autospec=True) @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info') @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'add_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'validate', spec_set=True, autospec=True) def test_prepare_with_neutron_net_capabilities_as_string( self, validate_net_mock, unconfigure_tenant_net_mock, add_provisioning_net_mock, validate_href_mock, build_options_mock, pxe_prepare_ramdisk_mock, storage_driver_info_mock, storage_attach_volumes_mock): node = self.node node.network_interface = 'neutron' instance_info = node.instance_info instance_info['capabilities'] = '{"lion": "roar"}' node.instance_info = instance_info node.save() validate_net_mock.side_effect = [ exception.InvalidParameterValue('invalid'), None] with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_options_mock.return_value = {'a': 'b'} self.driver.prepare(task) storage_driver_info_mock.assert_called_once_with(task) self.assertEqual(2, validate_net_mock.call_count) add_provisioning_net_mock.assert_called_once_with(mock.ANY, task) 
unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY, task) storage_attach_volumes_mock.assert_called_once_with( task.driver.storage, task) validate_href_mock.assert_called_once_with(mock.ANY, 'fake-image', secret=False) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'a': 'b'}) self.node.refresh() capabilities = self.node.instance_info['capabilities'] self.assertEqual('local', capabilities['boot_option']) self.assertEqual('roar', capabilities['lion']) @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes', autospec=True) @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info') @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'add_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'validate', spec_set=True, autospec=True) def test_prepare_with_neutron_net_exc_no_capabilities( self, validate_net_mock, unconfigure_tenant_net_mock, add_provisioning_net_mock, validate_href_mock, build_options_mock, pxe_prepare_ramdisk_mock, storage_driver_info_mock, storage_attach_volumes_mock): node = self.node node.network_interface = 'neutron' node.save() validate_net_mock.side_effect = [ exception.InvalidParameterValue('invalid'), None] with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_options_mock.return_value = {'a': 'b'} self.driver.prepare(task) storage_driver_info_mock.assert_called_once_with(task) self.assertEqual(2, validate_net_mock.call_count) add_provisioning_net_mock.assert_called_once_with(mock.ANY, task) 
unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY, task) storage_attach_volumes_mock.assert_called_once_with( task.driver.storage, task) validate_href_mock.assert_called_once_with(mock.ANY, 'fake-image', secret=False) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'a': 'b'}) self.node.refresh() capabilities = self.node.instance_info['capabilities'] self.assertEqual('local', capabilities['boot_option']) @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes', autospec=True) @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info') @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'add_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'validate', spec_set=True, autospec=True) def test_prepare_with_neutron_net_exc_no_capabilities_overwrite( self, validate_net_mock, unconfigure_tenant_net_mock, add_provisioning_net_mock, validate_href_mock, build_options_mock, pxe_prepare_ramdisk_mock, storage_driver_info_mock, storage_attach_volumes_mock): node = self.node node.network_interface = 'neutron' instance_info = node.instance_info instance_info['capabilities'] = {"cat": "meow"} node.instance_info = instance_info node.save() validate_net_mock.side_effect = [ exception.InvalidParameterValue('invalid'), None] with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_options_mock.return_value = {'a': 'b'} self.driver.prepare(task) storage_driver_info_mock.assert_called_once_with(task) self.assertEqual(2, validate_net_mock.call_count) 
add_provisioning_net_mock.assert_called_once_with(mock.ANY, task) unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY, task) storage_attach_volumes_mock.assert_called_once_with( task.driver.storage, task) validate_href_mock.assert_called_once_with(mock.ANY, 'fake-image', secret=False) build_options_mock.assert_called_once_with(task.node) pxe_prepare_ramdisk_mock.assert_called_once_with( task, {'a': 'b'}) self.node.refresh() capabilities = self.node.instance_info['capabilities'] self.assertEqual('local', capabilities['boot_option']) self.assertEqual('meow', capabilities['cat']) @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes', autospec=True) @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info') @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy') @mock.patch.object(neutron_network.NeutronNetwork, 'add_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(neutron_network.NeutronNetwork, 'validate', spec_set=True, autospec=True) def test_prepare_with_neutron_net_exc_reraise( self, validate_net_mock, unconfigure_tenant_net_mock, add_provisioning_net_mock, build_instance_info_mock, build_options_mock, pxe_prepare_ramdisk_mock, storage_driver_info_mock, storage_attach_volumes_mock): node = self.node node.network_interface = 'neutron' instance_info = node.instance_info instance_info['capabilities'] = {"boot_option": "netboot"} node.instance_info = instance_info node.save() validate_net_mock.side_effect = ( exception.InvalidParameterValue('invalid')) with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING self.assertRaises(exception.InvalidParameterValue, task.driver.deploy.prepare, task) 
storage_driver_info_mock.assert_called_once_with(task) validate_net_mock.assert_called_once_with(mock.ANY, task) self.assertFalse(add_provisioning_net_mock.called) self.assertFalse(unconfigure_tenant_net_mock.called) self.assertFalse(storage_attach_volumes_mock.called) self.assertFalse(build_instance_info_mock.called) self.assertFalse(build_options_mock.called) self.assertFalse(pxe_prepare_ramdisk_mock.called) self.node.refresh() capabilities = self.node.instance_info['capabilities'] self.assertEqual('netboot', capabilities['boot_option']) @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'validate', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy') def test_prepare_manage_agent_boot_false( self, build_instance_info_mock, build_options_mock, pxe_prepare_ramdisk_mock, validate_net_mock, add_provisioning_net_mock): self.config(group='agent', manage_agent_boot=False) node = self.node node.network_interface = 'flat' node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYING build_instance_info_mock.return_value = {'foo': 'bar'} self.driver.prepare(task) validate_net_mock.assert_called_once_with(mock.ANY, task) build_instance_info_mock.assert_called_once_with(task) add_provisioning_net_mock.assert_called_once_with(mock.ANY, task) self.assertFalse(build_options_mock.called) self.assertFalse(pxe_prepare_ramdisk_mock.called) self.node.refresh() self.assertEqual('bar', self.node.instance_info['foo']) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk') @mock.patch.object(deploy_utils, 'build_agent_options') @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy') def _test_prepare_rescue_states( self, build_instance_info_mock, 
                                    build_options_mock,
                                    pxe_prepare_ramdisk_mock, prov_state):
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = prov_state
            build_options_mock.return_value = {'a': 'b'}
            self.driver.prepare(task)
            self.assertFalse(build_instance_info_mock.called)
            build_options_mock.assert_called_once_with(task.node)
            pxe_prepare_ramdisk_mock.assert_called_once_with(
                task, {'a': 'b'})

    def test_prepare_rescue_states(self):
        for state in (states.RESCUING, states.RESCUEWAIT,
                      states.RESCUE, states.RESCUEFAIL):
            self._test_prepare_rescue_states(prov_state=state)

    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info')
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk',
                       spec_set=True, autospec=True)
    @mock.patch.object(deploy_utils, 'build_agent_options',
                       spec_set=True, autospec=True)
    @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy',
                       spec_set=True, autospec=True)
    def _test_prepare_conductor_takeover(
            self, build_instance_info_mock, build_options_mock,
            pxe_prepare_ramdisk_mock, pxe_prepare_instance_mock,
            add_provisioning_net_mock, storage_driver_info_mock,
            storage_attach_volumes_mock, prov_state):
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = prov_state

            self.driver.prepare(task)

            self.assertFalse(build_instance_info_mock.called)
            self.assertFalse(build_options_mock.called)
            self.assertFalse(pxe_prepare_ramdisk_mock.called)
            self.assertTrue(pxe_prepare_instance_mock.called)
            self.assertFalse(add_provisioning_net_mock.called)
            self.assertTrue(storage_driver_info_mock.called)
            self.assertFalse(storage_attach_volumes_mock.called)

    def test_prepare_active_and_unrescue_states(self):
        for prov_state in (states.ACTIVE, states.UNRESCUING):
            self._test_prepare_conductor_takeover(
                prov_state=prov_state)

    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'validate',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', autospec=True)
    @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True)
    @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy',
                       autospec=True)
    def test_prepare_storage_write_false(
            self, build_instance_info_mock, build_options_mock,
            pxe_prepare_ramdisk_mock, pxe_prepare_instance_mock,
            validate_net_mock, remove_tenant_net_mock,
            add_provisioning_net_mock, storage_driver_info_mock,
            storage_attach_volumes_mock, should_write_image_mock):
        should_write_image_mock.return_value = False
        node = self.node
        node.network_interface = 'flat'
        node.save()
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = states.DEPLOYING

            self.driver.prepare(task)

            validate_net_mock.assert_called_once_with(mock.ANY, task)
            self.assertFalse(build_instance_info_mock.called)
            self.assertFalse(build_options_mock.called)
            self.assertFalse(pxe_prepare_ramdisk_mock.called)
            self.assertFalse(pxe_prepare_instance_mock.called)
            self.assertFalse(add_provisioning_net_mock.called)
            self.assertTrue(storage_driver_info_mock.called)
            self.assertTrue(storage_attach_volumes_mock.called)
            self.assertEqual(2, should_write_image_mock.call_count)

    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk')
    @mock.patch.object(deploy_utils, 'build_agent_options')
    @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy')
    def test_prepare_adopting(
            self, build_instance_info_mock, build_options_mock,
            pxe_prepare_ramdisk_mock, add_provisioning_net_mock):
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = states.ADOPTING

            self.driver.prepare(task)

            self.assertFalse(build_instance_info_mock.called)
            self.assertFalse(build_options_mock.called)
            self.assertFalse(pxe_prepare_ramdisk_mock.called)
            self.assertFalse(add_provisioning_net_mock.called)

    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'validate',
                       spec_set=True, autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk')
    @mock.patch.object(deploy_utils, 'build_agent_options')
    @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy')
    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    def test_prepare_boot_from_volume(self, mock_write,
                                      build_instance_info_mock,
                                      build_options_mock,
                                      pxe_prepare_ramdisk_mock,
                                      validate_net_mock,
                                      add_provisioning_net_mock):
        mock_write.return_value = False
        node = self.node
        node.network_interface = 'flat'
        node.save()
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = states.DEPLOYING
            build_instance_info_mock.return_value = {'foo': 'bar'}
            build_options_mock.return_value = {'a': 'b'}

            self.driver.prepare(task)

            validate_net_mock.assert_called_once_with(mock.ANY, task)
            build_instance_info_mock.assert_not_called()
            build_options_mock.assert_not_called()
            pxe_prepare_ramdisk_mock.assert_not_called()

    @mock.patch('ironic.conductor.utils.is_fast_track', autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info')
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk')
    @mock.patch.object(deploy_utils, 'build_agent_options')
    @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy')
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'validate',
                       spec_set=True, autospec=True)
    def test_prepare_fast_track(
            self, validate_net_mock,
            unconfigure_tenant_net_mock, add_provisioning_net_mock,
            build_instance_info_mock, build_options_mock,
            pxe_prepare_ramdisk_mock, storage_driver_info_mock,
            storage_attach_volumes_mock, is_fast_track_mock):
        # TODO(TheJulia): We should revisit this test. Smartnic
        # support didn't wire in tightly on testing for power in
        # these tests, and largely fast_track impacts power operations.
        node = self.node
        node.network_interface = 'flat'
        node.save()
        is_fast_track_mock.return_value = True
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = states.DEPLOYING
            build_options_mock.return_value = {'a': 'b'}
            self.driver.prepare(task)
            storage_driver_info_mock.assert_called_once_with(task)
            validate_net_mock.assert_called_once_with(mock.ANY, task)
            add_provisioning_net_mock.assert_called_once_with(mock.ANY, task)
            unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY,
                                                                task)
            self.assertTrue(storage_attach_volumes_mock.called)
            self.assertTrue(build_instance_info_mock.called)
            # TODO(TheJulia): We should likely consider executing the
            # next two methods at some point in order to facilitate
            # continuity. While not explicitly required for this feature
            # to work, reboots as part of deployment would need the ramdisk
            # present and ready.
            self.assertFalse(build_options_mock.called)
            self.assertFalse(pxe_prepare_ramdisk_mock.called)

    @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider')
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp')
    @mock.patch.object(pxe.PXEBoot, 'clean_up_instance')
    @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk')
    def test_clean_up(self, pxe_clean_up_ramdisk_mock,
                      pxe_clean_up_instance_mock, clean_dhcp_mock,
                      set_dhcp_provider_mock):
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            self.driver.clean_up(task)
            pxe_clean_up_ramdisk_mock.assert_called_once_with(task)
            pxe_clean_up_instance_mock.assert_called_once_with(task)
            set_dhcp_provider_mock.assert_called_once_with()
            clean_dhcp_mock.assert_called_once_with(task)

    @mock.patch('ironic.common.dhcp_factory.DHCPFactory._set_dhcp_provider')
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.clean_dhcp')
    @mock.patch.object(pxe.PXEBoot, 'clean_up_instance')
    @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk')
    def test_clean_up_manage_agent_boot_false(self,
                                              pxe_clean_up_ramdisk_mock,
                                              pxe_clean_up_instance_mock,
                                              clean_dhcp_mock,
                                              set_dhcp_provider_mock):
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            self.config(group='agent', manage_agent_boot=False)
            self.driver.clean_up(task)
            self.assertFalse(pxe_clean_up_ramdisk_mock.called)
            pxe_clean_up_instance_mock.assert_called_once_with(task)
            set_dhcp_provider_mock.assert_called_once_with()
            clean_dhcp_mock.assert_called_once_with(task)

    @mock.patch.object(agent_base, 'get_steps', autospec=True)
    def test_get_clean_steps(self, mock_get_steps):
        # Test getting clean steps
        mock_steps = [{'priority': 10, 'interface': 'deploy',
                       'step': 'erase_devices'}]
        mock_get_steps.return_value = mock_steps
        with task_manager.acquire(self.context, self.node.uuid) as task:
            steps = self.driver.get_clean_steps(task)
        mock_get_steps.assert_called_once_with(
            task, 'clean', interface='deploy',
            override_priorities={'erase_devices': None,
                                 'erase_devices_metadata': None})
        self.assertEqual(mock_steps, steps)

    @mock.patch.object(agent_base, 'get_steps', autospec=True)
    def test_get_clean_steps_config_priority(self, mock_get_steps):
        # Test that we can override the priority of get clean steps
        # Use 0 because it is an edge case (false-y) and used in devstack
        self.config(erase_devices_priority=0, group='deploy')
        self.config(erase_devices_metadata_priority=0, group='deploy')
        mock_steps = [{'priority': 10, 'interface': 'deploy',
                       'step': 'erase_devices'}]
        mock_get_steps.return_value = mock_steps
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.driver.get_clean_steps(task)
        mock_get_steps.assert_called_once_with(
            task, 'clean', interface='deploy',
            override_priorities={'erase_devices': 0,
                                 'erase_devices_metadata': 0})

    @mock.patch.object(deploy_utils, 'prepare_inband_cleaning',
                       autospec=True)
    def test_prepare_cleaning(self, prepare_inband_cleaning_mock):
        prepare_inband_cleaning_mock.return_value = states.CLEANWAIT
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertEqual(
                states.CLEANWAIT, self.driver.prepare_cleaning(task))
        prepare_inband_cleaning_mock.assert_called_once_with(
            task, manage_boot=True)

    @mock.patch.object(deploy_utils, 'prepare_inband_cleaning',
                       autospec=True)
    def test_prepare_cleaning_manage_agent_boot_false(
            self, prepare_inband_cleaning_mock):
        prepare_inband_cleaning_mock.return_value = states.CLEANWAIT
        self.config(group='agent', manage_agent_boot=False)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertEqual(
                states.CLEANWAIT, self.driver.prepare_cleaning(task))
        prepare_inband_cleaning_mock.assert_called_once_with(
            task, manage_boot=False)

    @mock.patch.object(deploy_utils, 'tear_down_inband_cleaning',
                       autospec=True)
    def test_tear_down_cleaning(self, tear_down_cleaning_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.driver.tear_down_cleaning(task)
        tear_down_cleaning_mock.assert_called_once_with(
            task, manage_boot=True)

    @mock.patch.object(deploy_utils, 'tear_down_inband_cleaning',
                       autospec=True)
    def test_tear_down_cleaning_manage_agent_boot_false(
            self, tear_down_cleaning_mock):
        self.config(group='agent', manage_agent_boot=False)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.driver.tear_down_cleaning(task)
        tear_down_cleaning_mock.assert_called_once_with(
            task, manage_boot=False)

    def _test_continue_deploy(self, additional_driver_info=None,
                              additional_expected_image_info=None):
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        driver_info = self.node.driver_info
        driver_info.update(additional_driver_info or {})
        self.node.driver_info = driver_info
        self.node.save()
        test_temp_url = 'http://image'
        expected_image_info = {
            'urls': [test_temp_url],
            'id': 'fake-image',
            'node_uuid': self.node.uuid,
            'checksum': 'checksum',
            'disk_format': 'qcow2',
            'container_format': 'bare',
            'stream_raw_images': CONF.agent.stream_raw_images,
        }
        expected_image_info.update(additional_expected_image_info or {})

        client_mock = mock.MagicMock(spec_set=['prepare_image'])

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.deploy._client = client_mock
            task.driver.deploy.continue_deploy(task)

            client_mock.prepare_image.assert_called_with(
                task.node, expected_image_info)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)

    def test_continue_deploy(self):
        self._test_continue_deploy()

    def test_continue_deploy_with_proxies(self):
        self._test_continue_deploy(
            additional_driver_info={'image_https_proxy': 'https://spam.ni',
                                    'image_http_proxy': 'spam.ni',
                                    'image_no_proxy': '.eggs.com'},
            additional_expected_image_info={
                'proxies': {'https': 'https://spam.ni',
                            'http': 'spam.ni'},
                'no_proxy': '.eggs.com'}
        )

    def test_continue_deploy_with_no_proxy_without_proxies(self):
        self._test_continue_deploy(
            additional_driver_info={'image_no_proxy': '.eggs.com'}
        )

    def test_continue_deploy_image_source_is_url(self):
        instance_info = self.node.instance_info
        instance_info['image_source'] = 'http://example.com/woof.img'
        self.node.instance_info = instance_info
        self._test_continue_deploy(
            additional_expected_image_info={
                'id': 'woof.img'
            }
        )

    def test_continue_deploy_partition_image(self):
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        i_info = self.node.instance_info
        i_info['kernel'] = 'kernel'
        i_info['ramdisk'] = 'ramdisk'
        i_info['root_gb'] = 10
        i_info['swap_mb'] = 10
        i_info['ephemeral_mb'] = 0
        i_info['ephemeral_format'] = 'abc'
        i_info['configdrive'] = 'configdrive'
        i_info['preserve_ephemeral'] = False
        i_info['image_type'] = 'partition'
        i_info['root_mb'] = 10240
        i_info['deploy_boot_mode'] = 'bios'
        i_info['capabilities'] = {"boot_option": "local",
                                  "disk_label": "msdos"}
        self.node.instance_info = i_info
        driver_internal_info = self.node.driver_internal_info
        driver_internal_info['is_whole_disk_image'] = False
        self.node.driver_internal_info = driver_internal_info
        self.node.save()

        test_temp_url = 'http://image'
        expected_image_info = {
            'urls': [test_temp_url],
            'id': 'fake-image',
            'node_uuid': self.node.uuid,
            'checksum': 'checksum',
            'disk_format': 'qcow2',
            'container_format': 'bare',
            'stream_raw_images': True,
            'kernel': 'kernel',
            'ramdisk': 'ramdisk',
            'root_gb': 10,
            'swap_mb': 10,
            'ephemeral_mb': 0,
            'ephemeral_format': 'abc',
            'configdrive': 'configdrive',
            'preserve_ephemeral': False,
            'image_type': 'partition',
            'root_mb': 10240,
            'boot_option': 'local',
            'deploy_boot_mode': 'bios',
            'disk_label': 'msdos'
        }

        client_mock = mock.MagicMock(spec_set=['prepare_image'])

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.deploy._client = client_mock
            task.driver.deploy.continue_deploy(task)

            client_mock.prepare_image.assert_called_with(
                task.node, expected_image_info)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)

    @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'remove_http_instance_symlink',
                       autospec=True)
    @mock.patch.object(agent.LOG, 'warning', spec_set=True, autospec=True)
    @mock.patch.object(agent.AgentDeployMixin, '_get_uuid_from_result',
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state',
                       spec=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'power_off',
                       spec=types.FunctionType)
    @mock.patch.object(agent.AgentDeployMixin, 'prepare_instance_to_boot',
                       autospec=True)
    @mock.patch('ironic.drivers.modules.agent.AgentDeployMixin'
                '.check_deploy_success', autospec=True)
    def test_reboot_to_instance(self, check_deploy_mock,
                                prepare_instance_mock, power_off_mock,
                                get_power_state_mock,
                                node_power_action_mock, uuid_mock,
                                log_mock, remove_symlink_mock,
                                power_on_node_if_needed_mock, resume_mock):
        self.config(manage_agent_boot=True, group='agent')
        self.config(image_download_source='http', group='agent')
        check_deploy_mock.return_value = None
        uuid_mock.return_value = None
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            get_power_state_mock.return_value = states.POWER_OFF
            power_on_node_if_needed_mock.return_value = None
            task.node.driver_internal_info['is_whole_disk_image'] = True
            task.driver.deploy.reboot_to_instance(task)

            check_deploy_mock.assert_called_once_with(mock.ANY, task.node)
            uuid_mock.assert_called_once_with(mock.ANY, task, 'root_uuid')
            self.assertNotIn('root_uuid_or_disk_id',
                             task.node.driver_internal_info)
            self.assertTrue(log_mock.called)
            self.assertIn("Ironic Python Agent version 3.1.0 and beyond",
                          log_mock.call_args[0][0])
            prepare_instance_mock.assert_called_once_with(mock.ANY, task,
                                                          None, None, None)
            power_off_mock.assert_called_once_with(task.node)
            get_power_state_mock.assert_called_once_with(task)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_ON)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)
            self.assertTrue(remove_symlink_mock.called)
            resume_mock.assert_called_once_with(task)

    @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(agent.LOG, 'warning', spec_set=True, autospec=True)
    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(agent.AgentDeployMixin, '_get_uuid_from_result',
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state',
                       spec=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'power_off',
                       spec=types.FunctionType)
    @mock.patch.object(agent.AgentDeployMixin, 'prepare_instance_to_boot',
                       autospec=True)
    @mock.patch('ironic.drivers.modules.agent.AgentDeployMixin'
                '.check_deploy_success', autospec=True)
    def test_reboot_to_instance_no_manage_agent_boot(
            self, check_deploy_mock, prepare_instance_mock,
            power_off_mock, get_power_state_mock,
            node_power_action_mock, uuid_mock, bootdev_mock,
            log_mock, power_on_node_if_needed_mock, resume_mock):
        self.config(manage_agent_boot=False, group='agent')
        check_deploy_mock.return_value = None
        uuid_mock.return_value = None
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            power_on_node_if_needed_mock.return_value = None
            get_power_state_mock.return_value = states.POWER_OFF
            task.node.driver_internal_info['is_whole_disk_image'] = True
            task.driver.deploy.reboot_to_instance(task)

            check_deploy_mock.assert_called_once_with(mock.ANY, task.node)
            uuid_mock.assert_called_once_with(mock.ANY, task, 'root_uuid')
            self.assertNotIn('root_uuid_or_disk_id',
                             task.node.driver_internal_info)
            self.assertFalse(log_mock.called)
            self.assertFalse(prepare_instance_mock.called)
            bootdev_mock.assert_called_once_with(task, 'disk',
                                                 persistent=True)
            power_off_mock.assert_called_once_with(task.node)
            get_power_state_mock.assert_called_once_with(task)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_ON)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)
            resume_mock.assert_called_once_with(task)

    @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(agent.LOG, 'warning', spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       autospec=True)
    @mock.patch.object(agent.AgentDeployMixin, '_get_uuid_from_result',
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state',
                       spec=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'power_off',
                       spec=types.FunctionType)
    @mock.patch.object(agent.AgentDeployMixin, 'prepare_instance_to_boot',
                       autospec=True)
    @mock.patch('ironic.drivers.modules.agent.AgentDeployMixin'
                '.check_deploy_success', autospec=True)
    def test_reboot_to_instance_partition_image(
            self, check_deploy_mock, prepare_instance_mock,
            power_off_mock, get_power_state_mock,
            node_power_action_mock, uuid_mock, boot_mode_mock,
            log_mock, power_on_node_if_needed_mock, resume_mock):
        check_deploy_mock.return_value = None
        self.node.instance_info = {
            'capabilities': {'boot_option': 'netboot'}}
        uuid_mock.return_value = 'root_uuid'
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        boot_mode_mock.return_value = 'bios'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            power_on_node_if_needed_mock.return_value = None
            get_power_state_mock.return_value = states.POWER_OFF
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['is_whole_disk_image'] = False
            task.node.driver_internal_info = driver_internal_info
            task.driver.deploy.reboot_to_instance(task)

            check_deploy_mock.assert_called_once_with(mock.ANY, task.node)
            uuid_mock.assert_called_once_with(mock.ANY, task, 'root_uuid')
            driver_int_info = task.node.driver_internal_info
            self.assertEqual('root_uuid',
                             driver_int_info['root_uuid_or_disk_id'])
            boot_mode_mock.assert_called_once_with(task.node)
            self.assertFalse(log_mock.called)
            prepare_instance_mock.assert_called_once_with(mock.ANY, task,
                                                          'root_uuid',
                                                          None, None)
            power_off_mock.assert_called_once_with(task.node)
            get_power_state_mock.assert_called_once_with(task)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_ON)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)
            resume_mock.assert_called_once_with(task)

    @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(agent.LOG, 'warning', spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       autospec=True)
    @mock.patch.object(agent.AgentDeployMixin, '_get_uuid_from_result',
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state',
                       spec=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'power_off',
                       spec=types.FunctionType)
    @mock.patch.object(agent.AgentDeployMixin, 'prepare_instance_to_boot',
                       autospec=True)
    @mock.patch('ironic.drivers.modules.agent.AgentDeployMixin'
                '.check_deploy_success', autospec=True)
    def test_reboot_to_instance_partition_localboot_ppc64(
            self, check_deploy_mock, prepare_instance_mock,
            power_off_mock, get_power_state_mock,
            node_power_action_mock, uuid_mock, boot_mode_mock,
            log_mock, power_on_node_if_needed_mock, resume_mock):
        check_deploy_mock.return_value = None
        uuid_mock.side_effect = ['root_uuid', 'prep_boot_part_uuid']
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            power_on_node_if_needed_mock.return_value = None
            get_power_state_mock.return_value = states.POWER_OFF
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['is_whole_disk_image'] = False
            task.node.driver_internal_info = driver_internal_info
            boot_option = {'capabilities': '{"boot_option": "local"}'}
            task.node.instance_info = boot_option
            properties = task.node.properties
            properties.update(cpu_arch='ppc64le')
            task.node.properties = properties
            boot_mode_mock.return_value = 'bios'
            task.driver.deploy.reboot_to_instance(task)

            check_deploy_mock.assert_called_once_with(mock.ANY, task.node)
            driver_int_info = task.node.driver_internal_info
            self.assertEqual('root_uuid',
                             driver_int_info['root_uuid_or_disk_id'])
            uuid_mock_calls = [
                mock.call(mock.ANY, task, 'root_uuid'),
                mock.call(mock.ANY, task, 'PReP_Boot_partition_uuid')]
            uuid_mock.assert_has_calls(uuid_mock_calls)
            boot_mode_mock.assert_called_once_with(task.node)
            self.assertFalse(log_mock.called)
            prepare_instance_mock.assert_called_once_with(
                mock.ANY, task, 'root_uuid', None, 'prep_boot_part_uuid')
            power_off_mock.assert_called_once_with(task.node)
            get_power_state_mock.assert_called_once_with(task)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_ON)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)

    @mock.patch.object(agent.LOG, 'warning', spec_set=True, autospec=True)
    @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True)
    @mock.patch.object(agent.AgentDeployMixin, '_get_uuid_from_result',
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state',
                       spec=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'power_off',
                       spec=types.FunctionType)
    @mock.patch.object(agent.AgentDeployMixin, 'prepare_instance_to_boot',
                       autospec=True)
    @mock.patch('ironic.drivers.modules.agent.AgentDeployMixin'
                '.check_deploy_success', autospec=True)
    def test_reboot_to_instance_boot_error(
            self, check_deploy_mock, prepare_instance_mock,
            power_off_mock, get_power_state_mock,
            node_power_action_mock, uuid_mock,
            collect_ramdisk_logs_mock, log_mock):
        check_deploy_mock.return_value = "Error"
        uuid_mock.return_value = None
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            get_power_state_mock.return_value = states.POWER_OFF
            task.node.driver_internal_info['is_whole_disk_image'] = True
            task.driver.deploy.reboot_to_instance(task)

            check_deploy_mock.assert_called_once_with(mock.ANY, task.node)
            self.assertFalse(prepare_instance_mock.called)
            self.assertFalse(log_mock.called)
            self.assertFalse(power_off_mock.called)
            collect_ramdisk_logs_mock.assert_called_once_with(task.node)
            self.assertEqual(states.DEPLOYFAIL, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)

    @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(agent.LOG, 'warning', spec_set=True, autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       autospec=True)
    @mock.patch.object(agent.AgentDeployMixin, '_get_uuid_from_result',
                       autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state',
                       spec=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'power_off',
                       spec=types.FunctionType)
    @mock.patch.object(agent.AgentDeployMixin, 'prepare_instance_to_boot',
                       autospec=True)
    @mock.patch('ironic.drivers.modules.agent.AgentDeployMixin'
                '.check_deploy_success', autospec=True)
    def test_reboot_to_instance_localboot(self, check_deploy_mock,
                                          prepare_instance_mock,
                                          power_off_mock,
                                          get_power_state_mock,
                                          node_power_action_mock,
                                          uuid_mock, boot_mode_mock,
                                          log_mock,
                                          power_on_node_if_needed_mock,
                                          resume_mock):
        check_deploy_mock.return_value = None
        uuid_mock.side_effect = ['root_uuid', 'efi_uuid']
        self.node.provision_state = states.DEPLOYWAIT
        self.node.target_provision_state = states.ACTIVE
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            power_on_node_if_needed_mock.return_value = None
            get_power_state_mock.return_value = states.POWER_OFF
            driver_internal_info = task.node.driver_internal_info
            driver_internal_info['is_whole_disk_image'] = False
            task.node.driver_internal_info = driver_internal_info
            boot_option = {'capabilities': '{"boot_option": "local"}'}
            task.node.instance_info = boot_option
            boot_mode_mock.return_value = 'uefi'
            task.driver.deploy.reboot_to_instance(task)

            check_deploy_mock.assert_called_once_with(mock.ANY, task.node)
            driver_int_info = task.node.driver_internal_info
            self.assertEqual('root_uuid',
                             driver_int_info['root_uuid_or_disk_id'])
            uuid_mock_calls = [
                mock.call(mock.ANY, task, 'root_uuid'),
                mock.call(mock.ANY, task, 'efi_system_partition_uuid')]
            uuid_mock.assert_has_calls(uuid_mock_calls)
            boot_mode_mock.assert_called_once_with(task.node)
            self.assertFalse(log_mock.called)
            prepare_instance_mock.assert_called_once_with(
                mock.ANY, task, 'root_uuid', 'efi_uuid', None)
            power_off_mock.assert_called_once_with(task.node)
            get_power_state_mock.assert_called_once_with(task)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_ON)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE,
                             task.node.target_provision_state)
            resume_mock.assert_called_once_with(task)

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_has_started(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = []

            self.assertFalse(task.driver.deploy.deploy_has_started(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_has_started_is_done(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = [{'command_name': 'prepare_image',
                                          'command_status': 'SUCCESS'}]
            self.assertTrue(task.driver.deploy.deploy_has_started(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_has_started_did_start(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = [{'command_name': 'prepare_image',
                                          'command_status': 'RUNNING'}]
            self.assertTrue(task.driver.deploy.deploy_has_started(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_has_started_multiple_commands(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = [{'command_name': 'cache_image',
                                          'command_status': 'SUCCESS'},
                                         {'command_name': 'prepare_image',
                                          'command_status': 'RUNNING'}]
            self.assertTrue(task.driver.deploy.deploy_has_started(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_has_started_other_commands(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = [{'command_name': 'cache_image',
                                          'command_status': 'SUCCESS'}]
            self.assertFalse(task.driver.deploy.deploy_has_started(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_is_done(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = [{'command_name': 'prepare_image',
                                          'command_status': 'SUCCESS'}]
            self.assertTrue(task.driver.deploy.deploy_is_done(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_is_done_empty_response(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = []
            self.assertFalse(task.driver.deploy.deploy_is_done(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_is_done_race(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = [{'command_name':
                                          'some_other_command',
                                          'command_status': 'SUCCESS'}]
            self.assertFalse(task.driver.deploy.deploy_is_done(task))

    @mock.patch.object(agent_client.AgentClient, 'get_commands_status',
                       autospec=True)
    def test_deploy_is_done_still_running(self, mock_get_cmd):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_get_cmd.return_value = [{'command_name': 'prepare_image',
                                          'command_status': 'RUNNING'}]
            self.assertFalse(task.driver.deploy.deploy_is_done(task))

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'attach_volumes',
                       autospec=True)
    @mock.patch.object(deploy_utils, 'populate_storage_driver_internal_info')
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk')
    @mock.patch.object(deploy_utils, 'build_agent_options')
    @mock.patch.object(deploy_utils, 'build_instance_info_for_deploy')
    @mock.patch.object(flat_network.FlatNetwork, 'add_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'validate',
                       spec_set=True, autospec=True)
    def test_prepare_with_smartnic_port(
            self, validate_net_mock,
            unconfigure_tenant_net_mock, add_provisioning_net_mock,
            build_instance_info_mock, build_options_mock,
            pxe_prepare_ramdisk_mock, storage_driver_info_mock,
            storage_attach_volumes_mock, power_on_node_if_needed_mock,
            restore_power_state_mock):
        node = self.node
        node.network_interface = 'flat'
        node.save()
        add_provisioning_net_mock.return_value = None
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            task.node.provision_state = states.DEPLOYING
            build_instance_info_mock.return_value = {'foo': 'bar'}
            build_options_mock.return_value = {'a': 'b'}
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            self.driver.prepare(task)
            storage_driver_info_mock.assert_called_once_with(task)
            validate_net_mock.assert_called_once_with(mock.ANY, task)
            add_provisioning_net_mock.assert_called_once_with(mock.ANY, task)
            unconfigure_tenant_net_mock.assert_called_once_with(mock.ANY,
                                                                task)
            storage_attach_volumes_mock.assert_called_once_with(
                task.driver.storage, task)
            build_instance_info_mock.assert_called_once_with(task)
            build_options_mock.assert_called_once_with(task.node)
            pxe_prepare_ramdisk_mock.assert_called_once_with(
                task, {'a': 'b'})
            power_on_node_if_needed_mock.assert_called_once_with(task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)
        self.node.refresh()
        self.assertEqual('bar', self.node.instance_info['foo'])

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'detach_volumes',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'remove_provisioning_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(flat_network.FlatNetwork,
                       'unconfigure_tenant_networks',
                       spec_set=True, autospec=True)
    @mock.patch('ironic.conductor.utils.node_power_action', autospec=True)
    def test_tear_down_with_smartnic_port(
            self, power_mock, unconfigure_tenant_nets_mock,
            remove_provisioning_net_mock, storage_detach_volumes_mock,
            power_on_node_if_needed_mock, restore_power_state_mock):
        object_utils.create_test_volume_target(
            self.context, node_id=self.node.id)
        node = self.node
        node.network_interface = 'flat'
        node.save()
        with task_manager.acquire(
                self.context, self.node['uuid'], shared=False) as task:
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            driver_return = self.driver.tear_down(task)
            power_mock.assert_called_once_with(task, states.POWER_OFF)
            self.assertEqual(driver_return, states.DELETED)
            unconfigure_tenant_nets_mock.assert_called_once_with(mock.ANY,
                                                                 task)
            remove_provisioning_net_mock.assert_called_once_with(mock.ANY,
                                                                 task)
            storage_detach_volumes_mock.assert_called_once_with(
                task.driver.storage, task)
            power_on_node_if_needed_mock.assert_called_once_with(task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)
        # Verify no volumes exist for new task instances.
with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: self.assertEqual(0, len(task.volume_targets)) @mock.patch.object(manager_utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(manager_utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(noop_storage.NoopStorage, 'should_write_image', autospec=True) def test_deploy_storage_should_write_image_false_with_smartnic_port( self, mock_write, mock_pxe_instance, power_on_node_if_needed_mock, restore_power_state_mock): mock_write.return_value = False self.node.provision_state = states.DEPLOYING self.node.deploy_step = { 'step': 'deploy', 'priority': 50, 'interface': 'deploy'} self.node.save() with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: power_on_node_if_needed_mock.return_value = states.POWER_OFF driver_return = self.driver.deploy(task) self.assertIsNone(driver_return) self.assertTrue(mock_pxe_instance.called) power_on_node_if_needed_mock.assert_called_once_with(task) restore_power_state_mock.assert_called_once_with( task, states.POWER_OFF) class AgentRAIDTestCase(db_base.DbTestCase): def setUp(self): super(AgentRAIDTestCase, self).setUp() self.config(enabled_raid_interfaces=['fake', 'agent', 'no-raid']) self.target_raid_config = { "logical_disks": [ {'size_gb': 200, 'raid_level': 0, 'is_root_volume': True}, {'size_gb': 200, 'raid_level': 5} ]} self.clean_step = {'step': 'create_configuration', 'interface': 'raid'} n = { 'boot_interface': 'pxe', 'deploy_interface': 'direct', 'raid_interface': 'agent', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': DRIVER_INTERNAL_INFO, 'target_raid_config': self.target_raid_config, 'clean_step': self.clean_step, } self.node = object_utils.create_test_node(self.context, **n) @mock.patch.object(agent_base, 'get_steps', autospec=True) def test_get_clean_steps(self, get_steps_mock): 
        get_steps_mock.return_value = [
            {'step': 'create_configuration', 'interface': 'raid',
             'priority': 1},
            {'step': 'delete_configuration', 'interface': 'raid',
             'priority': 2}]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            ret = task.driver.raid.get_clean_steps(task)
        self.assertEqual(0, ret[0]['priority'])
        self.assertEqual(0, ret[1]['priority'])

    @mock.patch.object(raid, 'filter_target_raid_config', autospec=True)
    @mock.patch.object(agent_base, 'execute_step', autospec=True)
    def test_create_configuration(self, execute_mock,
                                  filter_target_raid_config_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            execute_mock.return_value = states.CLEANWAIT
            filter_target_raid_config_mock.return_value = (
                self.target_raid_config)
            return_value = task.driver.raid.create_configuration(task)
            self.assertEqual(states.CLEANWAIT, return_value)
            self.assertEqual(
                self.target_raid_config,
                task.node.driver_internal_info['target_raid_config'])
            execute_mock.assert_called_once_with(task, self.clean_step,
                                                 'clean')

    @mock.patch.object(raid, 'filter_target_raid_config', autospec=True)
    @mock.patch.object(agent_base, 'execute_step', autospec=True)
    def test_create_configuration_skip_root(self, execute_mock,
                                            filter_target_raid_config_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            execute_mock.return_value = states.CLEANWAIT
            exp_target_raid_config = {
                "logical_disks": [
                    {'size_gb': 200, 'raid_level': 5}
                ]}
            filter_target_raid_config_mock.return_value = (
                exp_target_raid_config)
            return_value = task.driver.raid.create_configuration(
                task, create_root_volume=False)
            self.assertEqual(states.CLEANWAIT, return_value)
            execute_mock.assert_called_once_with(task, self.clean_step,
                                                 'clean')
            self.assertEqual(
                exp_target_raid_config,
                task.node.driver_internal_info['target_raid_config'])

    @mock.patch.object(raid, 'filter_target_raid_config', autospec=True)
    @mock.patch.object(agent_base, 'execute_step', autospec=True)
    def test_create_configuration_skip_nonroot(
            self, execute_mock, filter_target_raid_config_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            execute_mock.return_value = states.CLEANWAIT
            exp_target_raid_config = {
                "logical_disks": [
                    {'size_gb': 200, 'raid_level': 0, 'is_root_volume': True},
                ]}
            filter_target_raid_config_mock.return_value = (
                exp_target_raid_config)
            return_value = task.driver.raid.create_configuration(
                task, create_nonroot_volumes=False)
            self.assertEqual(states.CLEANWAIT, return_value)
            execute_mock.assert_called_once_with(task, self.clean_step,
                                                 'clean')
            self.assertEqual(
                exp_target_raid_config,
                task.node.driver_internal_info['target_raid_config'])

    @mock.patch.object(raid, 'filter_target_raid_config', autospec=True)
    @mock.patch.object(agent_base, 'execute_step', autospec=True)
    def test_create_configuration_no_target_raid_config_after_skipping(
            self, execute_mock, filter_target_raid_config_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            msg = "Node %s has no target RAID configuration" % self.node.uuid
            filter_target_raid_config_mock.side_effect = (
                exception.MissingParameterValue(msg))
            self.assertRaises(
                exception.MissingParameterValue,
                task.driver.raid.create_configuration,
                task, create_root_volume=False,
                create_nonroot_volumes=False)
            self.assertFalse(execute_mock.called)

    @mock.patch.object(raid, 'filter_target_raid_config', autospec=True)
    @mock.patch.object(agent_base, 'execute_step', autospec=True)
    def test_create_configuration_empty_target_raid_config(
            self, execute_mock, filter_target_raid_config_mock):
        execute_mock.return_value = states.CLEANING
        self.node.target_raid_config = {}
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            msg = "Node %s has no target RAID configuration" % self.node.uuid
            filter_target_raid_config_mock.side_effect = (
                exception.MissingParameterValue(msg))
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.raid.create_configuration, task)
            self.assertFalse(execute_mock.called)

    @mock.patch.object(raid, 'update_raid_info',
autospec=True) def test__create_configuration_final( self, update_raid_info_mock): command = {'command_result': {'clean_result': 'foo'}} with task_manager.acquire(self.context, self.node.uuid) as task: raid_mgmt = agent.AgentRAID raid_mgmt._create_configuration_final(task, command) update_raid_info_mock.assert_called_once_with(task.node, 'foo') @mock.patch.object(raid, 'update_raid_info', autospec=True) def test__create_configuration_final_registered( self, update_raid_info_mock): self.node.clean_step = {'interface': 'raid', 'step': 'create_configuration'} command = {'command_result': {'clean_result': 'foo'}} create_hook = agent_base._get_post_step_hook(self.node, 'clean') with task_manager.acquire(self.context, self.node.uuid) as task: create_hook(task, command) update_raid_info_mock.assert_called_once_with(task.node, 'foo') @mock.patch.object(raid, 'update_raid_info', autospec=True) def test__create_configuration_final_bad_command_result( self, update_raid_info_mock): command = {} with task_manager.acquire(self.context, self.node.uuid) as task: raid_mgmt = agent.AgentRAID self.assertRaises(exception.IronicException, raid_mgmt._create_configuration_final, task, command) self.assertFalse(update_raid_info_mock.called) @mock.patch.object(agent_base, 'execute_step', autospec=True) def test_delete_configuration(self, execute_mock): execute_mock.return_value = states.CLEANING with task_manager.acquire(self.context, self.node.uuid) as task: return_value = task.driver.raid.delete_configuration(task) execute_mock.assert_called_once_with(task, self.clean_step, 'clean') self.assertEqual(states.CLEANING, return_value) def test__delete_configuration_final(self): command = {'command_result': {'clean_result': 'foo'}} with task_manager.acquire(self.context, self.node.uuid) as task: task.node.raid_config = {'foo': 'bar'} raid_mgmt = agent.AgentRAID raid_mgmt._delete_configuration_final(task, command) self.node.refresh() self.assertEqual({}, self.node.raid_config) def 
test__delete_configuration_final_registered( self): self.node.clean_step = {'interface': 'raid', 'step': 'delete_configuration'} self.node.raid_config = {'foo': 'bar'} command = {'command_result': {'clean_result': 'foo'}} delete_hook = agent_base._get_post_step_hook(self.node, 'clean') with task_manager.acquire(self.context, self.node.uuid) as task: delete_hook(task, command) self.node.refresh() self.assertEqual({}, self.node.raid_config) class AgentRescueTestCase(db_base.DbTestCase): def setUp(self): super(AgentRescueTestCase, self).setUp() for iface in drivers_base.ALL_INTERFACES: impl = 'fake' if iface == 'network': impl = 'flat' if iface == 'rescue': impl = 'agent' config_kwarg = {'enabled_%s_interfaces' % iface: [impl], 'default_%s_interface' % iface: impl} self.config(**config_kwarg) self.config(enabled_hardware_types=['fake-hardware']) instance_info = INSTANCE_INFO instance_info.update({'rescue_password': 'password', 'hashed_rescue_password': '1234'}) driver_info = DRIVER_INFO driver_info.update({'rescue_ramdisk': 'my_ramdisk', 'rescue_kernel': 'my_kernel'}) n = { 'instance_info': instance_info, 'driver_info': driver_info, 'driver_internal_info': DRIVER_INTERNAL_INFO, } self.node = object_utils.create_test_node(self.context, **n) @mock.patch.object(flat_network.FlatNetwork, 'add_rescuing_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(fake.FakeBoot, 'prepare_ramdisk', autospec=True) @mock.patch.object(fake.FakeBoot, 'clean_up_instance', autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_agent_rescue(self, mock_node_power_action, mock_build_agent_opts, mock_clean_up_instance, mock_prepare_ramdisk, mock_unconf_tenant_net, mock_add_rescue_net): self.config(manage_agent_boot=True, group='agent') mock_build_agent_opts.return_value = 
{'ipa-api-url': 'fake-api'} with task_manager.acquire(self.context, self.node.uuid) as task: result = task.driver.rescue.rescue(task) mock_node_power_action.assert_has_calls( [mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) mock_clean_up_instance.assert_called_once_with(mock.ANY, task) mock_unconf_tenant_net.assert_called_once_with(mock.ANY, task) mock_add_rescue_net.assert_called_once_with(mock.ANY, task) mock_build_agent_opts.assert_called_once_with(task.node) mock_prepare_ramdisk.assert_called_once_with( mock.ANY, task, {'ipa-api-url': 'fake-api'}) self.assertEqual(states.RESCUEWAIT, result) @mock.patch.object(flat_network.FlatNetwork, 'add_rescuing_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(fake.FakeBoot, 'prepare_ramdisk', autospec=True) @mock.patch.object(fake.FakeBoot, 'clean_up_instance', autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_agent_rescue_no_manage_agent_boot(self, mock_node_power_action, mock_build_agent_opts, mock_clean_up_instance, mock_prepare_ramdisk, mock_unconf_tenant_net, mock_add_rescue_net): self.config(manage_agent_boot=False, group='agent') with task_manager.acquire(self.context, self.node.uuid) as task: result = task.driver.rescue.rescue(task) mock_node_power_action.assert_has_calls( [mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) mock_clean_up_instance.assert_called_once_with(mock.ANY, task) mock_unconf_tenant_net.assert_called_once_with(mock.ANY, task) mock_add_rescue_net.assert_called_once_with(mock.ANY, task) self.assertFalse(mock_build_agent_opts.called) self.assertFalse(mock_prepare_ramdisk.called) self.assertEqual(states.RESCUEWAIT, result) @mock.patch.object(flat_network.FlatNetwork, 'remove_rescuing_network', spec_set=True, autospec=True) 
@mock.patch.object(flat_network.FlatNetwork, 'configure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(fake.FakeBoot, 'prepare_instance', autospec=True) @mock.patch.object(fake.FakeBoot, 'clean_up_ramdisk', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_agent_unrescue(self, mock_node_power_action, mock_clean_ramdisk, mock_prepare_instance, mock_conf_tenant_net, mock_remove_rescue_net): """Test unrescue in case where boot driver prepares instance reboot.""" self.config(manage_agent_boot=True, group='agent') with task_manager.acquire(self.context, self.node.uuid) as task: result = task.driver.rescue.unrescue(task) mock_node_power_action.assert_has_calls( [mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) mock_clean_ramdisk.assert_called_once_with( mock.ANY, task) mock_remove_rescue_net.assert_called_once_with(mock.ANY, task) mock_conf_tenant_net.assert_called_once_with(mock.ANY, task) mock_prepare_instance.assert_called_once_with(mock.ANY, task) self.assertEqual(states.ACTIVE, result) @mock.patch.object(flat_network.FlatNetwork, 'remove_rescuing_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'configure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(fake.FakeBoot, 'prepare_instance', autospec=True) @mock.patch.object(fake.FakeBoot, 'clean_up_ramdisk', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_agent_unrescue_no_manage_agent_boot(self, mock_node_power_action, mock_clean_ramdisk, mock_prepare_instance, mock_conf_tenant_net, mock_remove_rescue_net): self.config(manage_agent_boot=False, group='agent') with task_manager.acquire(self.context, self.node.uuid) as task: result = task.driver.rescue.unrescue(task) mock_node_power_action.assert_has_calls( [mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) self.assertFalse(mock_clean_ramdisk.called) 
mock_remove_rescue_net.assert_called_once_with(mock.ANY, task) mock_conf_tenant_net.assert_called_once_with(mock.ANY, task) mock_prepare_instance.assert_called_once_with(mock.ANY, task) self.assertEqual(states.ACTIVE, result) @mock.patch.object(fake.FakeBoot, 'clean_up_instance', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_agent_rescue_power_on(self, mock_node_power_action, mock_clean_up_instance): self.node.power_state = states.POWER_ON mock_clean_up_instance.side_effect = exception.IronicException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IronicException, task.driver.rescue.rescue, task) mock_node_power_action.assert_called_once_with(task, states.POWER_OFF) task.node.refresh() # Ensure that our stored power state while the lock is still # being held, shows as POWER_ON to an external reader, such # as the API. self.assertEqual(states.POWER_ON, task.node.power_state) @mock.patch.object(fake.FakeBoot, 'clean_up_ramdisk', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_agent_unrescue_power_on(self, mock_node_power_action, mock_clean_ramdisk): self.node.power_state = states.POWER_ON mock_clean_ramdisk.side_effect = exception.IronicException() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IronicException, task.driver.rescue.unrescue, task) mock_node_power_action.assert_called_once_with(task, states.POWER_OFF) task.node.refresh() # Ensure that our stored power state while the lock is still # being held, shows as POWER_ON to an external reader, such # as the API. 
self.assertEqual(states.POWER_ON, task.node.power_state) @mock.patch.object(flat_network.FlatNetwork, 'validate_rescue', autospec=True) @mock.patch.object(fake.FakeBoot, 'validate', autospec=True) @mock.patch.object(fake.FakeBoot, 'validate_rescue', autospec=True) def test_agent_rescue_validate(self, mock_boot_validate_rescue, mock_boot_validate, mock_validate_network): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.rescue.validate(task) mock_validate_network.assert_called_once_with(mock.ANY, task) mock_boot_validate.assert_called_once_with(mock.ANY, task) mock_boot_validate_rescue.assert_called_once_with(mock.ANY, task) @mock.patch.object(flat_network.FlatNetwork, 'validate_rescue', autospec=True) @mock.patch.object(fake.FakeBoot, 'validate', autospec=True) @mock.patch.object(fake.FakeBoot, 'validate_rescue', autospec=True) def test_agent_rescue_validate_no_manage_agent(self, mock_boot_validate_rescue, mock_boot_validate, mock_rescuing_net): self.config(manage_agent_boot=False, group='agent') with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.rescue.validate(task) mock_rescuing_net.assert_called_once_with(mock.ANY, task) self.assertFalse(mock_boot_validate.called) self.assertFalse(mock_boot_validate_rescue.called) @mock.patch.object(flat_network.FlatNetwork, 'validate_rescue', autospec=True) @mock.patch.object(fake.FakeBoot, 'validate_rescue', autospec=True) def test_agent_rescue_validate_fails_no_rescue_password( self, mock_boot_validate, mock_rescuing_net): instance_info = self.node.instance_info del instance_info['rescue_password'] self.node.instance_info = instance_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaisesRegex(exception.MissingParameterValue, 'Node.*missing.*rescue_password', task.driver.rescue.validate, task) mock_rescuing_net.assert_called_once_with(mock.ANY, task) mock_boot_validate.assert_called_once_with(mock.ANY, task) 
@mock.patch.object(flat_network.FlatNetwork, 'validate_rescue', autospec=True) @mock.patch.object(fake.FakeBoot, 'validate_rescue', autospec=True) def test_agent_rescue_validate_fails_empty_rescue_password( self, mock_boot_validate, mock_rescuing_net): instance_info = self.node.instance_info instance_info['rescue_password'] = " " self.node.instance_info = instance_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaisesRegex(exception.InvalidParameterValue, "'instance_info/rescue_password'.*empty", task.driver.rescue.validate, task) mock_rescuing_net.assert_called_once_with(mock.ANY, task) mock_boot_validate.assert_called_once_with(mock.ANY, task) @mock.patch.object(flat_network.FlatNetwork, 'remove_rescuing_network', spec_set=True, autospec=True) @mock.patch.object(fake.FakeBoot, 'clean_up_ramdisk', autospec=True) def test_agent_rescue_clean_up(self, mock_clean_ramdisk, mock_remove_rescue_net): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.rescue.clean_up(task) self.assertNotIn('rescue_password', task.node.instance_info) mock_clean_ramdisk.assert_called_once_with( mock.ANY, task) mock_remove_rescue_net.assert_called_once_with(mock.ANY, task) @mock.patch.object(flat_network.FlatNetwork, 'remove_rescuing_network', spec_set=True, autospec=True) @mock.patch.object(fake.FakeBoot, 'clean_up_ramdisk', autospec=True) def test_agent_rescue_clean_up_no_manage_boot(self, mock_clean_ramdisk, mock_remove_rescue_net): self.config(manage_agent_boot=False, group='agent') with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.rescue.clean_up(task) self.assertNotIn('rescue_password', task.node.instance_info) self.assertFalse(mock_clean_ramdisk.called) mock_remove_rescue_net.assert_called_once_with(mock.ANY, task) @mock.patch.object(manager_utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(manager_utils, 'power_on_node_if_needed', autospec=True) 
@mock.patch.object(flat_network.FlatNetwork, 'add_rescuing_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'unconfigure_tenant_networks', spec_set=True, autospec=True) @mock.patch.object(fake.FakeBoot, 'prepare_ramdisk', autospec=True) @mock.patch.object(fake.FakeBoot, 'clean_up_instance', autospec=True) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test_agent_rescue_with_smartnic_port( self, mock_node_power_action, mock_build_agent_opts, mock_clean_up_instance, mock_prepare_ramdisk, mock_unconf_tenant_net, mock_add_rescue_net, power_on_node_if_needed_mock, restore_power_state_mock): self.config(manage_agent_boot=True, group='agent') mock_build_agent_opts.return_value = {'ipa-api-url': 'fake-api'} with task_manager.acquire(self.context, self.node.uuid) as task: power_on_node_if_needed_mock.return_value = states.POWER_OFF result = task.driver.rescue.rescue(task) mock_node_power_action.assert_has_calls( [mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) mock_clean_up_instance.assert_called_once_with(mock.ANY, task) mock_unconf_tenant_net.assert_called_once_with(mock.ANY, task) mock_add_rescue_net.assert_called_once_with(mock.ANY, task) mock_build_agent_opts.assert_called_once_with(task.node) mock_prepare_ramdisk.assert_called_once_with( mock.ANY, task, {'ipa-api-url': 'fake-api'}) self.assertEqual(states.RESCUEWAIT, result) power_on_node_if_needed_mock.assert_called_once_with(task) restore_power_state_mock.assert_called_once_with( task, states.POWER_OFF) @mock.patch.object(manager_utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(manager_utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'remove_rescuing_network', spec_set=True, autospec=True) @mock.patch.object(flat_network.FlatNetwork, 'configure_tenant_networks', spec_set=True, autospec=True) 
    @mock.patch.object(fake.FakeBoot, 'prepare_instance', autospec=True)
    @mock.patch.object(fake.FakeBoot, 'clean_up_ramdisk', autospec=True)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    def test_agent_unrescue_with_smartnic_port(
            self, mock_node_power_action, mock_clean_ramdisk,
            mock_prepare_instance, mock_conf_tenant_net,
            mock_remove_rescue_net, power_on_node_if_needed_mock,
            restore_power_state_mock):
        self.config(manage_agent_boot=True, group='agent')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            result = task.driver.rescue.unrescue(task)
            mock_node_power_action.assert_has_calls(
                [mock.call(task, states.POWER_OFF),
                 mock.call(task, states.POWER_ON)])
            mock_clean_ramdisk.assert_called_once_with(
                mock.ANY, task)
            mock_remove_rescue_net.assert_called_once_with(mock.ANY, task)
            mock_conf_tenant_net.assert_called_once_with(mock.ANY, task)
            mock_prepare_instance.assert_called_once_with(mock.ANY, task)
            self.assertEqual(states.ACTIVE, result)
            self.assertEqual(2, power_on_node_if_needed_mock.call_count)
            restore_power_state_mock.assert_has_calls(
                [mock.call(task, states.POWER_OFF),
                 mock.call(task, states.POWER_OFF)])

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(flat_network.FlatNetwork, 'remove_rescuing_network',
                       spec_set=True, autospec=True)
    @mock.patch.object(fake.FakeBoot, 'clean_up_ramdisk', autospec=True)
    def test_agent_rescue_clean_up_smartnic(
            self, mock_clean_ramdisk, mock_remove_rescue_net,
            power_on_node_if_needed_mock, restore_power_state_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            task.driver.rescue.clean_up(task)
            self.assertNotIn('rescue_password', task.node.instance_info)
            mock_clean_ramdisk.assert_called_once_with(
                mock.ANY, task)
            mock_remove_rescue_net.assert_called_once_with(mock.ANY, task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)

ironic-15.0.0/ironic/tests/unit/drivers/modules/ibmc/test_utils.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for iBMC Driver common utils.""" import copy import os import mock from oslo_utils import importutils from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.ibmc import utils from ironic.tests.unit.drivers.modules.ibmc import base constants = importutils.try_import('ibmc_client.constants') ibmc_client = importutils.try_import('ibmc_client') ibmc_error = importutils.try_import('ibmc_client.exceptions') class IBMCUtilsTestCase(base.IBMCTestCase): def setUp(self): super(IBMCUtilsTestCase, self).setUp() # Redfish specific configurations self.config(connection_attempts=2, group='ibmc') self.parsed_driver_info = { 'address': 'https://example.com', 'username': 'username', 'password': 'password', 'verify_ca': True, } def test_parse_driver_info(self): response = utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_default_scheme(self): self.node.driver_info['ibmc_address'] = 'example.com' response = utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_default_scheme_with_port(self): self.node.driver_info['ibmc_address'] = 'example.com:42' self.parsed_driver_info['address'] = 'https://example.com:42' response = utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_missing_info(self): for prop in utils.REQUIRED_PROPERTIES: self.node.driver_info = self.driver_info.copy() self.node.driver_info.pop(prop) self.assertRaises(exception.MissingParameterValue, utils.parse_driver_info, self.node) def test_parse_driver_info_invalid_address(self): for value in ['/banana!', '#location', '?search=hello']: self.node.driver_info['ibmc_address'] = value self.assertRaisesRegex(exception.InvalidParameterValue, 'Invalid iBMC address', utils.parse_driver_info, self.node) @mock.patch.object(os.path, 'exists', autospec=True) def 
test_parse_driver_info_path_verify_ca(self, mock_isdir): mock_isdir.return_value = True fake_path = '/path/to/a/valid/CA' self.node.driver_info['ibmc_verify_ca'] = fake_path self.parsed_driver_info['verify_ca'] = fake_path response = utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) mock_isdir.assert_called_once_with(fake_path) @mock.patch.object(os.path, 'exists', autospec=True) def test_parse_driver_info_valid_capath(self, mock_isfile): mock_isfile.return_value = True fake_path = '/path/to/a/valid/CA.pem' self.node.driver_info['ibmc_verify_ca'] = fake_path self.parsed_driver_info['verify_ca'] = fake_path response = utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) mock_isfile.assert_called_once_with(fake_path) def test_parse_driver_info_invalid_value_verify_ca(self): # Integers are not supported self.node.driver_info['ibmc_verify_ca'] = 123456 self.assertRaisesRegex(exception.InvalidParameterValue, 'Invalid value type', utils.parse_driver_info, self.node) def test_parse_driver_info_valid_string_value_verify_ca(self): for value in ('0', 'f', 'false', 'off', 'n', 'no'): self.node.driver_info['ibmc_verify_ca'] = value response = utils.parse_driver_info(self.node) parsed_driver_info = copy.deepcopy(self.parsed_driver_info) parsed_driver_info['verify_ca'] = False self.assertEqual(parsed_driver_info, response) for value in ('1', 't', 'true', 'on', 'y', 'yes'): self.node.driver_info['ibmc_verify_ca'] = value response = utils.parse_driver_info(self.node) self.assertEqual(self.parsed_driver_info, response) def test_parse_driver_info_invalid_string_value_verify_ca(self): for value in ('xyz', '*', '!123', '123'): self.node.driver_info['ibmc_verify_ca'] = value self.assertRaisesRegex(exception.InvalidParameterValue, 'The value should be a Boolean', utils.parse_driver_info, self.node) def test_revert_dictionary(self): data = { "key1": "value1", "key2": "value2" } revert = utils.revert_dictionary(data) 
        self.assertEqual({
            "value1": "key1",
            "value2": "key2"
        }, revert)

    @mock.patch.object(ibmc_client, 'connect', autospec=True)
    def test_handle_ibmc_exception_retry(self, connect_ibmc):

        @utils.handle_ibmc_exception('get IBMC system')
        def get_ibmc_system(_task):
            driver_info = utils.parse_driver_info(_task.node)
            with ibmc_client.connect(**driver_info) as _conn:
                return _conn.system.get()

        conn = self.mock_ibmc_conn(connect_ibmc)
        # Mocks
        conn.system.get.side_effect = [
            ibmc_error.ConnectionError(url=self.ibmc['address'],
                                       error='Failed to connect to host'),
            mock.PropertyMock(
                boot_source_override=mock.PropertyMock(
                    target=constants.BOOT_SOURCE_TARGET_PXE,
                    enabled=constants.BOOT_SOURCE_ENABLED_CONTINUOUS
                )
            )
        ]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            system = get_ibmc_system(task)

            # Asserts
            self.assertEqual(constants.BOOT_SOURCE_TARGET_PXE,
                             system.boot_source_override.target)
            self.assertEqual(constants.BOOT_SOURCE_ENABLED_CONTINUOUS,
                             system.boot_source_override.enabled)

            # 1 failed, 1 succeed
            connect_ibmc.assert_called_with(**self.ibmc)
            self.assertEqual(2, connect_ibmc.call_count)

            # 1 failed, 1 succeed
            self.assertEqual(2, conn.system.get.call_count)

ironic-15.0.0/ironic/tests/unit/drivers/modules/ibmc/test_vendor.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for iBMC vendor interface."""

import mock
from oslo_utils import importutils

from ironic.conductor import task_manager
from ironic.drivers.modules.ibmc import utils
from ironic.tests.unit.drivers.modules.ibmc import base

ibmc_client = importutils.try_import('ibmc_client')


@mock.patch('oslo_utils.eventletutils.EventletEvent.wait',
            lambda *args, **kwargs: None)
class IBMCVendorTestCase(base.IBMCTestCase):

    def setUp(self):
        super(IBMCVendorTestCase, self).setUp()

    def test_get_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            properties = task.driver.get_properties()
            for prop in utils.COMMON_PROPERTIES:
                self.assertIn(prop, properties)

    @mock.patch.object(utils, 'parse_driver_info', autospec=True)
    def test_validate(self, mock_parse_driver_info):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.power.validate(task)
            mock_parse_driver_info.assert_called_once_with(task.node)

    @mock.patch.object(ibmc_client, 'connect', autospec=True)
    def test_list_boot_type_order(self, connect_ibmc):
        # Mocks
        conn = self.mock_ibmc_conn(connect_ibmc)
        boot_up_seq = ['Pxe', 'Hdd', 'Others', 'Cd']
        conn.system.get.return_value = mock.Mock(
            boot_sequence=['Pxe', 'Hdd', 'Others', 'Cd']
        )

        expected = {'boot_up_sequence': boot_up_seq}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            seq = task.driver.vendor.boot_up_seq(task)
            conn.system.get.assert_called_once()
            connect_ibmc.assert_called_once_with(**self.ibmc)
            self.assertEqual(expected, seq)

ironic-15.0.0/ironic/tests/unit/drivers/modules/ibmc/base.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test base class for iBMC Driver."""

import mock

from ironic.drivers.modules.ibmc import utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


class IBMCTestCase(db_base.DbTestCase):

    def setUp(self):
        super(IBMCTestCase, self).setUp()
        self.driver_info = db_utils.get_test_ibmc_info()
        self.config(enabled_hardware_types=['ibmc'],
                    enabled_power_interfaces=['ibmc'],
                    enabled_management_interfaces=['ibmc'],
                    enabled_vendor_interfaces=['ibmc'])
        self.node = obj_utils.create_test_node(
            self.context, driver='ibmc', driver_info=self.driver_info)
        self.ibmc = utils.parse_driver_info(self.node)

    @staticmethod
    def mock_ibmc_conn(ibmc_client_connect):
        conn = mock.Mock(system=mock.PropertyMock())
        conn.__enter__ = mock.Mock(return_value=conn)
        conn.__exit__ = mock.Mock(return_value=None)
        ibmc_client_connect.return_value = conn
        return conn

ironic-15.0.0/ironic/tests/unit/drivers/modules/ibmc/test_management.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. """Test class for iBMC Management interface.""" import itertools import mock from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import boot_modes from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.ibmc import mappings from ironic.drivers.modules.ibmc import utils from ironic.tests.unit.drivers.modules.ibmc import base constants = importutils.try_import('ibmc_client.constants') ibmc_client = importutils.try_import('ibmc_client') ibmc_error = importutils.try_import('ibmc_client.exceptions') class IBMCManagementTestCase(base.IBMCTestCase): def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in utils.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(utils, 'parse_driver_info', autospec=True) def test_validate(self, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.management.validate(task) mock_parse_driver_info.assert_called_once_with(task.node) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_get_supported_boot_devices(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock return value _supported_boot_devices = list(mappings.GET_BOOT_DEVICE_MAP) conn.system.get.return_value = mock.Mock( boot_source_override=mock.Mock( supported_boot_devices=_supported_boot_devices ) ) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: supported_boot_devices = ( task.driver.management.get_supported_boot_devices(task)) connect_ibmc.assert_called_once_with(**self.ibmc) expect = sorted(list(mappings.GET_BOOT_DEVICE_MAP.values())) self.assertEqual(expect, sorted(supported_boot_devices)) @mock.patch.object(ibmc_client, 'connect', autospec=True) 
def test_set_boot_device(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock return value conn.system.set_boot_source.return_value = None with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: device_mapping = [ (boot_devices.PXE, constants.BOOT_SOURCE_TARGET_PXE), (boot_devices.DISK, constants.BOOT_SOURCE_TARGET_HDD), (boot_devices.CDROM, constants.BOOT_SOURCE_TARGET_CD), (boot_devices.BIOS, constants.BOOT_SOURCE_TARGET_BIOS_SETUP), ('floppy', constants.BOOT_SOURCE_TARGET_FLOPPY), ] persistent_mapping = [ (True, constants.BOOT_SOURCE_ENABLED_CONTINUOUS), (False, constants.BOOT_SOURCE_ENABLED_ONCE) ] data_source = list(itertools.product(device_mapping, persistent_mapping)) for (device, persistent) in data_source: task.driver.management.set_boot_device( task, device[0], persistent=persistent[0]) connect_ibmc.assert_called_once_with(**self.ibmc) conn.system.set_boot_source.assert_called_once_with( device[1], enabled=persistent[1]) # Reset mocks connect_ibmc.reset_mock() conn.system.set_boot_source.reset_mock() @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_set_boot_device_fail(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock return value conn.system.set_boot_source.side_effect = ( ibmc_error.IBMCClientError ) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaisesRegex( exception.IBMCError, 'set iBMC boot device', task.driver.management.set_boot_device, task, boot_devices.PXE) connect_ibmc.assert_called_once_with(**self.ibmc) conn.system.set_boot_source.assert_called_once_with( constants.BOOT_SOURCE_TARGET_PXE, enabled=constants.BOOT_SOURCE_ENABLED_ONCE) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_get_boot_device(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock return value conn.system.get.return_value = mock.Mock( boot_source_override=mock.Mock( target=constants.BOOT_SOURCE_TARGET_PXE, 
enabled=constants.BOOT_SOURCE_ENABLED_CONTINUOUS ) ) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: result_boot_device = task.driver.management.get_boot_device(task) conn.system.get.assert_called_once() connect_ibmc.assert_called_once_with(**self.ibmc) expected = {'boot_device': boot_devices.PXE, 'persistent': True} self.assertEqual(expected, result_boot_device) def test_get_supported_boot_modes(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: supported_boot_modes = ( task.driver.management.get_supported_boot_modes(task)) self.assertEqual(list(mappings.SET_BOOT_MODE_MAP), supported_boot_modes) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_set_boot_mode(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock system boot source override return value conn.system.get.return_value = mock.Mock( boot_source_override=mock.Mock( target=constants.BOOT_SOURCE_TARGET_PXE, enabled=constants.BOOT_SOURCE_ENABLED_CONTINUOUS ) ) conn.system.set_boot_source.return_value = None with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_values = [ (boot_modes.LEGACY_BIOS, constants.BOOT_SOURCE_MODE_BIOS), (boot_modes.UEFI, constants.BOOT_SOURCE_MODE_UEFI) ] for ironic_boot_mode, ibmc_boot_mode in expected_values: task.driver.management.set_boot_mode(task, mode=ironic_boot_mode) conn.system.get.assert_called_once() connect_ibmc.assert_called_once_with(**self.ibmc) conn.system.set_boot_source.assert_called_once_with( constants.BOOT_SOURCE_TARGET_PXE, enabled=constants.BOOT_SOURCE_ENABLED_CONTINUOUS, mode=ibmc_boot_mode) # Reset connect_ibmc.reset_mock() conn.system.set_boot_source.reset_mock() conn.system.get.reset_mock() @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_set_boot_mode_fail(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock system boot source override return value conn.system.get.return_value = mock.Mock( 
boot_source_override=mock.Mock( target=constants.BOOT_SOURCE_TARGET_PXE, enabled=constants.BOOT_SOURCE_ENABLED_CONTINUOUS ) ) conn.system.set_boot_source.side_effect = ( ibmc_error.IBMCClientError ) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: expected_values = [ (boot_modes.LEGACY_BIOS, constants.BOOT_SOURCE_MODE_BIOS), (boot_modes.UEFI, constants.BOOT_SOURCE_MODE_UEFI) ] for ironic_boot_mode, ibmc_boot_mode in expected_values: self.assertRaisesRegex( exception.IBMCError, 'set iBMC boot mode', task.driver.management.set_boot_mode, task, ironic_boot_mode) conn.system.set_boot_source.assert_called_once_with( constants.BOOT_SOURCE_TARGET_PXE, enabled=constants.BOOT_SOURCE_ENABLED_CONTINUOUS, mode=ibmc_boot_mode) conn.system.get.assert_called_once() connect_ibmc.assert_called_once_with(**self.ibmc) # Reset connect_ibmc.reset_mock() conn.system.set_boot_source.reset_mock() conn.system.get.reset_mock() @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_get_boot_mode(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock system boot source override return value conn.system.get.return_value = mock.Mock( boot_source_override=mock.Mock( target=constants.BOOT_SOURCE_TARGET_PXE, enabled=constants.BOOT_SOURCE_ENABLED_CONTINUOUS, mode=constants.BOOT_SOURCE_MODE_BIOS, ) ) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: response = task.driver.management.get_boot_mode(task) conn.system.get.assert_called_once() connect_ibmc.assert_called_once_with(**self.ibmc) expected = boot_modes.LEGACY_BIOS self.assertEqual(expected, response) def test_get_sensors_data(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(NotImplementedError, task.driver.management.get_sensors_data, task) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_inject_nmi(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock system boot source 
override return value conn.system.reset.return_value = None with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.inject_nmi(task) connect_ibmc.assert_called_once_with(**self.ibmc) conn.system.reset.assert_called_once_with(constants.RESET_NMI) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_inject_nmi_fail(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # mock system boot source override return value conn.system.reset.side_effect = ( ibmc_error.IBMCClientError ) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaisesRegex( exception.IBMCError, 'inject iBMC NMI', task.driver.management.inject_nmi, task) connect_ibmc.assert_called_once_with(**self.ibmc) conn.system.reset.assert_called_once_with(constants.RESET_NMI) ironic-15.0.0/ironic/tests/unit/drivers/modules/ibmc/test_power.py0000664000175000017500000002740613652514273025334 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Test class for iBMC Power interface.""" import mock from oslo_utils import importutils from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.ibmc import mappings from ironic.drivers.modules.ibmc import utils from ironic.tests.unit.drivers.modules.ibmc import base constants = importutils.try_import('ibmc_client.constants') ibmc_client = importutils.try_import('ibmc_client') ibmc_error = importutils.try_import('ibmc_client.exceptions') @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) class IBMCPowerTestCase(base.IBMCTestCase): def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in utils.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(utils, 'parse_driver_info', autospec=True) def test_validate(self, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) mock_parse_driver_info.assert_called_once_with(task.node) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_get_power_state(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: expected_values = mappings.GET_POWER_STATE_MAP for current, expected in expected_values.items(): # Mock conn.system.get.return_value = mock.Mock( power_state=current ) # Asserts self.assertEqual(expected, task.driver.power.get_power_state(task)) conn.system.get.assert_called_once() connect_ibmc.assert_called_once_with(**self.ibmc) # Reset Mock conn.system.get.reset_mock() connect_ibmc.reset_mock() @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_set_power_state(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) with task_manager.acquire(self.context, self.node.uuid, shared=False) 
as task: state_mapping = mappings.SET_POWER_STATE_MAP for (expect_state, reset_type) in state_mapping.items(): if expect_state in (states.POWER_OFF, states.SOFT_POWER_OFF): final = constants.SYSTEM_POWER_STATE_OFF transient = constants.SYSTEM_POWER_STATE_ON else: final = constants.SYSTEM_POWER_STATE_ON transient = constants.SYSTEM_POWER_STATE_OFF # Mocks mock_system_get_results = ( [mock.Mock(power_state=transient)] * 3 + [mock.Mock(power_state=final)]) conn.system.get.side_effect = mock_system_get_results task.driver.power.set_power_state(task, expect_state) # Asserts connect_ibmc.assert_called_with(**self.ibmc) conn.system.reset.assert_called_once_with(reset_type) self.assertEqual(4, conn.system.get.call_count) # Reset Mocks # TODO(Qianbiao.NG) why reset_mock does not reset call_count connect_ibmc.reset_mock() conn.system.get.reset_mock() conn.system.reset.reset_mock() @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_set_power_state_not_reached(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.config(power_state_change_timeout=2, group='conductor') state_mapping = mappings.SET_POWER_STATE_MAP for (expect_state, reset_type) in state_mapping.items(): if expect_state in (states.POWER_OFF, states.SOFT_POWER_OFF): final = constants.SYSTEM_POWER_STATE_OFF transient = constants.SYSTEM_POWER_STATE_ON else: final = constants.SYSTEM_POWER_STATE_ON transient = constants.SYSTEM_POWER_STATE_OFF # Mocks mock_system_get_results = ( [mock.Mock(power_state=transient)] * 5 + [mock.Mock(power_state=final)]) conn.system.get.side_effect = mock_system_get_results self.assertRaises(exception.PowerStateFailure, task.driver.power.set_power_state, task, expect_state) # Asserts connect_ibmc.assert_called_with(**self.ibmc) conn.system.reset.assert_called_once_with(reset_type) # Reset Mocks connect_ibmc.reset_mock() conn.system.get.reset_mock() conn.system.reset.reset_mock() 
@mock.patch.object(ibmc_client, 'connect', autospec=True) def test_set_power_state_fail(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # Mocks conn.system.reset.side_effect = ( ibmc_error.IBMCClientError ) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # Asserts self.assertRaisesRegex( exception.IBMCError, 'set iBMC power state', task.driver.power.set_power_state, task, states.POWER_ON) connect_ibmc.assert_called_with(**self.ibmc) conn.system.reset.assert_called_once_with(constants.RESET_ON) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_set_power_state_timeout(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.config(power_state_change_timeout=2, group='conductor') # Mocks conn.system.get.side_effect = ( [mock.Mock(power_state=constants.SYSTEM_POWER_STATE_OFF)] * 3 ) # Asserts self.assertRaisesRegex( exception.PowerStateFailure, 'Failed to set node power state to power on', task.driver.power.set_power_state, task, states.POWER_ON) connect_ibmc.assert_called_with(**self.ibmc) conn.system.reset.assert_called_once_with(constants.RESET_ON) def test_get_supported_power_states(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: supported_power_states = ( task.driver.power.get_supported_power_states(task)) self.assertEqual(sorted(list(mappings.SET_POWER_STATE_MAP)), sorted(supported_power_states)) @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) class IBMCPowerRebootTestCase(base.IBMCTestCase): @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_reboot(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) expected_values = [ (constants.SYSTEM_POWER_STATE_OFF, constants.RESET_ON), (constants.SYSTEM_POWER_STATE_ON, constants.RESET_FORCE_RESTART) ] # for (expect_state, reset_type) in state_mapping.items(): for current, 
reset_type in expected_values: mock_system_get_results = [ # Initial state mock.Mock(power_state=current), # Transient state - powering off mock.Mock(power_state=constants.SYSTEM_POWER_STATE_OFF), # Final state - down powering off mock.Mock(power_state=constants.SYSTEM_POWER_STATE_ON) ] conn.system.get.side_effect = mock_system_get_results with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) # Asserts connect_ibmc.assert_called_with(**self.ibmc) conn.system.reset.assert_called_once_with(reset_type) # Reset Mocks connect_ibmc.reset_mock() conn.system.get.reset_mock() conn.system.reset.reset_mock() @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_reboot_not_reached(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # Mocks conn.system.get.return_value = mock.Mock( power_state=constants.SYSTEM_POWER_STATE_OFF) self.assertRaisesRegex( exception.PowerStateFailure, 'Failed to set node power state to power on', task.driver.power.reboot, task) # Asserts connect_ibmc.assert_called_with(**self.ibmc) conn.system.reset.assert_called_once_with(constants.RESET_ON) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_reboot_fail(self, connect_ibmc): conn = self.mock_ibmc_conn(connect_ibmc) # Mocks conn.system.reset.side_effect = ( ibmc_error.IBMCClientError ) conn.system.get.return_value = mock.Mock( power_state=constants.SYSTEM_POWER_STATE_ON ) with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # Asserts self.assertRaisesRegex( exception.IBMCError, 'reboot iBMC', task.driver.power.reboot, task) connect_ibmc.assert_called_with(**self.ibmc) conn.system.get.assert_called_once() conn.system.reset.assert_called_once_with( constants.RESET_FORCE_RESTART) @mock.patch.object(ibmc_client, 'connect', autospec=True) def test_reboot_timeout(self, connect_ibmc): conn = 
self.mock_ibmc_conn(connect_ibmc) # Mocks conn.system.get.side_effect = [mock.Mock( power_state=constants.SYSTEM_POWER_STATE_OFF )] * 5 with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.config(power_state_change_timeout=2, group='conductor') # Asserts self.assertRaisesRegex( exception.PowerStateFailure, 'Failed to set node power state to power on', task.driver.power.reboot, task) # Asserts connect_ibmc.assert_called_with(**self.ibmc) conn.system.reset.assert_called_once_with( constants.RESET_ON) ironic-15.0.0/ironic/tests/unit/drivers/modules/ibmc/__init__.py0000664000175000017500000000000013652514273024655 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/test_noop_mgmt.py0000664000175000017500000000275613652514273025266 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from ironic.common import boot_devices from ironic.common import exception from ironic.drivers.modules import noop_mgmt from ironic.tests import base class TestNoopManagement(base.TestCase): iface = noop_mgmt.NoopManagement() def test_dummy_methods(self): self.assertEqual({}, self.iface.get_properties()) self.assertIsNone(self.iface.validate("task")) self.assertEqual([boot_devices.PXE, boot_devices.DISK], self.iface.get_supported_boot_devices("task")) self.assertEqual({'boot_device': boot_devices.PXE, 'persistent': True}, self.iface.get_boot_device("task")) def test_set_boot_device(self): self.iface.set_boot_device(mock.Mock(), boot_devices.DISK) self.assertRaises(exception.InvalidParameterValue, self.iface.set_boot_device, mock.Mock(), boot_devices.CDROM) ironic-15.0.0/ironic/tests/unit/drivers/modules/xclarity/0000775000175000017500000000000013652514443023502 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/xclarity/test_common.py0000664000175000017500000001201013652514273026376 0ustar zuulzuul00000000000000# Copyright 2017 Lenovo, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_utils import importutils from ironic.common import exception from ironic.drivers.modules.xclarity import common from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils xclarity_client = importutils.try_import('xclarity_client.client') xclarity_exceptions = importutils.try_import('xclarity_client.exceptions') xclarity_constants = importutils.try_import('xclarity_client.constants') INFO_DICT = db_utils.get_test_xclarity_driver_info() class XClarityCommonTestCase(db_base.DbTestCase): def setUp(self): super(XClarityCommonTestCase, self).setUp() self.config(enabled_hardware_types=['xclarity'], enabled_power_interfaces=['xclarity'], enabled_management_interfaces=['xclarity']) self.node = obj_utils.create_test_node( self.context, driver='xclarity', properties=db_utils.get_test_xclarity_properties(), driver_info=INFO_DICT) def test_parse_driver_info(self): info = common.parse_driver_info(self.node) self.assertEqual(INFO_DICT['xclarity_manager_ip'], info['xclarity_manager_ip']) self.assertEqual(INFO_DICT['xclarity_username'], info['xclarity_username']) self.assertEqual(INFO_DICT['xclarity_password'], info['xclarity_password']) self.assertEqual(INFO_DICT['xclarity_port'], info['xclarity_port']) self.assertEqual(INFO_DICT['xclarity_hardware_id'], info['xclarity_hardware_id']) def test_parse_driver_info_missing_hardware_id(self): del self.node.driver_info['xclarity_hardware_id'] self.assertRaises(exception.InvalidParameterValue, common.parse_driver_info, self.node) def test_parse_driver_info_get_param_from_config(self): del self.node.driver_info['xclarity_manager_ip'] del self.node.driver_info['xclarity_username'] del self.node.driver_info['xclarity_password'] self.config(manager_ip='5.6.7.8', group='xclarity') self.config(username='user', group='xclarity') self.config(password='password', group='xclarity') info = common.parse_driver_info(self.node) 
self.assertEqual('5.6.7.8', info['xclarity_manager_ip']) self.assertEqual('user', info['xclarity_username']) self.assertEqual('password', info['xclarity_password']) def test_parse_driver_info_missing_driver_info_and_config(self): del self.node.driver_info['xclarity_manager_ip'] del self.node.driver_info['xclarity_username'] del self.node.driver_info['xclarity_password'] e = self.assertRaises(exception.InvalidParameterValue, common.parse_driver_info, self.node) self.assertIn('xclarity_manager_ip', str(e)) self.assertIn('xclarity_username', str(e)) self.assertIn('xclarity_password', str(e)) def test_parse_driver_info_invalid_port(self): self.node.driver_info['xclarity_port'] = 'asd' self.assertRaises(exception.InvalidParameterValue, common.parse_driver_info, self.node) self.node.driver_info['xclarity_port'] = '65536' self.assertRaises(exception.InvalidParameterValue, common.parse_driver_info, self.node) self.node.driver_info['xclarity_port'] = 'invalid' self.assertRaises(exception.InvalidParameterValue, common.parse_driver_info, self.node) self.node.driver_info['xclarity_port'] = '-1' self.assertRaises(exception.InvalidParameterValue, common.parse_driver_info, self.node) @mock.patch.object(xclarity_client, 'Client', autospec=True) def test_get_xclarity_client(self, mock_xclarityclient): expected_call = mock.call(ip='1.2.3.4', password='fake', port=443, username='USERID') common.get_xclarity_client(self.node) self.assertEqual(mock_xclarityclient.mock_calls, [expected_call]) def test_get_server_hardware_id(self): driver_info = self.node.driver_info driver_info['xclarity_hardware_id'] = 'test' self.node.driver_info = driver_info result = common.get_server_hardware_id(self.node) self.assertEqual(result, 'test') ironic-15.0.0/ironic/tests/unit/drivers/modules/xclarity/test_management.py0000664000175000017500000001545013652514273027235 0ustar zuulzuul00000000000000# Copyright 2017 Lenovo, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import importlib import sys import mock from oslo_utils import importutils from ironic.common import boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.xclarity import common from ironic.drivers.modules.xclarity import management from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils xclarity_client_exceptions = importutils.try_import( 'xclarity_client.exceptions') @mock.patch.object(common, 'get_xclarity_client', spec_set=True, autospec=True) class XClarityManagementDriverTestCase(db_base.DbTestCase): def setUp(self): super(XClarityManagementDriverTestCase, self).setUp() self.config(enabled_hardware_types=['xclarity'], enabled_power_interfaces=['xclarity'], enabled_management_interfaces=['xclarity']) self.node = obj_utils.create_test_node( self.context, driver='xclarity', driver_info=db_utils.get_test_xclarity_driver_info()) @mock.patch.object(common, 'get_server_hardware_id', spec_set=True, autospec=True) def test_validate(self, mock_validate, mock_get_xc_client): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.management.validate(task) common.get_server_hardware_id(task.node) mock_validate.assert_called_with(task.node) def test_get_properties(self, mock_get_xc_client): expected = common.COMMON_PROPERTIES driver =
management.XClarityManagement()
        self.assertEqual(expected, driver.get_properties())

    @mock.patch.object(management.XClarityManagement, 'get_boot_device',
                       return_value='pxe')
    def test_set_boot_device(self, mock_get_boot_device, mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.management.set_boot_device(task, 'pxe')
            result = task.driver.management.get_boot_device(task)
            self.assertEqual(result, 'pxe')

    def test_set_boot_device_fail(self, mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            xclarity_client_exceptions.XClarityError = Exception
            sys.modules['xclarity_client.exceptions'] = (
                xclarity_client_exceptions)
            if 'ironic.drivers.modules.xclarity' in sys.modules:
                importlib.reload(
                    sys.modules['ironic.drivers.modules.xclarity'])
            ex = exception.XClarityError('E')
            mock_get_xc_client.return_value.set_node_boot_info.side_effect = ex
            self.assertRaises(exception.XClarityError,
                              task.driver.management.set_boot_device,
                              task, "pxe")

    def test_get_supported_boot_devices(self, mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [boot_devices.PXE, boot_devices.BIOS,
                        boot_devices.DISK, boot_devices.CDROM]
            self.assertItemsEqual(
                expected,
                task.driver.management.get_supported_boot_devices(task))

    @mock.patch.object(
        management.XClarityManagement,
        'get_boot_device',
        return_value={'boot_device': 'pxe', 'persistent': False})
    def test_get_boot_device(self, mock_get_boot_device, mock_get_xc_client):
        reference = {'boot_device': 'pxe', 'persistent': False}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected_boot_device = task.driver.management.get_boot_device(
                task=task)
            self.assertEqual(reference, expected_boot_device)

    def test_get_boot_device_fail(self, mock_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            xclarity_client_exceptions.XClarityError = Exception
            sys.modules['xclarity_client.exceptions'] = (
                xclarity_client_exceptions)
            if 'ironic.drivers.modules.xclarity' in sys.modules:
                importlib.reload(
                    sys.modules['ironic.drivers.modules.xclarity'])
            ex = exception.XClarityError('E')
            mock_xc_client.return_value.get_node_all_boot_info.side_effect = ex
            self.assertRaises(
                exception.XClarityError,
                task.driver.management.get_boot_device,
                task)

    def test_get_boot_device_current_none(self, mock_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            reference = {'boot_device': None, 'persistent': None}
            mock_xc_client.return_value.get_node_all_boot_info.return_value = \
                {
                    'bootOrder': {
                        'bootOrderList': [{
                            'fakeBootOrderDevices': []
                        }]
                    }
                }
            expected_boot_device = task.driver.management.get_boot_device(
                task=task)
            self.assertEqual(reference, expected_boot_device)

    def test_get_boot_device_primary_none(self, mock_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            reference = {'boot_device': None, 'persistent': None}
            mock_xc_client.return_value.get_node_all_boot_info.return_value = \
                {
                    'bootOrder': {
                        'bootOrderList': [
                            {
                                'bootType': 'SingleUse',
                                'CurrentBootOrderDevices': []
                            },
                            {
                                'bootType': 'Permanent',
                                'CurrentBootOrderDevices': []
                            },
                        ]
                    }
                }
            expected_boot_device = task.driver.management.get_boot_device(
                task=task)
            self.assertEqual(reference, expected_boot_device)

ironic-15.0.0/ironic/tests/unit/drivers/modules/xclarity/test_power.py

# Copyright 2017 Lenovo, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import importlib
import sys

import mock
from oslo_utils import importutils

from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules.xclarity import common
from ironic.drivers.modules.xclarity import power
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

STATE_POWER_ON = "power on"
STATE_POWER_OFF = "power off"
STATE_POWERING_ON = "power on"
STATE_POWERING_OFF = "power off"

xclarity_constants = importutils.try_import('xclarity_client.constants')
xclarity_client_exceptions = importutils.try_import(
    'xclarity_client.exceptions')


@mock.patch.object(common, 'get_xclarity_client', spec_set=True,
                   autospec=True)
class XClarityPowerDriverTestCase(db_base.DbTestCase):

    def setUp(self):
        super(XClarityPowerDriverTestCase, self).setUp()
        self.config(enabled_hardware_types=['xclarity'],
                    enabled_power_interfaces=['xclarity'],
                    enabled_management_interfaces=['xclarity'])
        self.node = obj_utils.create_test_node(
            self.context,
            driver='xclarity',
            driver_info=db_utils.get_test_xclarity_driver_info())

    def test_get_properties(self, mock_get_xc_client):
        expected = common.COMMON_PROPERTIES
        driver = power.XClarityPower()
        self.assertEqual(expected, driver.get_properties())

    @mock.patch.object(common, 'get_server_hardware_id', spec_set=True,
                       autospec=True)
    def test_validate(self, mock_validate_driver_info, mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.validate(task)
            common.get_server_hardware_id(task.node)
            mock_validate_driver_info.assert_called_with(task.node)

    @mock.patch.object(power.XClarityPower, 'get_power_state',
                       return_value=STATE_POWER_ON)
    def test_get_power_state(self, mock_get_power_state, mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = power.XClarityPower.get_power_state(task)
            self.assertEqual(STATE_POWER_ON, result)

    @mock.patch.object(common, 'translate_xclarity_power_state',
                       spec_set=True, autospec=True)
    def test_get_power_state_fail(self, mock_translate_state, mock_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            xclarity_client_exceptions.XClarityError = Exception
            sys.modules['xclarity_client.exceptions'] = (
                xclarity_client_exceptions)
            if 'ironic.drivers.modules.xclarity' in sys.modules:
                importlib.reload(
                    sys.modules['ironic.drivers.modules.xclarity'])
            ex = exception.XClarityError('E')
            mock_xc_client.return_value.get_node_power_status.side_effect = ex
            self.assertRaises(exception.XClarityError,
                              task.driver.power.get_power_state,
                              task)
            self.assertFalse(mock_translate_state.called)

    @mock.patch.object(power.LOG, 'warning')
    @mock.patch.object(power.XClarityPower, 'get_power_state',
                       return_value=states.POWER_ON)
    def test_set_power(self, mock_set_power_state, mock_log,
                       mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.set_power_state(task, states.POWER_ON)
            expected = task.driver.power.get_power_state(task)
            self.assertEqual(expected, states.POWER_ON)
            self.assertFalse(mock_log.called)

    @mock.patch.object(power.LOG, 'warning')
    @mock.patch.object(power.XClarityPower, 'get_power_state',
                       return_value=states.POWER_ON)
    def test_set_power_timeout(self, mock_set_power_state, mock_log,
                               mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.set_power_state(task, states.POWER_ON,
                                              timeout=21)
            expected = task.driver.power.get_power_state(task)
            self.assertEqual(expected, states.POWER_ON)
            self.assertTrue(mock_log.called)

    def test_set_power_fail(self, mock_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            xclarity_client_exceptions.XClarityError = Exception
            sys.modules['xclarity_client.exceptions'] = (
                xclarity_client_exceptions)
            if 'ironic.drivers.modules.xclarity' in sys.modules:
                importlib.reload(
                    sys.modules['ironic.drivers.modules.xclarity'])
            ex = exception.XClarityError('E')
            mock_xc_client.return_value.set_node_power_status.side_effect = ex
            self.assertRaises(exception.XClarityError,
                              task.driver.power.set_power_state,
                              task, states.POWER_OFF)

    @mock.patch.object(power.LOG, 'warning')
    @mock.patch.object(power.XClarityPower, 'set_power_state')
    def test_reboot(self, mock_set_power_state, mock_log, mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.reboot(task)
            mock_set_power_state.assert_called_with(task, states.REBOOT)
            self.assertFalse(mock_log.called)

    @mock.patch.object(power.LOG, 'warning')
    @mock.patch.object(power.XClarityPower, 'set_power_state')
    def test_reboot_timeout(self, mock_set_power_state, mock_log,
                            mock_get_xc_client):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.reboot(task, timeout=55)
            mock_set_power_state.assert_called_with(task, states.REBOOT)
            self.assertTrue(mock_log.called)

ironic-15.0.0/ironic/tests/unit/drivers/modules/xclarity/__init__.py
ironic-15.0.0/ironic/tests/unit/drivers/modules/test_boot_mode_utils.py

# Copyright 2018 FUJITSU LIMITED.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from ironic.common import boot_modes
from ironic.drivers.modules import boot_mode_utils
from ironic.tests import base as tests_base
from ironic.tests.unit.objects import utils as obj_utils


class GetBootModeTestCase(tests_base.TestCase):

    def setUp(self):
        super(GetBootModeTestCase, self).setUp()
        self.node = obj_utils.get_test_node(self.context,
                                            driver='fake-hardware')

    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       autospec=True)
    def test_get_boot_mode_bios(self, mock_for_deploy):
        mock_for_deploy.return_value = boot_modes.LEGACY_BIOS
        boot_mode = boot_mode_utils.get_boot_mode(self.node)
        self.assertEqual(boot_modes.LEGACY_BIOS, boot_mode)

    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       autospec=True)
    def test_get_boot_mode_uefi(self, mock_for_deploy):
        mock_for_deploy.return_value = boot_modes.UEFI
        boot_mode = boot_mode_utils.get_boot_mode(self.node)
        self.assertEqual(boot_modes.UEFI, boot_mode)

    @mock.patch.object(boot_mode_utils, 'LOG', autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       autospec=True)
    def test_get_boot_mode_default(self, mock_for_deploy, mock_log):
        boot_mode_utils.warn_about_default_boot_mode = False
        mock_for_deploy.return_value = None
        boot_mode = boot_mode_utils.get_boot_mode(self.node)
        self.assertEqual(boot_modes.LEGACY_BIOS, boot_mode)
        boot_mode = boot_mode_utils.get_boot_mode(self.node)
        self.assertEqual(boot_modes.LEGACY_BIOS, boot_mode)
        self.assertEqual(1, mock_log.warning.call_count)

ironic-15.0.0/ironic/tests/unit/drivers/modules/test_noop.py

# Copyright 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import stevedore

from ironic.common import exception
from ironic.drivers.modules import noop
from ironic.tests import base


# TODO(dtantsur): move to ironic.common.driver_factory
def hardware_interface_extension_manager(interface):
    """Get a Stevedore extension manager for given hardware interface."""
    return stevedore.extension.ExtensionManager(
        'ironic.hardware.interfaces.%s' % interface,
        invoke_on_load=True)


class NoInterfacesTestCase(base.TestCase):
    iface_types = ['bios', 'console', 'inspect', 'raid', 'rescue', 'vendor']
    task = mock.Mock(node=mock.Mock(driver='pxe_foobar', spec=['driver']),
                     spec=['node'])

    def test_bios(self):
        self.assertRaises(exception.UnsupportedDriverExtension,
                          getattr(noop.NoBIOS(), 'apply_configuration'),
                          self.task, '')
        self.assertRaises(exception.UnsupportedDriverExtension,
                          getattr(noop.NoBIOS(), 'factory_reset'),
                          self.task)

    def test_console(self):
        for method in ('start_console', 'stop_console', 'get_console'):
            self.assertRaises(exception.UnsupportedDriverExtension,
                              getattr(noop.NoConsole(), method),
                              self.task)

    def test_rescue(self):
        for method in ('rescue', 'unrescue'):
            self.assertRaises(exception.UnsupportedDriverExtension,
                              getattr(noop.NoRescue(), method),
                              self.task)

    def test_vendor(self):
        self.assertRaises(exception.UnsupportedDriverExtension,
                          noop.NoVendor().validate,
                          self.task, 'method')
        self.assertRaises(exception.UnsupportedDriverExtension,
                          noop.NoVendor().driver_validate,
                          'method')

    def test_inspect(self):
        self.assertRaises(exception.UnsupportedDriverExtension,
                          noop.NoInspect().inspect_hardware, self.task)

    def test_load_by_name(self):
        for iface_type in self.iface_types:
            mgr = hardware_interface_extension_manager(iface_type)
            inst = mgr['no-%s' % iface_type].obj
            self.assertEqual({}, inst.get_properties())
            self.assertRaises(exception.UnsupportedDriverExtension,
                              inst.validate, self.task)

ironic-15.0.0/ironic/tests/unit/drivers/modules/test_inspector.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import eventlet
import mock
import openstack

from ironic.common import context
from ironic.common import exception
from ironic.common import states
from ironic.common import utils
from ironic.conductor import task_manager
from ironic.drivers.modules import inspector
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as obj_utils

CONF = inspector.CONF


@mock.patch('ironic.common.keystone.get_auth', autospec=True,
            return_value=mock.sentinel.auth)
@mock.patch('ironic.common.keystone.get_session', autospec=True,
            return_value=mock.sentinel.session)
@mock.patch.object(openstack.connection, 'Connection', autospec=True)
class GetClientTestCase(db_base.DbTestCase):

    def setUp(self):
        super(GetClientTestCase, self).setUp()
        # NOTE(pas-ha) force-reset global inspector session object
        inspector._INSPECTOR_SESSION = None
        self.context = context.RequestContext(global_request_id='global')

    def test__get_client(self, mock_conn, mock_session, mock_auth):
        inspector._get_client(self.context)
        mock_conn.assert_called_once_with(
            session=mock.sentinel.session, oslo_conf=mock.ANY)
        self.assertEqual(1, mock_auth.call_count)
        self.assertEqual(1, mock_session.call_count)

    def test__get_client_standalone(self, mock_conn, mock_session, mock_auth):
        self.config(auth_strategy='noauth')
        inspector._get_client(self.context)
        self.assertEqual('none', inspector.CONF.inspector.auth_type)
        mock_conn.assert_called_once_with(
            session=mock.sentinel.session, oslo_conf=mock.ANY)
        self.assertEqual(1, mock_auth.call_count)
        self.assertEqual(1, mock_session.call_count)


class BaseTestCase(db_base.DbTestCase):

    def setUp(self):
        super(BaseTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               inspect_interface='inspector')
        self.iface = inspector.Inspector()
        self.task = mock.MagicMock(spec=task_manager.TaskManager)
        self.task.context = self.context
        self.task.shared = False
        self.task.node = self.node
        self.task.driver = mock.Mock(
            spec=['boot', 'network', 'inspect', 'power'], inspect=self.iface)
        self.driver = self.task.driver


class CommonFunctionsTestCase(BaseTestCase):

    def test_validate_ok(self):
        self.iface.validate(self.task)

    def test_get_properties(self):
        res = self.iface.get_properties()
        self.assertEqual({}, res)

    def test_get_callback_endpoint(self):
        for catalog_endp in ['http://192.168.0.42:5050',
                             'http://192.168.0.42:5050/v1',
                             'http://192.168.0.42:5050/']:
            client = mock.Mock()
            client.get_endpoint.return_value = catalog_endp
            self.assertEqual('http://192.168.0.42:5050/v1/continue',
                             inspector._get_callback_endpoint(client))

    def test_get_callback_endpoint_override(self):
        CONF.set_override('callback_endpoint_override', 'http://url',
                          group='inspector')
        client = mock.Mock()
        self.assertEqual('http://url/v1/continue',
                         inspector._get_callback_endpoint(client))
        self.assertFalse(client.get_endpoint.called)

    def test_get_callback_endpoint_mdns(self):
        CONF.set_override('callback_endpoint_override', 'mdns',
                          group='inspector')
        client = mock.Mock()
        self.assertEqual('mdns', inspector._get_callback_endpoint(client))
        self.assertFalse(client.get_endpoint.called)

    def test_get_callback_endpoint_no_loopback(self):
        client = mock.Mock()
        client.get_endpoint.return_value = 'http://127.0.0.1:5050'
        self.assertRaisesRegex(exception.InvalidParameterValue, 'Loopback',
                               inspector._get_callback_endpoint, client)


@mock.patch.object(eventlet, 'spawn_n', lambda f, *a, **kw: f(*a, **kw))
@mock.patch('ironic.drivers.modules.inspector._get_client', autospec=True)
class InspectHardwareTestCase(BaseTestCase):

    def test_validate_ok(self, mock_client):
        self.iface.validate(self.task)

    def test_validate_invalid_kernel_params(self, mock_client):
        CONF.set_override('extra_kernel_params', 'abcdef', group='inspector')
        self.assertRaises(exception.InvalidParameterValue,
                          self.iface.validate, self.task)

    def test_validate_require_managed_boot(self, mock_client):
        CONF.set_override('require_managed_boot', True, group='inspector')
        self.driver.boot.validate_inspection.side_effect = (
            exception.UnsupportedDriverExtension(''))
        self.assertRaises(exception.UnsupportedDriverExtension,
                          self.iface.validate, self.task)

    def test_unmanaged_ok(self, mock_client):
        self.driver.boot.validate_inspection.side_effect = (
            exception.UnsupportedDriverExtension(''))
        mock_introspect = mock_client.return_value.start_introspection
        self.assertEqual(states.INSPECTWAIT,
                         self.iface.inspect_hardware(self.task))
        mock_introspect.assert_called_once_with(self.node.uuid)
        self.assertFalse(self.driver.boot.prepare_ramdisk.called)
        self.assertFalse(self.driver.network.add_inspection_network.called)
        self.assertFalse(self.driver.power.reboot.called)
        self.assertFalse(self.driver.network.remove_inspection_network.called)
        self.assertFalse(self.driver.boot.clean_up_ramdisk.called)
        self.assertFalse(self.driver.power.set_power_state.called)

    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test_unmanaged_error(self, mock_acquire, mock_client):
        mock_acquire.return_value.__enter__.return_value = self.task
        self.driver.boot.validate_inspection.side_effect = (
            exception.UnsupportedDriverExtension(''))
        mock_introspect = mock_client.return_value.start_introspection
        mock_introspect.side_effect = RuntimeError('boom')
        self.iface.inspect_hardware(self.task)
        mock_introspect.assert_called_once_with(self.node.uuid)
        self.assertIn('boom', self.task.node.last_error)
        self.task.process_event.assert_called_once_with('fail')
        self.assertFalse(self.driver.boot.prepare_ramdisk.called)
        self.assertFalse(self.driver.network.add_inspection_network.called)
        self.assertFalse(self.driver.network.remove_inspection_network.called)
        self.assertFalse(self.driver.boot.clean_up_ramdisk.called)
        self.assertFalse(self.driver.power.set_power_state.called)

    def test_require_managed_boot(self, mock_client):
        CONF.set_override('require_managed_boot', True, group='inspector')
        self.driver.boot.validate_inspection.side_effect = (
            exception.UnsupportedDriverExtension(''))
        mock_introspect = mock_client.return_value.start_introspection
        self.assertRaises(exception.UnsupportedDriverExtension,
                          self.iface.inspect_hardware, self.task)
        self.assertFalse(mock_introspect.called)
        self.assertFalse(self.driver.boot.prepare_ramdisk.called)
        self.assertFalse(self.driver.network.add_inspection_network.called)
        self.assertFalse(self.driver.power.reboot.called)
        self.assertFalse(self.driver.network.remove_inspection_network.called)
        self.assertFalse(self.driver.boot.clean_up_ramdisk.called)
        self.assertFalse(self.driver.power.set_power_state.called)

    def test_managed_ok(self, mock_client):
        endpoint = 'http://192.169.0.42:5050/v1'
        mock_client.return_value.get_endpoint.return_value = endpoint
        mock_introspect = mock_client.return_value.start_introspection
        self.assertEqual(states.INSPECTWAIT,
                         self.iface.inspect_hardware(self.task))
        mock_introspect.assert_called_once_with(self.node.uuid,
                                                manage_boot=False)
        self.driver.boot.prepare_ramdisk.assert_called_once_with(
            self.task, ramdisk_params={
                'ipa-inspection-callback-url': endpoint + '/continue',
            })
        self.driver.network.add_inspection_network.assert_called_once_with(
            self.task)
        self.driver.power.set_power_state.assert_has_calls([
            mock.call(self.task, states.POWER_OFF, timeout=None),
            mock.call(self.task, states.POWER_ON, timeout=None),
        ])
        self.assertFalse(self.driver.network.remove_inspection_network.called)
        self.assertFalse(self.driver.boot.clean_up_ramdisk.called)

    def test_managed_custom_params(self, mock_client):
        CONF.set_override('extra_kernel_params',
                          'ipa-inspection-collectors=default,logs '
                          'ipa-collect-dhcp=1',
                          group='inspector')
        endpoint = 'http://192.169.0.42:5050/v1'
        mock_client.return_value.get_endpoint.return_value = endpoint
        mock_introspect = mock_client.return_value.start_introspection
        self.iface.validate(self.task)
        self.assertEqual(states.INSPECTWAIT,
                         self.iface.inspect_hardware(self.task))
        mock_introspect.assert_called_once_with(self.node.uuid,
                                                manage_boot=False)
        self.driver.boot.prepare_ramdisk.assert_called_once_with(
            self.task, ramdisk_params={
                'ipa-inspection-callback-url': endpoint + '/continue',
                'ipa-inspection-collectors': 'default,logs',
                'ipa-collect-dhcp': '1',
            })
        self.driver.network.add_inspection_network.assert_called_once_with(
            self.task)
        self.driver.power.set_power_state.assert_has_calls([
            mock.call(self.task, states.POWER_OFF, timeout=None),
            mock.call(self.task, states.POWER_ON, timeout=None),
        ])
        self.assertFalse(self.driver.network.remove_inspection_network.called)
        self.assertFalse(self.driver.boot.clean_up_ramdisk.called)

    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test_managed_error(self, mock_acquire, mock_client):
        endpoint = 'http://192.169.0.42:5050/v1'
        mock_client.return_value.get_endpoint.return_value = endpoint
        mock_acquire.return_value.__enter__.return_value = self.task
        mock_introspect = mock_client.return_value.start_introspection
        mock_introspect.side_effect = RuntimeError('boom')
        self.assertRaises(exception.HardwareInspectionFailure,
                          self.iface.inspect_hardware, self.task)
        mock_introspect.assert_called_once_with(self.node.uuid,
                                                manage_boot=False)
        self.assertIn('boom', self.task.node.last_error)
        self.driver.boot.prepare_ramdisk.assert_called_once_with(
            self.task, ramdisk_params={
                'ipa-inspection-callback-url': endpoint + '/continue',
            })
        self.driver.network.add_inspection_network.assert_called_once_with(
            self.task)
        self.driver.network.remove_inspection_network.assert_called_once_with(
            self.task)
        self.driver.boot.clean_up_ramdisk.assert_called_once_with(self.task)
        self.driver.power.set_power_state.assert_called_with(
            self.task, 'power off', timeout=None)


@mock.patch('ironic.drivers.modules.inspector._get_client', autospec=True)
class CheckStatusTestCase(BaseTestCase):

    def setUp(self):
        super(CheckStatusTestCase, self).setUp()
        self.node.provision_state = states.INSPECTWAIT

    def test_not_inspecting(self, mock_client):
        mock_get = mock_client.return_value.get_introspection
        self.node.provision_state = states.MANAGEABLE
        inspector._check_status(self.task)
        self.assertFalse(mock_get.called)

    def test_not_check_inspecting(self, mock_client):
        mock_get = mock_client.return_value.get_introspection
        self.node.provision_state = states.INSPECTING
        inspector._check_status(self.task)
        self.assertFalse(mock_get.called)

    def test_not_inspector(self, mock_client):
        mock_get = mock_client.return_value.get_introspection
        self.task.driver.inspect = object()
        inspector._check_status(self.task)
        self.assertFalse(mock_get.called)

    def test_not_finished(self, mock_client):
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=False, error=None,
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.assertFalse(self.task.process_event.called)

    def test_exception_ignored(self, mock_client):
        mock_get = mock_client.return_value.get_introspection
        mock_get.side_effect = RuntimeError('boom')
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.assertFalse(self.task.process_event.called)

    def test_status_ok(self, mock_client):
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=True, error=None,
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.task.process_event.assert_called_once_with('done')
        self.assertFalse(self.driver.network.remove_inspection_network.called)
        self.assertFalse(self.driver.boot.clean_up_ramdisk.called)
        self.assertFalse(self.driver.power.set_power_state.called)

    def test_status_ok_managed(self, mock_client):
        utils.set_node_nested_field(self.node, 'driver_internal_info',
                                    'inspector_manage_boot', True)
        self.node.save()
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=True, error=None,
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.task.process_event.assert_called_once_with('done')
        self.driver.network.remove_inspection_network.assert_called_once_with(
            self.task)
        self.driver.boot.clean_up_ramdisk.assert_called_once_with(self.task)
        self.driver.power.set_power_state.assert_called_once_with(
            self.task, 'power off', timeout=None)

    def test_status_ok_managed_no_power_off(self, mock_client):
        CONF.set_override('power_off', False, group='inspector')
        utils.set_node_nested_field(self.node, 'driver_internal_info',
                                    'inspector_manage_boot', True)
        self.node.save()
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=True, error=None,
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.task.process_event.assert_called_once_with('done')
        self.driver.network.remove_inspection_network.assert_called_once_with(
            self.task)
        self.driver.boot.clean_up_ramdisk.assert_called_once_with(self.task)
        self.assertFalse(self.driver.power.set_power_state.called)

    def test_status_error(self, mock_client):
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=True, error='boom',
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.task.process_event.assert_called_once_with('fail')
        self.assertIn('boom', self.node.last_error)
        self.assertFalse(self.driver.network.remove_inspection_network.called)
        self.assertFalse(self.driver.boot.clean_up_ramdisk.called)
        self.assertFalse(self.driver.power.set_power_state.called)

    def test_status_error_managed(self, mock_client):
        utils.set_node_nested_field(self.node, 'driver_internal_info',
                                    'inspector_manage_boot', True)
        self.node.save()
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=True, error='boom',
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.task.process_event.assert_called_once_with('fail')
        self.assertIn('boom', self.node.last_error)
        self.driver.network.remove_inspection_network.assert_called_once_with(
            self.task)
        self.driver.boot.clean_up_ramdisk.assert_called_once_with(self.task)
        self.driver.power.set_power_state.assert_called_once_with(
            self.task, 'power off', timeout=None)

    def test_status_error_managed_no_power_off(self, mock_client):
        CONF.set_override('power_off', False, group='inspector')
        utils.set_node_nested_field(self.node, 'driver_internal_info',
                                    'inspector_manage_boot', True)
        self.node.save()
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=True, error='boom',
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.task.process_event.assert_called_once_with('fail')
        self.assertIn('boom', self.node.last_error)
        self.driver.network.remove_inspection_network.assert_called_once_with(
            self.task)
        self.driver.boot.clean_up_ramdisk.assert_called_once_with(self.task)
        self.assertFalse(self.driver.power.set_power_state.called)

    def _test_status_clean_up_failed(self, mock_client):
        utils.set_node_nested_field(self.node, 'driver_internal_info',
                                    'inspector_manage_boot', True)
        self.node.save()
        mock_get = mock_client.return_value.get_introspection
        mock_get.return_value = mock.Mock(is_finished=True, error=None,
                                          spec=['is_finished', 'error'])
        inspector._check_status(self.task)
        mock_get.assert_called_once_with(self.node.uuid)
        self.task.process_event.assert_called_once_with('fail')
        self.assertIn('boom', self.node.last_error)

    def test_status_boot_clean_up_failed(self, mock_client):
        self.driver.boot.clean_up_ramdisk.side_effect = RuntimeError('boom')
        self._test_status_clean_up_failed(mock_client)
        self.driver.boot.clean_up_ramdisk.assert_called_once_with(self.task)

    def test_status_network_clean_up_failed(self, mock_client):
        self.driver.network.remove_inspection_network.side_effect = \
            RuntimeError('boom')
        self._test_status_clean_up_failed(mock_client)
        self.driver.network.remove_inspection_network.assert_called_once_with(
            self.task)
        self.driver.boot.clean_up_ramdisk.assert_called_once_with(self.task)


@mock.patch('ironic.drivers.modules.inspector._get_client', autospec=True)
class InspectHardwareAbortTestCase(BaseTestCase):

    def test_abort_ok(self, mock_client):
        mock_abort = mock_client.return_value.abort_introspection
        self.iface.abort(self.task)
        mock_abort.assert_called_once_with(self.node.uuid)

    def test_abort_error(self, mock_client):
        mock_abort = mock_client.return_value.abort_introspection
        mock_abort.side_effect = RuntimeError('boom')
        self.assertRaises(RuntimeError, self.iface.abort, self.task)
        mock_abort.assert_called_once_with(self.node.uuid)

ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/
ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_boot.py

# Copyright 2019 Red Hat, Inc.
# All Rights Reserved.
# Copyright (c) 2019 Dell Inc. or its subsidiaries.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Test class for DRAC boot interface
"""

import mock
from oslo_utils import importutils

from ironic.common import boot_devices
from ironic.common import exception
from ironic.conductor import task_manager
from ironic.drivers.modules.drac import boot as drac_boot
from ironic.tests.unit.drivers.modules.drac import utils as test_utils
from ironic.tests.unit.objects import utils as obj_utils

sushy = importutils.try_import('sushy')

INFO_DICT = test_utils.INFO_DICT


@mock.patch.object(drac_boot, 'redfish_utils', autospec=True)
class DracBootTestCase(test_utils.BaseDracTest):

    def setUp(self):
        super(DracBootTestCase, self).setUp()
        self.node = obj_utils.create_test_node(
            self.context, driver='idrac', driver_info=INFO_DICT)

    def test__set_boot_device_persistent(self, mock_redfish_utils):
        mock_system = mock_redfish_utils.get_system.return_value
        mock_manager = mock.MagicMock()
        mock_system.managers = [mock_manager]
        mock_manager_oem = mock_manager.get_oem_extension.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot._set_boot_device(
                task, boot_devices.CDROM, persistent=True)
        mock_manager_oem.set_virtual_boot_device.assert_called_once_with(
            'cd', persistent=True, manager=mock_manager, system=mock_system)

    def test__set_boot_device_cd(self, mock_redfish_utils):
        mock_system = mock_redfish_utils.get_system.return_value
        mock_manager = mock.MagicMock()
        mock_system.managers = [mock_manager]
        mock_manager_oem = mock_manager.get_oem_extension.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot._set_boot_device(task, boot_devices.CDROM)
        mock_manager_oem.set_virtual_boot_device.assert_called_once_with(
            'cd', persistent=False, manager=mock_manager, system=mock_system)

    def test__set_boot_device_floppy(self, mock_redfish_utils):
        mock_system = mock_redfish_utils.get_system.return_value
        mock_manager = mock.MagicMock()
        mock_system.managers = [mock_manager]
        mock_manager_oem = mock_manager.get_oem_extension.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot._set_boot_device(task, boot_devices.FLOPPY)
        mock_manager_oem.set_virtual_boot_device.assert_called_once_with(
            'floppy', persistent=False, manager=mock_manager,
            system=mock_system)

    def test__set_boot_device_disk(self, mock_redfish_utils):
        mock_system = mock_redfish_utils.get_system.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot._set_boot_device(task, boot_devices.DISK)
        self.assertFalse(mock_system.called)

    def test__set_boot_device_missing_oem(self, mock_redfish_utils):
        mock_system = mock_redfish_utils.get_system.return_value
        mock_manager = mock.MagicMock()
        mock_system.managers = [mock_manager]
        mock_manager.get_oem_extension.side_effect = (
            sushy.exceptions.OEMExtensionNotFoundError)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.RedfishError,
                              task.driver.boot._set_boot_device,
                              task, boot_devices.CDROM)

    def test__set_boot_device_failover(self, mock_redfish_utils):
        mock_system = mock_redfish_utils.get_system.return_value
        mock_manager_fail = mock.MagicMock()
        mock_manager_ok = mock.MagicMock()
        mock_system.managers = [mock_manager_fail, mock_manager_ok]
        mock_svbd_fail = (mock_manager_fail.get_oem_extension
                          .return_value.set_virtual_boot_device)
        mock_svbd_ok = (mock_manager_ok.get_oem_extension
                        .return_value.set_virtual_boot_device)
        mock_svbd_fail.side_effect = sushy.exceptions.SushyError
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot._set_boot_device(task, boot_devices.CDROM)
        self.assertFalse(mock_system.called)
        mock_svbd_fail.assert_called_once_with(
            'cd', manager=mock_manager_fail, persistent=False,
            system=mock_system)
        mock_svbd_ok.assert_called_once_with(
            'cd', manager=mock_manager_ok, persistent=False,
            system=mock_system)

    def test__set_boot_device_no_manager(self, mock_redfish_utils):
        mock_system = mock_redfish_utils.get_system.return_value
        mock_system.managers = []
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.RedfishError,
                              task.driver.boot._set_boot_device,
                              task, boot_devices.CDROM)

ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_periodic_task.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Test class for DRAC periodic tasks """ import mock from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import raid as drac_raid from ironic.tests.unit.db import base as db_base from ironic.tests.unit.drivers.modules.drac import utils as test_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = test_utils.INFO_DICT class DracPeriodicTaskTestCase(db_base.DbTestCase): def setUp(self): super(DracPeriodicTaskTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) self.raid = drac_raid.DracRAID() self.raid_wsman = drac_raid.DracWSManRAID() self.job = { 'id': 'JID_001436912645', 'name': 'ConfigBIOS:BIOS.Setup.1-1', 'start_time': '00000101000000', 'until_time': 'TIME_NA', 'message': 'Job in progress', 'status': 'Running', 'percent_complete': 34} self.virtual_disk = { 'id': 'Disk.Virtual.0:RAID.Integrated.1-1', 'name': 'disk 0', 'description': 'Virtual Disk 0 on Integrated RAID Controller 1', 'controller': 'RAID.Integrated.1-1', 'raid_level': '1', 'size_mb': 571776, 'status': 'ok', 'raid_status': 'online', 'span_depth': 1, 'span_length': 2, 'pending_operations': None } def test__query_raid_config_job_status_drac(self): self._test__query_raid_config_job_status(self.raid) def test__query_raid_config_job_status_drac_wsman(self): self._test__query_raid_config_job_status(self.raid_wsman) @mock.patch.object(task_manager, 'acquire', autospec=True) def _test__query_raid_config_job_status(self, raid, mock_acquire): # mock node.driver_internal_info driver_internal_info = {'raid_config_job_ids': ['42']} self.node.driver_internal_info = driver_internal_info self.node.save() # mock manager mock_manager = mock.Mock() node_list = [(self.node.uuid, 'idrac', '', {'raid_config_job_ids': ['42']})] mock_manager.iter_nodes.return_value = node_list # mock task_manager.acquire 
task = mock.Mock(node=self.node, driver=mock.Mock(raid=raid)) mock_acquire.return_value = mock.MagicMock( __enter__=mock.MagicMock(return_value=task)) # mock _check_node_raid_jobs raid._check_node_raid_jobs = mock.Mock() raid._query_raid_config_job_status(mock_manager, self.context) raid._check_node_raid_jobs.assert_called_once_with(task) def test__query_raid_config_job_status_no_config_jobs_drac(self): self._test__query_raid_config_job_status_no_config_jobs(self.raid) def test__query_raid_config_job_status_no_config_jobs_drac_wsman(self): self._test__query_raid_config_job_status_no_config_jobs( self.raid_wsman) @mock.patch.object(task_manager, 'acquire', autospec=True) def _test__query_raid_config_job_status_no_config_jobs(self, raid, mock_acquire): # mock manager mock_manager = mock.Mock() node_list = [(self.node.uuid, 'idrac', '', {})] mock_manager.iter_nodes.return_value = node_list # mock task_manager.acquire task = mock.Mock(node=self.node, driver=mock.Mock(raid=raid)) mock_acquire.return_value = mock.MagicMock( __enter__=mock.MagicMock(return_value=task)) # mock _check_node_raid_jobs raid._check_node_raid_jobs = mock.Mock() raid._query_raid_config_job_status(mock_manager, None) self.assertEqual(0, raid._check_node_raid_jobs.call_count) def test__query_raid_config_job_status_no_nodes(self): # mock manager mock_manager = mock.Mock() node_list = [] mock_manager.iter_nodes.return_value = node_list # mock _check_node_raid_jobs self.raid._check_node_raid_jobs = mock.Mock() self.raid._query_raid_config_job_status(mock_manager, None) self.assertEqual(0, self.raid._check_node_raid_jobs.call_count) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) def test__check_node_raid_jobs_without_update(self, mock_get_drac_client): # mock node.driver_internal_info driver_internal_info = {'raid_config_job_ids': ['42']} self.node.driver_internal_info = driver_internal_info self.node.save() # mock task task = mock.Mock(node=self.node) # mock 
dracclient.get_job mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.get_job.return_value = test_utils.dict_to_namedtuple( values=self.job) self.raid._check_node_raid_jobs(task) mock_client.get_job.assert_called_once_with('42') self.assertEqual(0, mock_client.list_virtual_disks.call_count) self.node.refresh() self.assertEqual(['42'], self.node.driver_internal_info['raid_config_job_ids']) self.assertEqual({}, self.node.raid_config) self.assertIs(False, self.node.maintenance) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) @mock.patch.object(drac_raid.DracRAID, 'get_logical_disks', spec_set=True, autospec=True) def _test__check_node_raid_jobs_with_completed_job( self, mock_notify_conductor_resume, mock_get_logical_disks, mock_get_drac_client): expected_logical_disk = {'size_gb': 558, 'raid_level': '1', 'name': 'disk 0'} # mock node.driver_internal_info driver_internal_info = {'raid_config_job_ids': ['42']} self.node.driver_internal_info = driver_internal_info self.node.save() # mock task task = mock.Mock(node=self.node, context=self.context) # mock dracclient.get_job self.job['status'] = 'Completed' mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.get_job.return_value = test_utils.dict_to_namedtuple( values=self.job) # mock driver.raid.get_logical_disks mock_get_logical_disks.return_value = { 'logical_disks': [expected_logical_disk] } self.raid._check_node_raid_jobs(task) mock_client.get_job.assert_called_once_with('42') self.node.refresh() self.assertEqual([], self.node.driver_internal_info['raid_config_job_ids']) self.assertEqual([expected_logical_disk], self.node.raid_config['logical_disks']) mock_notify_conductor_resume.assert_called_once_with(task) @mock.patch.object(manager_utils, 'notify_conductor_resume_clean') def test__check_node_raid_jobs_with_completed_job_in_clean( self, mock_notify_conductor_resume): self.node.clean_step = {'foo': 'bar'} 
self.node.save() self._test__check_node_raid_jobs_with_completed_job( mock_notify_conductor_resume) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy') def test__check_node_raid_jobs_with_completed_job_in_deploy( self, mock_notify_conductor_resume): self._test__check_node_raid_jobs_with_completed_job( mock_notify_conductor_resume) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) def test__check_node_raid_jobs_with_failed_job(self, mock_get_drac_client): # mock node.driver_internal_info and node.clean_step driver_internal_info = {'raid_config_job_ids': ['42']} self.node.driver_internal_info = driver_internal_info self.node.clean_step = {'foo': 'bar'} self.node.save() # mock task task = mock.Mock(node=self.node, context=self.context) # mock dracclient.get_job self.job['status'] = 'Failed' self.job['message'] = 'boom' mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.get_job.return_value = test_utils.dict_to_namedtuple( values=self.job) # mock dracclient.list_virtual_disks mock_client.list_virtual_disks.return_value = [ test_utils.dict_to_namedtuple(values=self.virtual_disk)] self.raid._check_node_raid_jobs(task) mock_client.get_job.assert_called_once_with('42') self.assertEqual(0, mock_client.list_virtual_disks.call_count) self.node.refresh() self.assertEqual([], self.node.driver_internal_info['raid_config_job_ids']) self.assertEqual({}, self.node.raid_config) task.process_event.assert_called_once_with('fail') @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) @mock.patch.object(drac_raid.DracRAID, 'get_logical_disks', spec_set=True, autospec=True) def _test__check_node_raid_jobs_with_completed_job_already_failed( self, mock_notify_conductor_resume, mock_get_logical_disks, mock_get_drac_client): expected_logical_disk = {'size_gb': 558, 'raid_level': '1', 'name': 'disk 0'} # mock node.driver_internal_info driver_internal_info = {'raid_config_job_ids': 
['42'], 'raid_config_job_failure': True} self.node.driver_internal_info = driver_internal_info self.node.save() # mock task task = mock.Mock(node=self.node, context=self.context) # mock dracclient.get_job self.job['status'] = 'Completed' mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.get_job.return_value = test_utils.dict_to_namedtuple( values=self.job) # mock driver.raid.get_logical_disks mock_get_logical_disks.return_value = { 'logical_disks': [expected_logical_disk] } self.raid._check_node_raid_jobs(task) mock_client.get_job.assert_called_once_with('42') self.node.refresh() self.assertEqual([], self.node.driver_internal_info['raid_config_job_ids']) self.assertNotIn('raid_config_job_failure', self.node.driver_internal_info) self.assertNotIn('logical_disks', self.node.raid_config) task.process_event.assert_called_once_with('fail') self.assertFalse(mock_notify_conductor_resume.called) @mock.patch.object(manager_utils, 'notify_conductor_resume_clean') def test__check_node_raid_jobs_with_completed_job_already_failed_in_clean( self, mock_notify_conductor_resume): self.node.clean_step = {'foo': 'bar'} self.node.save() self._test__check_node_raid_jobs_with_completed_job_already_failed( mock_notify_conductor_resume) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy') def test__check_node_raid_jobs_with_completed_job_already_failed_in_deploy( self, mock_notify_conductor_resume): self._test__check_node_raid_jobs_with_completed_job_already_failed( mock_notify_conductor_resume) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) @mock.patch.object(drac_raid.DracRAID, 'get_logical_disks', spec_set=True, autospec=True) def _test__check_node_raid_jobs_with_multiple_jobs_completed( self, mock_notify_conductor_resume, mock_get_logical_disks, mock_get_drac_client): expected_logical_disk = {'size_gb': 558, 'raid_level': '1', 'name': 'disk 0'} # mock node.driver_internal_info driver_internal_info = 
{'raid_config_job_ids': ['42', '36']} self.node.driver_internal_info = driver_internal_info self.node.save() # mock task task = mock.Mock(node=self.node, context=self.context) # mock dracclient.get_job self.job['status'] = 'Completed' mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.get_job.return_value = test_utils.dict_to_namedtuple( values=self.job) # mock driver.raid.get_logical_disks mock_get_logical_disks.return_value = { 'logical_disks': [expected_logical_disk] } self.raid._check_node_raid_jobs(task) mock_client.get_job.assert_has_calls([mock.call('42'), mock.call('36')]) self.node.refresh() self.assertEqual([], self.node.driver_internal_info['raid_config_job_ids']) self.assertNotIn('raid_config_job_failure', self.node.driver_internal_info) self.assertEqual([expected_logical_disk], self.node.raid_config['logical_disks']) mock_notify_conductor_resume.assert_called_once_with(task) @mock.patch.object(manager_utils, 'notify_conductor_resume_clean') def test__check_node_raid_jobs_with_multiple_jobs_completed_in_clean( self, mock_notify_conductor_resume): self.node.clean_step = {'foo': 'bar'} self.node.save() self._test__check_node_raid_jobs_with_multiple_jobs_completed( mock_notify_conductor_resume) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy') def test__check_node_raid_jobs_with_multiple_jobs_completed_in_deploy( self, mock_notify_conductor_resume): self._test__check_node_raid_jobs_with_multiple_jobs_completed( mock_notify_conductor_resume) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) @mock.patch.object(drac_raid.DracRAID, 'get_logical_disks', spec_set=True, autospec=True) def _test__check_node_raid_jobs_with_multiple_jobs_failed( self, mock_notify_conductor_resume, mock_get_logical_disks, mock_get_drac_client): expected_logical_disk = {'size_gb': 558, 'raid_level': '1', 'name': 'disk 0'} # mock node.driver_internal_info driver_internal_info = {'raid_config_job_ids': 
['42', '36']} self.node.driver_internal_info = driver_internal_info self.node.save() # mock task task = mock.Mock(node=self.node, context=self.context) # mock dracclient.get_job self.job['status'] = 'Completed' failed_job = self.job.copy() failed_job['status'] = 'Failed' failed_job['message'] = 'boom' mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.get_job.side_effect = [ test_utils.dict_to_namedtuple(values=failed_job), test_utils.dict_to_namedtuple(values=self.job)] # mock driver.raid.get_logical_disks mock_get_logical_disks.return_value = { 'logical_disks': [expected_logical_disk] } self.raid._check_node_raid_jobs(task) mock_client.get_job.assert_has_calls([mock.call('42'), mock.call('36')]) self.node.refresh() self.assertEqual([], self.node.driver_internal_info['raid_config_job_ids']) self.assertNotIn('raid_config_job_failure', self.node.driver_internal_info) self.assertNotIn('logical_disks', self.node.raid_config) task.process_event.assert_called_once_with('fail') self.assertFalse(mock_notify_conductor_resume.called) @mock.patch.object(manager_utils, 'notify_conductor_resume_clean') def test__check_node_raid_jobs_with_multiple_jobs_failed_in_clean( self, mock_notify_conductor_resume): self.node.clean_step = {'foo': 'bar'} self.node.save() self._test__check_node_raid_jobs_with_multiple_jobs_failed( mock_notify_conductor_resume) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy') def test__check_node_raid_jobs_with_multiple_jobs_failed_in_deploy( self, mock_notify_conductor_resume): self._test__check_node_raid_jobs_with_multiple_jobs_failed( mock_notify_conductor_resume) ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_inspect.py0000664000175000017500000004241413652514273025640 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Test class for DRAC inspection interface
"""

from dracclient import exceptions as drac_exceptions
import mock

from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules.drac import common as drac_common
from ironic.drivers.modules.drac import inspect as drac_inspect
from ironic import objects
from ironic.tests.unit.drivers.modules.drac import utils as test_utils
from ironic.tests.unit.objects import utils as obj_utils

INFO_DICT = test_utils.INFO_DICT


class DracInspectionTestCase(test_utils.BaseDracTest):

    def setUp(self):
        super(DracInspectionTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               driver='idrac',
                                               driver_info=INFO_DICT)
        memory = [{'id': 'DIMM.Socket.A1',
                   'size_mb': 16384,
                   'speed': 2133,
                   'manufacturer': 'Samsung',
                   'model': 'DDR4 DIMM',
                   'state': 'ok'},
                  {'id': 'DIMM.Socket.B1',
                   'size_mb': 16384,
                   'speed': 2133,
                   'manufacturer': 'Samsung',
                   'model': 'DDR4 DIMM',
                   'state': 'ok'}]
        cpus = [{'id': 'CPU.Socket.1',
                 'cores': 6,
                 'speed': 2400,
                 'model': 'Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz',
                 'state': 'ok',
                 'ht_enabled': True,
                 'turbo_enabled': True,
                 'vt_enabled': True,
                 'arch64': True},
                {'id': 'CPU.Socket.2',
                 'cores': 6,
                 'speed': 2400,
                 'model': 'Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz',
                 'state': 'ok',
                 'ht_enabled': False,
                 'turbo_enabled': True,
                 'vt_enabled': True,
                 'arch64': True}]
        virtual_disks = [
            {'id': 'Disk.Virtual.0:RAID.Integrated.1-1',
             'name': 'disk 0',
             'description': 'Virtual Disk 0 on Integrated RAID Controller 1',
             'controller': 'RAID.Integrated.1-1',
             'raid_level': '1',
             'size_mb': 1143552,
             'state': 'ok',
             'raid_state': 'online',
             'span_depth': 1,
             'span_length': 2,
             'pending_operations': None}]
        physical_disks = [
            {'id': 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1',
             'description': ('Disk 1 in Backplane 1 of '
                             'Integrated RAID Controller 1'),
             'controller': 'RAID.Integrated.1-1',
             'manufacturer': 'SEAGATE',
             'model': 'ST600MM0006',
             'media_type': 'hdd',
             'interface_type': 'sas',
             'size_mb': 571776,
             'free_size_mb': 571776,
             'serial_number': 'S0M3EY2Z',
             'firmware_version': 'LS0A',
             'state': 'ok',
             'raid_state': 'ready'},
            {'id': 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1',
             'description': ('Disk 1 in Backplane 1 of '
                             'Integrated RAID Controller 1'),
             'controller': 'RAID.Integrated.1-1',
             'manufacturer': 'SEAGATE',
             'model': 'ST600MM0006',
             'media_type': 'hdd',
             'interface_type': 'sas',
             'size_mb': 285888,
             'free_size_mb': 285888,
             'serial_number': 'S0M3EY2Z',
             'firmware_version': 'LS0A',
             'state': 'ok',
             'raid_state': 'ready'}]
        nics = [
            {'id': 'NIC.Embedded.1-1-1',
             'mac': 'B0:83:FE:C6:6F:A1',
             'model': 'Broadcom Gigabit Ethernet BCM5720 - B0:83:FE:C6:6F:A1',
             'speed': '1000 Mbps',
             'duplex': 'full duplex',
             'media_type': 'Base T'},
            {'id': 'NIC.Embedded.2-1-1',
             'mac': 'B0:83:FE:C6:6F:A2',
             'model': 'Broadcom Gigabit Ethernet BCM5720 - B0:83:FE:C6:6F:A2',
             'speed': '1000 Mbps',
             'duplex': 'full duplex',
             'media_type': 'Base T'}]
        bios_boot_settings = {'BootMode': {'current_value': 'Bios'}}
        uefi_boot_settings = {'BootMode': {'current_value': 'Uefi'},
                              'PxeDev1EnDis': {'current_value': 'Enabled'},
                              'PxeDev2EnDis': {'current_value': 'Disabled'},
                              'PxeDev3EnDis': {'current_value': 'Disabled'},
                              'PxeDev4EnDis': {'current_value': 'Disabled'},
                              'PxeDev1Interface': {
                                  'current_value': 'NIC.Embedded.1-1-1'},
                              'PxeDev2Interface': None,
                              'PxeDev3Interface': None,
                              'PxeDev4Interface': None}
        nic_settings = {'LegacyBootProto': {'current_value': 'PXE'},
                        'FQDD': 'NIC.Embedded.1-1-1'}

        self.memory = [test_utils.dict_to_namedtuple(values=m)
                       for m in memory]
        self.cpus = [test_utils.dict_to_namedtuple(values=c)
                     for c in cpus]
        self.virtual_disks = [test_utils.dict_to_namedtuple(values=vd)
                              for vd in virtual_disks]
        self.physical_disks = [test_utils.dict_to_namedtuple(values=pd)
                               for pd in physical_disks]
        self.nics = [test_utils.dict_to_namedtuple(values=n) for n in nics]
        self.bios_boot_settings = test_utils.dict_of_object(bios_boot_settings)
        self.uefi_boot_settings = test_utils.dict_of_object(uefi_boot_settings)
        self.nic_settings = test_utils.dict_of_object(nic_settings)

    def test_get_properties(self):
        expected = drac_common.COMMON_PROPERTIES
        driver = drac_inspect.DracInspect()
        self.assertEqual(expected, driver.get_properties())

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(objects.Port, 'create', spec_set=True, autospec=True)
    def test_inspect_hardware(self, mock_port_create,
                              mock_get_drac_client):
        expected_node_properties = {
            'memory_mb': 32768,
            'local_gb': 1116,
            'cpus': 18,
            'cpu_arch': 'x86_64',
            'capabilities': 'boot_mode:uefi'}
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_memory.return_value = self.memory
        mock_client.list_cpus.return_value = self.cpus
        mock_client.list_virtual_disks.return_value = self.virtual_disks
        mock_client.list_nics.return_value = self.nics
        mock_client.list_bios_settings.return_value = self.uefi_boot_settings

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            return_value = task.driver.inspect.inspect_hardware(task)

        self.node.refresh()
        self.assertEqual(expected_node_properties, self.node.properties)
        self.assertEqual(states.MANAGEABLE, return_value)
        self.assertEqual(2, mock_port_create.call_count)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(objects.Port, 'create', spec_set=True, autospec=True)
    def test_inspect_hardware_fail(self, mock_port_create,
                                   mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_memory.return_value = self.memory
        mock_client.list_cpus.return_value = self.cpus
        mock_client.list_virtual_disks.side_effect = (
            drac_exceptions.BaseClientException('boom'))
        mock_client.list_bios_settings.return_value = self.bios_boot_settings

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.HardwareInspectionFailure,
                              task.driver.inspect.inspect_hardware, task)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(objects.Port, 'create', spec_set=True, autospec=True)
    def test_inspect_hardware_no_virtual_disk(self, mock_port_create,
                                              mock_get_drac_client):
        expected_node_properties = {
            'memory_mb': 32768,
            'local_gb': 279,
            'cpus': 18,
            'cpu_arch': 'x86_64',
            'capabilities': 'boot_mode:uefi'}
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_memory.return_value = self.memory
        mock_client.list_cpus.return_value = self.cpus
        mock_client.list_virtual_disks.return_value = []
        mock_client.list_physical_disks.return_value = self.physical_disks
        mock_client.list_nics.return_value = self.nics
        mock_client.list_bios_settings.return_value = self.uefi_boot_settings

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            return_value = task.driver.inspect.inspect_hardware(task)

        self.node.refresh()
        self.assertEqual(expected_node_properties, self.node.properties)
        self.assertEqual(states.MANAGEABLE, return_value)
        self.assertEqual(2, mock_port_create.call_count)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(objects.Port, 'create', spec_set=True, autospec=True)
    def test_inspect_hardware_no_cpu(
            self, mock_port_create, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_memory.return_value = self.memory
        mock_client.list_cpus.return_value = []
        mock_client.list_virtual_disks.return_value = []
        mock_client.list_physical_disks.return_value = self.physical_disks
        mock_client.list_nics.return_value = self.nics
        mock_client.list_bios_settings.return_value = self.uefi_boot_settings

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.HardwareInspectionFailure,
                              task.driver.inspect.inspect_hardware, task)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(objects.Port, 'create', spec_set=True, autospec=True)
    def test_inspect_hardware_with_existing_ports(self, mock_port_create,
                                                  mock_get_drac_client):
        expected_node_properties = {
            'memory_mb': 32768,
            'local_gb': 1116,
            'cpus': 18,
            'cpu_arch': 'x86_64',
            'capabilities': 'boot_mode:uefi'}
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_memory.return_value = self.memory
        mock_client.list_cpus.return_value = self.cpus
        mock_client.list_virtual_disks.return_value = self.virtual_disks
        mock_client.list_nics.return_value = self.nics
        mock_client.list_bios_settings.return_value = self.uefi_boot_settings
        mock_port_create.side_effect = exception.MACAlreadyExists("boom")

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            return_value = task.driver.inspect.inspect_hardware(task)

        self.node.refresh()
        self.assertEqual(expected_node_properties, self.node.properties)
        self.assertEqual(states.MANAGEABLE, return_value)
        self.assertEqual(2, mock_port_create.call_count)

    def test__guess_root_disk(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            root_disk = task.driver.inspect._guess_root_disk(
                self.physical_disks)

        self.assertEqual(285888, root_disk.size_mb)

    def test__calculate_cpus(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            cpu = task.driver.inspect._calculate_cpus(
                self.cpus[0])

        self.assertEqual(12, cpu)

    def test__calculate_cpus_without_ht_enabled(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            cpu = task.driver.inspect._calculate_cpus(
                self.cpus[1])

        self.assertEqual(6, cpu)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    def test__get_pxe_dev_nics_with_UEFI_boot_mode(self,
                                                   mock_get_drac_client):
        expected_pxe_nic = self.uefi_boot_settings[
            'PxeDev1Interface'].current_value
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_bios_settings.return_value = self.uefi_boot_settings

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            pxe_dev_nics = task.driver.inspect._get_pxe_dev_nics(
                mock_client, self.nics, self.node)

        self.assertEqual(expected_pxe_nic, pxe_dev_nics[0])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    def test__get_pxe_dev_nics_with_BIOS_boot_mode(self,
                                                   mock_get_drac_client):
        expected_pxe_nic = self.nic_settings['FQDD']
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_bios_settings.return_value = self.bios_boot_settings
        mock_client.list_nic_settings.return_value = self.nic_settings

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            pxe_dev_nics = task.driver.inspect._get_pxe_dev_nics(
                mock_client, self.nics, self.node)

        self.assertEqual(expected_pxe_nic, pxe_dev_nics[0])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    def test__get_pxe_dev_nics_list_boot_setting_failure(self,
                                                         mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_bios_settings.side_effect = (
            drac_exceptions.BaseClientException('foo'))

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.HardwareInspectionFailure,
                              task.driver.inspect._get_pxe_dev_nics,
                              mock_client, self.nics, self.node)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    def test__get_pxe_dev_nics_list_nic_setting_failure(self,
                                                        mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_bios_settings.return_value = self.bios_boot_settings
        mock_client.list_nic_settings.side_effect = (
            drac_exceptions.BaseClientException('bar'))

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.HardwareInspectionFailure,
                              task.driver.inspect._get_pxe_dev_nics,
                              mock_client, self.nics, self.node)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    def test__get_pxe_dev_nics_with_empty_list(self, mock_get_drac_client):
        expected_pxe_nic = []
        nic_setting = []
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_client.list_bios_settings.return_value = self.bios_boot_settings
        mock_client.list_nic_settings.return_value = nic_setting

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            pxe_dev_nics = task.driver.inspect._get_pxe_dev_nics(
                mock_client, self.nics, self.node)

        self.assertEqual(expected_pxe_nic, pxe_dev_nics)

ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_job.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Test class for DRAC job specific methods """ from dracclient import exceptions as drac_exceptions import mock from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import job as drac_job from ironic.tests.unit.drivers.modules.drac import utils as test_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = test_utils.INFO_DICT @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracJobTestCase(test_utils.BaseDracTest): def setUp(self): super(DracJobTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) self.job_dict = { 'id': 'JID_001436912645', 'name': 'ConfigBIOS:BIOS.Setup.1-1', 'start_time': '00000101000000', 'until_time': 'TIME_NA', 'message': 'Job in progress', 'status': 'Running', 'percent_complete': 34} self.job = test_utils.make_job(self.job_dict) def test_get_job(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.get_job.return_value = self.job job = drac_job.get_job(self.node, 'foo') mock_client.get_job.assert_called_once_with('foo') self.assertEqual(self.job, job) def test_get_job_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = exception.DracOperationError('boom') mock_client.get_job.side_effect = exc self.assertRaises(exception.DracOperationError, drac_job.get_job, self.node, 'foo') def test_list_unfinished_jobs(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_jobs.return_value = [self.job] jobs = drac_job.list_unfinished_jobs(self.node) mock_client.list_jobs.assert_called_once_with(only_unfinished=True) self.assertEqual([self.job], jobs) def test_list_unfinished_jobs_fail(self, mock_get_drac_client): mock_client = 
mock.Mock() mock_get_drac_client.return_value = mock_client exc = exception.DracOperationError('boom') mock_client.list_jobs.side_effect = exc self.assertRaises(exception.DracOperationError, drac_job.list_unfinished_jobs, self.node) def test_validate_job_queue(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_jobs.return_value = [] drac_job.validate_job_queue(self.node) mock_client.list_jobs.assert_called_once_with(only_unfinished=True) def test_validate_job_queue_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = drac_exceptions.BaseClientException('boom') mock_client.list_jobs.side_effect = exc self.assertRaises(exception.DracOperationError, drac_job.validate_job_queue, self.node) def test_validate_job_queue_invalid(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_jobs.return_value = [self.job] self.assertRaises(exception.DracOperationError, drac_job.validate_job_queue, self.node) def test_validate_job_queue_name_prefix(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_jobs.return_value = [self.job] drac_job.validate_job_queue(self.node, name_prefix='Fake') mock_client.list_jobs.assert_called_once_with(only_unfinished=True) def test_validate_job_queue_name_prefix_invalid(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_jobs.return_value = [self.job] self.assertRaises(exception.DracOperationError, drac_job.validate_job_queue, self.node, name_prefix='ConfigBIOS') @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracVendorPassthruJobTestCase(test_utils.BaseDracTest): def setUp(self): super(DracVendorPassthruJobTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, 
driver='idrac', driver_info=INFO_DICT) self.job_dict = { 'id': 'JID_001436912645', 'name': 'ConfigBIOS:BIOS.Setup.1-1', 'start_time': '00000101000000', 'until_time': 'TIME_NA', 'message': 'Job in progress', 'status': 'Running', 'percent_complete': 34} self.job = test_utils.make_job(self.job_dict) def test_list_unfinished_jobs(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_jobs.return_value = [self.job] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: resp = task.driver.vendor.list_unfinished_jobs(task) mock_client.list_jobs.assert_called_once_with(only_unfinished=True) self.assertEqual([self.job_dict], resp['unfinished_jobs']) def test_list_unfinished_jobs_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = exception.DracOperationError('boom') mock_client.list_jobs.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.vendor.list_unfinished_jobs, task) ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_raid.py0000664000175000017500000026473313652514273025124 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Test class for DRAC RAID interface """ from dracclient import constants from dracclient import exceptions as drac_exceptions import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import job as drac_job from ironic.drivers.modules.drac import raid as drac_raid from ironic.tests.unit.drivers.modules.drac import utils as test_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = test_utils.INFO_DICT @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracQueryRaidConfigurationTestCase(test_utils.BaseDracTest): def setUp(self): super(DracQueryRaidConfigurationTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) raid_controller_dict = { 'id': 'RAID.Integrated.1-1', 'description': 'Integrated RAID Controller 1', 'manufacturer': 'DELL', 'model': 'PERC H710 Mini', 'primary_status': 'ok', 'firmware_version': '21.3.0-0009', 'bus': '1', 'supports_realtime': True} self.raid_controller = test_utils.make_raid_controller( raid_controller_dict) virtual_disk_dict = { 'id': 'Disk.Virtual.0:RAID.Integrated.1-1', 'name': 'disk 0', 'description': 'Virtual Disk 0 on Integrated RAID Controller 1', 'controller': 'RAID.Integrated.1-1', 'raid_level': '1', 'size_mb': 571776, 'status': 'ok', 'raid_status': 'online', 'span_depth': 1, 'span_length': 2, 'pending_operations': None, 'physical_disks': []} self.virtual_disk = test_utils.make_virtual_disk(virtual_disk_dict) physical_disk_dict = { 'id': 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'description': ('Disk 1 in Backplane 1 of ' 'Integrated RAID Controller 1'), 'controller': 'RAID.Integrated.1-1', 'manufacturer': 'SEAGATE', 'model': 'ST600MM0006', 'media_type': 'hdd', 'interface_type': 'sas', 'size_mb': 571776, 'free_size_mb': 571776, 
'serial_number': 'S0M3EY2Z', 'firmware_version': 'LS0A', 'status': 'ok', 'raid_status': 'ready', 'sas_address': '500056B37789ABE3', 'device_protocol': None} self.physical_disk = test_utils.make_physical_disk(physical_disk_dict) def test_list_raid_controllers(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_raid_controllers.return_value = [self.raid_controller] raid_controllers = drac_raid.list_raid_controllers(self.node) mock_client.list_raid_controllers.assert_called_once_with() self.assertEqual(self.raid_controller, raid_controllers[0]) def test_list_raid_controllers_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = exception.DracOperationError('boom') mock_client.list_raid_controllers.side_effect = exc self.assertRaises(exception.DracOperationError, drac_raid.list_raid_controllers, self.node) def test_list_virtual_disks(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_virtual_disks.return_value = [self.virtual_disk] virtual_disks = drac_raid.list_virtual_disks(self.node) mock_client.list_virtual_disks.assert_called_once_with() self.assertEqual(self.virtual_disk, virtual_disks[0]) def test_list_virtual_disks_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = exception.DracOperationError('boom') mock_client.list_virtual_disks.side_effect = exc self.assertRaises(exception.DracOperationError, drac_raid.list_virtual_disks, self.node) def test_list_physical_disks(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_physical_disks.return_value = [self.physical_disk] physical_disks = drac_raid.list_physical_disks(self.node) mock_client.list_physical_disks.assert_called_once_with() self.assertEqual(self.physical_disk, physical_disks[0]) def 
test_list_physical_disks_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = exception.DracOperationError('boom') mock_client.list_physical_disks.side_effect = exc self.assertRaises(exception.DracOperationError, drac_raid.list_physical_disks, self.node) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracManageVirtualDisksTestCase(test_utils.BaseDracTest): def setUp(self): super(DracManageVirtualDisksTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_create_virtual_disk(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.create_virtual_disk( self.node, 'controller', ['disk1', 'disk2'], '1+0', 43008) mock_validate_job_queue.assert_called_once_with( self.node, name_prefix='Config:RAID:controller') mock_client.create_virtual_disk.assert_called_once_with( 'controller', ['disk1', 'disk2'], '1+0', 43008, None, None, None) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_create_virtual_disk_with_optional_attrs(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.create_virtual_disk( self.node, 'controller', ['disk1', 'disk2'], '1+0', 43008, disk_name='name', span_length=3, span_depth=2) mock_validate_job_queue.assert_called_once_with( self.node, name_prefix='Config:RAID:controller') mock_client.create_virtual_disk.assert_called_once_with( 'controller', ['disk1', 'disk2'], '1+0', 43008, 'name', 3, 2) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_create_virtual_disk_fail(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() 
mock_get_drac_client.return_value = mock_client exc = drac_exceptions.BaseClientException('boom') mock_client.create_virtual_disk.side_effect = exc self.assertRaises( exception.DracOperationError, drac_raid.create_virtual_disk, self.node, 'controller', ['disk1', 'disk2'], '1+0', 42) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_delete_virtual_disk(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.delete_virtual_disk(self.node, 'disk1') mock_validate_job_queue.assert_called_once_with(self.node) mock_client.delete_virtual_disk.assert_called_once_with('disk1') @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_delete_virtual_disk_fail(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = drac_exceptions.BaseClientException('boom') mock_client.delete_virtual_disk.side_effect = exc self.assertRaises( exception.DracOperationError, drac_raid.delete_virtual_disk, self.node, 'disk1') @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test__reset_raid_config(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid._reset_raid_config( self.node, 'controller') mock_validate_job_queue.assert_called_once_with( self.node, name_prefix='Config:RAID:controller') mock_client.reset_raid_config.assert_called_once_with( 'controller') @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test__reset_raid_config_fail(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = drac_exceptions.BaseClientException('boom') mock_client.reset_raid_config.side_effect = exc self.assertRaises( exception.DracOperationError, 
drac_raid._reset_raid_config, self.node, 'RAID.Integrated.1-1') @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_clear_foreign_config(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.clear_foreign_config( self.node, 'RAID.Integrated.1-1') mock_validate_job_queue.assert_called_once_with( self.node, 'Config:RAID:RAID.Integrated.1-1') mock_client.clear_foreign_config.assert_called_once_with( 'RAID.Integrated.1-1') @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_clear_foreign_config_fail(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = drac_exceptions.BaseClientException('boom') mock_client.clear_foreign_config.side_effect = exc self.assertRaises( exception.DracOperationError, drac_raid.clear_foreign_config, self.node, 'RAID.Integrated.1-1') @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_change_physical_disk_state(self, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client controllers_to_physical_disk_ids = {'RAID.Integrated.1-1': [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1']} expected_change_disk_state = { 'is_reboot_required': True, 'conversion_results': { 'RAID.Integrated.1-1': {'is_reboot_required': 'optional', 'is_commit_required': True}}, 'commit_required_ids': ['RAID.Integrated.1-1']} mode = constants.RaidStatus.raid mock_client.change_physical_disk_state.return_value = \ expected_change_disk_state actual_change_disk_state = drac_raid.change_physical_disk_state( self.node, mode=mode, controllers_to_physical_disk_ids=controllers_to_physical_disk_ids) mock_validate_job_queue.assert_called_once_with(self.node) 
mock_client.change_physical_disk_state.assert_called_once_with( mode, controllers_to_physical_disk_ids) self.assertEqual(expected_change_disk_state, actual_change_disk_state) @mock.patch.object(drac_raid, 'change_physical_disk_state', spec_set=True, autospec=True) @mock.patch.object(drac_raid, 'commit_config', spec_set=True, autospec=True) def test__change_physical_disk_mode(self, mock_commit_config, mock_change_physical_disk_state, mock_get_drac_client): mock_commit_config.return_value = '42' mock_change_physical_disk_state.return_value = { 'is_reboot_required': constants.RebootRequired.optional, 'conversion_results': { 'RAID.Integrated.1-1': { 'is_reboot_required': constants.RebootRequired.optional, 'is_commit_required': True}}, 'commit_required_ids': ['RAID.Integrated.1-1']} actual_change_disk_state = drac_raid._change_physical_disk_mode( self.node, mode=constants.RaidStatus.raid) self.assertEqual(['42'], self.node.driver_internal_info['raid_config_job_ids']) self.assertEqual('completed', self.node.driver_internal_info['raid_config_substep']) self.assertEqual( ['RAID.Integrated.1-1'], self.node.driver_internal_info['raid_config_parameters']) mock_commit_config.assert_called_once_with( self.node, raid_controller='RAID.Integrated.1-1', reboot=False, realtime=True) self.assertEqual(states.DEPLOYWAIT, actual_change_disk_state) def test_commit_config(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.commit_config(self.node, 'controller1') mock_client.commit_pending_raid_changes.assert_called_once_with( raid_controller='controller1', reboot=False, realtime=False) def test_commit_config_with_reboot(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.commit_config(self.node, 'controller1', reboot=True, realtime=False) mock_client.commit_pending_raid_changes.assert_called_once_with( raid_controller='controller1', reboot=True, realtime=False) def 
test_commit_config_with_realtime(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.commit_config(self.node, 'RAID.Integrated.1-1', reboot=False, realtime=True) mock_client.commit_pending_raid_changes.assert_called_once_with( raid_controller='RAID.Integrated.1-1', reboot=False, realtime=True) def test_commit_config_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = drac_exceptions.BaseClientException('boom') mock_client.commit_pending_raid_changes.side_effect = exc self.assertRaises( exception.DracOperationError, drac_raid.commit_config, self.node, 'controller1') @mock.patch.object(drac_raid, 'commit_config', spec_set=True, autospec=True) def test__commit_to_controllers_with_config_job(self, mock_commit_config, mock_get_drac_client): controllers = [{'is_reboot_required': 'true', 'is_commit_required': True, 'raid_controller': 'AHCI.Slot.3-1'}] substep = "delete_foreign_config" mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_commit_config.return_value = "42" drac_raid._commit_to_controllers(self.node, controllers=controllers, substep=substep) self.assertEqual(1, mock_commit_config.call_count) self.assertEqual(['42'], self.node.driver_internal_info['raid_config_job_ids']) self.assertEqual(substep, self.node.driver_internal_info['raid_config_substep']) @mock.patch.object(drac_raid, 'commit_config', spec_set=True, autospec=True) def test__commit_to_controllers_without_config_job( self, mock_commit_config, mock_get_drac_client): controllers = [{'is_reboot_required': 'true', 'is_commit_required': False, 'raid_controller': 'AHCI.Slot.3-1'}] substep = "delete_foreign_config" mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_commit_config.return_value = None drac_raid._commit_to_controllers(self.node, controllers=controllers, substep=substep) self.assertEqual(0, mock_commit_config.call_count) 
self.assertNotIn('raid_config_job_ids', self.node.driver_internal_info) self.assertEqual(substep, self.node.driver_internal_info['raid_config_substep']) def test_abandon_config(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client drac_raid.abandon_config(self.node, 'controller1') mock_client.abandon_pending_raid_changes.assert_called_once_with( 'controller1') def test_abandon_config_fail(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = drac_exceptions.BaseClientException('boom') mock_client.abandon_pending_raid_changes.side_effect = exc self.assertRaises( exception.DracOperationError, drac_raid.abandon_config, self.node, 'controller1') class DracCreateRaidConfigurationHelpersTestCase(test_utils.BaseDracTest): def setUp(self): super(DracCreateRaidConfigurationHelpersTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) self.physical_disk = { 'id': 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'description': ('Disk 1 in Backplane 1 of ' 'Integrated RAID Controller 1'), 'controller': 'RAID.Integrated.1-1', 'manufacturer': 'SEAGATE', 'model': 'ST600MM0006', 'media_type': 'hdd', 'interface_type': 'sas', 'size_mb': 571776, 'free_size_mb': 571776, 'serial_number': 'S0M3EY2Z', 'firmware_version': 'LS0A', 'status': 'ok', 'raid_status': 'ready', 'sas_address': '500056B37789ABE3', 'device_protocol': None} self.physical_disks = [] for i in range(8): disk = self.physical_disk.copy() disk['id'] = ('Disk.Bay.%s:Enclosure.Internal.0-1:' 'RAID.Integrated.1-1' % i) disk['serial_number'] = 'serial%s' % i self.physical_disks.append(disk) self.root_logical_disk = { 'size_gb': 50, 'raid_level': '1', 'disk_type': 'hdd', 'interface_type': 'sas', 'volume_name': 'root_volume', 'is_root_volume': True } self.nonroot_logical_disks = [ {'size_gb': 100, 'raid_level': '5', 'disk_type': 'hdd', 'interface_type': 'sas', 
'volume_name': 'data_volume1'}, {'size_gb': 100, 'raid_level': '5', 'disk_type': 'hdd', 'interface_type': 'sas', 'volume_name': 'data_volume2'} ] self.logical_disks = ( [self.root_logical_disk] + self.nonroot_logical_disks) self.target_raid_configuration = {'logical_disks': self.logical_disks} self.node.target_raid_config = self.target_raid_configuration self.node.save() def _generate_physical_disks(self): physical_disks = [] for disk in self.physical_disks: physical_disks.append(test_utils.make_physical_disk(disk)) return physical_disks def test__filter_logical_disks_root_only(self): logical_disks = drac_raid._filter_logical_disks( self.target_raid_configuration['logical_disks'], True, False) self.assertEqual(1, len(logical_disks)) self.assertEqual('root_volume', logical_disks[0]['volume_name']) def test__filter_logical_disks_nonroot_only(self): logical_disks = drac_raid._filter_logical_disks( self.target_raid_configuration['logical_disks'], False, True) self.assertEqual(2, len(logical_disks)) self.assertEqual('data_volume1', logical_disks[0]['volume_name']) self.assertEqual('data_volume2', logical_disks[1]['volume_name']) def test__filter_logical_disks_exclude_all(self): logical_disks = drac_raid._filter_logical_disks( self.target_raid_configuration['logical_disks'], False, False) self.assertEqual(0, len(logical_disks)) def test__calculate_spans_for_2_disk_and_raid_level_1(self): raid_level = '1' disks_count = 2 spans_count = drac_raid._calculate_spans(raid_level, disks_count) self.assertEqual(1, spans_count) def test__calculate_spans_for_7_disk_and_raid_level_50(self): raid_level = '5+0' disks_count = 7 spans_count = drac_raid._calculate_spans(raid_level, disks_count) self.assertEqual(2, spans_count) def test__calculate_spans_for_7_disk_and_raid_level_10(self): raid_level = '1+0' disks_count = 7 spans_count = drac_raid._calculate_spans(raid_level, disks_count) self.assertEqual(3, spans_count) def test__calculate_spans_for_invalid_raid_level(self): raid_level = 
'foo' disks_count = 7 self.assertRaises(exception.DracOperationError, drac_raid._calculate_spans, raid_level, disks_count) def test__max_volume_size_mb(self): physical_disks = self._generate_physical_disks() physical_disk_free_space_mb = {} for disk in physical_disks: physical_disk_free_space_mb[disk] = disk.free_size_mb max_size = drac_raid._max_volume_size_mb( '5', physical_disks[0:3], physical_disk_free_space_mb) self.assertEqual(1143552, max_size) def test__volume_usage_per_disk_mb(self): logical_disk = { 'size_mb': 102400, 'raid_level': '5', 'disk_type': 'hdd', 'interface_type': 'sas', 'volume_name': 'data_volume1'} physical_disks = self._generate_physical_disks() usage_per_disk = drac_raid._volume_usage_per_disk_mb(logical_disk, physical_disks) self.assertEqual(14656, usage_per_disk) def test__find_configuration(self): logical_disks = [ {'size_mb': 102400, 'raid_level': '5', 'is_root_volume': True, 'disk_type': 'hdd'} ] physical_disks = self._generate_physical_disks() expected_controller = 'RAID.Integrated.1-1' expected_physical_disk_ids = [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1'] logical_disks = drac_raid._find_configuration(logical_disks, physical_disks, False) self.assertEqual(expected_controller, logical_disks[0]['controller']) self.assertEqual(expected_physical_disk_ids, logical_disks[0]['physical_disks']) def test__find_configuration_with_more_than_min_disks_for_raid_level(self): logical_disks = [ {'size_mb': 3072000, 'raid_level': '5', 'is_root_volume': True, 'disk_type': 'hdd'} ] physical_disks = self._generate_physical_disks() expected_controller = 'RAID.Integrated.1-1' expected_physical_disk_ids = [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1', 
'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1'] logical_disks = drac_raid._find_configuration(logical_disks, physical_disks, False) self.assertEqual(expected_controller, logical_disks[0]['controller']) self.assertEqual(expected_physical_disk_ids, logical_disks[0]['physical_disks']) def test__find_configuration_all_steps(self): logical_disks = [ # step 1 {'size_mb': 102400, 'raid_level': '1', 'physical_disks': [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1']}, # step 2 {'size_mb': 51200, 'raid_level': '5'}, # step 3 {'size_mb': 'MAX', 'raid_level': '0', 'physical_disks': [ 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1']}, ] physical_disks = self._generate_physical_disks() logical_disks = drac_raid._find_configuration(logical_disks, physical_disks, False) self.assertEqual(3, len(logical_disks)) # step 1 self.assertIn( {'raid_level': '1', 'size_mb': 102400, 'controller': 'RAID.Integrated.1-1', 'span_depth': 1, 'span_length': 2, 'physical_disks': [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1']}, logical_disks) # step 2 self.assertIn( {'raid_level': '5', 'size_mb': 51200, 'controller': 'RAID.Integrated.1-1', 'span_depth': 1, 'span_length': 3, 'physical_disks': [ 'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1']}, logical_disks) # step 3 self.assertIn( {'raid_level': '0', 'size_mb': 1143552, 'controller': 'RAID.Integrated.1-1', 'span_depth': 1, 'span_length': 2, 'physical_disks': [ 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1']}, logical_disks) def 
test__find_configuration_pending_delete(self): logical_disks = [ {'size_mb': 102400, 'raid_level': '5', 'is_root_volume': True, 'disk_type': 'hdd'} ] physical_disks = self._generate_physical_disks() # No free space, but deletion pending means they're still usable. physical_disks = [disk._replace(free_size_mb=0) for disk in physical_disks] expected_controller = 'RAID.Integrated.1-1' expected_physical_disk_ids = [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1'] logical_disks = drac_raid._find_configuration(logical_disks, physical_disks, True) self.assertEqual(expected_controller, logical_disks[0]['controller']) self.assertEqual(expected_physical_disk_ids, logical_disks[0]['physical_disks']) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True) def test__validate_volume_size_requested_more_than_actual_size( self, mock_list_physical_disks, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client self.logical_disk = { 'physical_disks': [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.7:Enclosure.Internal.0-1:RAID.Integrated.1-1'], 'raid_level': '1+0', 'is_root_volume': True, 'size_mb': 102400000, 'controller': 'RAID.Integrated.1-1'} self.logical_disks = [self.logical_disk.copy()] self.target_raid_configuration = {'logical_disks': self.logical_disks} self.node.target_raid_config = self.target_raid_configuration self.node.save() physical_disks = 
self._generate_physical_disks() mock_list_physical_disks.return_value = physical_disks processed_logical_disks = drac_raid._validate_volume_size( self.node, self.node.target_raid_config['logical_disks']) self.assertEqual(2287104, processed_logical_disks[0]['size_mb']) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True) def test__validate_volume_size_requested_less_than_actual_size( self, mock_list_physical_disks, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client self.logical_disk = { 'physical_disks': [ 'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.6:Enclosure.Internal.0-1:RAID.Integrated.1-1', 'Disk.Bay.7:Enclosure.Internal.0-1:RAID.Integrated.1-1'], 'raid_level': '1+0', 'is_root_volume': True, 'size_mb': 204800, 'controller': 'RAID.Integrated.1-1'} self.logical_disks = [self.logical_disk.copy()] self.target_raid_configuration = {'logical_disks': self.logical_disks} self.node.target_raid_config = self.target_raid_configuration self.node.save() physical_disks = self._generate_physical_disks() mock_list_physical_disks.return_value = physical_disks processed_logical_disks = drac_raid._validate_volume_size( self.node, self.node.target_raid_config['logical_disks']) self.assertEqual(self.logical_disk, processed_logical_disks[0]) class DracRaidInterfaceTestCase(test_utils.BaseDracTest): def setUp(self): super(DracRaidInterfaceTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) self.physical_disk = { 'id': 'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1', 
'description': ('Disk 1 in Backplane 1 of ' 'Integrated RAID Controller 1'), 'controller': 'RAID.Integrated.1-1', 'manufacturer': 'SEAGATE', 'model': 'ST600MM0006', 'media_type': 'hdd', 'interface_type': 'sas', 'size_mb': 571776, 'free_size_mb': 571776, 'serial_number': 'S0M3EY2Z', 'firmware_version': 'LS0A', 'status': 'ok', 'raid_status': 'ready', 'sas_address': '500056B37789ABE3', 'device_protocol': None} self.physical_disks = [] for i in range(8): disk = self.physical_disk.copy() disk['id'] = ('Disk.Bay.%s:Enclosure.Internal.0-1:' 'RAID.Integrated.1-1' % i) disk['serial_number'] = 'serial%s' % i self.physical_disks.append(disk) self.root_logical_disk = { 'size_gb': 50, 'raid_level': '1', 'disk_type': 'hdd', 'interface_type': 'sas', 'volume_name': 'root_volume', 'is_root_volume': True } self.nonroot_logical_disks = [ {'size_gb': 100, 'raid_level': '5', 'disk_type': 'hdd', 'interface_type': 'sas', 'volume_name': 'data_volume1'}, {'size_gb': 100, 'raid_level': '5', 'disk_type': 'hdd', 'interface_type': 'sas', 'volume_name': 'data_volume2'} ] self.logical_disks = ( [self.root_logical_disk] + self.nonroot_logical_disks) self.target_raid_configuration = {'logical_disks': self.logical_disks} self.node.target_raid_config = self.target_raid_configuration self.node.clean_step = {'foo': 'bar'} self.node.save() def _generate_physical_disks(self): physical_disks = [] for disk in self.physical_disks: physical_disks.append(test_utils.make_physical_disk(disk)) return physical_disks @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) @mock.patch.object(drac_raid, '_reset_raid_config', autospec=True) @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) @mock.patch.object(drac_raid, 'change_physical_disk_state', spec_set=True, autospec=True) @mock.patch.object(drac_raid, 'commit_config', spec_set=True, autospec=True) def _test_create_configuration( self, 
expected_state, mock_commit_config, mock_change_physical_disk_state, mock_validate_job_queue, mock_list_physical_disks, mock__reset_raid_config, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client physical_disks = self._generate_physical_disks() mock_list_physical_disks.return_value = physical_disks raid_controller_dict = { 'id': 'RAID.Integrated.1-1', 'description': 'Integrated RAID Controller 1', 'manufacturer': 'DELL', 'model': 'PERC H710 Mini', 'primary_status': 'ok', 'firmware_version': '21.3.0-0009', 'bus': '1', 'supports_realtime': True} raid_controller = test_utils.make_raid_controller( raid_controller_dict) mock_client.list_raid_controllers.return_value = [raid_controller] mock__reset_raid_config.return_value = { 'is_reboot_required': constants.RebootRequired.optional, 'is_commit_required': True} mock_change_physical_disk_state.return_value = { 'is_reboot_required': constants.RebootRequired.optional, 'conversion_results': { 'RAID.Integrated.1-1': { 'is_reboot_required': constants.RebootRequired.optional, 'is_commit_required': True}}, 'commit_required_ids': ['RAID.Integrated.1-1']} mock_commit_config.side_effect = ['42'] next_substep = "create_virtual_disks" with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: return_value = task.driver.raid.create_configuration( task, create_root_volume=True, create_nonroot_volumes=False) mock_commit_config.assert_called_with( task.node, raid_controller='RAID.Integrated.1-1', reboot=False, realtime=True) self.assertEqual(expected_state, return_value) self.assertEqual(1, mock_commit_config.call_count) self.assertEqual(1, mock_change_physical_disk_state.call_count) self.node.refresh() self.assertEqual(True, task.node.driver_internal_info[ 'volume_validation']) self.assertEqual(next_substep, task.node.driver_internal_info[ 'raid_config_substep']) self.assertEqual(['42'], task.node.driver_internal_info[ 'raid_config_job_ids']) def 
test_create_configuration_in_clean(self):
        self._test_create_configuration(states.CLEANWAIT)

    def test_create_configuration_in_deploy(self):
        self.node.clean_step = None
        self.node.save()
        self._test_create_configuration(states.DEPLOYWAIT)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, '_reset_raid_config', autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_without_drives_conversion(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock__reset_raid_config, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        raid_controller_dict = {
            'id': 'RAID.Integrated.1-1',
            'description': 'Integrated RAID Controller 1',
            'manufacturer': 'DELL',
            'model': 'PERC H710 Mini',
            'primary_status': 'ok',
            'firmware_version': '21.3.0-0009',
            'bus': '1',
            'supports_realtime': True}
        raid_controller = test_utils.make_raid_controller(
            raid_controller_dict)
        mock_client.list_raid_controllers.return_value = [raid_controller]
        mock__reset_raid_config.return_value = {
            'is_reboot_required': constants.RebootRequired.false,
            'is_commit_required': True}
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.false,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.false,
                    'is_commit_required': False}},
            'commit_required_ids': ['RAID.Integrated.1-1']}
        mock_client.create_virtual_disk.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'is_commit_required': True}
        mock_commit_config.side_effect = ['42']
        next_substep = "completed"

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=False)

            mock_commit_config.assert_called_with(
                task.node, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

            self.assertEqual(states.CLEANWAIT, return_value)
            self.assertEqual(1, mock_commit_config.call_count)
            self.assertEqual(1, mock_change_physical_disk_state.call_count)
            self.assertEqual(1, mock_client.create_virtual_disk.call_count)

            self.node.refresh()
            self.assertEqual(False, task.node.driver_internal_info[
                'volume_validation'])
            self.assertEqual(next_substep, task.node.driver_internal_info[
                'raid_config_substep'])
            self.assertEqual(['42'], task.node.driver_internal_info[
                'raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_no_change(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_list_physical_disks, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.false,
                    'is_commit_required': False}},
            'commit_required_ids': ['RAID.Integrated.1-1']}
        mock_commit_config.return_value = '42'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid.create_configuration(
                task, create_root_volume=False,
                create_nonroot_volumes=False,
                delete_existing=False)

            self.assertEqual(False, task.node.driver_internal_info[
                'volume_validation'])
            self.assertEqual(0, mock_client.create_virtual_disk.call_count)
            self.assertEqual(0, mock_commit_config.call_count)

        self.assertIsNone(return_value)

        self.node.refresh()
        self.assertNotIn('raid_config_job_ids',
                         self.node.driver_internal_info)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, '_reset_raid_config', autospec=True)
    @mock.patch.object(drac_raid, 'list_virtual_disks', autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_delete_existing(
            self, mock_commit_config, mock_validate_job_queue,
            mock_change_physical_disk_state, mock_list_physical_disks,
            mock_list_virtual_disks, mock__reset_raid_config,
            mock_get_drac_client):
        self.node.clean_step = None
        self.node.save()
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        physical_disks = self._generate_physical_disks()
        raid_controller_dict = {
            'id': 'RAID.Integrated.1-1',
            'description': 'Integrated RAID Controller 1',
            'manufacturer': 'DELL',
            'model': 'PERC H710 Mini',
            'primary_status': 'ok',
            'firmware_version': '21.3.0-0009',
            'bus': '1',
            'supports_realtime': True}
        raid_controller = test_utils.make_raid_controller(
            raid_controller_dict)
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['12']
        mock_client.list_raid_controllers.return_value = [raid_controller]
        mock__reset_raid_config.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'is_commit_required': True}
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=False,
                delete_existing=True)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            mock_commit_config.assert_called_with(
                task.node, raid_controller='RAID.Integrated.1-1',
                realtime=True, reboot=False)

        self.assertEqual(1, mock_commit_config.call_count)
        self.assertEqual(states.DEPLOYWAIT, return_value)

        self.node.refresh()
        self.assertEqual(['12'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_nested_raid_level(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.root_logical_disk = {
            'size_gb': 100,
            'raid_level': '5+0',
            'is_root_volume': True
        }
        self.logical_disks = [self.root_logical_disk]
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['42']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            # Commits to the controller
            mock_commit_config.assert_called_with(
                mock.ANY, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

        self.assertEqual(1, mock_commit_config.call_count)
        self.assertEqual(1, mock_change_physical_disk_state.call_count)

        self.node.refresh()
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_nested_raid_10(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.root_logical_disk = {
            'size_gb': 100,
            'raid_level': '1+0',
            'is_root_volume': True
        }
        self.logical_disks = [self.root_logical_disk]
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['42']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            # Commits to the controller
            mock_commit_config.assert_called_with(
                mock.ANY, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

        self.assertEqual(1, mock_commit_config.call_count)
        self.assertEqual(1, mock_change_physical_disk_state.call_count)

        self.node.refresh()
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_multiple_controllers(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.physical_disks[0]['controller'] = 'controller-2'
        self.physical_disks[1]['controller'] = 'controller-2'
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['42']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])

        self.node.refresh()
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_backing_physical_disks(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.root_logical_disk['physical_disks'] = [
            'Disk.Bay.3:Enclosure.Internal.0-1:RAID.Integrated.1-1',
            'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1']
        self.logical_disks = (
            [self.root_logical_disk] + self.nonroot_logical_disks)
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['42']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            # Commits to the controller
            mock_commit_config.assert_called_with(
                mock.ANY, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

        self.node.refresh()
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_predefined_number_of_physical_disks(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.root_logical_disk['raid_level'] = '0'
        self.root_logical_disk['number_of_physical_disks'] = 3
        self.logical_disks = (
            [self.root_logical_disk, self.nonroot_logical_disks[0]])
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['42']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            # Commits to the controller
            mock_commit_config.assert_called_with(
                mock.ANY, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

        self.node.refresh()
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_max_size(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.root_logical_disk = {
            'size_gb': 'MAX',
            'raid_level': '1',
            'physical_disks': [
                'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1',
                'Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1'],
            'is_root_volume': True
        }
        self.logical_disks = ([self.root_logical_disk]
                              + self.nonroot_logical_disks)
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['12']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            # Commits to the controller
            mock_commit_config.assert_called_with(
                mock.ANY, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

        self.node.refresh()
        self.assertEqual(['12'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    def test_create_configuration_with_max_size_without_backing_disks(
            self, mock_list_physical_disks, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.root_logical_disk = {
            'size_gb': 'MAX',
            'raid_level': '1',
            'is_root_volume': True
        }
        self.logical_disks = [self.root_logical_disk]
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        self.physical_disks = self.physical_disks[0:2]
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(
                exception.InvalidParameterValue,
                task.driver.raid.create_configuration,
                task)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_share_physical_disks(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.nonroot_logical_disks[0]['share_physical_disks'] = True
        self.nonroot_logical_disks[1]['share_physical_disks'] = True
        self.logical_disks = self.nonroot_logical_disks
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        self.physical_disks = self.physical_disks[0:3]
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['42']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            # Commits to the controller
            mock_commit_config.assert_called_with(
                mock.ANY, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

        self.node.refresh()
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, '_reset_raid_config', autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_fails_with_sharing_disabled(
            self, mock_commit_config, mock_validate_job_queue,
            mock_list_physical_disks, mock__reset_raid_config,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.nonroot_logical_disks[0]['share_physical_disks'] = False
        self.nonroot_logical_disks[1]['share_physical_disks'] = False
        self.logical_disks = self.nonroot_logical_disks
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        self.physical_disks = self.physical_disks[0:3]
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        raid_controller_dict = {
            'id': 'RAID.Integrated.1-1',
            'description': 'Integrated RAID Controller 1',
            'manufacturer': 'DELL',
            'model': 'PERC H710 Mini',
            'primary_status': 'ok',
            'firmware_version': '21.3.0-0009',
            'bus': '1',
            'supports_realtime': True}
        raid_controller = test_utils.make_raid_controller(
            raid_controller_dict)
        mock_client.list_raid_controllers.return_value = [raid_controller]
        mock_commit_config.return_value = '42'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(
                exception.DracOperationError,
                task.driver.raid.create_configuration,
                task, create_root_volume=True, create_nonroot_volumes=True)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'change_physical_disk_state',
                       spec_set=True, autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_max_size_and_share_physical_disks(
            self, mock_commit_config, mock_change_physical_disk_state,
            mock_validate_job_queue, mock_list_physical_disks,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.nonroot_logical_disks[0]['share_physical_disks'] = True
        self.nonroot_logical_disks[0]['size_gb'] = 'MAX'
        self.nonroot_logical_disks[0]['physical_disks'] = [
            'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1',
            'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1',
            'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1']
        self.nonroot_logical_disks[1]['share_physical_disks'] = True
        self.logical_disks = self.nonroot_logical_disks
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        self.physical_disks = self.physical_disks[0:3]
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.side_effect = ['42']
        mock_change_physical_disk_state.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'conversion_results': {
                'RAID.Integrated.1-1': {
                    'is_reboot_required': constants.RebootRequired.optional,
                    'is_commit_required': True}},
            'commit_required_ids': ['RAID.Integrated.1-1']}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.raid.create_configuration(
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

            self.assertEqual(True, task.node.driver_internal_info[
                'volume_validation'])
            # Commits to the controller
            mock_commit_config.assert_called_with(
                mock.ANY, raid_controller='RAID.Integrated.1-1',
                reboot=False, realtime=True)

        self.node.refresh()
        self.assertEqual(['42'], self.node.driver_internal_info[
            'raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_with_multiple_max_and_sharing_same_disks(
            self, mock_commit_config, mock_validate_job_queue,
            mock_list_physical_disks, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.nonroot_logical_disks[0]['share_physical_disks'] = True
        self.nonroot_logical_disks[0]['size_gb'] = 'MAX'
        self.nonroot_logical_disks[0]['physical_disks'] = [
            'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1',
            'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1',
            'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1']
        self.nonroot_logical_disks[1]['share_physical_disks'] = True
        self.nonroot_logical_disks[1]['size_gb'] = 'MAX'
        self.nonroot_logical_disks[1]['physical_disks'] = [
            'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1',
            'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1',
            'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1']
        self.logical_disks = self.nonroot_logical_disks
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        self.physical_disks = self.physical_disks[0:3]
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        mock_commit_config.return_value = '42'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(
                exception.DracOperationError,
                task.driver.raid.create_configuration,
                task, create_root_volume=True, create_nonroot_volumes=True,
                delete_existing=False)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, '_reset_raid_config', autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_fails_if_not_enough_space(
            self, mock_commit_config, mock_validate_job_queue,
            mock_list_physical_disks, mock__reset_raid_config,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.logical_disk = {
            'size_gb': 500,
            'raid_level': '1'
        }
        self.logical_disks = [self.logical_disk, self.logical_disk]
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        self.physical_disks = self.physical_disks[0:3]
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        raid_controller_dict = {
            'id': 'RAID.Integrated.1-1',
            'description': 'Integrated RAID Controller 1',
            'manufacturer': 'DELL',
            'model': 'PERC H710 Mini',
            'primary_status': 'ok',
            'firmware_version': '21.3.0-0009',
            'bus': '1',
            'supports_realtime': True}
        raid_controller = test_utils.make_raid_controller(
            raid_controller_dict)
        mock_client.list_raid_controllers.return_value = [raid_controller]
        mock__reset_raid_config.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'is_commit_required': True}
        mock_commit_config.return_value = '42'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(
                exception.DracOperationError,
                task.driver.raid.create_configuration,
                task, create_root_volume=True, create_nonroot_volumes=True)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, '_reset_raid_config', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_physical_disks', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_create_configuration_fails_if_disk_already_reserved(
            self, mock_commit_config, mock_validate_job_queue,
            mock_list_physical_disks, mock__reset_raid_config,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        self.logical_disk = {
            'size_gb': 500,
            'raid_level': '1',
            'physical_disks': [
                'Disk.Bay.4:Enclosure.Internal.0-1:RAID.Integrated.1-1',
                'Disk.Bay.5:Enclosure.Internal.0-1:RAID.Integrated.1-1'],
        }
        self.logical_disks = [self.logical_disk, self.logical_disk.copy()]
        self.target_raid_configuration = {'logical_disks': self.logical_disks}
        self.node.target_raid_config = self.target_raid_configuration
        self.node.save()
        physical_disks = self._generate_physical_disks()
        mock_list_physical_disks.return_value = physical_disks
        raid_controller_dict = {
            'id': 'RAID.Integrated.1-1',
            'description': 'Integrated RAID Controller 1',
            'manufacturer': 'DELL',
            'model': 'PERC H710 Mini',
            'primary_status': 'ok',
            'firmware_version': '21.3.0-0009',
            'bus': '1',
            'supports_realtime': True}
        raid_controller = test_utils.make_raid_controller(
            raid_controller_dict)
        mock_client.list_raid_controllers.return_value = [raid_controller]
        mock_commit_config.return_value = '42'
        mock__reset_raid_config.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'is_commit_required': True}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(
                exception.DracOperationError,
                task.driver.raid.create_configuration,
                task, create_root_volume=True, create_nonroot_volumes=True)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, '_reset_raid_config', autospec=True)
    @mock.patch.object(drac_raid, 'list_raid_controllers', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def _test_delete_configuration(self, expected_state,
                                   mock_commit_config,
                                   mock_validate_job_queue,
                                   mock_list_raid_controllers,
                                   mock__reset_raid_config,
                                   mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        raid_controller_dict = {
            'id': 'RAID.Integrated.1-1',
            'description': 'Integrated RAID Controller 1',
            'manufacturer': 'DELL',
            'model': 'PERC H710 Mini',
            'primary_status': 'ok',
            'firmware_version': '21.3.0-0009',
            'bus': '1',
            'supports_realtime': True}
        mock_list_raid_controllers.return_value = [
            test_utils.make_raid_controller(raid_controller_dict)]
        mock_commit_config.return_value = '42'
        mock__reset_raid_config.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'is_commit_required': True}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid.delete_configuration(task)

        mock_commit_config.assert_called_once_with(
            task.node, raid_controller='RAID.Integrated.1-1', reboot=False,
            realtime=True)

        self.assertEqual(expected_state, return_value)
        self.node.refresh()
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    def test_delete_configuration_in_clean(self):
        self._test_delete_configuration(states.CLEANWAIT)

    def test_delete_configuration_in_deploy(self):
        self.node.clean_step = None
        self.node.save()
        self._test_delete_configuration(states.DEPLOYWAIT)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_raid_controllers', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, '_reset_raid_config', spec_set=True,
                       autospec=True)
    def test_delete_configuration_with_non_realtime_controller(
            self, mock__reset_raid_config, mock_commit_config,
            mock_validate_job_queue, mock_list_raid_controllers,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        expected_raid_config_params = ['AHCI.Slot.3-1', 'RAID.Integrated.1-1']
        mix_controllers = [{'id': 'AHCI.Slot.3-1',
                            'description': 'AHCI controller in slot 3',
                            'manufacturer': 'DELL',
                            'model': 'BOSS-S1',
                            'primary_status': 'unknown',
                            'firmware_version': '2.5.13.3016',
                            'bus': '5E',
                            'supports_realtime': False},
                           {'id': 'RAID.Integrated.1-1',
                            'description': 'Integrated RAID Controller 1',
                            'manufacturer': 'DELL',
                            'model': 'PERC H740 Mini',
                            'primary_status': 'unknown',
                            'firmware_version': '50.5.0-1750',
                            'bus': '3C',
                            'supports_realtime': True}]

        mock_list_raid_controllers.return_value = [
            test_utils.make_raid_controller(controller) for
            controller in mix_controllers]
        mock_commit_config.side_effect = ['42', '12']
        mock__reset_raid_config.side_effect = [{
            'is_reboot_required': constants.RebootRequired.optional,
            'is_commit_required': True
        }, {
            'is_reboot_required': constants.RebootRequired.true,
            'is_commit_required': True
        }]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid.delete_configuration(task)

        mock_commit_config.assert_has_calls(
            [mock.call(mock.ANY, raid_controller='AHCI.Slot.3-1',
                       reboot=False, realtime=False),
             mock.call(mock.ANY, raid_controller='RAID.Integrated.1-1',
                       reboot=True, realtime=False)],
            any_order=True)

        self.assertEqual(states.CLEANWAIT, return_value)
        self.node.refresh()
        self.assertEqual(expected_raid_config_params,
                         self.node.driver_internal_info[
                             'raid_config_parameters'])
        self.assertEqual(['42', '12'],
                         self.node.driver_internal_info['raid_config_job_ids'])

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'list_raid_controllers', autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test_delete_configuration_no_change(self, mock_commit_config,
                                            mock_validate_job_queue,
                                            mock_list_raid_controllers,
                                            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        mock_list_raid_controllers.return_value = []

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid.delete_configuration(task)

            self.assertEqual(0, mock_client._reset_raid_config.call_count)
            self.assertEqual(0, mock_commit_config.call_count)

        self.assertIsNone(return_value)

        self.node.refresh()
        self.assertNotIn('raid_config_job_ids',
                         self.node.driver_internal_info)

    @mock.patch.object(drac_raid, 'list_virtual_disks', autospec=True)
    def test_get_logical_disks(self, mock_list_virtual_disks):
        virtual_disk_dict = {
            'id': 'Disk.Virtual.0:RAID.Integrated.1-1',
            'name': 'disk 0',
            'description': 'Virtual Disk 0 on Integrated RAID Controller 1',
            'controller': 'RAID.Integrated.1-1',
            'raid_level': '1',
            'size_mb': 571776,
            'status': 'ok',
            'raid_status': 'online',
            'span_depth': 1,
            'span_length': 2,
            'pending_operations': None,
            'physical_disks': []}
        mock_list_virtual_disks.return_value = [
            test_utils.make_virtual_disk(virtual_disk_dict)]
        expected_logical_disk = {'id': 'Disk.Virtual.0:RAID.Integrated.1-1',
                                 'size_gb': 558,
                                 'raid_level': '1',
                                 'name': 'disk 0',
                                 'controller': 'RAID.Integrated.1-1'}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            props = task.driver.raid.get_logical_disks(task)

        self.assertEqual({'logical_disks': [expected_logical_disk]},
                         props)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'clear_foreign_config', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    def test__execute_foreign_drives_with_no_foreign_drives(
            self, mock_validate_job_queue, mock_clear_foreign_config,
            mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        raid_config_params = ['RAID.Integrated.1-1']
        raid_config_substep = 'clear_foreign_config'
        driver_internal_info = self.node.driver_internal_info
        driver_internal_info['raid_config_parameters'] = raid_config_params
        driver_internal_info['raid_config_substep'] = raid_config_substep
        self.node.driver_internal_info = driver_internal_info
        self.node.save()
        mock_clear_foreign_config.return_value = {
            'is_reboot_required': constants.RebootRequired.false,
            'is_commit_required': False
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid._execute_foreign_drives(
                task, self.node)

        self.assertIsNone(return_value)

    @mock.patch.object(drac_common, 'get_drac_client', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'clear_foreign_config', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True,
                       autospec=True)
    @mock.patch.object(drac_raid, 'commit_config', spec_set=True,
                       autospec=True)
    def test__execute_foreign_drives_with_foreign_drives(
            self, mock_commit_config, mock_validate_job_queue,
            mock_clear_foreign_config, mock_get_drac_client):
        mock_client = mock.Mock()
        mock_get_drac_client.return_value = mock_client
        raid_config_params = ['RAID.Integrated.1-1']
        raid_config_substep = 'clear_foreign_config'
        driver_internal_info = self.node.driver_internal_info
        driver_internal_info['raid_config_parameters'] = raid_config_params
        driver_internal_info['raid_config_substep'] = raid_config_substep
        self.node.driver_internal_info = driver_internal_info
        self.node.save()
        mock_clear_foreign_config.return_value = {
            'is_reboot_required': constants.RebootRequired.optional,
            'is_commit_required': True
        }
        mock_commit_config.return_value = '42'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            return_value = task.driver.raid._execute_foreign_drives(
                task, self.node)

        self.assertEqual(states.CLEANWAIT, return_value)
        self.assertEqual(['42'],
                         self.node.driver_internal_info['raid_config_job_ids'])
        self.assertEqual('physical_disk_conversion',
                         self.node.driver_internal_info['raid_config_substep'])
        self.assertEqual(
            ['RAID.Integrated.1-1'],
            self.node.driver_internal_info['raid_config_parameters'])
        mock_commit_config.assert_called_once_with(
            self.node, raid_controller='RAID.Integrated.1-1',
reboot=False, realtime=True) ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_common.py0000664000175000017500000001246113652514273025462 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for common methods used by DRAC modules. """ import dracclient.client import mock from ironic.common import exception from ironic.drivers.modules.drac import common as drac_common from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers.modules.drac import utils as test_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_drac_info() class DracCommonMethodsTestCase(test_utils.BaseDracTest): def test_parse_driver_info(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) info = drac_common.parse_driver_info(node) self.assertEqual(INFO_DICT['drac_address'], info['drac_address']) self.assertEqual(INFO_DICT['drac_port'], info['drac_port']) self.assertEqual(INFO_DICT['drac_path'], info['drac_path']) self.assertEqual(INFO_DICT['drac_protocol'], info['drac_protocol']) self.assertEqual(INFO_DICT['drac_username'], info['drac_username']) self.assertEqual(INFO_DICT['drac_password'], info['drac_password']) def test_parse_driver_info_missing_host(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) del node.driver_info['drac_address'] self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, 
node) def test_parse_driver_info_missing_port(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) del node.driver_info['drac_port'] info = drac_common.parse_driver_info(node) self.assertEqual(443, info['drac_port']) def test_parse_driver_info_invalid_port(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) node.driver_info['drac_port'] = 'foo' self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) def test_parse_driver_info_missing_path(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) del node.driver_info['drac_path'] info = drac_common.parse_driver_info(node) self.assertEqual('/wsman', info['drac_path']) def test_parse_driver_info_missing_protocol(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) del node.driver_info['drac_protocol'] info = drac_common.parse_driver_info(node) self.assertEqual('https', info['drac_protocol']) def test_parse_driver_info_invalid_protocol(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) node.driver_info['drac_protocol'] = 'foo' self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) def test_parse_driver_info_missing_username(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) del node.driver_info['drac_username'] self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) def test_parse_driver_info_missing_password(self): node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) del node.driver_info['drac_password'] self.assertRaises(exception.InvalidParameterValue, drac_common.parse_driver_info, node) @mock.patch.object(dracclient.client, 'DRACClient', autospec=True) def test_get_drac_client(self, mock_dracclient): expected_call = mock.call('1.2.3.4', 
'admin', 'fake', 443, '/wsman', 'https')
        node = obj_utils.create_test_node(self.context,
                                          driver='idrac',
                                          driver_info=INFO_DICT)
        drac_common.get_drac_client(node)
        self.assertEqual(mock_dracclient.mock_calls, [expected_call])

ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_management.py
# -*- coding: utf-8 -*-
#
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
# Copyright (c) 2017-2018 Dell Inc. or its subsidiaries.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
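# The test_get_drac_client case just above verifies constructor arguments by
# comparing the mock's recorded ``mock_calls`` history against a ``mock.call``
# tuple. A minimal, self-contained sketch of that assertion style follows; it
# uses only the standard-library ``unittest.mock``, and the ``connect`` /
# ``get_client`` names are hypothetical stand-ins, not ironic APIs.

```python
from unittest import mock


def get_client(connect, host, username, password, port=443):
    # Hypothetical factory under test: forwards its settings to a
    # client constructor, the way drac_common.get_drac_client forwards
    # parsed driver_info to dracclient.client.DRACClient.
    return connect(host, username, password, port)


def demo():
    connect = mock.Mock()
    get_client(connect, '1.2.3.4', 'admin', 'fake')
    # mock.call(...) builds a comparable record of one expected invocation;
    # equality against mock_calls checks both the arguments and that exactly
    # this sequence of calls happened.
    expected_call = mock.call('1.2.3.4', 'admin', 'fake', 443)
    assert connect.mock_calls == [expected_call]
    return True


demo()
```

# The same pattern scales to multi-call histories: build a list of
# ``mock.call`` entries in order and compare it to ``mock_calls`` wholesale.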
""" Test class for DRAC management interface """ import mock from oslo_utils import importutils import ironic.common.boot_devices from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import job as drac_job from ironic.drivers.modules.drac import management as drac_mgmt from ironic.tests.unit.drivers.modules.drac import utils as test_utils from ironic.tests.unit.objects import utils as obj_utils dracclient_exceptions = importutils.try_import('dracclient.exceptions') INFO_DICT = test_utils.INFO_DICT @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracManagementInternalMethodsTestCase(test_utils.BaseDracTest): def boot_modes(self, *next_modes): modes = [ {'id': 'IPL', 'name': 'BootSeq', 'is_current': True, 'is_next': False}, {'id': 'OneTime', 'name': 'OneTimeBootMode', 'is_current': False, 'is_next': False}] for mode in modes: if mode['id'] in next_modes: mode['is_next'] = True return [test_utils.dict_to_namedtuple(values=mode) for mode in modes] def setUp(self): super(DracManagementInternalMethodsTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) boot_device_ipl_pxe = { 'id': 'BIOS.Setup.1-1#BootSeq#NIC.Embedded.1-1-1', 'boot_mode': 'IPL', 'current_assigned_sequence': 0, 'pending_assigned_sequence': 0, 'bios_boot_string': 'Embedded NIC 1 Port 1 Partition 1'} boot_device_ipl_disk = { 'id': 'BIOS.Setup.1-1#BootSeq#HardDisk.List.1-1', 'boot_mode': 'IPL', 'current_assigned_sequence': 1, 'pending_assigned_sequence': 1, 'bios_boot_string': 'Hard drive C: BootSeq'} ipl_boot_device_namedtuples = [ test_utils.dict_to_namedtuple(values=boot_device_ipl_pxe), test_utils.dict_to_namedtuple(values=boot_device_ipl_disk)] ipl_boot_devices = {'IPL': ipl_boot_device_namedtuples, 'OneTime': ipl_boot_device_namedtuples} boot_device_uefi_pxe = { 'id': 
'UEFI:BIOS.Setup.1-1#UefiBootSeq#NIC.PxeDevice.1-1', 'boot_mode': 'UEFI', 'current_assigned_sequence': 0, 'pending_assigned_sequence': 0, 'bios_boot_string': 'PXE Device 1: Integrated NIC 1 Port 1 Partition 1'} uefi_boot_device_namedtuples = [ test_utils.dict_to_namedtuple(values=boot_device_uefi_pxe)] uefi_boot_devices = {'UEFI': uefi_boot_device_namedtuples, 'OneTime': uefi_boot_device_namedtuples} self.boot_devices = {'IPL': ipl_boot_devices, 'UEFI': uefi_boot_devices} def test__get_boot_device(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_modes.return_value = self.boot_modes('IPL') mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] boot_device = drac_mgmt._get_boot_device(self.node) expected_boot_device = {'boot_device': 'pxe', 'persistent': True} self.assertEqual(expected_boot_device, boot_device) mock_client.list_boot_modes.assert_called_once_with() mock_client.list_boot_devices.assert_called_once_with() def test__get_boot_device_not_persistent(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client # if a non-persistent boot mode is marked as "next", it over-rides any # persistent boot modes mock_client.list_boot_modes.return_value = self.boot_modes('IPL', 'OneTime') mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] boot_device = drac_mgmt._get_boot_device(self.node) expected_boot_device = {'boot_device': 'pxe', 'persistent': False} self.assertEqual(expected_boot_device, boot_device) mock_client.list_boot_modes.assert_called_once_with() mock_client.list_boot_devices.assert_called_once_with() def test__get_boot_device_with_no_boot_device(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_modes.return_value = self.boot_modes('IPL') mock_client.list_boot_devices.return_value = {} boot_device = 
drac_mgmt._get_boot_device(self.node) expected_boot_device = {'boot_device': None, 'persistent': True} self.assertEqual(expected_boot_device, boot_device) mock_client.list_boot_modes.assert_called_once_with() mock_client.list_boot_devices.assert_called_once_with() def test__get_boot_device_with_empty_boot_mode_list(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_modes.return_value = [] self.assertRaises(exception.DracOperationError, drac_mgmt._get_boot_device, self.node) def test__get_next_persistent_boot_mode(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_modes.return_value = self.boot_modes('IPL') boot_mode = drac_mgmt._get_next_persistent_boot_mode(self.node) mock_get_drac_client.assert_called_once_with(self.node) mock_client.list_boot_modes.assert_called_once_with() expected_boot_mode = 'IPL' self.assertEqual(expected_boot_mode, boot_mode) def test__get_next_persistent_boot_mode_with_non_persistent_boot_mode( self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_modes.return_value = self.boot_modes('IPL', 'OneTime') boot_mode = drac_mgmt._get_next_persistent_boot_mode(self.node) mock_get_drac_client.assert_called_once_with(self.node) mock_client.list_boot_modes.assert_called_once_with() expected_boot_mode = 'IPL' self.assertEqual(expected_boot_mode, boot_mode) def test__get_next_persistent_boot_mode_list_boot_modes_fail( self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client exc = dracclient_exceptions.BaseClientException('boom') mock_client.list_boot_modes.side_effect = exc self.assertRaises(exception.DracOperationError, drac_mgmt._get_next_persistent_boot_mode, self.node) mock_get_drac_client.assert_called_once_with(self.node) mock_client.list_boot_modes.assert_called_once_with() def 
test__get_next_persistent_boot_mode_with_empty_boot_mode_list( self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_modes.return_value = [] self.assertRaises(exception.DracOperationError, drac_mgmt._get_next_persistent_boot_mode, self.node) mock_get_drac_client.assert_called_once_with(self.node) mock_client.list_boot_modes.assert_called_once_with() def test__is_boot_order_flexibly_programmable(self, mock_get_drac_client): self.assertTrue(drac_mgmt._is_boot_order_flexibly_programmable( persistent=True, bios_settings={'SetBootOrderFqdd1': ()})) def test__is_boot_order_flexibly_programmable_not_persistent( self, mock_get_drac_client): self.assertFalse(drac_mgmt._is_boot_order_flexibly_programmable( persistent=False, bios_settings={'SetBootOrderFqdd1': ()})) def test__is_boot_order_flexibly_programmable_with_no_bios_setting( self, mock_get_drac_client): self.assertFalse(drac_mgmt._is_boot_order_flexibly_programmable( persistent=True, bios_settings={})) def test__flexibly_program_boot_order_for_disk_and_bios( self, mock_get_drac_client): settings = drac_mgmt._flexibly_program_boot_order( ironic.common.boot_devices.DISK, drac_boot_mode='Bios') expected_settings = {'SetBootOrderFqdd1': 'HardDisk.List.1-1'} self.assertEqual(expected_settings, settings) def test__flexibly_program_boot_order_for_disk_and_uefi( self, mock_get_drac_client): settings = drac_mgmt._flexibly_program_boot_order( ironic.common.boot_devices.DISK, drac_boot_mode='Uefi') expected_settings = { 'SetBootOrderFqdd1': '*.*.*', 'SetBootOrderFqdd2': 'NIC.*.*', 'SetBootOrderFqdd3': 'Optical.*.*', 'SetBootOrderFqdd4': 'Floppy.*.*', } self.assertEqual(expected_settings, settings) def test__flexibly_program_boot_order_for_pxe(self, mock_get_drac_client): settings = drac_mgmt._flexibly_program_boot_order( ironic.common.boot_devices.PXE, drac_boot_mode='Uefi') expected_settings = {'SetBootOrderFqdd1': 'NIC.*.*'} 
self.assertEqual(expected_settings, settings) def test__flexibly_program_boot_order_for_cdrom(self, mock_get_drac_client): settings = drac_mgmt._flexibly_program_boot_order( ironic.common.boot_devices.CDROM, drac_boot_mode='Uefi') expected_settings = {'SetBootOrderFqdd1': 'Optical.*.*'} self.assertEqual(expected_settings, settings) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def test_set_boot_device(self, mock_validate_job_queue, mock_list_unfinished_jobs, mock__get_boot_device, mock__get_next_persistent_boot_mode, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] mock_list_unfinished_jobs.return_value = [] mock_job = mock.Mock() mock_job.status = "Scheduled" mock_client.get_job.return_value = mock_job boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'IPL' self.node.driver_internal_info['clean_steps'] = [] boot_device = drac_mgmt.set_boot_device( self.node, ironic.common.boot_devices.PXE, persistent=False) self.assertEqual(0, mock_list_unfinished_jobs.call_count) self.assertEqual(0, mock_client.delete_jobs.call_count) mock_validate_job_queue.assert_called_once_with(self.node) mock_client.change_boot_device_order.assert_called_once_with( 'OneTime', 'BIOS.Setup.1-1#BootSeq#NIC.Embedded.1-1-1') self.assertEqual(0, mock_client.set_bios_settings.call_count) mock_client.commit_pending_bios_changes.assert_called_once_with() @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) 
@mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) def test_set_boot_device_called_with_no_change( self, mock_list_unfinished_jobs, mock__get_boot_device, mock__get_next_persistent_boot_mode, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] boot_device = {'boot_device': ironic.common.boot_devices.PXE, 'persistent': True} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'IPL' mock_list_unfinished_jobs.return_value = [] boot_device = drac_mgmt.set_boot_device( self.node, ironic.common.boot_devices.PXE, persistent=True) mock_list_unfinished_jobs.assert_called_once_with(self.node) self.assertEqual(0, mock_client.change_boot_device_order.call_count) self.assertEqual(0, mock_client.set_bios_settings.call_count) self.assertEqual(0, mock_client.commit_pending_bios_changes.call_count) @mock.patch.object(drac_mgmt, '_flexibly_program_boot_order', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_is_boot_order_flexibly_programmable', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) def test_set_boot_device_called_with_no_drac_boot_device( self, mock_list_unfinished_jobs, mock__get_boot_device, mock__get_next_persistent_boot_mode, mock__is_boot_order_flexibly_programmable, mock__flexibly_program_boot_order, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_devices.return_value = self.boot_devices['UEFI'] mock_list_unfinished_jobs.return_value = [] mock_job = mock.Mock() mock_job.status = 
"Scheduled" mock_client.get_job.return_value = mock_job boot_device = {'boot_device': ironic.common.boot_devices.PXE, 'persistent': False} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'UEFI' settings = [ { 'name': 'BootMode', 'instance_id': 'BIOS.Setup.1-1:BootMode', 'current_value': 'Uefi', 'pending_value': None, 'read_only': False, 'possible_values': ['Bios', 'Uefi'] }, ] bios_settings = { s['name']: test_utils.dict_to_namedtuple( values=s) for s in settings} mock_client.list_bios_settings.return_value = bios_settings mock__is_boot_order_flexibly_programmable.return_value = True flexibly_program_settings = { 'SetBootOrderFqdd1': '*.*.*', 'SetBootOrderFqdd2': 'NIC.*.*', 'SetBootOrderFqdd3': 'Optical.*.*', 'SetBootOrderFqdd4': 'Floppy.*.*', } mock__flexibly_program_boot_order.return_value = \ flexibly_program_settings drac_mgmt.set_boot_device(self.node, ironic.common.boot_devices.DISK, persistent=True) mock_list_unfinished_jobs.assert_called_once_with(self.node) self.assertEqual(0, mock_client.change_boot_device_order.call_count) mock_client.set_bios_settings.assert_called_once_with( flexibly_program_settings) mock_client.commit_pending_bios_changes.assert_called_once_with() @mock.patch.object(drac_mgmt, '_is_boot_order_flexibly_programmable', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) def test_set_boot_device_called_with_not_flexibly_programmable( self, mock_list_unfinished_jobs, mock__get_boot_device, mock__get_next_persistent_boot_mode, mock__is_boot_order_flexibly_programmable, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_list_unfinished_jobs.return_value = [] mock_client.list_boot_devices.return_value = 
self.boot_devices['UEFI'] boot_device = {'boot_device': ironic.common.boot_devices.PXE, 'persistent': False} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'UEFI' mock__is_boot_order_flexibly_programmable.return_value = False self.assertRaises(exception.InvalidParameterValue, drac_mgmt.set_boot_device, self.node, ironic.common.boot_devices.CDROM, persistent=False) mock_list_unfinished_jobs.assert_called_once_with(self.node) self.assertEqual(0, mock_client.change_boot_device_order.call_count) self.assertEqual(0, mock_client.set_bios_settings.call_count) self.assertEqual(0, mock_client.commit_pending_bios_changes.call_count) @mock.patch.object(drac_mgmt, '_is_boot_order_flexibly_programmable', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) def test_set_boot_device_called_with_unknown_boot_mode( self, mock_list_unfinished_jobs, mock__get_boot_device, mock__get_next_persistent_boot_mode, mock__is_boot_order_flexibly_programmable, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_devices.return_value = self.boot_devices['UEFI'] boot_device = {'boot_device': ironic.common.boot_devices.PXE, 'persistent': False} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'UEFI' settings = [ { 'name': 'BootMode', 'instance_id': 'BIOS.Setup.1-1:BootMode', 'current_value': 'Bad', 'pending_value': None, 'read_only': False, 'possible_values': ['Bios', 'Uefi', 'Bad'] }, ] bios_settings = { s['name']: test_utils.dict_to_namedtuple( values=s) for s in settings} mock_client.list_bios_settings.return_value = bios_settings mock__is_boot_order_flexibly_programmable.return_value = True 
mock_list_unfinished_jobs.return_value = [] self.assertRaises(exception.DracOperationError, drac_mgmt.set_boot_device, self.node, ironic.common.boot_devices.DISK, persistent=True) mock_list_unfinished_jobs.assert_called_once_with(self.node) self.assertEqual(0, mock_client.change_boot_device_order.call_count) self.assertEqual(0, mock_client.set_bios_settings.call_count) self.assertEqual(0, mock_client.commit_pending_bios_changes.call_count) @mock.patch('time.time') @mock.patch('time.sleep') @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) def test_set_boot_device_job_not_scheduled( self, mock_list_unfinished_jobs, mock__get_boot_device, mock__get_next_persistent_boot_mode, mock_sleep, mock_time, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_list_unfinished_jobs.return_value = [] mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] mock_job = mock.Mock() mock_job.status = "New" mock_client.get_job.return_value = mock_job mock_time.side_effect = [10, 50] boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'IPL' self.assertRaises(exception.DracOperationError, drac_mgmt.set_boot_device, self.node, ironic.common.boot_devices.PXE, persistent=True) mock_list_unfinished_jobs.assert_called_once_with(self.node) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) def test_set_boot_device_with_list_unfinished_jobs_fail( self, mock_list_unfinished_jobs, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_list_unfinished_jobs.side_effect = exception.DracOperationError( 'boom') 
self.assertRaises(exception.DracOperationError, drac_mgmt.set_boot_device, self.node, ironic.common.boot_devices.PXE, persistent=True) self.assertEqual(0, mock_client.change_boot_device_order.call_count) self.assertEqual(0, mock_client.set_bios_settings.call_count) self.assertEqual(0, mock_client.commit_pending_bios_changes.call_count) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) def test_set_boot_device_with_list_unfinished_jobs_without_clean_step( self, mock__get_next_persistent_boot_mode, mock__get_boot_device, mock_list_unfinished_jobs, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client bios_job_dict = { 'id': 'JID_602553293345', 'name': 'ConfigBIOS:BIOS.Setup.1-1', 'start_time': 'TIME_NOW', 'until_time': 'TIME_NA', 'message': 'Task successfully scheduled.', 'status': 'Scheduled', 'percent_complete': 0} bios_job = test_utils.make_job(bios_job_dict) mock_list_unfinished_jobs.return_value = [bios_job] mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'IPL' self.node.driver_internal_info['clean_steps'] = [] drac_mgmt.set_boot_device(self.node, ironic.common.boot_devices.DISK, persistent=True) self.assertEqual(0, mock_list_unfinished_jobs.call_count) self.assertEqual(0, mock_client.delete_jobs.call_count) mock_validate_job_queue.assert_called_once_with(self.node) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, 
autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) def test_set_boot_device_with_multiple_unfinished_jobs_without_clean_step( self, mock__get_next_persistent_boot_mode, mock__get_boot_device, mock_list_unfinished_jobs, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client job_dict = { 'id': 'JID_602553293345', 'name': 'Config:RAID:RAID.Integrated.1-1', 'start_time': 'TIME_NOW', 'until_time': 'TIME_NA', 'message': 'Task successfully scheduled.', 'status': 'Scheduled', 'percent_complete': 0} job = test_utils.make_job(job_dict) bios_job_dict = { 'id': 'JID_602553293346', 'name': 'ConfigBIOS:BIOS.Setup.1-1', 'start_time': 'TIME_NOW', 'until_time': 'TIME_NA', 'message': 'Task successfully scheduled.', 'status': 'Scheduled', 'percent_complete': 0} bios_job = test_utils.make_job(bios_job_dict) mock_list_unfinished_jobs.return_value = [job, bios_job] mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'IPL' self.node.driver_internal_info['clean_steps'] = [] drac_mgmt.set_boot_device(self.node, ironic.common.boot_devices.DISK, persistent=True) self.assertEqual(0, mock_list_unfinished_jobs.call_count) self.assertEqual(0, mock_client.delete_jobs.call_count) mock_validate_job_queue.assert_called_once_with(self.node) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def 
test_set_boot_device_with_list_unfinished_jobs_with_clean_step( self, mock_validate_job_queue, mock_list_unfinished_jobs, mock__get_boot_device, mock__get_next_persistent_boot_mode, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'IPL' mock_job = mock.Mock() mock_job.status = "Scheduled" mock_client.get_job.return_value = mock_job bios_job_dict = { 'id': 'JID_602553293345', 'name': 'ConfigBIOS:BIOS.Setup.1-1', 'start_time': 'TIME_NOW', 'until_time': 'TIME_NA', 'message': 'Task successfully scheduled.', 'status': 'Scheduled', 'percent_complete': 0} bios_job = test_utils.make_job(bios_job_dict) mock_list_unfinished_jobs.return_value = [bios_job] self.node.driver_internal_info['clean_steps'] = [{ u'interface': u'management', u'step': u'clear_job_queue'}] boot_device = drac_mgmt.set_boot_device( self.node, ironic.common.boot_devices.PXE, persistent=False) mock_list_unfinished_jobs.assert_called_once_with(self.node) mock_client.delete_jobs.assert_called_once_with( job_ids=['JID_602553293345']) self.assertEqual(0, mock_validate_job_queue.call_count) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) @mock.patch.object(drac_job, 'list_unfinished_jobs', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) @mock.patch.object(drac_mgmt, '_get_next_persistent_boot_mode', spec_set=True, autospec=True) def test_set_boot_device_with_multiple_unfinished_jobs_with_clean_step( self, mock__get_next_persistent_boot_mode, mock__get_boot_device, mock_list_unfinished_jobs, mock_validate_job_queue, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client job_dict = { 
'id': 'JID_602553293345', 'name': 'Config:RAID:RAID.Integrated.1-1', 'start_time': 'TIME_NOW', 'until_time': 'TIME_NA', 'message': 'Task successfully scheduled.', 'status': 'Scheduled', 'percent_complete': 0} job = test_utils.make_job(job_dict) bios_job_dict = { 'id': 'JID_602553293346', 'name': 'ConfigBIOS:BIOS.Setup.1-1', 'start_time': 'TIME_NOW', 'until_time': 'TIME_NA', 'message': 'Task successfully scheduled.', 'status': 'Scheduled', 'percent_complete': 0} bios_job = test_utils.make_job(bios_job_dict) mock_list_unfinished_jobs.return_value = [job, bios_job] mock_client.list_boot_devices.return_value = self.boot_devices['IPL'] boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} mock__get_boot_device.return_value = boot_device mock__get_next_persistent_boot_mode.return_value = 'IPL' self.node.driver_internal_info['clean_steps'] = [{ u'interface': u'management', u'step': u'clear_job_queue'}] drac_mgmt.set_boot_device(self.node, ironic.common.boot_devices.DISK, persistent=True) mock_list_unfinished_jobs.assert_called_once_with(self.node) mock_client.delete_jobs.assert_called_once_with( job_ids=['JID_602553293345', 'JID_602553293346']) self.assertEqual(0, mock_validate_job_queue.call_count) @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracManagementTestCase(test_utils.BaseDracTest): def setUp(self): super(DracManagementTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) def test_get_properties(self, mock_get_drac_client): expected = drac_common.COMMON_PROPERTIES driver = drac_mgmt.DracManagement() self.assertEqual(expected, driver.get_properties()) def test_get_supported_boot_devices(self, mock_get_drac_client): expected_boot_devices = [ironic.common.boot_devices.PXE, ironic.common.boot_devices.DISK, ironic.common.boot_devices.CDROM] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: boot_devices = ( 
task.driver.management.get_supported_boot_devices(task)) self.assertEqual(sorted(expected_boot_devices), sorted(boot_devices)) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) def test_get_boot_device(self, mock__get_boot_device, mock_get_drac_client): expected_boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} mock__get_boot_device.return_value = expected_boot_device with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: boot_device = task.driver.management.get_boot_device(task) self.assertEqual(expected_boot_device, boot_device) mock__get_boot_device.assert_called_once_with(task.node) @mock.patch.object(drac_mgmt, '_get_boot_device', spec_set=True, autospec=True) def test_get_boot_device_from_driver_internal_info(self, mock__get_boot_device, mock_get_drac_client): expected_boot_device = {'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_internal_info['drac_boot_device'] = ( expected_boot_device) boot_device = task.driver.management.get_boot_device(task) self.assertEqual(expected_boot_device, boot_device) self.assertEqual(0, mock__get_boot_device.call_count) def test_set_boot_device(self, mock_get_drac_client): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.management.set_boot_device( task, ironic.common.boot_devices.DISK, persistent=True) expected_boot_device = { 'boot_device': ironic.common.boot_devices.DISK, 'persistent': True} self.node.refresh() self.assertEqual( self.node.driver_internal_info['drac_boot_device'], expected_boot_device) def test_set_boot_device_fail(self, mock_get_drac_client): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.set_boot_device, task, 'foo') def test_get_sensors_data(self, 
mock_get_drac_client): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(NotImplementedError, task.driver.management.get_sensors_data, task) def test_reset_idrac(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: return_value = task.driver.management.reset_idrac(task) mock_client.reset_idrac.assert_called_once_with( force=True, wait=True) self.assertIsNone(return_value) def test_known_good_state(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: return_value = task.driver.management.known_good_state(task) mock_client.reset_idrac.assert_called_once_with( force=True, wait=True) mock_client.delete_jobs.assert_called_once_with( job_ids=['JID_CLEARALL_FORCE']) self.assertIsNone(return_value) def test_clear_job_queue(self, mock_get_drac_client): mock_client = mock.Mock() mock_get_drac_client.return_value = mock_client with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: return_value = task.driver.management.clear_job_queue(task) mock_client.delete_jobs.assert_called_once_with( job_ids=['JID_CLEARALL_FORCE']) self.assertIsNone(return_value) ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_bios.py0000664000175000017500000004011313652514273025121 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # # Copyright (c) 2015-2016 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for DRAC BIOS configuration specific methods """ from dracclient import exceptions as drac_exceptions import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import deploy_utils from ironic.drivers.modules.drac import bios as drac_bios from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import job as drac_job from ironic import objects from ironic.tests.unit.drivers.modules.drac import utils as test_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = test_utils.INFO_DICT class DracWSManBIOSConfigurationTestCase(test_utils.BaseDracTest): def setUp(self): super(DracWSManBIOSConfigurationTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) patch_get_drac_client = mock.patch.object( drac_common, 'get_drac_client', spec_set=True, autospec=True) mock_get_drac_client = patch_get_drac_client.start() self.mock_client = mock_get_drac_client.return_value self.addCleanup(patch_get_drac_client.stop) proc_virt_attr = { 'current_value': 'Enabled', 'pending_value': None, 'read_only': False, 'possible_values': ['Enabled', 'Disabled']} mock_proc_virt_attr = mock.NonCallableMock(spec=[], **proc_virt_attr) mock_proc_virt_attr.name = 'ProcVirtualization' self.bios_attrs = {'ProcVirtualization': mock_proc_virt_attr} self.mock_client.set_lifecycle_settings.return_value = { "is_commit_required": True } 
self.mock_client.commit_pending_lifecycle_changes.return_value = \ "JID_1234" self.mock_client.set_bios_settings.return_value = { "is_commit_required": True, "is_reboot_required": True } self.mock_client.commit_pending_bios_changes.return_value = \ "JID_5678" @mock.patch.object(drac_common, 'parse_driver_info', autospec=True) def test_validate(self, mock_parse_driver_info): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.bios.validate(task) mock_parse_driver_info.assert_called_once_with(task.node) def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid) as task: test_properties = task.driver.bios.get_properties() for each_property in drac_common.COMMON_PROPERTIES: self.assertIn(each_property, test_properties) @mock.patch.object(objects, 'BIOSSettingList', autospec=True) def test_cache_bios_settings_noop(self, mock_BIOSSettingList): create_list = [] update_list = [] delete_list = [] nochange_list = [{'name': 'ProcVirtualization', 'value': 'Enabled'}] mock_BIOSSettingList.sync_node_setting.return_value = ( create_list, update_list, delete_list, nochange_list) self.mock_client.list_bios_settings.return_value = self.bios_attrs with task_manager.acquire(self.context, self.node.uuid) as task: kwsettings = self.mock_client.list_bios_settings() settings = [{"name": name, "value": attrib.__dict__['current_value']} for name, attrib in kwsettings.items()] self.mock_client.list_bios_settings.reset_mock() task.driver.bios.cache_bios_settings(task) self.mock_client.list_bios_settings.assert_called_once_with() mock_BIOSSettingList.sync_node_setting.assert_called_once_with( task.context, task.node.id, settings) mock_BIOSSettingList.create.assert_not_called() mock_BIOSSettingList.save.assert_not_called() mock_BIOSSettingList.delete.assert_not_called() def test_cache_bios_settings_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.list_bios_settings.side_effect = exc with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.bios.cache_bios_settings, task) @mock.patch.object(deploy_utils, 'get_async_step_return_state', autospec=True) @mock.patch.object(deploy_utils, 'set_async_step_flags', autospec=True) @mock.patch.object(drac_bios.DracWSManBIOS, 'cache_bios_settings', spec_set=True) @mock.patch.object(drac_job, 'validate_job_queue', spec_set=True, autospec=True) def _test_step(self, mock_validate_job_queue, mock_cache_bios_settings, mock_set_async_step_flags, mock_get_async_step_return_state): if self.node.clean_step: step_data = self.node.clean_step expected_state = states.CLEANWAIT mock_get_async_step_return_state.return_value = states.CLEANWAIT else: step_data = self.node.deploy_step expected_state = states.DEPLOYWAIT mock_get_async_step_return_state.return_value = states.DEPLOYWAIT data = step_data['argsinfo'].get('settings', None) step = step_data['step'] if step == 'apply_configuration': attributes = {s['name']: s['value'] for s in data} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: info = task.node.driver_internal_info if step == 'factory_reset': ret_state = task.driver.bios.factory_reset(task) attrib = {"BIOS Reset To Defaults Requested": "True"} self.mock_client.set_lifecycle_settings.\ assert_called_once_with(attrib) self.mock_client.commit_pending_lifecycle_changes.\ assert_called_once_with(reboot=True) job_id = self.mock_client.commit_pending_lifecycle_changes() self.assertIn(job_id, info['bios_config_job_ids']) if step == 'apply_configuration': ret_state = task.driver.bios.apply_configuration(task, data) self.mock_client.set_bios_settings.assert_called_once_with( attributes) self.mock_client.commit_pending_bios_changes.\ assert_called_once_with(reboot=True) job_id = self.mock_client.commit_pending_bios_changes() self.assertIn(job_id, info['bios_config_job_ids']) 
mock_validate_job_queue.assert_called_once_with(task.node) mock_set_async_step_flags.assert_called_once_with( task.node, reboot=True, skip_current_step=True, polling=True) mock_get_async_step_return_state.assert_called_once_with( task.node) self.assertEqual(expected_state, ret_state) def test_factory_reset_clean(self): self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test_step() def test_factory_reset_deploy(self): self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'factory_reset', 'argsinfo': {}} self.node.save() self._test_step() def test_apply_configuration_clean(self): settings = [{'name': 'ProcVirtualization', 'value': 'Enabled'}] self.node.clean_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': settings}} self.node.save() self._test_step() def test_apply_configuration_deploy(self): settings = [{'name': 'ProcVirtualization', 'value': 'Enabled'}] self.node.deploy_step = {'priority': 100, 'interface': 'bios', 'step': 'apply_configuration', 'argsinfo': {'settings': settings}} self.node.save() self._test_step() def test_apply_conf_set_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.set_bios_settings.side_effect = exc settings = [{'name': 'ProcVirtualization', 'value': 'Enabled'}] with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.DracOperationError, task.driver.bios.apply_configuration, task, settings) def test_apply_conf_commit_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.commit_pending_bios_changes.side_effect = exc settings = [{'name': 'ProcVirtualization', 'value': 'Enabled'}] with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.DracOperationError, task.driver.bios.apply_configuration, task, settings) def test_factory_reset_set_fail(self): exc =
drac_exceptions.BaseClientException('boom') self.mock_client.set_lifecycle_settings.side_effect = exc with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.DracOperationError, task.driver.bios.factory_reset, task) def test_factory_reset_commit_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.commit_pending_lifecycle_changes.side_effect = exc with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.DracOperationError, task.driver.bios.factory_reset, task) class DracBIOSConfigurationTestCase(test_utils.BaseDracTest): def setUp(self): super(DracBIOSConfigurationTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) patch_get_drac_client = mock.patch.object( drac_common, 'get_drac_client', spec_set=True, autospec=True) mock_get_drac_client = patch_get_drac_client.start() self.mock_client = mock.Mock() mock_get_drac_client.return_value = self.mock_client self.addCleanup(patch_get_drac_client.stop) proc_virt_attr = { 'current_value': 'Enabled', 'pending_value': None, 'read_only': False, 'possible_values': ['Enabled', 'Disabled']} mock_proc_virt_attr = mock.NonCallableMock(spec=[], **proc_virt_attr) mock_proc_virt_attr.name = 'ProcVirtualization' self.bios_attrs = {'ProcVirtualization': mock_proc_virt_attr} def test_get_config(self): self.mock_client.list_bios_settings.return_value = self.bios_attrs with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: bios_config = task.driver.vendor.get_bios_config(task) self.mock_client.list_bios_settings.assert_called_once_with() self.assertIn('ProcVirtualization', bios_config) def test_get_config_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.list_bios_settings.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.DracOperationError,
task.driver.vendor.get_bios_config, task) self.mock_client.list_bios_settings.assert_called_once_with() def test_set_config(self): self.mock_client.list_jobs.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.set_bios_config(task, ProcVirtualization='Enabled') self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) self.mock_client.set_bios_settings.assert_called_once_with( {'ProcVirtualization': 'Enabled'}) def test_set_config_fail(self): self.mock_client.list_jobs.return_value = [] exc = drac_exceptions.BaseClientException('boom') self.mock_client.set_bios_settings.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.vendor.set_bios_config, task, ProcVirtualization='Enabled') self.mock_client.set_bios_settings.assert_called_once_with( {'ProcVirtualization': 'Enabled'}) def test_commit_config(self): self.mock_client.list_jobs.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.commit_bios_config(task) self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) self.mock_client.commit_pending_bios_changes.assert_called_once_with( False) def test_commit_config_with_reboot(self): self.mock_client.list_jobs.return_value = [] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.commit_bios_config(task, reboot=True) self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) self.mock_client.commit_pending_bios_changes.assert_called_once_with( True) def test_commit_config_fail(self): self.mock_client.list_jobs.return_value = [] exc = drac_exceptions.BaseClientException('boom') self.mock_client.commit_pending_bios_changes.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: 
self.assertRaises(exception.DracOperationError, task.driver.vendor.commit_bios_config, task) self.mock_client.list_jobs.assert_called_once_with( only_unfinished=True) self.mock_client.commit_pending_bios_changes.assert_called_once_with( False) def test_abandon_config(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.vendor.abandon_bios_config(task) self.mock_client.abandon_pending_bios_changes.assert_called_once_with() def test_abandon_config_fail(self): exc = drac_exceptions.BaseClientException('boom') self.mock_client.abandon_pending_bios_changes.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.vendor.abandon_bios_config, task) self.mock_client.abandon_pending_bios_changes.assert_called_once_with() ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/test_power.py0000664000175000017500000002200113652514273025315 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Test class for DRAC power interface """ from dracclient import constants as drac_constants from dracclient import exceptions as drac_exceptions import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.drac import common as drac_common from ironic.drivers.modules.drac import power as drac_power from ironic.tests.unit.drivers.modules.drac import utils as test_utils from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = test_utils.INFO_DICT @mock.patch.object(drac_common, 'get_drac_client', spec_set=True, autospec=True) class DracPowerTestCase(test_utils.BaseDracTest): def setUp(self): super(DracPowerTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='idrac', driver_info=INFO_DICT) def test_get_properties(self, mock_get_drac_client): expected = drac_common.COMMON_PROPERTIES driver = drac_power.DracPower() self.assertEqual(expected, driver.get_properties()) def test_get_power_state(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: power_state = task.driver.power.get_power_state(task) self.assertEqual(states.POWER_ON, power_state) mock_client.get_power_state.assert_called_once_with() def test_get_power_state_fail(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value exc = drac_exceptions.BaseClientException('boom') mock_client.get_power_state.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.DracOperationError, task.driver.power.get_power_state, task) mock_client.get_power_state.assert_called_once_with() @mock.patch.object(drac_power.LOG, 'warning') def test_set_power_state(self, mock_log, mock_get_drac_client): mock_client = mock_get_drac_client.return_value with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_OFF) drac_power_state = drac_power.REVERSE_POWER_STATES[states.POWER_OFF] mock_client.set_power_state.assert_called_once_with(drac_power_state) self.assertFalse(mock_log.called) def test_set_power_state_fail(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value exc = drac_exceptions.BaseClientException('boom') mock_client.set_power_state.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.DracOperationError, task.driver.power.set_power_state, task, states.POWER_OFF) drac_power_state = drac_power.REVERSE_POWER_STATES[states.POWER_OFF] mock_client.set_power_state.assert_called_once_with(drac_power_state) @mock.patch.object(drac_power.LOG, 'warning') def test_set_power_state_timeout(self, mock_log, mock_get_drac_client): mock_client = mock_get_drac_client.return_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_OFF, timeout=11) drac_power_state = drac_power.REVERSE_POWER_STATES[states.POWER_OFF] mock_client.set_power_state.assert_called_once_with(drac_power_state) self.assertTrue(mock_log.called) @mock.patch.object(drac_power.LOG, 'warning') def test_reboot_while_powered_on(self, mock_log, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) drac_power_state = drac_power.REVERSE_POWER_STATES[states.REBOOT] mock_client.set_power_state.assert_called_once_with(drac_power_state) self.assertFalse(mock_log.called) @mock.patch.object(drac_power.LOG, 'warning') def test_reboot_while_powered_on_timeout(self, mock_log, mock_get_drac_client): mock_client = 
mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task, timeout=42) drac_power_state = drac_power.REVERSE_POWER_STATES[states.REBOOT] mock_client.set_power_state.assert_called_once_with(drac_power_state) self.assertTrue(mock_log.called) def test_reboot_while_powered_off(self, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) drac_power_state = drac_power.REVERSE_POWER_STATES[states.POWER_ON] mock_client.set_power_state.assert_called_once_with(drac_power_state) @mock.patch('time.sleep') def test_reboot_retries_success(self, mock_sleep, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_OFF exc = drac_exceptions.DRACOperationFailed( drac_messages=['The command failed to set RequestedState']) mock_client.set_power_state.side_effect = [exc, None] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) drac_power_state = drac_power.REVERSE_POWER_STATES[states.POWER_ON] self.assertEqual(2, mock_client.set_power_state.call_count) mock_client.set_power_state.assert_has_calls( [mock.call(drac_power_state), mock.call(drac_power_state)]) @mock.patch('time.sleep') def test_reboot_retries_fail(self, mock_sleep, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.return_value = drac_constants.POWER_OFF exc = drac_exceptions.DRACOperationFailed( drac_messages=['The command failed to set RequestedState']) mock_client.set_power_state.side_effect = exc with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: 
self.assertRaises(exception.DracOperationError, task.driver.power.reboot, task) self.assertEqual(drac_power.POWER_STATE_TRIES, mock_client.set_power_state.call_count) @mock.patch('time.sleep') def test_reboot_retries_power_change_success(self, mock_sleep, mock_get_drac_client): mock_client = mock_get_drac_client.return_value mock_client.get_power_state.side_effect = [drac_constants.POWER_OFF, drac_constants.POWER_ON] exc = drac_exceptions.DRACOperationFailed( drac_messages=['The command failed to set RequestedState']) mock_client.set_power_state.side_effect = [exc, None] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.reboot(task) self.assertEqual(2, mock_client.set_power_state.call_count) drac_power_state1 = drac_power.REVERSE_POWER_STATES[states.POWER_ON] drac_power_state2 = drac_power.REVERSE_POWER_STATES[states.REBOOT] mock_client.set_power_state.assert_has_calls( [mock.call(drac_power_state1), mock.call(drac_power_state2)]) ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/utils.py0000664000175000017500000000665513652514273024303 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections from oslo_utils import importutils from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils INFO_DICT = db_utils.get_test_drac_info() dracclient_job = importutils.try_import('dracclient.resources.job') dracclient_raid = importutils.try_import('dracclient.resources.raid') class BaseDracTest(db_base.DbTestCase): def setUp(self): super(BaseDracTest, self).setUp() self.config(enabled_hardware_types=['idrac', 'fake-hardware'], enabled_boot_interfaces=[ 'idrac-redfish-virtual-media', 'fake'], enabled_power_interfaces=['idrac-wsman', 'fake'], enabled_management_interfaces=['idrac-wsman', 'fake'], enabled_inspect_interfaces=[ 'idrac-wsman', 'fake', 'no-inspect'], enabled_vendor_interfaces=[ 'idrac-wsman', 'fake', 'no-vendor'], enabled_raid_interfaces=['idrac-wsman', 'fake', 'no-raid'], enabled_bios_interfaces=['idrac-wsman', 'no-bios']) class DictToObj(object): """Converts a dictionary into an object with attribute access""" def __init__(self, dictionary): for key in dictionary: setattr(self, key, dictionary[key]) def dict_to_namedtuple(name='GenericNamedTuple', values=None, tuple_class=None): """Converts a dict to a collections.namedtuple""" if values is None: values = {} if tuple_class is None: tuple_class = collections.namedtuple(name, list(values)) else: # Support different versions of the driver as fields change.
values = {field: values.get(field) for field in tuple_class._fields} return tuple_class(**values) def dict_of_object(data): """Create a dictionary object""" for k, v in data.items(): if isinstance(v, dict): dict_obj = DictToObj(v) data[k] = dict_obj return data def make_job(job_dict): tuple_class = dracclient_job.Job if dracclient_job else None return dict_to_namedtuple(values=job_dict, tuple_class=tuple_class) def make_raid_controller(raid_controller_dict): tuple_class = dracclient_raid.RAIDController if dracclient_raid else None return dict_to_namedtuple(values=raid_controller_dict, tuple_class=tuple_class) def make_virtual_disk(virtual_disk_dict): tuple_class = dracclient_raid.VirtualDisk if dracclient_raid else None return dict_to_namedtuple(values=virtual_disk_dict, tuple_class=tuple_class) def make_physical_disk(physical_disk_dict): tuple_class = dracclient_raid.PhysicalDisk if dracclient_raid else None return dict_to_namedtuple(values=physical_disk_dict, tuple_class=tuple_class) ironic-15.0.0/ironic/tests/unit/drivers/modules/drac/__init__.py0000664000175000017500000000000013652514273024654 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/test_pxe.py0000664000175000017500000016062413652514273024062 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Test class for PXE driver.""" import os import tempfile import mock from oslo_config import cfg from oslo_serialization import jsonutils as json from oslo_utils import timeutils from oslo_utils import uuidutils from ironic.common import boot_devices from ironic.common import boot_modes from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.glance_service import image_service from ironic.common import pxe_utils from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers import base as drivers_base from ironic.drivers.modules import agent_base from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import fake from ironic.drivers.modules import ipxe from ironic.drivers.modules import pxe from ironic.drivers.modules import pxe_base from ironic.drivers.modules.storage import noop as noop_storage from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF INST_INFO_DICT = db_utils.get_test_pxe_instance_info() DRV_INFO_DICT = db_utils.get_test_pxe_driver_info() DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info() # NOTE(TheJulia): Mark pxe interface loading as None in order # to prent false counts for individual method tests. 
@mock.patch.object(ipxe.iPXEBoot, '__init__', lambda self: None)
@mock.patch.object(pxe.PXEBoot, '__init__', lambda self: None)
class PXEBootTestCase(db_base.DbTestCase):

    driver = 'fake-hardware'
    boot_interface = 'pxe'
    driver_info = DRV_INFO_DICT
    driver_internal_info = DRV_INTERNAL_INFO_DICT

    def setUp(self):
        super(PXEBootTestCase, self).setUp()
        self.context.auth_token = 'fake'
        self.config_temp_dir('tftp_root', group='pxe')
        self.config_temp_dir('images_path', group='pxe')
        self.config_temp_dir('http_root', group='deploy')
        instance_info = INST_INFO_DICT
        instance_info['deploy_key'] = 'fake-56789'
        self.config(enabled_boot_interfaces=[self.boot_interface,
                                             'ipxe', 'fake'])
        self.node = obj_utils.create_test_node(
            self.context,
            driver=self.driver,
            boot_interface=self.boot_interface,
            # Avoid fake properties in get_properties() output
            vendor_interface='no-vendor',
            instance_info=instance_info,
            driver_info=self.driver_info,
            driver_internal_info=self.driver_internal_info)
        self.port = obj_utils.create_test_port(self.context,
                                               node_id=self.node.id)
        self.config(group='conductor', api_url='http://127.0.0.1:1234/')
        self.config(my_ipv6='2001:db8::1')

    def test_get_properties(self):
        expected = pxe_base.COMMON_PROPERTIES
        expected.update(agent_base.VENDOR_PROPERTIES)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(expected, task.driver.get_properties())

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def test_validate_good(self, mock_glance):
        mock_glance.return_value = {'properties': {'kernel_id': 'fake-kernel',
                                                   'ramdisk_id': 'fake-initr'}}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot.validate(task)

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def test_validate_good_whole_disk_image(self, mock_glance):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.driver_internal_info['is_whole_disk_image'] = True
            task.driver.boot.validate(task)

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    @mock.patch.object(noop_storage.NoopStorage, 'should_write_image',
                       autospec=True)
    def test_validate_skip_check_write_image_false(self, mock_write,
                                                   mock_glance):
        mock_write.return_value = False
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot.validate(task)
        self.assertFalse(mock_glance.called)

    def test_validate_fail_missing_deploy_kernel(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            del task.node.driver_info['deploy_kernel']
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_missing_deploy_ramdisk(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            del task.node.driver_info['deploy_ramdisk']
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_missing_image_source(self):
        info = dict(INST_INFO_DICT)
        del info['image_source']
        self.node.instance_info = json.dumps(info)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node['instance_info'] = json.dumps(info)
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_no_port(self):
        new_node = obj_utils.create_test_node(
            self.context,
            uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
            driver=self.driver, boot_interface=self.boot_interface,
            instance_info=INST_INFO_DICT, driver_info=DRV_INFO_DICT)
        with task_manager.acquire(self.context, new_node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_trusted_boot_with_secure_boot(self):
        instance_info = {"boot_option": "netboot",
                         "secure_boot": "true",
                         "trusted_boot": "true"}
        properties = {'capabilities': 'trusted_boot:true'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.instance_info['capabilities'] = instance_info
            task.node.properties = properties
            task.node.driver_internal_info['is_whole_disk_image'] = False
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.boot.validate, task)

    def test_validate_fail_invalid_trusted_boot_value(self):
        properties = {'capabilities': 'trusted_boot:value'}
        instance_info = {"trusted_boot": "value"}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties = properties
            task.node.instance_info['capabilities'] = instance_info
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.boot.validate, task)

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def test_validate_fail_no_image_kernel_ramdisk_props(self, mock_glance):
        instance_info = {"boot_option": "netboot"}
        mock_glance.return_value = {'properties': {}}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.instance_info['capabilities'] = instance_info
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.boot.validate, task)

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def test_validate_fail_glance_image_doesnt_exists(self, mock_glance):
        mock_glance.side_effect = exception.ImageNotFound('not found')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              task.driver.boot.validate, task)

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def test_validate_fail_glance_conn_problem(self, mock_glance):
        exceptions = (exception.GlanceConnectionFailed('connection fail'),
                      exception.ImageNotAuthorized('not authorized'),
                      exception.Invalid('invalid'))
        mock_glance.side_effect = exceptions
        for exc in exceptions:
            with task_manager.acquire(self.context, self.node.uuid,
                                      shared=True) as task:
                self.assertRaises(exception.InvalidParameterValue,
                                  task.driver.boot.validate, task)

    def test_validate_inspection(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.boot.validate_inspection(task)

    def test_validate_inspection_no_inspection_ramdisk(self):
        driver_info = self.node.driver_info
        del driver_info['deploy_ramdisk']
        self.node.driver_info = driver_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.UnsupportedDriverExtension,
                              task.driver.boot.validate_inspection, task)

    @mock.patch.object(manager_utils, 'node_get_boot_mode', autospec=True)
    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory')
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    @mock.patch.object(pxe_utils, 'get_image_info', autospec=True)
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'build_pxe_config_options', autospec=True)
    @mock.patch.object(pxe_utils, 'create_pxe_config', autospec=True)
    def _test_prepare_ramdisk(self, mock_pxe_config,
                              mock_build_pxe, mock_cache_r_k,
                              mock_deploy_img_info,
                              mock_instance_img_info,
                              dhcp_factory_mock,
                              set_boot_device_mock,
                              get_boot_mode_mock,
                              uefi=False,
                              cleaning=False,
                              ipxe_use_swift=False,
                              whole_disk_image=False,
                              mode='deploy',
                              node_boot_mode=None,
                              persistent=False):
        mock_build_pxe.return_value = {}
        kernel_label = '%s_kernel' % mode
        ramdisk_label = '%s_ramdisk' % mode
        mock_deploy_img_info.return_value = {kernel_label: 'a',
                                             ramdisk_label: 'r'}
        if whole_disk_image:
            mock_instance_img_info.return_value = {}
        else:
            mock_instance_img_info.return_value = {'kernel': 'b'}
        mock_pxe_config.return_value = None
        mock_cache_r_k.return_value = None
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        get_boot_mode_mock.return_value = node_boot_mode
        driver_internal_info = self.node.driver_internal_info
        driver_internal_info['is_whole_disk_image'] = whole_disk_image
        self.node.driver_internal_info = driver_internal_info
        if mode == 'rescue':
            mock_deploy_img_info.return_value = {
                'rescue_kernel': 'a',
                'rescue_ramdisk': 'r'}
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False)
            dhcp_opts += pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=6)
            task.driver.boot.prepare_ramdisk(task, {'foo': 'bar'})
            mock_deploy_img_info.assert_called_once_with(task.node, mode=mode,
                                                         ipxe_enabled=False)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            if self.node.provision_state == states.DEPLOYING:
                get_boot_mode_mock.assert_called_once_with(task)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.PXE,
                                                         persistent=persistent)
            if ipxe_use_swift:
                if whole_disk_image:
                    self.assertFalse(mock_cache_r_k.called)
                else:
                    mock_cache_r_k.assert_called_once_with(
                        task, {'kernel': 'b'},
                        ipxe_enabled=False)
                mock_instance_img_info.assert_called_once_with(
                    task, ipxe_enabled=False)
            elif not cleaning and mode == 'deploy':
                mock_cache_r_k.assert_called_once_with(
                    task,
                    {'deploy_kernel': 'a', 'deploy_ramdisk': 'r',
                     'kernel': 'b'},
                    ipxe_enabled=False)
                mock_instance_img_info.assert_called_once_with(
                    task, ipxe_enabled=False)
            elif mode == 'deploy':
                mock_cache_r_k.assert_called_once_with(
                    task, {'deploy_kernel': 'a', 'deploy_ramdisk': 'r'},
                    ipxe_enabled=False)
            elif mode == 'rescue':
                mock_cache_r_k.assert_called_once_with(
                    task, {'rescue_kernel': 'a', 'rescue_ramdisk': 'r'},
                    ipxe_enabled=False)
            if uefi:
                mock_pxe_config.assert_called_once_with(
                    task, {}, CONF.pxe.uefi_pxe_config_template,
                    ipxe_enabled=False)
            else:
                mock_pxe_config.assert_called_once_with(
                    task, {}, CONF.pxe.pxe_config_template,
                    ipxe_enabled=False)

    def test_prepare_ramdisk(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self._test_prepare_ramdisk()

    def test_prepare_ramdisk_force_persistent_boot_device_true(self):
        self.node.provision_state = states.DEPLOYING
        driver_info = self.node.driver_info
        driver_info['force_persistent_boot_device'] = 'True'
        self.node.driver_info = driver_info
        self.node.save()
        self._test_prepare_ramdisk(persistent=True)

    def test_prepare_ramdisk_force_persistent_boot_device_bool_true(self):
        self.node.provision_state = states.DEPLOYING
        driver_info = self.node.driver_info
        driver_info['force_persistent_boot_device'] = True
        self.node.driver_info = driver_info
        self.node.save()
        self._test_prepare_ramdisk(persistent=True)

    def test_prepare_ramdisk_force_persistent_boot_device_sloppy_true(self):
        for value in ['true', 't', '1', 'on', 'y', 'YES']:
            self.node.provision_state = states.DEPLOYING
            driver_info = self.node.driver_info
            driver_info['force_persistent_boot_device'] = value
            self.node.driver_info = driver_info
            self.node.save()
            self._test_prepare_ramdisk(persistent=True)

    def test_prepare_ramdisk_force_persistent_boot_device_false(self):
        self.node.provision_state = states.DEPLOYING
        driver_info = self.node.driver_info
        driver_info['force_persistent_boot_device'] = 'False'
        self.node.driver_info = driver_info
        self.node.save()
        self._test_prepare_ramdisk()

    def test_prepare_ramdisk_force_persistent_boot_device_bool_false(self):
        self.node.provision_state = states.DEPLOYING
        driver_info = self.node.driver_info
        driver_info['force_persistent_boot_device'] = False
        self.node.driver_info = driver_info
        self.node.save()
        self._test_prepare_ramdisk(persistent=False)

    def test_prepare_ramdisk_force_persistent_boot_device_sloppy_false(self):
        for value in ['false', 'f', '0', 'off', 'n', 'NO', 'yxz']:
            self.node.provision_state = states.DEPLOYING
            driver_info = self.node.driver_info
            driver_info['force_persistent_boot_device'] = value
            self.node.driver_info = driver_info
            self.node.save()
            self._test_prepare_ramdisk()

    def test_prepare_ramdisk_force_persistent_boot_device_default(self):
        self.node.provision_state = states.DEPLOYING
        driver_info = self.node.driver_info
        driver_info['force_persistent_boot_device'] = 'Default'
        self.node.driver_info = driver_info
        self.node.save()
        self._test_prepare_ramdisk(persistent=False)

    def test_prepare_ramdisk_force_persistent_boot_device_always(self):
        self.node.provision_state = states.DEPLOYING
        driver_info = self.node.driver_info
        driver_info['force_persistent_boot_device'] = 'Always'
        self.node.driver_info = driver_info
        self.node.save()
        self._test_prepare_ramdisk(persistent=True)

    def test_prepare_ramdisk_force_persistent_boot_device_never(self):
        self.node.provision_state = states.DEPLOYING
        driver_info = self.node.driver_info
        driver_info['force_persistent_boot_device'] = 'Never'
        self.node.driver_info = driver_info
        self.node.save()
        self._test_prepare_ramdisk(persistent=False)

    def test_prepare_ramdisk_rescue(self):
        self.node.provision_state = states.RESCUING
        self.node.save()
        self._test_prepare_ramdisk(mode='rescue')

    def test_prepare_ramdisk_uefi(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        properties = self.node.properties
        properties['capabilities'] = 'boot_mode:uefi'
        self.node.properties = properties
        self.node.save()
        self._test_prepare_ramdisk(uefi=True)

    def test_prepare_ramdisk_cleaning(self):
        self.node.provision_state = states.CLEANING
        self.node.save()
        self._test_prepare_ramdisk(cleaning=True)

    @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True)
    def test_prepare_ramdisk_set_boot_mode_on_bm(
            self, set_boot_mode_mock):
        self.node.provision_state = states.DEPLOYING
        properties = self.node.properties
        properties['capabilities'] = 'boot_mode:uefi'
        self.node.properties = properties
        self.node.save()
        self._test_prepare_ramdisk(uefi=True)
        set_boot_mode_mock.assert_called_once_with(mock.ANY, boot_modes.UEFI)

    @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True)
    def test_prepare_ramdisk_set_boot_mode_on_ironic(
            self, set_boot_mode_mock):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self._test_prepare_ramdisk(node_boot_mode=boot_modes.LEGACY_BIOS)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_internal_info = task.node.driver_internal_info
            self.assertIn('deploy_boot_mode', driver_internal_info)
            self.assertEqual(boot_modes.LEGACY_BIOS,
                             driver_internal_info['deploy_boot_mode'])
            self.assertEqual(set_boot_mode_mock.call_count, 0)

    @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True)
    def test_prepare_ramdisk_set_default_boot_mode_on_ironic_bios(
            self, set_boot_mode_mock):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self.config(default_boot_mode=boot_modes.LEGACY_BIOS, group='deploy')
        self._test_prepare_ramdisk()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_internal_info = task.node.driver_internal_info
            self.assertIn('deploy_boot_mode', driver_internal_info)
            self.assertEqual(boot_modes.LEGACY_BIOS,
                             driver_internal_info['deploy_boot_mode'])
            self.assertEqual(set_boot_mode_mock.call_count, 1)

    @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True)
    def test_prepare_ramdisk_set_default_boot_mode_on_ironic_uefi(
            self, set_boot_mode_mock):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self.config(default_boot_mode=boot_modes.UEFI, group='deploy')
        self._test_prepare_ramdisk(uefi=True)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_internal_info = task.node.driver_internal_info
            self.assertIn('deploy_boot_mode', driver_internal_info)
            self.assertEqual(boot_modes.UEFI,
                             driver_internal_info['deploy_boot_mode'])
            self.assertEqual(set_boot_mode_mock.call_count, 1)

    @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True)
    def test_prepare_ramdisk_conflicting_boot_modes(
            self, set_boot_mode_mock):
        self.node.provision_state = states.DEPLOYING
        properties = self.node.properties
        properties['capabilities'] = 'boot_mode:uefi'
        self.node.properties = properties
        self.node.save()
        self._test_prepare_ramdisk(uefi=True,
                                   node_boot_mode=boot_modes.LEGACY_BIOS)
        set_boot_mode_mock.assert_called_once_with(mock.ANY, boot_modes.UEFI)
    @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True)
    def test_prepare_ramdisk_conflicting_boot_modes_set_unsupported(
            self, set_boot_mode_mock):
        self.node.provision_state = states.DEPLOYING
        properties = self.node.properties
        properties['capabilities'] = 'boot_mode:uefi'
        self.node.properties = properties
        self.node.save()
        set_boot_mode_mock.side_effect = exception.UnsupportedDriverExtension(
            extension='management', driver='test-driver'
        )
        self.assertRaises(exception.UnsupportedDriverExtension,
                          self._test_prepare_ramdisk,
                          uefi=True, node_boot_mode=boot_modes.LEGACY_BIOS)

    @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True)
    def test_prepare_ramdisk_set_boot_mode_not_called(
            self, set_boot_mode_mock):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        properties = self.node.properties
        properties['capabilities'] = 'boot_mode:uefi'
        self.node.properties = properties
        self.node.save()
        self._test_prepare_ramdisk(uefi=True, node_boot_mode=boot_modes.UEFI)
        self.assertEqual(set_boot_mode_mock.call_count, 0)

    @mock.patch.object(pxe_utils, 'clean_up_pxe_env', autospec=True)
    @mock.patch.object(pxe_utils, 'get_image_info', autospec=True)
    def _test_clean_up_ramdisk(self, get_image_info_mock,
                               clean_up_pxe_env_mock, mode='deploy'):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            kernel_label = '%s_kernel' % mode
            ramdisk_label = '%s_ramdisk' % mode
            image_info = {kernel_label: ['', '/path/to/' + kernel_label],
                          ramdisk_label: ['', '/path/to/' + ramdisk_label]}
            get_image_info_mock.return_value = image_info
            task.driver.boot.clean_up_ramdisk(task)
            clean_up_pxe_env_mock.assert_called_once_with(
                task, image_info, ipxe_enabled=False)
            get_image_info_mock.assert_called_once_with(
                task.node, mode=mode, ipxe_enabled=False)

    def test_clean_up_ramdisk(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self._test_clean_up_ramdisk()

    def test_clean_up_ramdisk_rescue(self):
        self.node.provision_state = states.RESCUING
        self.node.save()
        self._test_clean_up_ramdisk(mode='rescue')

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True)
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_prepare_instance_netboot(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, switch_pxe_config_mock,
            set_boot_device_mock):
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        get_image_info_mock.return_value = image_info
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=4)
            dhcp_opts += pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=6)
            pxe_config_path = pxe_utils.get_pxe_config_file_path(
                task.node.uuid)
            task.node.properties['capabilities'] = 'boot_mode:bios'
            task.node.driver_internal_info['root_uuid_or_disk_id'] = (
                "30212642-09d3-467f-8e09-21685826ab50")
            task.node.driver_internal_info['is_whole_disk_image'] = False
            task.node.instance_info = {
                'capabilities': {'boot_option': 'netboot'}}
            task.driver.boot.prepare_instance(task)

            get_image_info_mock.assert_called_once_with(
                task, ipxe_enabled=False)
            cache_mock.assert_called_once_with(
                task, image_info, ipxe_enabled=False)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            switch_pxe_config_mock.assert_called_once_with(
                pxe_config_path, "30212642-09d3-467f-8e09-21685826ab50",
                'bios', False, False, False, False, ipxe_enabled=False)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.PXE,
                                                         persistent=True)

    @mock.patch('os.path.isfile', return_value=False)
    @mock.patch.object(pxe_utils, 'create_pxe_config', autospec=True)
    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True)
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_prepare_instance_netboot_active(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, switch_pxe_config_mock,
            set_boot_device_mock, create_pxe_config_mock,
            isfile_mock):
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        instance_info = {"boot_option": "netboot"}
        get_image_info_mock.return_value = image_info
        self.node.provision_state = states.ACTIVE
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False)
            dhcp_opts += pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=6)
            pxe_config_path = pxe_utils.get_pxe_config_file_path(
                task.node.uuid)
            task.node.properties['capabilities'] = 'boot_mode:bios'
            task.node.driver_internal_info['root_uuid_or_disk_id'] = (
                "30212642-09d3-467f-8e09-21685826ab50")
            task.node.driver_internal_info['is_whole_disk_image'] = False
            task.node.instance_info['capabilities'] = instance_info
            task.driver.boot.prepare_instance(task)

            get_image_info_mock.assert_called_once_with(
                task, ipxe_enabled=False)
            cache_mock.assert_called_once_with(
                task, image_info, ipxe_enabled=False)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            create_pxe_config_mock.assert_called_once_with(
                task, mock.ANY, CONF.pxe.pxe_config_template,
                ipxe_enabled=False)
            switch_pxe_config_mock.assert_called_once_with(
                pxe_config_path, "30212642-09d3-467f-8e09-21685826ab50",
                'bios', False, False, False, False, ipxe_enabled=False)
            self.assertFalse(set_boot_device_mock.called)
    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory')
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_prepare_instance_netboot_missing_root_uuid(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, switch_pxe_config_mock,
            set_boot_device_mock):
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        instance_info = {"boot_option": "netboot"}
        get_image_info_mock.return_value = image_info
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False)
            dhcp_opts += pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=6)
            task.node.properties['capabilities'] = 'boot_mode:bios'
            task.node.instance_info['capabilities'] = instance_info
            task.node.driver_internal_info['is_whole_disk_image'] = False
            task.driver.boot.prepare_instance(task)

            get_image_info_mock.assert_called_once_with(task,
                                                        ipxe_enabled=False)
            cache_mock.assert_called_once_with(
                task, image_info, ipxe_enabled=False)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            self.assertFalse(switch_pxe_config_mock.called)
            self.assertFalse(set_boot_device_mock.called)

    @mock.patch.object(pxe_base.LOG, 'warning', autospec=True)
    @mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True)
    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory')
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_prepare_instance_whole_disk_image_missing_root_uuid(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, set_boot_device_mock,
            clean_up_pxe_mock, log_mock):
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        get_image_info_mock.return_value = {}
        instance_info = {"boot_option": "netboot"}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False)
            dhcp_opts += pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=6)
            task.node.properties['capabilities'] = 'boot_mode:bios'
            task.node.instance_info['capabilities'] = instance_info
            task.node.driver_internal_info['is_whole_disk_image'] = True
            task.driver.boot.prepare_instance(task)
            get_image_info_mock.assert_called_once_with(task,
                                                        ipxe_enabled=False)
            cache_mock.assert_called_once_with(
                task, {}, ipxe_enabled=False)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            self.assertTrue(log_mock.called)
            clean_up_pxe_mock.assert_called_once_with(
                task, ipxe_enabled=False)
            set_boot_device_mock.assert_called_once_with(
                task, boot_devices.DISK, persistent=True)

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True)
    def test_prepare_instance_localboot(self, clean_up_pxe_config_mock,
                                        set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            instance_info = task.node.instance_info
            instance_info['capabilities'] = {'boot_option': 'local'}
            task.node.instance_info = instance_info
            task.node.save()
            task.driver.boot.prepare_instance(task)
            clean_up_pxe_config_mock.assert_called_once_with(
                task, ipxe_enabled=False)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.DISK,
                                                         persistent=True)

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True)
    def test_prepare_instance_localboot_active(self, clean_up_pxe_config_mock,
                                               set_boot_device_mock):
        self.node.provision_state = states.ACTIVE
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            instance_info = task.node.instance_info
            instance_info['capabilities'] = {'boot_option': 'local'}
            task.node.instance_info = instance_info
            task.node.save()
            task.driver.boot.prepare_instance(task)
            clean_up_pxe_config_mock.assert_called_once_with(
                task, ipxe_enabled=False)
            self.assertFalse(set_boot_device_mock.called)

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(pxe_utils, 'create_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True)
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def _test_prepare_instance_ramdisk(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, create_pxe_config_mock,
            switch_pxe_config_mock,
            set_boot_device_mock, config_file_exits=False):
        image_info = {'kernel': ['', '/path/to/kernel'],
                      'ramdisk': ['', '/path/to/ramdisk']}
        get_image_info_mock.return_value = image_info
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        self.node.provision_state = states.DEPLOYING
        get_image_info_mock.return_value = image_info
        with task_manager.acquire(self.context, self.node.uuid) as task:
            instance_info = task.node.instance_info
            instance_info['capabilities'] = {'boot_option': 'ramdisk'}
            task.node.instance_info = instance_info
            task.node.save()
            dhcp_opts = pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False)
            dhcp_opts += pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=6)
            pxe_config_path = pxe_utils.get_pxe_config_file_path(
                task.node.uuid)
            task.driver.boot.prepare_instance(task)

            get_image_info_mock.assert_called_once_with(task,
                                                        ipxe_enabled=False)
            cache_mock.assert_called_once_with(
                task, image_info, False)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            if config_file_exits:
                self.assertFalse(create_pxe_config_mock.called)
            else:
                create_pxe_config_mock.assert_called_once_with(
                    task, mock.ANY, CONF.pxe.pxe_config_template,
                    ipxe_enabled=False)
            switch_pxe_config_mock.assert_called_once_with(
                pxe_config_path, None,
                'bios', False, ipxe_enabled=False, iscsi_boot=False,
                ramdisk_boot=True)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.PXE,
                                                         persistent=True)

    @mock.patch.object(os.path, 'isfile', lambda path: True)
    def test_prepare_instance_ramdisk_pxe_conf_exists(self):
        self._test_prepare_instance_ramdisk(config_file_exits=True)

    @mock.patch.object(os.path, 'isfile', lambda path: False)
    def test_prepare_instance_ramdisk_pxe_conf_missing(self):
        self._test_prepare_instance_ramdisk(config_file_exits=False)

    @mock.patch.object(pxe_utils, 'clean_up_pxe_env', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_clean_up_instance(self, get_image_info_mock,
                               clean_up_pxe_env_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            image_info = {'kernel': ['', '/path/to/kernel'],
                          'ramdisk': ['', '/path/to/ramdisk']}
            get_image_info_mock.return_value = image_info
            task.driver.boot.clean_up_instance(task)
            clean_up_pxe_env_mock.assert_called_once_with(task, image_info,
                                                          ipxe_enabled=False)
            get_image_info_mock.assert_called_once_with(task,
                                                        ipxe_enabled=False)


class PXERamdiskDeployTestCase(db_base.DbTestCase):

    def setUp(self):
        super(PXERamdiskDeployTestCase, self).setUp()
        self.temp_dir = tempfile.mkdtemp()
        self.config(tftp_root=self.temp_dir, group='pxe')
        self.temp_dir = tempfile.mkdtemp()
        self.config(images_path=self.temp_dir, group='pxe')
        self.config(enabled_deploy_interfaces=['ramdisk'])
        self.config(enabled_boot_interfaces=['pxe'])
        for iface in drivers_base.ALL_INTERFACES:
            impl = 'fake'
            if iface == 'network':
                impl = 'noop'
            if iface == 'deploy':
                impl = 'ramdisk'
            if iface == 'boot':
                impl = 'pxe'
            config_kwarg = {'enabled_%s_interfaces' % iface: [impl],
                            'default_%s_interface' % iface: impl}
            self.config(**config_kwarg)
        self.config(enabled_hardware_types=['fake-hardware'])
        instance_info = INST_INFO_DICT
        self.node = obj_utils.create_test_node(
            self.context,
            driver='fake-hardware',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT)
        self.port = obj_utils.create_test_port(self.context,
                                               node_id=self.node.id)

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True)
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_prepare_instance_ramdisk(
            self, get_image_info_mock, cache_mock,
            dhcp_factory_mock, switch_pxe_config_mock,
            set_boot_device_mock):
        provider_mock = mock.MagicMock()
        dhcp_factory_mock.return_value = provider_mock
        self.node.provision_state = states.DEPLOYING
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        get_image_info_mock.return_value = image_info
        with task_manager.acquire(self.context, self.node.uuid) as task:
            dhcp_opts = pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False)
            dhcp_opts += pxe_utils.dhcp_options_for_instance(
                task, ipxe_enabled=False, ip_version=6)
            pxe_config_path = pxe_utils.get_pxe_config_file_path(
                task.node.uuid)
            task.node.properties['capabilities'] = 'boot_option:netboot'
            task.node.driver_internal_info['is_whole_disk_image'] = False
            task.driver.deploy.prepare(task)
            task.driver.deploy.deploy(task)

            get_image_info_mock.assert_called_once_with(task,
                                                        ipxe_enabled=False)
            cache_mock.assert_called_once_with(
                task, image_info, ipxe_enabled=False)
            provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts)
            switch_pxe_config_mock.assert_called_once_with(
                pxe_config_path, None,
                'bios', False, ipxe_enabled=False, iscsi_boot=False,
                ramdisk_boot=True)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.PXE,
                                                         persistent=True)

    @mock.patch.object(pxe.LOG, 'warning', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True)
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_deploy(self, mock_image_info, mock_cache,
                    mock_dhcp_factory, mock_switch_config, mock_warning):
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        mock_image_info.return_value = image_info
        i_info = self.node.instance_info
        i_info.update({'capabilities': {'boot_option': 'ramdisk'}})
        self.node.instance_info = i_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertIsNone(task.driver.deploy.deploy(task))
            mock_image_info.assert_called_once_with(task, ipxe_enabled=False)
            mock_cache.assert_called_once_with(
                task, image_info, ipxe_enabled=False)
        self.assertFalse(mock_warning.called)
        i_info['configdrive'] = 'meow'
        self.node.instance_info = i_info
        self.node.save()
        mock_warning.reset_mock()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertIsNone(task.driver.deploy.deploy(task))
        self.assertTrue(mock_warning.called)

    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    def test_prepare(self, mock_prepare_instance):
        node = self.node
        node.provision_state = states.DEPLOYING
        node.instance_info = {}
        node.save()
        with task_manager.acquire(self.context, node.uuid) as task:
            task.driver.deploy.prepare(task)
            self.assertFalse(mock_prepare_instance.called)
            self.assertEqual({'boot_option': 'ramdisk'},
                             task.node.instance_info['capabilities'])

    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    def test_prepare_active(self, mock_prepare_instance):
        node = self.node
        node.provision_state = states.ACTIVE
        node.save()
        with task_manager.acquire(self.context, node.uuid) as task:
            task.driver.deploy.prepare(task)
            mock_prepare_instance.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    def test_prepare_unrescuing(self, mock_prepare_instance):
        node = self.node
        node.provision_state = states.UNRESCUING
        node.save()
        with task_manager.acquire(self.context, node.uuid) as task:
            task.driver.deploy.prepare(task)
            mock_prepare_instance.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(pxe.LOG, 'warning', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True)
    def test_prepare_fixes_and_logs_boot_option_warning(
            self, mock_prepare_instance, mock_warning):
        node = self.node
        node.properties['capabilities'] = 'boot_option:ramdisk'
        node.provision_state = states.DEPLOYING
        node.instance_info = {}
        node.save()
        with task_manager.acquire(self.context, node.uuid) as task:
            task.driver.deploy.prepare(task)
            self.assertFalse(mock_prepare_instance.called)
            self.assertEqual({'boot_option': 'ramdisk'},
                             task.node.instance_info['capabilities'])
        self.assertTrue(mock_warning.called)

    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       autospec=True)
    def test_validate(self, mock_validate_img):
        node = self.node
        node.properties['capabilities'] = 'boot_option:netboot'
        node.save()
        with task_manager.acquire(self.context, node.uuid) as task:
            task.driver.deploy.validate(task)
            self.assertTrue(mock_validate_img.called)

    @mock.patch.object(fake.FakeBoot, 'validate', autospec=True)
    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       autospec=True)
    def test_validate_interface_mismatch(self, mock_validate_image,
                                         mock_boot_validate):
        node = self.node
        node.boot_interface = 'fake'
        node.save()
        self.config(enabled_boot_interfaces=['fake'],
                    default_boot_interface='fake')
        with task_manager.acquire(self.context, node.uuid) as task:
            error = self.assertRaises(exception.InvalidParameterValue,
                                      task.driver.deploy.validate, task)
            error_message = ('Invalid configuration: The boot interface must '
                             'have the `ramdisk_boot` capability. You are '
                             'using an incompatible boot interface.')
            self.assertEqual(error_message, str(error))
            self.assertFalse(mock_boot_validate.called)
            self.assertFalse(mock_validate_image.called)

    @mock.patch.object(pxe.PXEBoot, 'validate', autospec=True)
    def test_validate_calls_boot_validate(self, mock_validate):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.deploy.validate(task)
            mock_validate.assert_called_once_with(mock.ANY, task)

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed',
                       autospec=True)
    @mock.patch.object(pxe.LOG, 'warning', autospec=True)
    @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True)
    @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True)
    @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True)
    @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True)
    def test_deploy_with_smartnic_port(
            self, mock_image_info, mock_cache,
            mock_dhcp_factory, mock_switch_config, mock_warning,
            power_on_node_if_needed_mock, restore_power_state_mock):
        image_info = {'kernel': ('', '/path/to/kernel'),
                      'ramdisk': ('', '/path/to/ramdisk')}
        mock_image_info.return_value = image_info
        i_info = self.node.instance_info
        i_info.update({'capabilities': {'boot_option': 'ramdisk'}})
        self.node.instance_info = i_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            self.assertIsNone(task.driver.deploy.deploy(task))
            mock_image_info.assert_called_once_with(task, ipxe_enabled=False)
            mock_cache.assert_called_once_with(
                task, image_info, ipxe_enabled=False)
        self.assertFalse(mock_warning.called)
        power_on_node_if_needed_mock.assert_called_once_with(task)
        restore_power_state_mock.assert_called_once_with(
            task, states.POWER_OFF)
        i_info['configdrive'] = 'meow'
self.node.instance_info = i_info self.node.save() mock_warning.reset_mock() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertIsNone(task.driver.deploy.deploy(task)) self.assertTrue(mock_warning.called) class PXEValidateRescueTestCase(db_base.DbTestCase): def setUp(self): super(PXEValidateRescueTestCase, self).setUp() for iface in drivers_base.ALL_INTERFACES: impl = 'fake' if iface == 'network': impl = 'flat' if iface == 'rescue': impl = 'agent' if iface == 'boot': impl = 'pxe' config_kwarg = {'enabled_%s_interfaces' % iface: [impl], 'default_%s_interface' % iface: impl} self.config(**config_kwarg) self.config(enabled_hardware_types=['fake-hardware']) driver_info = DRV_INFO_DICT driver_info.update({'rescue_ramdisk': 'my_ramdisk', 'rescue_kernel': 'my_kernel'}) instance_info = INST_INFO_DICT instance_info.update({'rescue_password': 'password'}) n = { 'driver': 'fake-hardware', 'instance_info': instance_info, 'driver_info': driver_info, 'driver_internal_info': DRV_INTERNAL_INFO_DICT, } self.node = obj_utils.create_test_node(self.context, **n) def test_validate_rescue(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot.validate_rescue(task) def test_validate_rescue_no_rescue_ramdisk(self): driver_info = self.node.driver_info del driver_info['rescue_ramdisk'] self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaisesRegex(exception.MissingParameterValue, 'Missing.*rescue_ramdisk', task.driver.boot.validate_rescue, task) def test_validate_rescue_fails_no_rescue_kernel(self): driver_info = self.node.driver_info del driver_info['rescue_kernel'] self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaisesRegex(exception.MissingParameterValue, 'Missing.*rescue_kernel', task.driver.boot.validate_rescue, task) @mock.patch.object(ipxe.iPXEBoot, '__init__', lambda 
self: None) @mock.patch.object(pxe.PXEBoot, '__init__', lambda self: None) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) class PXEBootRetryTestCase(db_base.DbTestCase): boot_interface = 'pxe' def setUp(self): super(PXEBootRetryTestCase, self).setUp() self.config(enabled_boot_interfaces=['pxe', 'ipxe', 'fake']) self.config(boot_retry_timeout=300, group='pxe') self.node = obj_utils.create_test_node( self.context, driver='fake-hardware', boot_interface=self.boot_interface, provision_state=states.DEPLOYWAIT) @mock.patch.object(pxe.PXEBoot, '_check_boot_status', autospec=True) def test_check_boot_timeouts(self, mock_check_status, mock_power, mock_boot_dev): def _side_effect(iface, task): self.assertEqual(self.node.uuid, task.node.uuid) mock_check_status.side_effect = _side_effect manager = mock.Mock(spec=['iter_nodes']) manager.iter_nodes.return_value = [ (uuidutils.generate_uuid(), 'fake-hardware', ''), (self.node.uuid, self.node.driver, self.node.conductor_group) ] iface = pxe.PXEBoot() iface._check_boot_timeouts(manager, self.context) mock_check_status.assert_called_once_with(iface, mock.ANY) def test_check_boot_status_another_boot_interface(self, mock_power, mock_boot_dev): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.boot = fake.FakeBoot() pxe.PXEBoot()._check_boot_status(task) self.assertTrue(task.shared) self.assertFalse(mock_power.called) self.assertFalse(mock_boot_dev.called) def test_check_boot_status_recent_power_change(self, mock_power, mock_boot_dev): for field in ('agent_last_heartbeat', 'last_power_state_change'): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_internal_info = { field: str(timeutils.utcnow().isoformat()) } task.driver.boot._check_boot_status(task) self.assertTrue(task.shared) self.assertFalse(mock_power.called) self.assertFalse(mock_boot_dev.called) def 
test_check_boot_status_maintenance(self, mock_power, mock_boot_dev):
        self.node.maintenance = True
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot._check_boot_status(task)
            self.assertFalse(task.shared)
            self.assertFalse(mock_power.called)
            self.assertFalse(mock_boot_dev.called)

    def test_check_boot_status_wrong_state(self, mock_power, mock_boot_dev):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot._check_boot_status(task)
            self.assertFalse(task.shared)
            self.assertFalse(mock_power.called)
            self.assertFalse(mock_boot_dev.called)

    def test_check_boot_status_retry(self, mock_power, mock_boot_dev):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot._check_boot_status(task)
            self.assertFalse(task.shared)
            mock_power.assert_has_calls([
                mock.call(task, states.POWER_OFF),
                mock.call(task, states.POWER_ON)
            ])
            mock_boot_dev.assert_called_once_with(task, 'pxe',
                                                  persistent=False)


class iPXEBootRetryTestCase(PXEBootRetryTestCase):
    boot_interface = 'ipxe'


ironic-15.0.0/ironic/tests/unit/drivers/modules/test_ipxe.py

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for iPXE driver.""" import os import mock from oslo_config import cfg from oslo_serialization import jsonutils as json from oslo_utils import uuidutils from ironic.common import boot_devices from ironic.common import boot_modes from ironic.common import dhcp_factory from ironic.common import exception from ironic.common.glance_service import image_service from ironic.common import pxe_utils from ironic.common import states from ironic.common import utils as common_utils from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers import base as drivers_base from ironic.drivers.modules import agent_base from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import ipxe from ironic.drivers.modules import pxe_base from ironic.drivers.modules.storage import noop as noop_storage from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF INST_INFO_DICT = db_utils.get_test_pxe_instance_info() DRV_INFO_DICT = db_utils.get_test_pxe_driver_info() DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info() # NOTE(TheJulia): This code is essentially a bulk copy of the # test_pxe file with some contextual modifications to enforce # use of ipxe while also explicitly having it globally disabled # in the conductor. 
@mock.patch.object(ipxe.iPXEBoot, '__init__', lambda self: None)
class iPXEBootTestCase(db_base.DbTestCase):

    driver = 'fake-hardware'
    boot_interface = 'ipxe'
    driver_info = DRV_INFO_DICT
    driver_internal_info = DRV_INTERNAL_INFO_DICT

    def setUp(self):
        super(iPXEBootTestCase, self).setUp()
        self.context.auth_token = 'fake'
        self.config_temp_dir('tftp_root', group='pxe')
        self.config_temp_dir('images_path', group='pxe')
        self.config_temp_dir('http_root', group='deploy')
        self.config(group='deploy', http_url='http://myserver')
        instance_info = INST_INFO_DICT
        self.config(enabled_boot_interfaces=[self.boot_interface,
                                             'ipxe', 'fake'])
        self.node = obj_utils.create_test_node(
            self.context,
            driver=self.driver,
            boot_interface=self.boot_interface,
            # Avoid fake properties in get_properties() output
            vendor_interface='no-vendor',
            instance_info=instance_info,
            driver_info=self.driver_info,
            driver_internal_info=self.driver_internal_info)
        self.port = obj_utils.create_test_port(self.context,
                                               node_id=self.node.id)
        self.config(group='conductor', api_url='http://127.0.0.1:1234/')

    def test_get_properties(self):
        expected = pxe_base.COMMON_PROPERTIES
        expected.update(agent_base.VENDOR_PROPERTIES)
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertEqual(expected, task.driver.get_properties())

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def test_validate_good(self, mock_glance):
        mock_glance.return_value = {'properties': {'kernel_id': 'fake-kernel',
                                                   'ramdisk_id': 'fake-initr'}}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.boot.validate(task)

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def test_validate_good_whole_disk_image(self, mock_glance):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.driver_internal_info['is_whole_disk_image'] = True
            task.driver.boot.validate(task)
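# Nearly every patch in these tests uses `autospec=True`, which makes the
# mock enforce the real signature and, for instance methods, records the
# bound instance as the first positional argument -- which is why assertions
# in this file pass `mock.ANY`, `task`, or the interface object as the first
# argument to `assert_called_once_with`. A small self-contained sketch of
# that behaviour (the toy `Boot` class is illustrative, not ironic code):

```python
from unittest import mock


class Boot:
    def validate(self, task):
        return task


with mock.patch.object(Boot, 'validate', autospec=True) as m:
    b = Boot()
    b.validate('task-1')
    # With autospec, the instance itself is recorded as the first
    # positional argument, so assertions can name it or use mock.ANY:
    m.assert_called_once_with(b, 'task-1')
    # The real signature is also enforced; a wrong arity raises TypeError
    # instead of silently recording a bogus call:
    try:
        b.validate('extra', 'args')
    except TypeError:
        signature_enforced = True
```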
@mock.patch.object(image_service.GlanceImageService, 'show', autospec=True) @mock.patch.object(noop_storage.NoopStorage, 'should_write_image', autospec=True) def test_validate_skip_check_write_image_false(self, mock_write, mock_glance): mock_write.return_value = False with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.boot.validate(task) self.assertFalse(mock_glance.called) def test_validate_fail_missing_deploy_kernel(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: del task.node.driver_info['deploy_kernel'] self.assertRaises(exception.MissingParameterValue, task.driver.boot.validate, task) def test_validate_fail_missing_deploy_ramdisk(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: del task.node.driver_info['deploy_ramdisk'] self.assertRaises(exception.MissingParameterValue, task.driver.boot.validate, task) def test_validate_fail_missing_image_source(self): info = dict(INST_INFO_DICT) del info['image_source'] self.node.instance_info = json.dumps(info) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node['instance_info'] = json.dumps(info) self.assertRaises(exception.MissingParameterValue, task.driver.boot.validate, task) def test_validate_fail_no_port(self): new_node = obj_utils.create_test_node( self.context, uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', driver=self.driver, boot_interface=self.boot_interface, instance_info=INST_INFO_DICT, driver_info=DRV_INFO_DICT) with task_manager.acquire(self.context, new_node.uuid, shared=True) as task: self.assertRaises(exception.MissingParameterValue, task.driver.boot.validate, task) def test_validate_fail_trusted_boot_with_secure_boot(self): instance_info = {"boot_option": "netboot", "secure_boot": "true", "trusted_boot": "true"} properties = {'capabilities': 'trusted_boot:true'} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: 
task.node.instance_info['capabilities'] = instance_info task.node.properties = properties task.node.driver_internal_info['is_whole_disk_image'] = False self.assertRaises(exception.InvalidParameterValue, task.driver.boot.validate, task) def test_validate_fail_invalid_trusted_boot_value(self): properties = {'capabilities': 'trusted_boot:value'} instance_info = {"trusted_boot": "value"} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.properties = properties task.node.instance_info['capabilities'] = instance_info self.assertRaises(exception.InvalidParameterValue, task.driver.boot.validate, task) @mock.patch.object(image_service.GlanceImageService, 'show', autospec=True) def test_validate_fail_no_image_kernel_ramdisk_props(self, mock_glance): instance_info = {"boot_option": "netboot"} mock_glance.return_value = {'properties': {}} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.instance_info['capabilities'] = instance_info self.assertRaises(exception.MissingParameterValue, task.driver.boot.validate, task) @mock.patch.object(image_service.GlanceImageService, 'show', autospec=True) def test_validate_fail_glance_image_doesnt_exists(self, mock_glance): mock_glance.side_effect = exception.ImageNotFound('not found') with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.boot.validate, task) @mock.patch.object(image_service.GlanceImageService, 'show', autospec=True) def test_validate_fail_glance_conn_problem(self, mock_glance): exceptions = (exception.GlanceConnectionFailed('connection fail'), exception.ImageNotAuthorized('not authorized'), exception.Invalid('invalid')) mock_glance.side_effect = exceptions for exc in exceptions: with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.boot.validate, task) def 
test_validate_inspection(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot.validate_inspection(task) def test_validate_inspection_no_inspection_ramdisk(self): driver_info = self.node.driver_info del driver_info['deploy_ramdisk'] self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.UnsupportedDriverExtension, task.driver.boot.validate_inspection, task) # TODO(TheJulia): Many of the interfaces mocked below are private PXE # interface methods. As time progresses, these will need to be migrated # and refactored as we begin to separate PXE and iPXE interfaces. @mock.patch.object(manager_utils, 'node_get_boot_mode', autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(dhcp_factory, 'DHCPFactory') @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True) @mock.patch.object(pxe_utils, 'get_image_info', autospec=True) @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True) @mock.patch.object(pxe_utils, 'build_pxe_config_options', autospec=True) @mock.patch.object(pxe_utils, 'create_pxe_config', autospec=True) def _test_prepare_ramdisk(self, mock_pxe_config, mock_build_pxe, mock_cache_r_k, mock_deploy_img_info, mock_instance_img_info, dhcp_factory_mock, set_boot_device_mock, get_boot_mode_mock, uefi=False, cleaning=False, ipxe_use_swift=False, whole_disk_image=False, mode='deploy', node_boot_mode=None, persistent=False): mock_build_pxe.return_value = {} kernel_label = '%s_kernel' % mode ramdisk_label = '%s_ramdisk' % mode mock_deploy_img_info.return_value = {kernel_label: 'a', ramdisk_label: 'r'} if whole_disk_image: mock_instance_img_info.return_value = {} else: mock_instance_img_info.return_value = {'kernel': 'b'} mock_pxe_config.return_value = None mock_cache_r_k.return_value = None provider_mock = mock.MagicMock() dhcp_factory_mock.return_value = provider_mock 
get_boot_mode_mock.return_value = node_boot_mode driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = whole_disk_image self.node.driver_internal_info = driver_internal_info if mode == 'rescue': mock_deploy_img_info.return_value = { 'rescue_kernel': 'a', 'rescue_ramdisk': 'r'} self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: dhcp_opts = pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=4) dhcp_opts += pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=6) task.driver.boot.prepare_ramdisk(task, {'foo': 'bar'}) mock_deploy_img_info.assert_called_once_with(task.node, mode=mode, ipxe_enabled=True) provider_mock.update_dhcp.assert_called_once_with( task, dhcp_opts) if self.node.provision_state == states.DEPLOYING: get_boot_mode_mock.assert_called_once_with(task) set_boot_device_mock.assert_called_once_with(task, boot_devices.PXE, persistent=persistent) if ipxe_use_swift: if whole_disk_image: self.assertFalse(mock_cache_r_k.called) else: mock_cache_r_k.assert_called_once_with( task, {'kernel': 'b'}, ipxe_enabled=True) mock_instance_img_info.assert_called_once_with( task, ipxe_enabled=True) elif not cleaning and mode == 'deploy': mock_cache_r_k.assert_called_once_with( task, {'deploy_kernel': 'a', 'deploy_ramdisk': 'r', 'kernel': 'b'}, ipxe_enabled=True) mock_instance_img_info.assert_called_once_with( task, ipxe_enabled=True) elif mode == 'deploy': mock_cache_r_k.assert_called_once_with( task, {'deploy_kernel': 'a', 'deploy_ramdisk': 'r'}, ipxe_enabled=True) elif mode == 'rescue': mock_cache_r_k.assert_called_once_with( task, {'rescue_kernel': 'a', 'rescue_ramdisk': 'r'}, ipxe_enabled=True) if uefi: mock_pxe_config.assert_called_once_with( task, {}, CONF.pxe.uefi_pxe_config_template, ipxe_enabled=True) else: mock_pxe_config.assert_called_once_with( task, {}, CONF.pxe.pxe_config_template, ipxe_enabled=True) def test_prepare_ramdisk(self): 
self.node.provision_state = states.DEPLOYING self.node.save() self._test_prepare_ramdisk() def test_prepare_ramdisk_force_persistent_boot_device_true(self): self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = 'True' self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk(persistent=True) def test_prepare_ramdisk_force_persistent_boot_device_bool_true(self): self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = True self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk(persistent=True) def test_prepare_ramdisk_force_persistent_boot_device_sloppy_true(self): for value in ['true', 't', '1', 'on', 'y', 'YES']: self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = value self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk(persistent=True) def test_prepare_ramdisk_force_persistent_boot_device_false(self): self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = 'False' self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk() def test_prepare_ramdisk_force_persistent_boot_device_bool_false(self): self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = False self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk(persistent=False) def test_prepare_ramdisk_force_persistent_boot_device_sloppy_false(self): for value in ['false', 'f', '0', 'off', 'n', 'NO', 'yxz']: self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = value self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk() def 
test_prepare_ramdisk_force_persistent_boot_device_default(self): self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = 'Default' self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk(persistent=False) def test_prepare_ramdisk_force_persistent_boot_device_always(self): self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = 'Always' self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk(persistent=True) def test_prepare_ramdisk_force_persistent_boot_device_never(self): self.node.provision_state = states.DEPLOYING driver_info = self.node.driver_info driver_info['force_persistent_boot_device'] = 'Never' self.node.driver_info = driver_info self.node.save() self._test_prepare_ramdisk(persistent=False) def test_prepare_ramdisk_rescue(self): self.node.provision_state = states.RESCUING self.node.save() self._test_prepare_ramdisk(mode='rescue') def test_prepare_ramdisk_uefi(self): self.node.provision_state = states.DEPLOYING self.node.save() properties = self.node.properties properties['capabilities'] = 'boot_mode:uefi' self.node.properties = properties self.node.save() self._test_prepare_ramdisk(uefi=True) @mock.patch.object(os.path, 'isfile', lambda path: True) @mock.patch.object(common_utils, 'file_has_content', lambda *args: False) @mock.patch('ironic.common.utils.write_to_file', autospec=True) @mock.patch('ironic.common.utils.render_template', autospec=True) def test_prepare_ramdisk_ipxe_with_copy_file_different( self, render_mock, write_mock): self.node.provision_state = states.DEPLOYING self.node.save() render_mock.return_value = 'foo' self._test_prepare_ramdisk() write_mock.assert_called_once_with( os.path.join( CONF.deploy.http_root, os.path.basename(CONF.pxe.ipxe_boot_script)), 'foo') render_mock.assert_called_once_with( CONF.pxe.ipxe_boot_script, {'ipxe_for_mac_uri': 
'pxelinux.cfg/'}) @mock.patch.object(os.path, 'isfile', lambda path: False) @mock.patch('ironic.common.utils.file_has_content', autospec=True) @mock.patch('ironic.common.utils.write_to_file', autospec=True) @mock.patch('ironic.common.utils.render_template', autospec=True) def test_prepare_ramdisk_ipxe_with_copy_no_file( self, render_mock, write_mock, file_has_content_mock): self.node.provision_state = states.DEPLOYING self.node.save() render_mock.return_value = 'foo' self._test_prepare_ramdisk() self.assertFalse(file_has_content_mock.called) write_mock.assert_called_once_with( os.path.join( CONF.deploy.http_root, os.path.basename(CONF.pxe.ipxe_boot_script)), 'foo') render_mock.assert_called_once_with( CONF.pxe.ipxe_boot_script, {'ipxe_for_mac_uri': 'pxelinux.cfg/'}) @mock.patch.object(os.path, 'isfile', lambda path: True) @mock.patch.object(common_utils, 'file_has_content', lambda *args: True) @mock.patch('ironic.common.utils.write_to_file', autospec=True) @mock.patch('ironic.common.utils.render_template', autospec=True) def test_prepare_ramdisk_ipxe_without_copy( self, render_mock, write_mock): self.node.provision_state = states.DEPLOYING self.node.save() self._test_prepare_ramdisk() self.assertFalse(write_mock.called) @mock.patch.object(common_utils, 'render_template', lambda *args: 'foo') @mock.patch('ironic.common.utils.write_to_file', autospec=True) def test_prepare_ramdisk_ipxe_swift(self, write_mock): self.node.provision_state = states.DEPLOYING self.node.save() self.config(group='pxe', ipxe_use_swift=True) self._test_prepare_ramdisk(ipxe_use_swift=True) write_mock.assert_called_once_with( os.path.join( CONF.deploy.http_root, os.path.basename(CONF.pxe.ipxe_boot_script)), 'foo') @mock.patch.object(common_utils, 'render_template', lambda *args: 'foo') @mock.patch('ironic.common.utils.write_to_file', autospec=True) def test_prepare_ramdisk_ipxe_swift_whole_disk_image( self, write_mock): self.node.provision_state = states.DEPLOYING self.node.save() 
self.config(group='pxe', ipxe_use_swift=True) self._test_prepare_ramdisk(ipxe_use_swift=True, whole_disk_image=True) write_mock.assert_called_once_with( os.path.join( CONF.deploy.http_root, os.path.basename(CONF.pxe.ipxe_boot_script)), 'foo') def test_prepare_ramdisk_cleaning(self): self.node.provision_state = states.CLEANING self.node.save() self._test_prepare_ramdisk(cleaning=True) @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True) def test_prepare_ramdisk_set_boot_mode_on_bm( self, set_boot_mode_mock): self.node.provision_state = states.DEPLOYING properties = self.node.properties properties['capabilities'] = 'boot_mode:uefi' self.node.properties = properties self.node.save() self._test_prepare_ramdisk(uefi=True) set_boot_mode_mock.assert_called_once_with(mock.ANY, boot_modes.UEFI) @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True) def test_prepare_ramdisk_set_boot_mode_on_ironic( self, set_boot_mode_mock): self.node.provision_state = states.DEPLOYING self.node.save() self._test_prepare_ramdisk(node_boot_mode=boot_modes.LEGACY_BIOS) with task_manager.acquire(self.context, self.node.uuid) as task: driver_internal_info = task.node.driver_internal_info self.assertIn('deploy_boot_mode', driver_internal_info) self.assertEqual(boot_modes.LEGACY_BIOS, driver_internal_info['deploy_boot_mode']) self.assertEqual(set_boot_mode_mock.call_count, 0) @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True) def test_prepare_ramdisk_set_default_boot_mode_on_ironic_bios( self, set_boot_mode_mock): self.node.provision_state = states.DEPLOYING self.node.save() self.config(default_boot_mode=boot_modes.LEGACY_BIOS, group='deploy') self._test_prepare_ramdisk() with task_manager.acquire(self.context, self.node.uuid) as task: driver_internal_info = task.node.driver_internal_info self.assertIn('deploy_boot_mode', driver_internal_info) self.assertEqual(boot_modes.LEGACY_BIOS, driver_internal_info['deploy_boot_mode']) 
self.assertEqual(set_boot_mode_mock.call_count, 1) @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True) def test_prepare_ramdisk_set_default_boot_mode_on_ironic_uefi( self, set_boot_mode_mock): self.node.provision_state = states.DEPLOYING self.node.save() self.config(default_boot_mode=boot_modes.UEFI, group='deploy') self._test_prepare_ramdisk(uefi=True) with task_manager.acquire(self.context, self.node.uuid) as task: driver_internal_info = task.node.driver_internal_info self.assertIn('deploy_boot_mode', driver_internal_info) self.assertEqual(boot_modes.UEFI, driver_internal_info['deploy_boot_mode']) self.assertEqual(set_boot_mode_mock.call_count, 1) @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True) def test_prepare_ramdisk_conflicting_boot_modes( self, set_boot_mode_mock): self.node.provision_state = states.DEPLOYING properties = self.node.properties properties['capabilities'] = 'boot_mode:uefi' self.node.properties = properties self.node.save() self._test_prepare_ramdisk(uefi=True, node_boot_mode=boot_modes.LEGACY_BIOS) set_boot_mode_mock.assert_called_once_with(mock.ANY, boot_modes.UEFI) @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True) def test_prepare_ramdisk_conflicting_boot_modes_set_unsupported( self, set_boot_mode_mock): self.node.provision_state = states.DEPLOYING properties = self.node.properties properties['capabilities'] = 'boot_mode:uefi' self.node.properties = properties self.node.save() set_boot_mode_mock.side_effect = exception.UnsupportedDriverExtension( extension='management', driver='test-driver' ) self.assertRaises(exception.UnsupportedDriverExtension, self._test_prepare_ramdisk, uefi=True, node_boot_mode=boot_modes.LEGACY_BIOS) @mock.patch.object(manager_utils, 'node_set_boot_mode', autospec=True) def test_prepare_ramdisk_set_boot_mode_not_called( self, set_boot_mode_mock): self.node.provision_state = states.DEPLOYING self.node.save() properties = self.node.properties 
properties['capabilities'] = 'boot_mode:uefi' self.node.properties = properties self.node.save() self._test_prepare_ramdisk(uefi=True, node_boot_mode=boot_modes.UEFI) self.assertEqual(set_boot_mode_mock.call_count, 0) @mock.patch.object(pxe_utils, 'clean_up_pxe_env', autospec=True) @mock.patch.object(pxe_utils, 'get_image_info', autospec=True) def _test_clean_up_ramdisk(self, get_image_info_mock, clean_up_pxe_env_mock, mode='deploy'): with task_manager.acquire(self.context, self.node.uuid) as task: kernel_label = '%s_kernel' % mode ramdisk_label = '%s_ramdisk' % mode image_info = {kernel_label: ['', '/path/to/' + kernel_label], ramdisk_label: ['', '/path/to/' + ramdisk_label]} get_image_info_mock.return_value = image_info task.driver.boot.clean_up_ramdisk(task) clean_up_pxe_env_mock.assert_called_once_with( task, image_info, ipxe_enabled=True) get_image_info_mock.assert_called_once_with( task.node, mode=mode, ipxe_enabled=True) def test_clean_up_ramdisk(self): self.node.provision_state = states.DEPLOYING self.node.save() self._test_clean_up_ramdisk() def test_clean_up_ramdisk_rescue(self): self.node.provision_state = states.RESCUING self.node.save() self._test_clean_up_ramdisk(mode='rescue') @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True) @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True) @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True) @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True) def test_prepare_instance_netboot( self, get_image_info_mock, cache_mock, dhcp_factory_mock, switch_pxe_config_mock, set_boot_device_mock): provider_mock = mock.MagicMock() dhcp_factory_mock.return_value = provider_mock image_info = {'kernel': ('', '/path/to/kernel'), 'ramdisk': ('', '/path/to/ramdisk')} instance_info = {"boot_option": "netboot"} get_image_info_mock.return_value = image_info with task_manager.acquire(self.context, 
self.node.uuid) as task: dhcp_opts = pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True) dhcp_opts += pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=6) pxe_config_path = pxe_utils.get_pxe_config_file_path( task.node.uuid, ipxe_enabled=True) task.node.properties['capabilities'] = 'boot_mode:bios' task.node.instance_info['capabilities'] = instance_info task.node.driver_internal_info['root_uuid_or_disk_id'] = ( "30212642-09d3-467f-8e09-21685826ab50") task.node.driver_internal_info['is_whole_disk_image'] = False task.driver.boot.prepare_instance(task) get_image_info_mock.assert_called_once_with( task, ipxe_enabled=True) cache_mock.assert_called_once_with(task, image_info, ipxe_enabled=True) provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts) switch_pxe_config_mock.assert_called_once_with( pxe_config_path, "30212642-09d3-467f-8e09-21685826ab50", 'bios', False, False, False, False, ipxe_enabled=True) set_boot_device_mock.assert_called_once_with(task, boot_devices.PXE, persistent=True) @mock.patch('os.path.isfile', return_value=False) @mock.patch.object(pxe_utils, 'create_pxe_config', autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True) @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True) @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True) @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True) def test_prepare_instance_netboot_active( self, get_image_info_mock, cache_mock, dhcp_factory_mock, switch_pxe_config_mock, set_boot_device_mock, create_pxe_config_mock, isfile_mock): provider_mock = mock.MagicMock() dhcp_factory_mock.return_value = provider_mock image_info = {'kernel': ('', '/path/to/kernel'), 'ramdisk': ('', '/path/to/ramdisk')} instance_info = {"boot_option": "netboot"} get_image_info_mock.return_value = image_info self.node.provision_state = states.ACTIVE 
self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: dhcp_opts = pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True) dhcp_opts += pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=6) pxe_config_path = pxe_utils.get_pxe_config_file_path( task.node.uuid, ipxe_enabled=True) task.node.properties['capabilities'] = 'boot_mode:bios' task.node.instance_info['capabilities'] = instance_info task.node.driver_internal_info['root_uuid_or_disk_id'] = ( "30212642-09d3-467f-8e09-21685826ab50") task.node.driver_internal_info['is_whole_disk_image'] = False task.driver.boot.prepare_instance(task) get_image_info_mock.assert_called_once_with( task, ipxe_enabled=True) cache_mock.assert_called_once_with(task, image_info, ipxe_enabled=True) provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts) create_pxe_config_mock.assert_called_once_with( task, mock.ANY, CONF.pxe.pxe_config_template, ipxe_enabled=True) switch_pxe_config_mock.assert_called_once_with( pxe_config_path, "30212642-09d3-467f-8e09-21685826ab50", 'bios', False, False, False, False, ipxe_enabled=True) self.assertFalse(set_boot_device_mock.called) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True) @mock.patch.object(dhcp_factory, 'DHCPFactory') @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True) @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True) def test_prepare_instance_netboot_missing_root_uuid( self, get_image_info_mock, cache_mock, dhcp_factory_mock, switch_pxe_config_mock, set_boot_device_mock): provider_mock = mock.MagicMock() dhcp_factory_mock.return_value = provider_mock image_info = {'kernel': ('', '/path/to/kernel'), 'ramdisk': ('', '/path/to/ramdisk')} get_image_info_mock.return_value = image_info instance_info = {"boot_option": "netboot"} with task_manager.acquire(self.context, self.node.uuid) as task: dhcp_opts = 
pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=4) dhcp_opts += pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=6) task.node.properties['capabilities'] = 'boot_mode:bios' task.node.instance_info['capabilities'] = instance_info task.node.driver_internal_info['is_whole_disk_image'] = False task.driver.boot.prepare_instance(task) get_image_info_mock.assert_called_once_with( task, ipxe_enabled=True) cache_mock.assert_called_once_with(task, image_info, ipxe_enabled=True) provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts) self.assertFalse(switch_pxe_config_mock.called) self.assertFalse(set_boot_device_mock.called) # NOTE(TheJulia): The log mock below is attached to the iPXE interface # which directly logs the warning that is being checked for. @mock.patch.object(pxe_base.LOG, 'warning', autospec=True) @mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(dhcp_factory, 'DHCPFactory') @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True) @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True) def test_prepare_instance_whole_disk_image_missing_root_uuid( self, get_image_info_mock, cache_mock, dhcp_factory_mock, set_boot_device_mock, clean_up_pxe_mock, log_mock): provider_mock = mock.MagicMock() dhcp_factory_mock.return_value = provider_mock get_image_info_mock.return_value = {} instance_info = {"boot_option": "netboot"} with task_manager.acquire(self.context, self.node.uuid) as task: dhcp_opts = pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True) dhcp_opts += pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=6) task.node.properties['capabilities'] = 'boot_mode:bios' task.node.instance_info['capabilities'] = instance_info task.node.driver_internal_info['is_whole_disk_image'] = True task.driver.boot.prepare_instance(task) 
get_image_info_mock.assert_called_once_with( task, ipxe_enabled=True) cache_mock.assert_called_once_with(task, {}, ipxe_enabled=True) provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts) self.assertTrue(log_mock.called) clean_up_pxe_mock.assert_called_once_with(task, ipxe_enabled=True) set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch('os.path.isfile', lambda filename: False) @mock.patch.object(pxe_utils, 'create_pxe_config', autospec=True) @mock.patch.object(deploy_utils, 'is_iscsi_boot', lambda task: True) @mock.patch.object(noop_storage.NoopStorage, 'should_write_image', lambda task: False) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(deploy_utils, 'switch_pxe_config', autospec=True) @mock.patch.object(dhcp_factory, 'DHCPFactory', autospec=True) @mock.patch.object(pxe_utils, 'cache_ramdisk_kernel', autospec=True) @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True) def test_prepare_instance_netboot_iscsi( self, get_image_info_mock, cache_mock, dhcp_factory_mock, switch_pxe_config_mock, set_boot_device_mock, create_pxe_config_mock): http_url = 'http://192.1.2.3:1234' self.config(http_url=http_url, group='deploy') provider_mock = mock.MagicMock() dhcp_factory_mock.return_value = provider_mock vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': 0, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_internal_info = { 'boot_from_volume': vol_id} dhcp_opts = pxe_utils.dhcp_options_for_instance(task, ipxe_enabled=True) dhcp_opts += pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True, ip_version=6) pxe_config_path = 
pxe_utils.get_pxe_config_file_path( task.node.uuid, ipxe_enabled=True) task.node.properties['capabilities'] = 'boot_mode:bios' task.driver.boot.prepare_instance(task) self.assertFalse(get_image_info_mock.called) self.assertFalse(cache_mock.called) provider_mock.update_dhcp.assert_called_once_with(task, dhcp_opts) create_pxe_config_mock.assert_called_once_with( task, mock.ANY, CONF.pxe.pxe_config_template, ipxe_enabled=True) switch_pxe_config_mock.assert_called_once_with( pxe_config_path, None, boot_modes.LEGACY_BIOS, False, ipxe_enabled=True, iscsi_boot=True, ramdisk_boot=False) set_boot_device_mock.assert_called_once_with(task, boot_devices.PXE, persistent=True) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True) def test_prepare_instance_localboot(self, clean_up_pxe_config_mock, set_boot_device_mock): with task_manager.acquire(self.context, self.node.uuid) as task: instance_info = task.node.instance_info instance_info['capabilities'] = {'boot_option': 'local'} task.node.instance_info = instance_info task.node.save() task.driver.boot.prepare_instance(task) clean_up_pxe_config_mock.assert_called_once_with( task, ipxe_enabled=True) set_boot_device_mock.assert_called_once_with(task, boot_devices.DISK, persistent=True) @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True) @mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True) def test_prepare_instance_localboot_active(self, clean_up_pxe_config_mock, set_boot_device_mock): self.node.provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: instance_info = task.node.instance_info instance_info['capabilities'] = {'boot_option': 'local'} task.node.instance_info = instance_info task.node.save() task.driver.boot.prepare_instance(task) clean_up_pxe_config_mock.assert_called_once_with( task, ipxe_enabled=True) 
self.assertFalse(set_boot_device_mock.called) @mock.patch.object(pxe_utils, 'clean_up_pxe_env', autospec=True) @mock.patch.object(pxe_utils, 'get_instance_image_info', autospec=True) def test_clean_up_instance(self, get_image_info_mock, clean_up_pxe_env_mock): with task_manager.acquire(self.context, self.node.uuid) as task: image_info = {'kernel': ['', '/path/to/kernel'], 'ramdisk': ['', '/path/to/ramdisk']} get_image_info_mock.return_value = image_info task.driver.boot.clean_up_instance(task) clean_up_pxe_env_mock.assert_called_once_with( task, image_info, ipxe_enabled=True) get_image_info_mock.assert_called_once_with( task, ipxe_enabled=True) @mock.patch.object(ipxe.iPXEBoot, '__init__', lambda self: None) class iPXEValidateRescueTestCase(db_base.DbTestCase): def setUp(self): super(iPXEValidateRescueTestCase, self).setUp() for iface in drivers_base.ALL_INTERFACES: impl = 'fake' if iface == 'network': impl = 'flat' if iface == 'rescue': impl = 'agent' if iface == 'boot': impl = 'ipxe' config_kwarg = {'enabled_%s_interfaces' % iface: [impl], 'default_%s_interface' % iface: impl} self.config(**config_kwarg) self.config(enabled_hardware_types=['fake-hardware']) driver_info = DRV_INFO_DICT driver_info.update({'rescue_ramdisk': 'my_ramdisk', 'rescue_kernel': 'my_kernel'}) instance_info = INST_INFO_DICT instance_info.update({'rescue_password': 'password'}) n = { 'driver': 'fake-hardware', 'instance_info': instance_info, 'driver_info': driver_info, 'driver_internal_info': DRV_INTERNAL_INFO_DICT, } self.node = obj_utils.create_test_node(self.context, **n) def test_validate_rescue(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot.validate_rescue(task) def test_validate_rescue_no_rescue_ramdisk(self): driver_info = self.node.driver_info del driver_info['rescue_ramdisk'] self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: 
            self.assertRaisesRegex(exception.MissingParameterValue,
                                   'Missing.*rescue_ramdisk',
                                   task.driver.boot.validate_rescue, task)

    def test_validate_rescue_fails_no_rescue_kernel(self):
        driver_info = self.node.driver_info
        del driver_info['rescue_kernel']
        self.node.driver_info = driver_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaisesRegex(exception.MissingParameterValue,
                                   'Missing.*rescue_kernel',
                                   task.driver.boot.validate_rescue, task)

ironic-15.0.0/ironic/tests/unit/drivers/modules/test_agent_client.py

# Copyright 2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
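The agent-client tests in the next file replace the HTTP layer with a `MagicMock` session whose `post` returns a canned response object, so command dispatch can be asserted without a network. A minimal standalone sketch of that pattern follows; the names `FakeResponse` and `call_agent` are illustrative stand-ins, not part of ironic's API:

```python
import json
from unittest import mock


class FakeResponse(object):
    # Stands in for a requests.Response: carries a text body and
    # decodes it on .json(), like MockResponse in the test file below.
    def __init__(self, text, status_code=200):
        self.text = text
        self.status_code = status_code

    def json(self):
        return json.loads(self.text)


def call_agent(session, url, method, params):
    # Mirrors the shape of an agent command call: POST a JSON body
    # naming the command, then decode the JSON reply.
    body = json.dumps({'name': method, 'params': params})
    resp = session.post(url, data=body, timeout=60)
    return resp.json()


# The session is a MagicMock, so .post records its call arguments
# and returns whatever we configure -- no HTTP happens.
session = mock.MagicMock()
session.post.return_value = FakeResponse(json.dumps({'status': 'ok'}))
result = call_agent(session, 'http://127.0.0.1:9999/v1/commands/',
                    'standby.prepare_image', {})
assert result == {'status': 'ok'}
session.post.assert_called_once_with(
    'http://127.0.0.1:9999/v1/commands/', data=mock.ANY, timeout=60)
```

The same stub-then-assert structure (configure `return_value` or `side_effect`, call the code under test, assert on the recorded call) is what `TestAgentClient` and `TestAgentClientAttempts` use throughout.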
from http import client as http_client
import json

import mock
import requests
import retrying

from ironic.common import exception
from ironic import conf
from ironic.drivers.modules import agent_client
from ironic.tests import base

CONF = conf.CONF


class MockResponse(object):
    def __init__(self, text, status_code=http_client.OK):
        assert isinstance(text, str)
        self.text = text
        self.status_code = status_code

    def json(self):
        return json.loads(self.text)


class MockNode(object):
    def __init__(self):
        self.uuid = 'uuid'
        self.driver_internal_info = {
            'agent_url': "http://127.0.0.1:9999",
            'hardware_manager_version': {'generic': '1'}
        }
        self.instance_info = {}

    def as_dict(self, secure=False):
        assert secure, 'agent_client must pass secure=True'
        return {
            'uuid': self.uuid,
            'driver_internal_info': self.driver_internal_info,
            'instance_info': self.instance_info
        }


class TestAgentClient(base.TestCase):
    def setUp(self):
        super(TestAgentClient, self).setUp()
        self.client = agent_client.AgentClient()
        self.client.session = mock.MagicMock(autospec=requests.Session)
        self.node = MockNode()

    def test_content_type_header(self):
        client = agent_client.AgentClient()
        self.assertEqual('application/json',
                         client.session.headers['Content-Type'])

    def test__get_command_url(self):
        command_url = self.client._get_command_url(self.node)
        expected = ('%s/v1/commands/'
                    % self.node.driver_internal_info['agent_url'])
        self.assertEqual(expected, command_url)

    def test__get_command_url_fail(self):
        del self.node.driver_internal_info['agent_url']
        self.assertRaises(exception.IronicException,
                          self.client._get_command_url,
                          self.node)

    def test__get_command_body(self):
        expected = json.dumps({'name': 'prepare_image', 'params': {}})
        self.assertEqual(expected,
                         self.client._get_command_body('prepare_image', {}))

    def test__command(self):
        response_data = {'status': 'ok'}
        response_text = json.dumps(response_data)
        self.client.session.post.return_value = MockResponse(response_text)
        method = 'standby.run_image'
        image_info =
{'image_id': 'test_image'} params = {'image_info': image_info} url = self.client._get_command_url(self.node) body = self.client._get_command_body(method, params) response = self.client._command(self.node, method, params) self.assertEqual(response, response_data) self.client.session.post.assert_called_once_with( url, data=body, params={'wait': 'false'}, timeout=60) def test__command_fail_json(self): response_text = 'this be not json matey!' self.client.session.post.return_value = MockResponse(response_text) method = 'standby.run_image' image_info = {'image_id': 'test_image'} params = {'image_info': image_info} url = self.client._get_command_url(self.node) body = self.client._get_command_body(method, params) self.assertRaises(exception.IronicException, self.client._command, self.node, method, params) self.client.session.post.assert_called_once_with( url, data=body, params={'wait': 'false'}, timeout=60) def test__command_fail_post(self): error = 'Boom' self.client.session.post.side_effect = requests.RequestException(error) method = 'foo.bar' params = {} self.client._get_command_url(self.node) self.client._get_command_body(method, params) e = self.assertRaises(exception.IronicException, self.client._command, self.node, method, params) self.assertEqual('Error invoking agent command %(method)s for node ' '%(node)s. Error: %(error)s' % {'method': method, 'node': self.node.uuid, 'error': error}, str(e)) def test__command_fail_connect(self): error = 'Boom' self.client.session.post.side_effect = requests.ConnectionError(error) method = 'foo.bar' params = {} self.client._get_command_url(self.node) self.client._get_command_body(method, params) e = self.assertRaises(exception.AgentConnectionFailed, self.client._command, self.node, method, params) self.assertEqual('Connection to agent failed: Failed to connect to ' 'the agent running on node %(node)s for invoking ' 'command %(method)s. 
Error: %(error)s' % {'method': method, 'node': self.node.uuid, 'error': error}, str(e)) def test__command_error_code(self): response_text = '{"faultstring": "you dun goofd"}' self.client.session.post.return_value = MockResponse( response_text, status_code=http_client.BAD_REQUEST) method = 'standby.run_image' image_info = {'image_id': 'test_image'} params = {'image_info': image_info} url = self.client._get_command_url(self.node) body = self.client._get_command_body(method, params) self.assertRaises(exception.AgentAPIError, self.client._command, self.node, method, params) self.client.session.post.assert_called_once_with( url, data=body, params={'wait': 'false'}, timeout=60) def test__command_error_code_okay_error_typeerror_embedded(self): response_text = ('{"faultstring": "you dun goofd", ' '"command_error": {"type": "TypeError"}}') self.client.session.post.return_value = MockResponse( response_text) method = 'standby.run_image' image_info = {'image_id': 'test_image'} params = {'image_info': image_info} url = self.client._get_command_url(self.node) body = self.client._get_command_body(method, params) self.assertRaises(exception.AgentAPIError, self.client._command, self.node, method, params) self.client.session.post.assert_called_once_with( url, data=body, params={'wait': 'false'}, timeout=60) def test_get_commands_status(self): with mock.patch.object(self.client.session, 'get', autospec=True) as mock_get: res = mock.MagicMock(spec_set=['json']) res.json.return_value = {'commands': []} mock_get.return_value = res self.assertEqual([], self.client.get_commands_status(self.node)) agent_url = self.node.driver_internal_info.get('agent_url') mock_get.assert_called_once_with( '%(agent_url)s/%(api_version)s/commands' % { 'agent_url': agent_url, 'api_version': CONF.agent.agent_api_version}, timeout=CONF.agent.command_timeout) def test_get_commands_status_retries(self): with mock.patch.object(self.client.session, 'get', autospec=True) as mock_get: res = 
mock.MagicMock(spec_set=['json']) res.json.return_value = {'commands': []} mock_get.side_effect = [ requests.ConnectionError('boom'), res] self.assertEqual([], self.client.get_commands_status(self.node)) self.assertEqual(2, mock_get.call_count) def test_prepare_image(self): self.client._command = mock.MagicMock(spec_set=[]) image_info = {'image_id': 'image'} params = {'image_info': image_info} self.client.prepare_image(self.node, image_info, wait=False) self.client._command.assert_called_once_with( node=self.node, method='standby.prepare_image', params=params, wait=False) def test_prepare_image_with_configdrive(self): self.client._command = mock.MagicMock(spec_set=[]) configdrive_url = 'http://swift/configdrive' self.node.instance_info['configdrive'] = configdrive_url image_info = {'image_id': 'image'} params = { 'image_info': image_info, 'configdrive': configdrive_url, } self.client.prepare_image(self.node, image_info, wait=False) self.client._command.assert_called_once_with( node=self.node, method='standby.prepare_image', params=params, wait=False) def test_start_iscsi_target(self): self.client._command = mock.MagicMock(spec_set=[]) iqn = 'fake-iqn' port = agent_client.DEFAULT_IPA_PORTAL_PORT wipe_disk_metadata = False params = {'iqn': iqn, 'portal_port': port, 'wipe_disk_metadata': wipe_disk_metadata} self.client.start_iscsi_target(self.node, iqn) self.client._command.assert_called_once_with( node=self.node, method='iscsi.start_iscsi_target', params=params, wait=True) def test_start_iscsi_target_custom_port(self): self.client._command = mock.MagicMock(spec_set=[]) iqn = 'fake-iqn' port = 3261 wipe_disk_metadata = False params = {'iqn': iqn, 'portal_port': port, 'wipe_disk_metadata': wipe_disk_metadata} self.client.start_iscsi_target(self.node, iqn, portal_port=port) self.client._command.assert_called_once_with( node=self.node, method='iscsi.start_iscsi_target', params=params, wait=True) def test_start_iscsi_target_wipe_disk_metadata(self): self.client._command = 
mock.MagicMock(spec_set=[]) iqn = 'fake-iqn' port = agent_client.DEFAULT_IPA_PORTAL_PORT wipe_disk_metadata = True params = {'iqn': iqn, 'portal_port': port, 'wipe_disk_metadata': wipe_disk_metadata} self.client.start_iscsi_target(self.node, iqn, wipe_disk_metadata=wipe_disk_metadata) self.client._command.assert_called_once_with( node=self.node, method='iscsi.start_iscsi_target', params=params, wait=True) def _test_install_bootloader(self, root_uuid, efi_system_part_uuid=None, prep_boot_part_uuid=None): self.client._command = mock.MagicMock(spec_set=[]) params = {'root_uuid': root_uuid, 'efi_system_part_uuid': efi_system_part_uuid, 'prep_boot_part_uuid': prep_boot_part_uuid, 'target_boot_mode': 'hello'} self.client.install_bootloader( self.node, root_uuid, efi_system_part_uuid=efi_system_part_uuid, prep_boot_part_uuid=prep_boot_part_uuid, target_boot_mode='hello') self.client._command.assert_called_once_with( command_timeout_factor=2, node=self.node, method='image.install_bootloader', params=params, wait=True) def test_install_bootloader(self): self._test_install_bootloader(root_uuid='fake-root-uuid', efi_system_part_uuid='fake-efi-uuid') def test_install_bootloader_with_prep(self): self._test_install_bootloader(root_uuid='fake-root-uuid', efi_system_part_uuid='fake-efi-uuid', prep_boot_part_uuid='fake-prep-uuid') def test_get_clean_steps(self): self.client._command = mock.MagicMock(spec_set=[]) ports = [] expected_params = { 'node': self.node.as_dict(secure=True), 'ports': [] } self.client.get_clean_steps(self.node, ports) self.client._command.assert_called_once_with( node=self.node, method='clean.get_clean_steps', params=expected_params, wait=True) def test_execute_clean_step(self): self.client._command = mock.MagicMock(spec_set=[]) ports = [] step = {'priority': 10, 'step': 'erase_devices', 'interface': 'deploy'} expected_params = { 'step': step, 'node': self.node.as_dict(secure=True), 'ports': [], 'clean_version': 
self.node.driver_internal_info['hardware_manager_version'] } self.client.execute_clean_step(step, self.node, ports) self.client._command.assert_called_once_with( node=self.node, method='clean.execute_clean_step', params=expected_params) def test_power_off(self): self.client._command = mock.MagicMock(spec_set=[]) self.client.power_off(self.node) self.client._command.assert_called_once_with( node=self.node, method='standby.power_off', params={}) def test_sync(self): self.client._command = mock.MagicMock(spec_set=[]) self.client.sync(self.node) self.client._command.assert_called_once_with( node=self.node, method='standby.sync', params={}, wait=True) def test_finalize_rescue(self): self.client._command = mock.MagicMock(spec_set=[]) self.node.instance_info['rescue_password'] = 'password' self.node.instance_info['hashed_rescue_password'] = '1234' expected_params = { 'rescue_password': '1234', 'hashed': True, } self.client.finalize_rescue(self.node) self.client._command.assert_called_once_with( node=self.node, method='rescue.finalize_rescue', params=expected_params) def test_finalize_rescue_exc(self): # node does not have 'rescue_password' set in its 'instance_info' self.client._command = mock.MagicMock(spec_set=[]) self.assertRaises(exception.IronicException, self.client.finalize_rescue, self.node) self.assertFalse(self.client._command.called) def test_finalize_rescue_fallback(self): self.config(require_rescue_password_hashed=False, group="conductor") self.client._command = mock.MagicMock(spec_set=[]) self.node.instance_info['rescue_password'] = 'password' self.node.instance_info['hashed_rescue_password'] = '1234' self.client._command.side_effect = [ exception.AgentAPIError('blah'), ('', '')] self.client.finalize_rescue(self.node) self.client._command.assert_has_calls([ mock.call(node=mock.ANY, method='rescue.finalize_rescue', params={'rescue_password': '1234', 'hashed': True}), mock.call(node=mock.ANY, method='rescue.finalize_rescue', params={'rescue_password': 
'password'})]) def test_finalize_rescue_fallback_restricted(self): self.config(require_rescue_password_hashed=True, group="conductor") self.client._command = mock.MagicMock(spec_set=[]) self.node.instance_info['rescue_password'] = 'password' self.node.instance_info['hashed_rescue_password'] = '1234' self.client._command.side_effect = exception.AgentAPIError('blah') self.assertRaises(exception.InstanceRescueFailure, self.client.finalize_rescue, self.node) self.client._command.assert_has_calls([ mock.call(node=mock.ANY, method='rescue.finalize_rescue', params={'rescue_password': '1234', 'hashed': True})]) def test__command_agent_client(self): response_data = {'status': 'ok'} response_text = json.dumps(response_data) self.client.session.post.return_value = MockResponse(response_text) method = 'standby.run_image' image_info = {'image_id': 'test_image'} params = {'image_info': image_info} i_info = self.node.driver_internal_info i_info['agent_secret_token'] = 'magical' self.node.driver_internal_info = i_info url = self.client._get_command_url(self.node) body = self.client._get_command_body(method, params) response = self.client._command(self.node, method, params) self.assertEqual(response, response_data) self.client.session.post.assert_called_once_with( url, data=body, params={'wait': 'false', 'agent_token': 'magical'}, timeout=60) class TestAgentClientAttempts(base.TestCase): def setUp(self): super(TestAgentClientAttempts, self).setUp() self.client = agent_client.AgentClient() self.client.session = mock.MagicMock(autospec=requests.Session) self.node = MockNode() @mock.patch.object(retrying.time, 'sleep', autospec=True) def test__command_fail_all_attempts(self, mock_sleep): mock_sleep.return_value = None error = 'Connection Timeout' method = 'standby.run_image' image_info = {'image_id': 'test_image'} params = {'image_info': image_info} self.client.session.post.side_effect = [requests.Timeout(error), requests.Timeout(error), requests.Timeout(error), 
requests.Timeout(error)] self.client._get_command_url(self.node) self.client._get_command_body(method, params) e = self.assertRaises(exception.AgentConnectionFailed, self.client._command, self.node, method, params) self.assertEqual('Connection to agent failed: Failed to connect to ' 'the agent running on node %(node)s for invoking ' 'command %(method)s. Error: %(error)s' % {'method': method, 'node': self.node.uuid, 'error': error}, str(e)) self.assertEqual(3, self.client.session.post.call_count) @mock.patch.object(retrying.time, 'sleep', autospec=True) def test__command_succeed_after_two_timeouts(self, mock_sleep): mock_sleep.return_value = None error = 'Connection Timeout' response_data = {'status': 'ok'} response_text = json.dumps(response_data) method = 'standby.run_image' image_info = {'image_id': 'test_image'} params = {'image_info': image_info} self.client.session.post.side_effect = [requests.Timeout(error), requests.Timeout(error), MockResponse(response_text)] response = self.client._command(self.node, method, params) self.assertEqual(3, self.client.session.post.call_count) self.assertEqual(response, response_data) self.client.session.post.assert_called_with( self.client._get_command_url(self.node), data=self.client._get_command_body(method, params), params={'wait': 'false'}, timeout=60) @mock.patch.object(retrying.time, 'sleep', autospec=True) def test__command_succeed_after_one_timeout(self, mock_sleep): mock_sleep.return_value = None error = 'Connection Timeout' response_data = {'status': 'ok'} response_text = json.dumps(response_data) method = 'standby.run_image' image_info = {'image_id': 'test_image'} params = {'image_info': image_info} self.client.session.post.side_effect = [requests.Timeout(error), MockResponse(response_text), requests.Timeout(error)] response = self.client._command(self.node, method, params) self.assertEqual(2, self.client.session.post.call_count) self.assertEqual(response, response_data) self.client.session.post.assert_called_with( 
            self.client._get_command_url(self.node),
            data=self.client._get_command_body(method, params),
            params={'wait': 'false'},
            timeout=60)

ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_boot.py

# Copyright 2015 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test class for iRMC Boot Driver."""

import io
import os
import shutil
import tempfile

from ironic_lib import utils as ironic_utils
import mock
from oslo_config import cfg
from oslo_utils import uuidutils

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common.glance_service import service_utils
from ironic.common.i18n import _
from ironic.common import images
from ironic.common import states
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import boot_mode_utils
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules.irmc import boot as irmc_boot
from ironic.drivers.modules.irmc import common as irmc_common
from ironic.drivers.modules.irmc import management as irmc_management
from ironic.drivers.modules import pxe
from ironic.drivers.modules import pxe_base
from ironic.tests import base
from ironic.tests.unit.db import utils as db_utils
from
ironic.tests.unit.drivers.modules.irmc import test_common from ironic.tests.unit.drivers.modules import test_pxe from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs from ironic.tests.unit.objects import utils as obj_utils INFO_DICT = db_utils.get_test_irmc_info() CONF = cfg.CONF PARSED_IFNO = { 'irmc_address': '1.2.3.4', 'irmc_port': 80, 'irmc_username': 'admin0', 'irmc_password': 'fake0', 'irmc_auth_method': 'digest', 'irmc_client_timeout': 60, 'irmc_snmp_community': 'public', 'irmc_snmp_port': 161, 'irmc_snmp_version': 'v2c', 'irmc_snmp_security': None, 'irmc_sensor_method': 'ipmitool', } class IRMCDeployPrivateMethodsTestCase(test_common.BaseIRMCTest): boot_interface = 'irmc-virtual-media' def setUp(self): irmc_boot.check_share_fs_mounted_patcher.start() self.addCleanup(irmc_boot.check_share_fs_mounted_patcher.stop) super(IRMCDeployPrivateMethodsTestCase, self).setUp() CONF.irmc.remote_image_share_root = '/remote_image_share_root' CONF.irmc.remote_image_server = '10.20.30.40' CONF.irmc.remote_image_share_type = 'NFS' CONF.irmc.remote_image_share_name = 'share' CONF.irmc.remote_image_user_name = 'admin' CONF.irmc.remote_image_user_password = 'admin0' CONF.irmc.remote_image_user_domain = 'local' @mock.patch.object(os.path, 'isdir', spec_set=True, autospec=True) def test__parse_config_option(self, isdir_mock): isdir_mock.return_value = True result = irmc_boot._parse_config_option() isdir_mock.assert_called_once_with('/remote_image_share_root') self.assertIsNone(result) @mock.patch.object(os.path, 'isdir', spec_set=True, autospec=True) def test__parse_config_option_non_existed_root(self, isdir_mock): CONF.irmc.remote_image_share_root = '/non_existed_root' isdir_mock.return_value = False self.assertRaises(exception.InvalidParameterValue, irmc_boot._parse_config_option) isdir_mock.assert_called_once_with('/non_existed_root') @mock.patch.object(os.path, 'isfile', spec_set=True, autospec=True) def test__parse_driver_info_in_share(self, 
isfile_mock): """With required 'irmc_deploy_iso' in share.""" isfile_mock.return_value = True self.node.driver_info['irmc_deploy_iso'] = 'deploy.iso' driver_info_expected = {'irmc_deploy_iso': 'deploy.iso'} driver_info_actual = irmc_boot._parse_driver_info(self.node, mode='deploy') isfile_mock.assert_called_once_with( '/remote_image_share_root/deploy.iso') self.assertEqual(driver_info_expected, driver_info_actual) @mock.patch.object(irmc_boot, '_is_image_href_ordinary_file_name', spec_set=True, autospec=True) def test__parse_driver_info_not_in_share( self, is_image_href_ordinary_file_name_mock): """With required 'irmc_deploy_iso' not in share.""" self.node.driver_info[ 'irmc_rescue_iso'] = 'bc784057-a140-4130-add3-ef890457e6b3' driver_info_expected = {'irmc_rescue_iso': 'bc784057-a140-4130-add3-ef890457e6b3'} is_image_href_ordinary_file_name_mock.return_value = False driver_info_actual = irmc_boot._parse_driver_info(self.node, mode='rescue') self.assertEqual(driver_info_expected, driver_info_actual) @mock.patch.object(os.path, 'isfile', spec_set=True, autospec=True) def test__parse_driver_info_with_iso_invalid(self, isfile_mock): """With required 'irmc_deploy_iso' non existed.""" isfile_mock.return_value = False with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_deploy_iso'] = 'deploy.iso' error_msg = (_("Deploy ISO file, %(deploy_iso)s, " "not found for node: %(node)s.") % {'deploy_iso': '/remote_image_share_root/deploy.iso', 'node': task.node.uuid}) e = self.assertRaises(exception.InvalidParameterValue, irmc_boot._parse_driver_info, task.node, mode='deploy') self.assertEqual(error_msg, str(e)) def test__parse_driver_info_with_iso_missing(self): """With required 'irmc_rescue_iso' empty.""" self.node.driver_info['irmc_rescue_iso'] = None error_msg = ("Error validating iRMC virtual media for rescue. Some" " parameters were missing in node's driver_info." 
" Missing are: ['irmc_rescue_iso']") e = self.assertRaises(exception.MissingParameterValue, irmc_boot._parse_driver_info, self.node, mode='rescue') self.assertEqual(error_msg, str(e)) def test__parse_instance_info_with_boot_iso_file_name_ok(self): """With optional 'irmc_boot_iso' file name.""" CONF.irmc.remote_image_share_root = '/etc' self.node.instance_info['irmc_boot_iso'] = 'hosts' instance_info_expected = {'irmc_boot_iso': 'hosts'} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_without_boot_iso_ok(self): """With optional no 'irmc_boot_iso' file name.""" CONF.irmc.remote_image_share_root = '/etc' self.node.instance_info['irmc_boot_iso'] = None instance_info_expected = {} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_with_boot_iso_uuid_ok(self): """With optional 'irmc_boot_iso' glance uuid.""" self.node.instance_info[ 'irmc_boot_iso'] = 'bc784057-a140-4130-add3-ef890457e6b3' instance_info_expected = {'irmc_boot_iso': 'bc784057-a140-4130-add3-ef890457e6b3'} instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_with_boot_iso_glance_ok(self): """With optional 'irmc_boot_iso' glance url.""" self.node.instance_info['irmc_boot_iso'] = ( 'glance://bc784057-a140-4130-add3-ef890457e6b3') instance_info_expected = { 'irmc_boot_iso': 'glance://bc784057-a140-4130-add3-ef890457e6b3', } instance_info_actual = irmc_boot._parse_instance_info(self.node) self.assertEqual(instance_info_expected, instance_info_actual) def test__parse_instance_info_with_boot_iso_http_ok(self): """With optional 'irmc_boot_iso' http url.""" self.node.driver_info[ 'irmc_deploy_iso'] = 'http://irmc_boot_iso' driver_info_expected = {'irmc_deploy_iso': 'http://irmc_boot_iso'} 
        driver_info_actual = irmc_boot._parse_driver_info(self.node)
        self.assertEqual(driver_info_expected, driver_info_actual)

    def test__parse_instance_info_with_boot_iso_https_ok(self):
        """With optional 'irmc_boot_iso' https url."""
        self.node.instance_info[
            'irmc_boot_iso'] = 'https://irmc_boot_iso'
        instance_info_expected = {'irmc_boot_iso': 'https://irmc_boot_iso'}
        instance_info_actual = irmc_boot._parse_instance_info(self.node)
        self.assertEqual(instance_info_expected, instance_info_actual)

    def test__parse_instance_info_with_boot_iso_file_url_ok(self):
        """With optional 'irmc_boot_iso' file url."""
        self.node.instance_info[
            'irmc_boot_iso'] = 'file://irmc_boot_iso'
        instance_info_expected = {'irmc_boot_iso': 'file://irmc_boot_iso'}
        instance_info_actual = irmc_boot._parse_instance_info(self.node)
        self.assertEqual(instance_info_expected, instance_info_actual)

    @mock.patch.object(os.path, 'isfile', spec_set=True, autospec=True)
    def test__parse_instance_info_with_boot_iso_invalid(self, isfile_mock):
        CONF.irmc.remote_image_share_root = '/etc'
        isfile_mock.return_value = False

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.instance_info['irmc_boot_iso'] = 'hosts~non~existed'
            error_msg = (_("Boot ISO file, %(boot_iso)s, "
                           "not found for node: %(node)s.")
                         % {'boot_iso': '/etc/hosts~non~existed',
                            'node': task.node.uuid})
            e = self.assertRaises(exception.InvalidParameterValue,
                                  irmc_boot._parse_instance_info,
                                  task.node)
            self.assertEqual(error_msg, str(e))

    @mock.patch.object(deploy_utils, 'get_image_instance_info',
                       spec_set=True, autospec=True)
    @mock.patch('os.path.isfile', autospec=True)
    def test_parse_deploy_info_ok(self, mock_isfile,
                                  get_image_instance_info_mock):
        CONF.irmc.remote_image_share_root = '/etc'
        get_image_instance_info_mock.return_value = {'a': 'b'}
        driver_info_expected = {'a': 'b',
                                'irmc_deploy_iso': 'hosts',
                                'irmc_boot_iso': 'fstab'}

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.driver_info['irmc_deploy_iso'] = 'hosts'
            task.node.instance_info['irmc_boot_iso'] = 'fstab'
            driver_info_actual = irmc_boot._parse_deploy_info(task.node)
            self.assertEqual(driver_info_expected, driver_info_actual)
            boot_iso_path = os.path.join(
                CONF.irmc.remote_image_share_root,
                task.node.instance_info['irmc_boot_iso']
            )
            mock_isfile.assert_any_call(boot_iso_path)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'fetch', spec_set=True, autospec=True)
    def test__setup_vmedia_with_file_deploy(self,
                                            fetch_mock,
                                            setup_vmedia_mock,
                                            set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['irmc_deploy_iso'] = 'deploy_iso_filename'
            ramdisk_opts = {'a': 'b'}
            irmc_boot._setup_vmedia(task, mode='deploy',
                                    ramdisk_options=ramdisk_opts)

            self.assertFalse(fetch_mock.called)
            setup_vmedia_mock.assert_called_once_with(
                task, 'deploy_iso_filename', ramdisk_opts)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.CDROM)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'fetch', spec_set=True, autospec=True)
    def test__setup_vmedia_with_file_rescue(self,
                                            fetch_mock,
                                            setup_vmedia_mock,
                                            set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['irmc_rescue_iso'] = 'rescue_iso_filename'
            ramdisk_opts = {'a': 'b'}
            irmc_boot._setup_vmedia(task, mode='rescue',
                                    ramdisk_options=ramdisk_opts)

            self.assertFalse(fetch_mock.called)
            setup_vmedia_mock.assert_called_once_with(
                task, 'rescue_iso_filename', ramdisk_opts)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.CDROM)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'fetch', spec_set=True, autospec=True)
    def test_setup_vmedia_with_image_service_deploy(
            self, fetch_mock, setup_vmedia_mock, set_boot_device_mock):
        CONF.irmc.remote_image_share_root = '/'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['irmc_deploy_iso'] = 'glance://deploy_iso'
            ramdisk_opts = {'a': 'b'}
            irmc_boot._setup_vmedia(task, mode='deploy',
                                    ramdisk_options=ramdisk_opts)

            fetch_mock.assert_called_once_with(
                task.context, 'glance://deploy_iso',
                "/deploy-%s.iso" % self.node.uuid)
            setup_vmedia_mock.assert_called_once_with(
                task, "deploy-%s.iso" % self.node.uuid, ramdisk_opts)
            set_boot_device_mock.assert_called_once_with(
                task, boot_devices.CDROM)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'fetch', spec_set=True, autospec=True)
    def test_setup_vmedia_with_image_service_rescue(
            self, fetch_mock, setup_vmedia_mock, set_boot_device_mock):
        CONF.irmc.remote_image_share_root = '/'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_info['irmc_rescue_iso'] = 'glance://rescue_iso'
            ramdisk_opts = {'a': 'b'}
            irmc_boot._setup_vmedia(task, mode='rescue',
                                    ramdisk_options=ramdisk_opts)

            fetch_mock.assert_called_once_with(
                task.context, 'glance://rescue_iso',
                "/rescue-%s.iso" % self.node.uuid)
            setup_vmedia_mock.assert_called_once_with(
                task, "rescue-%s.iso" % self.node.uuid, ramdisk_opts)
            set_boot_device_mock.assert_called_once_with(
                task, boot_devices.CDROM)

    def test__get_iso_name(self):
        actual = irmc_boot._get_iso_name(self.node, label='deploy')
        expected = "deploy-%s.iso" % self.node.uuid
        self.assertEqual(expected, actual)

    @mock.patch.object(images, 'create_boot_iso', spec_set=True,
                       autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(images, 'get_image_properties', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'fetch', spec_set=True, autospec=True)
    @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    def test__prepare_boot_iso_file(self,
                                    deploy_info_mock,
                                    fetch_mock,
                                    image_props_mock,
                                    boot_mode_mock,
                                    create_boot_iso_mock):
        deploy_info_mock.return_value = {'irmc_boot_iso': 'irmc_boot.iso'}

        with task_manager.acquire(self.context, self.node.uuid) as task:
            irmc_boot._prepare_boot_iso(task, 'root-uuid')

            deploy_info_mock.assert_called_once_with(task.node)
            self.assertFalse(fetch_mock.called)
            self.assertFalse(image_props_mock.called)
            self.assertFalse(boot_mode_mock.called)
            self.assertFalse(create_boot_iso_mock.called)
            task.node.refresh()
            self.assertEqual('irmc_boot.iso',
                             task.node.driver_internal_info['irmc_boot_iso'])

    @mock.patch.object(images, 'create_boot_iso', spec_set=True,
                       autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(images, 'get_image_properties', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'fetch', spec_set=True, autospec=True)
    @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_is_image_href_ordinary_file_name',
                       spec_set=True, autospec=True)
    def test__prepare_boot_iso_fetch_ok(self,
                                        is_image_href_ordinary_file_name_mock,
                                        deploy_info_mock,
                                        fetch_mock,
                                        image_props_mock,
                                        boot_mode_mock,
                                        create_boot_iso_mock):
        CONF.irmc.remote_image_share_root = '/'
        image = '733d1c44-a2ea-414b-aca7-69decf20d810'
        is_image_href_ordinary_file_name_mock.return_value = False
        deploy_info_mock.return_value = {'irmc_boot_iso': image}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.instance_info['irmc_boot_iso'] = image
            irmc_boot._prepare_boot_iso(task, 'root-uuid')
            deploy_info_mock.assert_called_once_with(task.node)
            fetch_mock.assert_called_once_with(
                task.context, image, "/boot-%s.iso" % self.node.uuid)
            self.assertFalse(image_props_mock.called)
            self.assertFalse(boot_mode_mock.called)
            self.assertFalse(create_boot_iso_mock.called)
            task.node.refresh()
            self.assertEqual("boot-%s.iso" % self.node.uuid,
                             task.node.driver_internal_info['irmc_boot_iso'])

    @mock.patch.object(images, 'create_boot_iso', spec_set=True,
                       autospec=True)
    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy',
                       spec_set=True, autospec=True)
    @mock.patch.object(images, 'get_image_properties', spec_set=True,
                       autospec=True)
    @mock.patch.object(images, 'fetch', spec_set=True, autospec=True)
    @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    def test__prepare_boot_iso_create_ok(self,
                                         deploy_info_mock,
                                         fetch_mock,
                                         image_props_mock,
                                         boot_mode_mock,
                                         create_boot_iso_mock):
        CONF.pxe.pxe_append_params = 'kernel-params'
        deploy_info_mock.return_value = \
            {'image_source': 'image-uuid',
             'irmc_deploy_iso': '02f9d414-2ce0-4cf5-b48f-dbc1bf678f55'}
        image_props_mock.return_value = {'kernel_id': 'kernel_uuid',
                                         'ramdisk_id': 'ramdisk_uuid'}
        boot_mode_mock.return_value = 'uefi'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._prepare_boot_iso(task, 'root-uuid')

            self.assertFalse(fetch_mock.called)
            deploy_info_mock.assert_called_once_with(task.node)
            image_props_mock.assert_called_once_with(
                task.context, 'image-uuid', ['kernel_id', 'ramdisk_id'])
            create_boot_iso_mock.assert_called_once_with(
                task.context,
                '/remote_image_share_root/' "boot-%s.iso" % self.node.uuid,
                'kernel_uuid', 'ramdisk_uuid',
                deploy_iso_href='02f9d414-2ce0-4cf5-b48f-dbc1bf678f55',
                root_uuid='root-uuid',
                kernel_params='kernel-params',
                boot_mode='uefi')
            task.node.refresh()
            self.assertEqual("boot-%s.iso" % self.node.uuid,
                             task.node.driver_internal_info['irmc_boot_iso'])

    def test__get_floppy_image_name(self):
        actual = irmc_boot._get_floppy_image_name(self.node)
        expected = "image-%s.img" % self.node.uuid
        self.assertEqual(expected, actual)

    @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True)
    @mock.patch.object(images, 'create_vfat_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True,
                       autospec=True)
    def test__prepare_floppy_image(self,
                                   tempfile_mock,
                                   create_vfat_image_mock,
                                   copyfile_mock):
        mock_image_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_image_file_obj = mock.MagicMock()
        mock_image_file_obj.name = 'image-tmp-file'
        mock_image_file_handle.__enter__.return_value = mock_image_file_obj
        tempfile_mock.side_effect = [mock_image_file_handle]

        deploy_args = {'arg1': 'val1', 'arg2': 'val2'}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._prepare_floppy_image(task, deploy_args)

            create_vfat_image_mock.assert_called_once_with(
                'image-tmp-file', parameters=deploy_args)
            copyfile_mock.assert_called_once_with(
                'image-tmp-file',
                '/remote_image_share_root/'
                + "image-%s.img" % self.node.uuid)

    @mock.patch.object(shutil, 'copyfile', spec_set=True, autospec=True)
    @mock.patch.object(images, 'create_vfat_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(tempfile, 'NamedTemporaryFile', spec_set=True,
                       autospec=True)
    def test__prepare_floppy_image_exception(self,
                                             tempfile_mock,
                                             create_vfat_image_mock,
                                             copyfile_mock):
        mock_image_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_image_file_obj = mock.MagicMock()
        mock_image_file_obj.name = 'image-tmp-file'
        mock_image_file_handle.__enter__.return_value = mock_image_file_obj
        tempfile_mock.side_effect = [mock_image_file_handle]

        deploy_args = {'arg1': 'val1', 'arg2': 'val2'}
        copyfile_mock.side_effect = IOError("fake error")

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IRMCOperationError,
                              irmc_boot._prepare_floppy_image,
                              task, deploy_args)
            create_vfat_image_mock.assert_called_once_with(
                'image-tmp-file', parameters=deploy_args)
            copyfile_mock.assert_called_once_with(
                'image-tmp-file',
                '/remote_image_share_root/'
                + "image-%s.img" % self.node.uuid)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    def test_attach_boot_iso_if_needed(
            self, setup_vmedia_mock, set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.provision_state = states.ACTIVE
            task.node.driver_internal_info['irmc_boot_iso'] = 'boot-iso'
            irmc_boot.attach_boot_iso_if_needed(task)
            setup_vmedia_mock.assert_called_once_with(task, 'boot-iso')
            set_boot_device_mock.assert_called_once_with(
                task, boot_devices.CDROM)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    def test_attach_boot_iso_if_needed_on_rebuild(
            self, setup_vmedia_mock, set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.provision_state = states.DEPLOYING
            task.node.driver_internal_info['irmc_boot_iso'] = 'boot-iso'
            irmc_boot.attach_boot_iso_if_needed(task)
            self.assertFalse(setup_vmedia_mock.called)
            self.assertFalse(set_boot_device_mock.called)

    @mock.patch.object(irmc_boot, '_attach_virtual_cd', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_attach_virtual_fd', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_prepare_floppy_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_detach_virtual_fd', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_detach_virtual_cd', spec_set=True,
                       autospec=True)
    def test__setup_vmedia_for_boot_with_parameters(
            self,
            _detach_virtual_cd_mock,
            _detach_virtual_fd_mock,
            _prepare_floppy_image_mock,
            _attach_virtual_fd_mock,
            _attach_virtual_cd_mock):
        parameters = {'a': 'b'}
        iso_filename = 'deploy_iso_or_boot_iso'
        _prepare_floppy_image_mock.return_value = 'floppy_file_name'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._setup_vmedia_for_boot(task, iso_filename, parameters)

            _detach_virtual_cd_mock.assert_called_once_with(task.node)
            _detach_virtual_fd_mock.assert_called_once_with(task.node)
            _prepare_floppy_image_mock.assert_called_once_with(task,
                                                               parameters)
            _attach_virtual_fd_mock.assert_called_once_with(
                task.node, 'floppy_file_name')
            _attach_virtual_cd_mock.assert_called_once_with(task.node,
                                                            iso_filename)

    @mock.patch.object(irmc_boot, '_attach_virtual_cd', autospec=True)
    @mock.patch.object(irmc_boot, '_detach_virtual_fd', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_detach_virtual_cd', spec_set=True,
                       autospec=True)
    def test__setup_vmedia_for_boot_without_parameters(
            self, _detach_virtual_cd_mock, _detach_virtual_fd_mock,
            _attach_virtual_cd_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._setup_vmedia_for_boot(task, 'bootable_iso_filename')

            _detach_virtual_cd_mock.assert_called_once_with(task.node)
            _detach_virtual_fd_mock.assert_called_once_with(task.node)
            _attach_virtual_cd_mock.assert_called_once_with(
                task.node, 'bootable_iso_filename')

    @mock.patch.object(irmc_boot, '_get_iso_name', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_get_floppy_image_name', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_remove_share_file', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_detach_virtual_fd', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_detach_virtual_cd', spec_set=True,
                       autospec=True)
    def test__cleanup_vmedia_boot_ok(self,
                                     _detach_virtual_cd_mock,
                                     _detach_virtual_fd_mock,
                                     _remove_share_file_mock,
                                     _get_floppy_image_name_mock,
                                     _get_iso_name_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._cleanup_vmedia_boot(task)

            _detach_virtual_cd_mock.assert_called_once_with(task.node)
            _detach_virtual_fd_mock.assert_called_once_with(task.node)
            _get_floppy_image_name_mock.assert_called_once_with(task.node)
            _get_iso_name_mock.assert_has_calls(
                [mock.call(task.node, label='deploy'),
                 mock.call(task.node, label='rescue')])
            self.assertEqual(3, _remove_share_file_mock.call_count)
            _remove_share_file_mock.assert_has_calls(
                [mock.call(_get_floppy_image_name_mock(task.node)),
                 mock.call(_get_iso_name_mock(task.node, label='deploy')),
                 mock.call(_get_iso_name_mock(task.node, label='rescue'))])

    @mock.patch.object(ironic_utils, 'unlink_without_raise', spec_set=True,
                       autospec=True)
    def test__remove_share_file(self, unlink_without_raise_mock):
        CONF.irmc.remote_image_share_root = '/share'

        irmc_boot._remove_share_file("boot.iso")

        unlink_without_raise_mock.assert_called_once_with('/share/boot.iso')

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__attach_virtual_cd_ok(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        irmc_boot.scci.get_virtual_cd_set_params_cmd = mock.MagicMock()
        cd_set_params = (irmc_boot.scci
                         .get_virtual_cd_set_params_cmd.return_value)

        CONF.irmc.remote_image_server = '10.20.30.40'
        CONF.irmc.remote_image_user_domain = 'local'
        CONF.irmc.remote_image_share_type = 'NFS'
        CONF.irmc.remote_image_share_name = 'share'
        CONF.irmc.remote_image_user_name = 'admin'
        CONF.irmc.remote_image_user_password = 'admin0'
        irmc_boot.scci.get_share_type.return_value = 0

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._attach_virtual_cd(task.node, 'iso_filename')

            get_irmc_client_mock.assert_called_once_with(task.node)
            (irmc_boot.scci.get_virtual_cd_set_params_cmd
             .assert_called_once_with)('10.20.30.40',
                                       'local',
                                       0,
                                       'share',
                                       'iso_filename',
                                       'admin',
                                       'admin0')
            irmc_client.assert_has_calls(
                [mock.call(cd_set_params, do_async=False),
                 mock.call(irmc_boot.scci.MOUNT_CD, do_async=False)])

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__attach_virtual_cd_fail(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        irmc_client.side_effect = Exception("fake error")
        irmc_boot.scci.SCCIClientError = Exception

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            e = self.assertRaises(exception.IRMCOperationError,
                                  irmc_boot._attach_virtual_cd,
                                  task.node,
                                  'iso_filename')
            get_irmc_client_mock.assert_called_once_with(task.node)
            self.assertEqual("iRMC Inserting virtual cdrom failed. "
                             "Reason: fake error", str(e))

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__detach_virtual_cd_ok(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._detach_virtual_cd(task.node)

            irmc_client.assert_called_once_with(irmc_boot.scci.UNMOUNT_CD)

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__detach_virtual_cd_fail(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        irmc_client.side_effect = Exception("fake error")
        irmc_boot.scci.SCCIClientError = Exception

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            e = self.assertRaises(exception.IRMCOperationError,
                                  irmc_boot._detach_virtual_cd,
                                  task.node)
            self.assertEqual("iRMC Ejecting virtual cdrom failed. "
                             "Reason: fake error", str(e))

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__attach_virtual_fd_ok(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        irmc_boot.scci.get_virtual_fd_set_params_cmd = mock.MagicMock()
        fd_set_params = (irmc_boot.scci
                         .get_virtual_fd_set_params_cmd.return_value)

        CONF.irmc.remote_image_server = '10.20.30.40'
        CONF.irmc.remote_image_user_domain = 'local'
        CONF.irmc.remote_image_share_type = 'NFS'
        CONF.irmc.remote_image_share_name = 'share'
        CONF.irmc.remote_image_user_name = 'admin'
        CONF.irmc.remote_image_user_password = 'admin0'
        irmc_boot.scci.get_share_type.return_value = 0

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._attach_virtual_fd(task.node,
                                         'floppy_image_filename')

            get_irmc_client_mock.assert_called_once_with(task.node)
            (irmc_boot.scci.get_virtual_fd_set_params_cmd
             .assert_called_once_with)('10.20.30.40',
                                       'local',
                                       0,
                                       'share',
                                       'floppy_image_filename',
                                       'admin',
                                       'admin0')
            irmc_client.assert_has_calls(
                [mock.call(fd_set_params, do_async=False),
                 mock.call(irmc_boot.scci.MOUNT_FD, do_async=False)])

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__attach_virtual_fd_fail(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        irmc_client.side_effect = Exception("fake error")
        irmc_boot.scci.SCCIClientError = Exception

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            e = self.assertRaises(exception.IRMCOperationError,
                                  irmc_boot._attach_virtual_fd,
                                  task.node,
                                  'iso_filename')
            get_irmc_client_mock.assert_called_once_with(task.node)
            self.assertEqual("iRMC Inserting virtual floppy failed. "
                             "Reason: fake error", str(e))

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__detach_virtual_fd_ok(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            irmc_boot._detach_virtual_fd(task.node)

            irmc_client.assert_called_once_with(irmc_boot.scci.UNMOUNT_FD)

    @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True,
                       autospec=True)
    def test__detach_virtual_fd_fail(self, get_irmc_client_mock):
        irmc_client = get_irmc_client_mock.return_value
        irmc_client.side_effect = Exception("fake error")
        irmc_boot.scci.SCCIClientError = Exception

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            e = self.assertRaises(exception.IRMCOperationError,
                                  irmc_boot._detach_virtual_fd,
                                  task.node)
            self.assertEqual("iRMC Ejecting virtual floppy failed. "
                             "Reason: fake error", str(e))

    @mock.patch.object(irmc_boot, '_parse_config_option', spec_set=True,
                       autospec=True)
    def test_check_share_fs_mounted_ok(self, parse_conf_mock):
        # Note(naohirot): mock.patch.stop() and mock.patch.start() don't
        # work, therefore monkey patching is applied to
        # irmc_boot.check_share_fs_mounted.
        # irmc_boot.check_share_fs_mounted is mocked in
        # third_party_driver_mocks.py.
        # irmc_boot.check_share_fs_mounted_orig is the real function.
        CONF.irmc.remote_image_share_root = '/'
        CONF.irmc.remote_image_share_type = 'nfs'
        result = irmc_boot.check_share_fs_mounted_orig()

        parse_conf_mock.assert_called_once_with()
        self.assertIsNone(result)

    @mock.patch.object(irmc_boot, '_parse_config_option', spec_set=True,
                       autospec=True)
    def test_check_share_fs_mounted_exception(self, parse_conf_mock):
        # Note(naohirot): mock.patch.stop() and mock.patch.start() don't
        # work, therefore monkey patching is applied to
        # irmc_boot.check_share_fs_mounted.
        # irmc_boot.check_share_fs_mounted is mocked in
        # third_party_driver_mocks.py.
        # irmc_boot.check_share_fs_mounted_orig is the real function.
        CONF.irmc.remote_image_share_root = '/etc'
        CONF.irmc.remote_image_share_type = 'cifs'

        self.assertRaises(exception.IRMCSharedFileSystemNotMounted,
                          irmc_boot.check_share_fs_mounted_orig)
        parse_conf_mock.assert_called_once_with()


class IRMCVirtualMediaBootTestCase(test_common.BaseIRMCTest):

    boot_interface = 'irmc-virtual-media'

    def setUp(self):
        irmc_boot.check_share_fs_mounted_patcher.start()
        self.addCleanup(irmc_boot.check_share_fs_mounted_patcher.stop)
        super(IRMCVirtualMediaBootTestCase, self).setUp()

    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, 'check_share_fs_mounted', spec_set=True,
                       autospec=True)
    def test_validate_whole_disk_image(self,
                                       check_share_fs_mounted_mock,
                                       deploy_info_mock,
                                       is_glance_image_mock,
                                       validate_prop_mock):
        d_info = {'image_source': '733d1c44-a2ea-414b-aca7-69decf20d810'}
        deploy_info_mock.return_value = d_info
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_internal_info = {'is_whole_disk_image': True}
            task.driver.boot.validate(task)

            check_share_fs_mounted_mock.assert_called_once_with()
            deploy_info_mock.assert_called_once_with(task.node)
            self.assertFalse(is_glance_image_mock.called)
            validate_prop_mock.assert_called_once_with(task.context,
                                                       d_info, [])

    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, 'check_share_fs_mounted', spec_set=True,
                       autospec=True)
    def test_validate_glance_image(self,
                                   check_share_fs_mounted_mock,
                                   deploy_info_mock,
                                   is_glance_image_mock,
                                   validate_prop_mock):
        d_info = {'image_source': '733d1c44-a2ea-414b-aca7-69decf20d810'}
        deploy_info_mock.return_value = d_info
        is_glance_image_mock.return_value = True
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.validate(task)

            check_share_fs_mounted_mock.assert_called_once_with()
            deploy_info_mock.assert_called_once_with(task.node)
            validate_prop_mock.assert_called_once_with(
                task.context, d_info, ['kernel_id', 'ramdisk_id'])

    @mock.patch.object(deploy_utils, 'validate_image_properties',
                       spec_set=True, autospec=True)
    @mock.patch.object(service_utils, 'is_glance_image', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_parse_deploy_info', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, 'check_share_fs_mounted', spec_set=True,
                       autospec=True)
    def test_validate_non_glance_image(self,
                                       check_share_fs_mounted_mock,
                                       deploy_info_mock,
                                       is_glance_image_mock,
                                       validate_prop_mock):
        d_info = {'image_source': '733d1c44-a2ea-414b-aca7-69decf20d810'}
        deploy_info_mock.return_value = d_info
        is_glance_image_mock.return_value = False
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.validate(task)

            check_share_fs_mounted_mock.assert_called_once_with()
            deploy_info_mock.assert_called_once_with(task.node)
            validate_prop_mock.assert_called_once_with(
                task.context, d_info, ['kernel', 'ramdisk'])

    @mock.patch.object(irmc_management, 'backup_bios_config', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia', spec_set=True,
                       autospec=True)
    @mock.patch.object(deploy_utils, 'get_single_nic_with_vif_port_id',
                       spec_set=True, autospec=True)
    def _test_prepare_ramdisk(self,
                              get_single_nic_with_vif_port_id_mock,
                              _setup_vmedia_mock, mock_backup_bios,
                              mode='deploy'):
        instance_info = self.node.instance_info
        instance_info['irmc_boot_iso'] = 'glance://abcdef'
        instance_info['image_source'] = '6b2f0c0c-79e8-4db6-842e-43c9764204af'
        self.node.instance_info = instance_info
        self.node.save()

        ramdisk_params = {'a': 'b'}
        get_single_nic_with_vif_port_id_mock.return_value = '12:34:56:78:90:ab'

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_ramdisk(task, ramdisk_params)

            expected_ramdisk_opts = {'a': 'b', 'BOOTIF': '12:34:56:78:90:ab',
                                     'ipa-agent-token': mock.ANY}
            get_single_nic_with_vif_port_id_mock.assert_called_once_with(
                task)
            _setup_vmedia_mock.assert_called_once_with(
                task, mode, expected_ramdisk_opts)
            self.assertEqual('glance://abcdef',
                             self.node.instance_info['irmc_boot_iso'])
            provision_state = task.node.provision_state
            self.assertEqual(1 if provision_state == states.DEPLOYING else 0,
                             mock_backup_bios.call_count)

    def test_prepare_ramdisk_glance_image_deploying(self):
        self.node.provision_state = states.DEPLOYING
        self.node.save()
        self._test_prepare_ramdisk()

    def test_prepare_ramdisk_glance_image_rescuing(self):
        self.node.provision_state = states.RESCUING
        self.node.save()
        self._test_prepare_ramdisk(mode='rescue')

    def test_prepare_ramdisk_glance_image_cleaning(self):
        self.node.provision_state = states.CLEANING
        self.node.save()
        self._test_prepare_ramdisk()

    @mock.patch.object(irmc_boot, '_setup_vmedia', spec_set=True,
                       autospec=True)
    def test_prepare_ramdisk_not_deploying_not_cleaning(self, mock_is_image):
        """Ensure deploy ops are blocked when not deploying and not cleaning"""
        for state in states.STABLE_STATES:
            mock_is_image.reset_mock()
            self.node.provision_state = state
            self.node.save()
            with task_manager.acquire(self.context, self.node.uuid,
                                      shared=False) as task:
                self.assertIsNone(
                    task.driver.boot.prepare_ramdisk(task, None))
                self.assertFalse(mock_is_image.called)

    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_clean_up_ramdisk(self, _cleanup_vmedia_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.clean_up_ramdisk(task)
            _cleanup_vmedia_boot_mock.assert_called_once_with(task)
    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def _test_prepare_instance_whole_disk_image(
            self, _cleanup_vmedia_boot_mock, set_boot_device_mock):
        self.node.driver_internal_info = {'is_whole_disk_image': True}
        self.node.save()

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_instance(task)

            _cleanup_vmedia_boot_mock.assert_called_once_with(task)
            set_boot_device_mock.assert_called_once_with(task,
                                                         boot_devices.DISK,
                                                         persistent=True)

    def test_prepare_instance_whole_disk_image_local(self):
        self.node.instance_info = {'capabilities': '{"boot_option": "local"}'}
        self.node.save()
        self._test_prepare_instance_whole_disk_image()

    def test_prepare_instance_whole_disk_image(self):
        self._test_prepare_instance_whole_disk_image()

    @mock.patch.object(irmc_boot.IRMCVirtualMediaBoot,
                       '_configure_vmedia_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_prepare_instance_partition_image(
            self, _cleanup_vmedia_boot_mock, _configure_vmedia_mock):
        self.node.instance_info = {
            'capabilities': {'boot_option': 'netboot'}}
        self.node.driver_internal_info = {'root_uuid_or_disk_id': "some_uuid"}
        self.node.save()

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_instance(task)

            _cleanup_vmedia_boot_mock.assert_called_once_with(task)
            _configure_vmedia_mock.assert_called_once_with(mock.ANY, task,
                                                           "some_uuid")

    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_remove_share_file', spec_set=True,
                       autospec=True)
    def test_clean_up_instance(self, _remove_share_file_mock,
                               _cleanup_vmedia_boot_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.instance_info['irmc_boot_iso'] = 'glance://deploy_iso'
            task.node.driver_internal_info['irmc_boot_iso'] = 'irmc_boot.iso'

            task.driver.boot.clean_up_instance(task)

            _remove_share_file_mock.assert_called_once_with(
                irmc_boot._get_iso_name(task.node, label='boot'))
            self.assertNotIn('irmc_boot_iso',
                             task.node.driver_internal_info)
            _cleanup_vmedia_boot_mock.assert_called_once_with(task)

    @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_setup_vmedia_for_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_prepare_boot_iso', spec_set=True,
                       autospec=True)
    def test__configure_vmedia_boot(self,
                                    _prepare_boot_iso_mock,
                                    _setup_vmedia_for_boot_mock,
                                    node_set_boot_device):
        root_uuid_or_disk_id = {'root uuid': 'root_uuid'}

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.node.driver_internal_info['irmc_boot_iso'] = 'boot.iso'
            task.driver.boot._configure_vmedia_boot(
                task, root_uuid_or_disk_id)

            _prepare_boot_iso_mock.assert_called_once_with(
                task, root_uuid_or_disk_id)
            _setup_vmedia_for_boot_mock.assert_called_once_with(
                task, 'boot.iso')
            node_set_boot_device.assert_called_once_with(
                task, boot_devices.CDROM, persistent=True)

    def test_remote_image_share_type_values(self):
        cfg.CONF.set_override('remote_image_share_type', 'cifs', 'irmc')
        cfg.CONF.set_override('remote_image_share_type', 'nfs', 'irmc')
        self.assertRaises(ValueError,
                          cfg.CONF.set_override,
                          'remote_image_share_type', 'fake', 'irmc')

    @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot.IRMCVirtualMediaBoot,
                       '_configure_vmedia_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_prepare_instance_with_secure_boot(self, mock_cleanup_vmedia_boot,
                                               mock_configure_vmedia_boot,
                                               mock_set_secure_boot_mode):
        self.node.driver_internal_info = {'root_uuid_or_disk_id': "12312642"}
        self.node.provision_state = states.DEPLOYING
        self.node.target_provision_state = states.ACTIVE
        self.node.instance_info = {
            'capabilities': {
                "secure_boot": "true", 'boot_option': 'netboot'
            }
        }
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_instance(task)

            mock_cleanup_vmedia_boot.assert_called_once_with(task)
            mock_set_secure_boot_mode.assert_called_once_with(task.node,
                                                              enable=True)
            mock_configure_vmedia_boot.assert_called_once_with(mock.ANY, task,
                                                               "12312642")

    @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot.IRMCVirtualMediaBoot,
                       '_configure_vmedia_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_prepare_instance_with_secure_boot_false(
            self, mock_cleanup_vmedia_boot, mock_configure_vmedia_boot,
            mock_set_secure_boot_mode):
        self.node.driver_internal_info = {'root_uuid_or_disk_id': "12312642"}
        self.node.provision_state = states.DEPLOYING
        self.node.target_provision_state = states.ACTIVE
        self.node.instance_info = {
            'capabilities': {
                "secure_boot": "false", 'boot_option': 'netboot'
            }
        }
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_instance(task)

            mock_cleanup_vmedia_boot.assert_called_once_with(task)
            self.assertFalse(mock_set_secure_boot_mode.called)
            mock_configure_vmedia_boot.assert_called_once_with(mock.ANY, task,
                                                               "12312642")

    @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot.IRMCVirtualMediaBoot,
                       '_configure_vmedia_boot', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_prepare_instance_without_secure_boot(
            self, mock_cleanup_vmedia_boot, mock_configure_vmedia_boot,
            mock_set_secure_boot_mode):
        self.node.driver_internal_info = {'root_uuid_or_disk_id': "12312642"}
        self.node.provision_state = states.DEPLOYING
        self.node.target_provision_state = states.ACTIVE
        self.node.instance_info = {
            'capabilities': {
                'boot_option': 'netboot'
            }
        }
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.prepare_instance(task)

            mock_cleanup_vmedia_boot.assert_called_once_with(task)
            self.assertFalse(mock_set_secure_boot_mode.called)
            mock_configure_vmedia_boot.assert_called_once_with(
                mock.ANY, task, "12312642")

    @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_clean_up_instance_with_secure_boot(self,
                                                mock_cleanup_vmedia_boot,
                                                mock_set_secure_boot_mode):
        self.node.provision_state = states.DELETING
        self.node.target_provision_state = states.AVAILABLE
        self.node.instance_info = {
            'capabilities': {
                "secure_boot": "true"
            }
        }
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.clean_up_instance(task)
            mock_set_secure_boot_mode.assert_called_once_with(task.node,
                                                              enable=False)
            mock_cleanup_vmedia_boot.assert_called_once_with(task)

    @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True,
                       autospec=True)
    def test_clean_up_instance_with_secure_boot_false(
            self, mock_cleanup_vmedia_boot, mock_set_secure_boot_mode):
        self.node.provision_state = states.DELETING
        self.node.target_provision_state = states.AVAILABLE
        self.node.instance_info = {
            'capabilities': {
                "secure_boot": "false"
            }
        }
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            task.driver.boot.clean_up_instance(task)
            self.assertFalse(mock_set_secure_boot_mode.called)
            mock_cleanup_vmedia_boot.assert_called_once_with(task)

    @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True,
                       autospec=True)
@mock.patch.object(irmc_boot, '_cleanup_vmedia_boot', spec_set=True, autospec=True) def test_clean_up_instance_without_secure_boot( self, mock_cleanup_vmedia_boot, mock_set_secure_boot_mode): self.node.provision_state = states.DELETING self.node.target_provision_state = states.AVAILABLE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.clean_up_instance(task) self.assertFalse(mock_set_secure_boot_mode.called) mock_cleanup_vmedia_boot.assert_called_once_with(task) @mock.patch.object(os.path, 'isfile', return_value=True, autospec=True) def test_validate_rescue(self, mock_isfile): driver_info = self.node.driver_info driver_info['irmc_rescue_iso'] = 'rescue.iso' self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot.validate_rescue(task) def test_validate_rescue_no_rescue_ramdisk(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaisesRegex(exception.MissingParameterValue, 'Missing.*irmc_rescue_iso', task.driver.boot.validate_rescue, task) @mock.patch.object(os.path, 'isfile', return_value=False, autospec=True) def test_validate_rescue_ramdisk_not_exist(self, mock_isfile): driver_info = self.node.driver_info driver_info['irmc_rescue_iso'] = 'rescue.iso' self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaisesRegex(exception.InvalidParameterValue, 'Rescue ISO file, .*' 'not found for node: .*', task.driver.boot.validate_rescue, task) class IRMCPXEBootTestCase(test_common.BaseIRMCTest): @mock.patch.object(irmc_management, 'backup_bios_config', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) def test_prepare_ramdisk_with_backup_bios(self, mock_parent_prepare, mock_backup_bios): self.node.provision_state = states.DEPLOYING self.node.save() with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_ramdisk(task, {}) mock_backup_bios.assert_called_once_with(task) mock_parent_prepare.assert_called_once_with( task.driver.boot, task, {}) @mock.patch.object(irmc_management, 'backup_bios_config', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) def test_prepare_ramdisk_without_backup_bios(self, mock_parent_prepare, mock_backup_bios): self.node.provision_state = states.CLEANING self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_ramdisk(task, {}) self.assertFalse(mock_backup_bios.called) mock_parent_prepare.assert_called_once_with( task.driver.boot, task, {}) @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', spec_set=True, autospec=True) def test_prepare_instance_with_secure_boot(self, mock_prepare_instance, mock_set_secure_boot_mode): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.instance_info = { 'capabilities': { "secure_boot": "true" } } self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_instance(task) mock_set_secure_boot_mode.assert_called_once_with(task.node, enable=True) mock_prepare_instance.assert_called_once_with( task.driver.boot, task) @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', spec_set=True, autospec=True) def test_prepare_instance_with_secure_boot_false( self, mock_prepare_instance, mock_set_secure_boot_mode): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.instance_info = { 'capabilities': { "secure_boot": "false" } } self.node.save() with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_instance(task) self.assertFalse(mock_set_secure_boot_mode.called) mock_prepare_instance.assert_called_once_with( task.driver.boot, task) @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', spec_set=True, autospec=True) def test_prepare_instance_without_secure_boot(self, mock_prepare_instance, mock_set_secure_boot_mode): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.prepare_instance(task) self.assertFalse(mock_set_secure_boot_mode.called) mock_prepare_instance.assert_called_once_with( task.driver.boot, task) @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_instance', spec_set=True, autospec=True) def test_clean_up_instance_with_secure_boot(self, mock_clean_up_instance, mock_set_secure_boot_mode): self.node.provision_state = states.CLEANING self.node.target_provision_state = states.AVAILABLE self.node.instance_info = { 'capabilities': { "secure_boot": "true" } } self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.clean_up_instance(task) mock_set_secure_boot_mode.assert_called_once_with(task.node, enable=False) mock_clean_up_instance.assert_called_once_with( task.driver.boot, task) @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_instance', spec_set=True, autospec=True) def test_clean_up_instance_secure_boot_false(self, mock_clean_up_instance, mock_set_secure_boot_mode): self.node.provision_state = states.CLEANING self.node.target_provision_state = states.AVAILABLE self.node.instance_info = { 'capabilities': { 
"secure_boot": "false" } } self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.clean_up_instance(task) self.assertFalse(mock_set_secure_boot_mode.called) mock_clean_up_instance.assert_called_once_with( task.driver.boot, task) @mock.patch.object(irmc_common, 'set_secure_boot_mode', spec_set=True, autospec=True) @mock.patch.object(pxe.PXEBoot, 'clean_up_instance', spec_set=True, autospec=True) def test_clean_up_instance_without_secure_boot( self, mock_clean_up_instance, mock_set_secure_boot_mode): self.node.provision_state = states.CLEANING self.node.target_provision_state = states.AVAILABLE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.boot.clean_up_instance(task) self.assertFalse(mock_set_secure_boot_mode.called) mock_clean_up_instance.assert_called_once_with( task.driver.boot, task) @mock.patch.object(irmc_boot, 'viom', spec_set=mock_specs.SCCICLIENT_VIOM_SPEC) class IRMCVirtualMediaBootWithVolumeTestCase(test_common.BaseIRMCTest): boot_interface = 'irmc-virtual-media' def setUp(self): super(IRMCVirtualMediaBootWithVolumeTestCase, self).setUp() irmc_boot.check_share_fs_mounted_patcher.start() self.addCleanup(irmc_boot.check_share_fs_mounted_patcher.stop) driver_info = INFO_DICT d_in_info = dict(boot_from_volume='volume-uuid') self.config(enabled_storage_interfaces=['cinder']) self.node = obj_utils.create_test_node(self.context, driver='irmc', driver_info=driver_info, storage_interface='cinder', driver_internal_info=d_in_info) def _create_mock_conf(self, mock_viom): mock_conf = mock.Mock(spec_set=mock_specs.SCCICLIENT_VIOM_CONF_SPEC) mock_viom.VIOMConfiguration.return_value = mock_conf return mock_conf def _add_pci_physical_id(self, uuid, physical_id): driver_info = self.node.driver_info ids = driver_info.get('irmc_pci_physical_ids', {}) ids[uuid] = physical_id driver_info['irmc_pci_physical_ids'] = ids self.node.driver_info = driver_info 
self.node.save() def _create_port(self, physical_id='LAN0-1', **kwargs): uuid = uuidutils.generate_uuid() obj_utils.create_test_port(self.context, uuid=uuid, node_id=self.node.id, **kwargs) if physical_id: self._add_pci_physical_id(uuid, physical_id) def _create_iscsi_iqn_connector(self, physical_id='CNA1-1'): uuid = uuidutils.generate_uuid() obj_utils.create_test_volume_connector( self.context, uuid=uuid, type='iqn', node_id=self.node.id, connector_id='iqn.initiator') if physical_id: self._add_pci_physical_id(uuid, physical_id) def _create_iscsi_ip_connector(self, physical_id=None, network_size='24'): uuid = uuidutils.generate_uuid() obj_utils.create_test_volume_connector( self.context, uuid=uuid, type='ip', node_id=self.node.id, connector_id='192.168.11.11') if physical_id: self._add_pci_physical_id(uuid, physical_id) if network_size: driver_info = self.node.driver_info driver_info['irmc_storage_network_size'] = network_size self.node.driver_info = driver_info self.node.save() def _create_iscsi_target(self, target_info=None, boot_index=0, **kwargs): target_properties = { 'target_portal': '192.168.22.22:3260', 'target_iqn': 'iqn.target', 'target_lun': 1, } if target_info: target_properties.update(target_info) obj_utils.create_test_volume_target( self.context, volume_type='iscsi', node_id=self.node.id, boot_index=boot_index, properties=target_properties, **kwargs) def _create_iscsi_resources(self): self._create_iscsi_iqn_connector() self._create_iscsi_ip_connector() self._create_iscsi_target() def _create_fc_connector(self): uuid = uuidutils.generate_uuid() obj_utils.create_test_volume_connector( self.context, uuid=uuid, type='wwnn', node_id=self.node.id, connector_id='11:22:33:44:55') self._add_pci_physical_id(uuid, 'FC2-1') obj_utils.create_test_volume_connector( self.context, uuid=uuidutils.generate_uuid(), type='wwpn', node_id=self.node.id, connector_id='11:22:33:44:56') def _create_fc_target(self): target_properties = { 'target_wwn': 'aa:bb:cc:dd:ee', 
'target_lun': 2, } obj_utils.create_test_volume_target( self.context, volume_type='fibre_channel', node_id=self.node.id, boot_index=0, properties=target_properties) def _create_fc_resources(self): self._create_fc_connector() self._create_fc_target() def _call_validate(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot.validate(task) def test_validate_iscsi(self, mock_viom): self._create_port() self._create_iscsi_resources() self._call_validate() self.assertEqual([mock.call('LAN0-1'), mock.call('CNA1-1')], mock_viom.validate_physical_port_id.call_args_list) def test_validate_no_physical_id_in_lan_port(self, mock_viom): self._create_port(physical_id=None) self._create_iscsi_resources() self.assertRaises(exception.MissingParameterValue, self._call_validate) @mock.patch.object(irmc_boot, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) def test_validate_invalid_physical_id_in_lan_port(self, mock_scci, mock_viom): self._create_port(physical_id='wrong-id') self._create_iscsi_resources() mock_viom.validate_physical_port_id.side_effect = ( Exception('fake error')) mock_scci.SCCIInvalidInputError = Exception self.assertRaises(exception.InvalidParameterValue, self._call_validate) def test_validate_iscsi_connector_no_ip(self, mock_viom): self._create_port() self._create_iscsi_iqn_connector() self._create_iscsi_target() self.assertRaises(exception.MissingParameterValue, self._call_validate) def test_validate_iscsi_connector_no_iqn(self, mock_viom): self._create_port() self._create_iscsi_ip_connector(physical_id='CNA1-1') self._create_iscsi_target() self.assertRaises(exception.MissingParameterValue, self._call_validate) def test_validate_iscsi_connector_no_netmask(self, mock_viom): self._create_port() self._create_iscsi_iqn_connector() self._create_iscsi_ip_connector(network_size=None) self._create_iscsi_target() self.assertRaises(exception.MissingParameterValue, self._call_validate) def 
test_validate_iscsi_connector_invalid_netmask(self, mock_viom):
        self._create_port()
        self._create_iscsi_iqn_connector()
        self._create_iscsi_ip_connector(network_size='wrong-netmask')
        self._create_iscsi_target()
        self.assertRaises(exception.InvalidParameterValue,
                          self._call_validate)

    def test_validate_iscsi_connector_too_small_netmask(self, mock_viom):
        self._create_port()
        self._create_iscsi_iqn_connector()
        self._create_iscsi_ip_connector(network_size='0')
        self._create_iscsi_target()
        self.assertRaises(exception.InvalidParameterValue,
                          self._call_validate)

    def test_validate_iscsi_connector_too_large_netmask(self, mock_viom):
        self._create_port()
        self._create_iscsi_iqn_connector()
        self._create_iscsi_ip_connector(network_size='32')
        self._create_iscsi_target()
        self.assertRaises(exception.InvalidParameterValue,
                          self._call_validate)

    def test_validate_iscsi_connector_no_physical_id(self, mock_viom):
        self._create_port()
        self._create_iscsi_iqn_connector(physical_id=None)
        self._create_iscsi_ip_connector()
        self._create_iscsi_target()
        self.assertRaises(exception.MissingParameterValue,
                          self._call_validate)

    @mock.patch.object(deploy_utils, 'get_single_nic_with_vif_port_id')
    def test_prepare_ramdisk_skip(self, mock_nic, mock_viom):
        self._create_iscsi_resources()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.provision_state = states.DEPLOYING
            task.driver.boot.prepare_ramdisk(task, {})
            mock_nic.assert_not_called()

    @mock.patch.object(irmc_boot, '_cleanup_vmedia_boot')
    def test_prepare_instance(self, mock_clean, mock_viom):
        mock_conf = self._create_mock_conf(mock_viom)
        self._create_port()
        self._create_iscsi_resources()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.boot.prepare_instance(task)
        mock_clean.assert_not_called()
        mock_conf.set_iscsi_volume.assert_called_once_with(
            'CNA1-1',
            'iqn.initiator',
            initiator_ip='192.168.11.11',
            initiator_netmask=24,
            target_iqn='iqn.target',
            target_ip='192.168.22.22',
            target_port='3260',
target_lun=1, boot_prio=1, chap_user=None, chap_secret=None) mock_conf.set_lan_port.assert_called_once_with('LAN0-1') mock_viom.validate_physical_port_id.assert_called_once_with('CNA1-1') self._assert_viom_apply(mock_viom, mock_conf) def _call__configure_boot_from_volume(self): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot._configure_boot_from_volume(task) def _assert_viom_apply(self, mock_viom, mock_conf): mock_conf.apply.assert_called_once_with() mock_conf.dump_json.assert_called_once_with() mock_viom.VIOMConfiguration.assert_called_once_with( PARSED_IFNO, identification=self.node.uuid) def test__configure_boot_from_volume_iscsi(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) self._create_port() self._create_iscsi_resources() self._call__configure_boot_from_volume() mock_conf.set_iscsi_volume.assert_called_once_with( 'CNA1-1', 'iqn.initiator', initiator_ip='192.168.11.11', initiator_netmask=24, target_iqn='iqn.target', target_ip='192.168.22.22', target_port='3260', target_lun=1, boot_prio=1, chap_user=None, chap_secret=None) mock_conf.set_lan_port.assert_called_once_with('LAN0-1') mock_viom.validate_physical_port_id.assert_called_once_with('CNA1-1') self._assert_viom_apply(mock_viom, mock_conf) def test__configure_boot_from_volume_multi_lan_ports(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) self._create_port() self._create_port(physical_id='LAN0-2', address='52:54:00:cf:2d:32') self._create_iscsi_resources() self._call__configure_boot_from_volume() mock_conf.set_iscsi_volume.assert_called_once_with( 'CNA1-1', 'iqn.initiator', initiator_ip='192.168.11.11', initiator_netmask=24, target_iqn='iqn.target', target_ip='192.168.22.22', target_port='3260', target_lun=1, boot_prio=1, chap_user=None, chap_secret=None) self.assertEqual([mock.call('LAN0-1'), mock.call('LAN0-2')], mock_conf.set_lan_port.call_args_list) mock_viom.validate_physical_port_id.assert_called_once_with('CNA1-1') 
self._assert_viom_apply(mock_viom, mock_conf) def test__configure_boot_from_volume_iscsi_no_portal_port(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) self._create_port() self._create_iscsi_iqn_connector() self._create_iscsi_ip_connector() self._create_iscsi_target( target_info=dict(target_portal='192.168.22.23')) self._call__configure_boot_from_volume() mock_conf.set_iscsi_volume.assert_called_once_with( 'CNA1-1', 'iqn.initiator', initiator_ip='192.168.11.11', initiator_netmask=24, target_iqn='iqn.target', target_ip='192.168.22.23', target_port=None, target_lun=1, boot_prio=1, chap_user=None, chap_secret=None) mock_conf.set_lan_port.assert_called_once_with('LAN0-1') mock_viom.validate_physical_port_id.assert_called_once_with('CNA1-1') self._assert_viom_apply(mock_viom, mock_conf) def test__configure_boot_from_volume_iscsi_chap(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) self._create_port() self._create_iscsi_iqn_connector() self._create_iscsi_ip_connector() self._create_iscsi_target( target_info=dict(auth_method='CHAP', auth_username='chapuser', auth_password='chappass')) self._call__configure_boot_from_volume() mock_conf.set_iscsi_volume.assert_called_once_with( 'CNA1-1', 'iqn.initiator', initiator_ip='192.168.11.11', initiator_netmask=24, target_iqn='iqn.target', target_ip='192.168.22.22', target_port='3260', target_lun=1, boot_prio=1, chap_user='chapuser', chap_secret='chappass') mock_conf.set_lan_port.assert_called_once_with('LAN0-1') mock_viom.validate_physical_port_id.assert_called_once_with('CNA1-1') self._assert_viom_apply(mock_viom, mock_conf) def test__configure_boot_from_volume_fc(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) self._create_port() self._create_fc_connector() self._create_fc_target() self._call__configure_boot_from_volume() mock_conf.set_fc_volume.assert_called_once_with( 'FC2-1', 'aa:bb:cc:dd:ee', 2, boot_prio=1) mock_conf.set_lan_port.assert_called_once_with('LAN0-1') 
mock_viom.validate_physical_port_id.assert_called_once_with('FC2-1') self._assert_viom_apply(mock_viom, mock_conf) @mock.patch.object(irmc_boot, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) def test__configure_boot_from_volume_apply_error(self, mock_scci, mock_viom): mock_conf = self._create_mock_conf(mock_viom) self._create_port() self._create_fc_connector() self._create_fc_target() mock_conf.apply.side_effect = Exception('fake scci error') mock_scci.SCCIError = Exception with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IRMCOperationError, task.driver.boot._configure_boot_from_volume, task) mock_conf.set_fc_volume.assert_called_once_with( 'FC2-1', 'aa:bb:cc:dd:ee', 2, boot_prio=1) mock_conf.set_lan_port.assert_called_once_with('LAN0-1') mock_viom.validate_physical_port_id.assert_called_once_with('FC2-1') self._assert_viom_apply(mock_viom, mock_conf) def test_clean_up_instance(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot.clean_up_instance(task) mock_viom.VIOMConfiguration.assert_called_once_with(PARSED_IFNO, self.node.uuid) mock_conf.terminate.assert_called_once_with(reboot=False) def test_clean_up_instance_error(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) mock_conf.terminate.side_effect = Exception('fake error') irmc_boot.scci.SCCIError = Exception with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IRMCOperationError, task.driver.boot.clean_up_instance, task) mock_viom.VIOMConfiguration.assert_called_once_with(PARSED_IFNO, self.node.uuid) mock_conf.terminate.assert_called_once_with(reboot=False) def test__cleanup_boot_from_volume(self, mock_viom): mock_conf = self._create_mock_conf(mock_viom) with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.boot._cleanup_boot_from_volume(task) 
        mock_viom.VIOMConfiguration.assert_called_once_with(
            PARSED_IFNO, self.node.uuid)
        mock_conf.terminate.assert_called_once_with(reboot=False)


class IRMCPXEBootBasicTestCase(test_pxe.PXEBootTestCase):

    boot_interface = 'irmc-pxe'
    # NOTE(etingof): add driver-specific configuration
    driver_info = dict(test_pxe.PXEBootTestCase.driver_info)
    driver_info.update(PARSED_IFNO)

    def test_get_properties(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            properties = task.driver.get_properties()
            for p in pxe_base.COMMON_PROPERTIES:
                self.assertIn(p, properties)


class IsImageHrefOrdinaryFileNameTestCase(base.TestCase):

    def test_is_image_href_ordinary_file_name_true(self):
        image = u"\u0111eploy.iso"
        result = irmc_boot._is_image_href_ordinary_file_name(image)
        self.assertTrue(result)

    def test_is_image_href_ordinary_file_name_false(self):
        for image in ('733d1c44-a2ea-414b-aca7-69decf20d810',
                      u'glance://\u0111eploy_iso',
                      u'http://\u0111eploy_iso',
                      u'https://\u0111eploy_iso',
                      u'file://\u0111eploy_iso',):
            result = irmc_boot._is_image_href_ordinary_file_name(image)
            self.assertFalse(result)
ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_periodic_task.py0000664000175000017500000003435513652514273027041 0ustar  zuulzuul00000000000000# Copyright 2018 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Test class for iRMC periodic tasks """ import mock from oslo_utils import uuidutils from ironic.conductor import task_manager from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules.irmc import raid as irmc_raid from ironic.drivers.modules import noop from ironic.tests.unit.drivers.modules.irmc import test_common from ironic.tests.unit.objects import utils as obj_utils class iRMCPeriodicTaskTestCase(test_common.BaseIRMCTest): def setUp(self): super(iRMCPeriodicTaskTestCase, self).setUp() self.node_2 = obj_utils.create_test_node( self.context, driver='fake-hardware', uuid=uuidutils.generate_uuid()) self.driver = mock.Mock(raid=irmc_raid.IRMCRAID()) self.raid_config = { 'logical_disks': [ {'controller': 'RAIDAdapter0'}, {'irmc_raid_info': {' size': {'#text': 465, '@Unit': 'GB'}, 'logical_drive_number': 0, 'name': 'LogicalDrive_0', 'raid_level': '1'}}]} self.target_raid_config = { 'logical_disks': [ { 'key': 'value' }]} @mock.patch.object(irmc_common, 'get_irmc_report') def test__query_raid_config_fgi_status_without_node( self, report_mock): mock_manager = mock.Mock() node_list = [] mock_manager.iter_nodes.return_value = node_list raid_object = irmc_raid.IRMCRAID() raid_object._query_raid_config_fgi_status(mock_manager, None) self.assertEqual(0, report_mock.call_count) @mock.patch.object(irmc_common, 'get_irmc_report') @mock.patch.object(task_manager, 'acquire', autospec=True) def test__query_raid_config_fgi_status_without_raid_object( self, mock_acquire, report_mock): mock_manager = mock.Mock() raid_config = self.raid_config task = mock.Mock(node=self.node, driver=self.driver) mock_acquire.return_value = mock.MagicMock( __enter__=mock.MagicMock(return_value=task)) node_list = [(self.node.uuid, 'irmc', '', raid_config)] mock_manager.iter_nodes.return_value = node_list task.driver.raid = noop.NoRAID() raid_object = irmc_raid.IRMCRAID() raid_object._query_raid_config_fgi_status(mock_manager, self.context) self.assertEqual(0, 
report_mock.call_count) @mock.patch.object(irmc_common, 'get_irmc_report') @mock.patch.object(task_manager, 'acquire', autospec=True) def test__query_raid_config_fgi_status_without_input( self, mock_acquire, report_mock): mock_manager = mock.Mock() raid_config = self.raid_config task = mock.Mock(node=self.node, driver=self.driver) mock_acquire.return_value = mock.MagicMock( __enter__=mock.MagicMock(return_value=task)) node_list = [(self.node.uuid, 'irmc', '', raid_config)] mock_manager.iter_nodes.return_value = node_list # Set none target_raid_config input task.node.target_raid_config = None task.node.save() task.driver.raid._query_raid_config_fgi_status(mock_manager, self.context) self.assertEqual(0, report_mock.call_count) @mock.patch.object(irmc_common, 'get_irmc_report') @mock.patch.object(task_manager, 'acquire', autospec=True) def test__query_raid_config_fgi_status_without_raid_config( self, mock_acquire, report_mock): mock_manager = mock.Mock() raid_config = {} task = mock.Mock(node=self.node, driver=self.driver) mock_acquire.return_value = mock.MagicMock( __enter__=mock.MagicMock(return_value=task)) node_list = [(self.node.uuid, 'irmc', '', raid_config)] mock_manager.iter_nodes.return_value = node_list task.driver.raid._query_raid_config_fgi_status(mock_manager, self.context) self.assertEqual(0, report_mock.call_count) @mock.patch.object(irmc_common, 'get_irmc_report') @mock.patch.object(task_manager, 'acquire', autospec=True) def test__query_raid_config_fgi_status_without_fgi_status( self, mock_acquire, report_mock): mock_manager = mock.Mock() raid_config = { 'logical_disks': [ {'controller': 'RAIDAdapter0'}, {'irmc_raid_info': {' size': {'#text': 465, '@Unit': 'GB'}, 'logical_drive_number': 0, 'name': 'LogicalDrive_0', 'raid_level': '1'}}]} task = mock.Mock(node=self.node, driver=self.driver) mock_acquire.return_value = mock.MagicMock( __enter__=mock.MagicMock(return_value=task)) node_list = [(self.node.uuid, 'irmc', '', raid_config)] 
        mock_manager.iter_nodes.return_value = node_list
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_other_clean_state(
            self, mock_acquire, report_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', '', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set provision state value
        task.node.provision_state = 'cleaning'
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_completing_status(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock):
        mock_manager = mock.Mock()
        fgi_mock.return_value = 'completing'
        node_list = [(self.node.uuid, 'irmc', '', self.raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        # Set provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.raid_config = self.raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, clean_fail_mock.call_count)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)

    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_clean_fail(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        fgi_mock.return_value = None
        fgi_status_dict = None
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', '', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.raid_config = self.raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        clean_fail_mock.assert_called_once_with(task, fgi_status_dict)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)

    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._resume_cleaning')
    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_complete_cleaning(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock,
            clean_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        fgi_mock.return_value = {'0': 'Idle', '1': 'Idle'}
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', '', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, clean_fail_mock.call_count)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)
        clean_mock.assert_called_once_with(task)

    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._resume_cleaning')
    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_two_nodes_without_raid_config(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock,
            clean_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        raid_config_2 = {}
        fgi_mock.return_value = {'0': 'Idle', '1': 'Idle'}
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node_2.uuid, 'irmc', '', raid_config_2),
                     (self.node.uuid, 'irmc', '', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, clean_fail_mock.call_count)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)
        clean_mock.assert_called_once_with(task)

    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._resume_cleaning')
    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_two_nodes_with_fgi_status_none(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock,
            clean_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        raid_config_2 = self.raid_config.copy()
        fgi_status_dict = {}
        fgi_mock.side_effect = [{}, {'0': 'Idle', '1': 'Idle'}]
        node_list = [(self.node_2.uuid, 'fake-hardware', '', raid_config_2),
                     (self.node.uuid, 'irmc', '', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        task = mock.Mock(node=self.node_2, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        report_mock.assert_has_calls(
            [mock.call(task.node), mock.call(task.node)])
        fgi_mock.assert_has_calls([mock.call(report_mock.return_value,
                                             self.node_2.uuid),
                                   mock.call(report_mock.return_value,
                                             self.node_2.uuid)])
        clean_fail_mock.assert_called_once_with(task, fgi_status_dict)
        clean_mock.assert_called_once_with(task)
ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_inspect.py

# Copyright 2015 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Test class for iRMC Inspection Driver """ import mock from ironic.common import exception from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules.irmc import inspect as irmc_inspect from ironic.drivers.modules.irmc import power as irmc_power from ironic import objects from ironic.tests.unit.drivers import ( third_party_driver_mock_specs as mock_specs ) from ironic.tests.unit.drivers.modules.irmc import test_common class IRMCInspectInternalMethodsTestCase(test_common.BaseIRMCTest): @mock.patch('ironic.drivers.modules.irmc.inspect.snmp.SNMPClient', spec_set=True, autospec=True) def test__get_mac_addresses(self, snmpclient_mock): snmpclient_mock.return_value = mock.Mock( **{'get_next.side_effect': [[2, 2, 7], ['\xaa\xaa\xaa\xaa\xaa\xaa', '\xbb\xbb\xbb\xbb\xbb\xbb', '\xcc\xcc\xcc\xcc\xcc\xcc']]}) inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: result = irmc_inspect._get_mac_addresses(task.node) self.assertEqual(inspected_macs, result) @mock.patch.object(irmc_inspect, '_get_mac_addresses', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test__inspect_hardware( self, get_irmc_report_mock, scci_mock, _get_mac_addresses_mock): # Set config flags gpu_ids = ['0x1000/0x0079', '0x2100/0x0080'] cpu_fpgas = ['0x1000/0x0179', '0x2100/0x0180'] self.config(gpu_ids=gpu_ids, group='irmc') self.config(fpga_ids=cpu_fpgas, group='irmc') kwargs = {'sleep_flag': False} inspected_props = { 'memory_mb': '1024', 'local_gb': 10, 'cpus': 2, 'cpu_arch': 'x86_64'} inspected_capabilities = { 'trusted_boot': False, 'irmc_firmware_version': 'iRMC S4-7.82F', 
'server_model': 'TX2540M1F5', 'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x', 'pci_gpu_devices': 1, 'cpu_fpga': 1} new_traits = ['CUSTOM_CPU_FPGA'] existing_traits = [] inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] report = 'fake_report' get_irmc_report_mock.return_value = report scci_mock.get_essential_properties.return_value = inspected_props scci_mock.get_capabilities_properties.return_value = ( inspected_capabilities) _get_mac_addresses_mock.return_value = inspected_macs with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: result = irmc_inspect._inspect_hardware(task.node, existing_traits, **kwargs) get_irmc_report_mock.assert_called_once_with(task.node) scci_mock.get_essential_properties.assert_called_once_with( report, irmc_inspect.IRMCInspect.ESSENTIAL_PROPERTIES) scci_mock.get_capabilities_properties.assert_called_once_with( mock.ANY, irmc_inspect.CAPABILITIES_PROPERTIES, gpu_ids, fpga_ids=cpu_fpgas, **kwargs) expected_props = dict(inspected_props) inspected_capabilities = utils.get_updated_capabilities( '', inspected_capabilities) expected_props['capabilities'] = inspected_capabilities self.assertEqual((expected_props, inspected_macs, new_traits), result) @mock.patch.object(irmc_inspect, '_get_mac_addresses', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test__inspect_hardware_exception( self, get_irmc_report_mock, scci_mock, _get_mac_addresses_mock): report = 'fake_report' kwargs = {'sleep_flag': False} get_irmc_report_mock.return_value = report side_effect = exception.SNMPFailure("fake exception") scci_mock.get_essential_properties.side_effect = side_effect irmc_inspect.scci.SCCIInvalidInputError = Exception irmc_inspect.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: 
self.assertRaises(exception.HardwareInspectionFailure, irmc_inspect._inspect_hardware, task.node, **kwargs) get_irmc_report_mock.assert_called_once_with(task.node) self.assertFalse(_get_mac_addresses_mock.called) class IRMCInspectTestCase(test_common.BaseIRMCTest): def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in irmc_common.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, parse_driver_info_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.inspect.validate(task) parse_driver_info_mock.assert_called_once_with(task.node) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, parse_driver_info_mock): side_effect = exception.InvalidParameterValue("Invalid Input") parse_driver_info_mock.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.inspect.validate, task) def test__init_fail_invalid_gpu_ids_input(self): # Set config flags self.config(gpu_ids='100/x079,0x20/', group='irmc') self.assertRaises(exception.InvalidParameterValue, irmc_inspect.IRMCInspect) def test__init_fail_invalid_fpga_ids_input(self): # Set config flags self.config(fpga_ids='100/x079,0x20/', group='irmc') self.assertRaises(exception.InvalidParameterValue, irmc_inspect.IRMCInspect) @mock.patch.object(irmc_inspect.LOG, 'info', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.irmc.inspect.objects.Port', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, '_inspect_hardware', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_inspect_hardware(self, power_state_mock, 
_inspect_hardware_mock, port_mock, info_mock): inspected_props = { 'memory_mb': '1024', 'local_gb': 10, 'cpus': 2, 'cpu_arch': 'x86_64'} inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] new_traits = ['CUSTOM_CPU_FPGA'] existing_traits = [] power_state_mock.return_value = states.POWER_ON _inspect_hardware_mock.return_value = (inspected_props, inspected_macs, new_traits) new_port_mock1 = mock.MagicMock(spec=objects.Port) new_port_mock2 = mock.MagicMock(spec=objects.Port) port_mock.side_effect = [new_port_mock1, new_port_mock2] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: result = task.driver.inspect.inspect_hardware(task) node_id = task.node.id _inspect_hardware_mock.assert_called_once_with(task.node, existing_traits) # note (naohirot): # as of mock 1.2, assert_has_calls has a bug which returns # "AssertionError: Calls not found." if mock_calls has class # method call such as below: # AssertionError: Calls not found. # Expected: [call.list_by_node_id( # , # 1)] # Actual: [call.list_by_node_id( # , # 1)] # # workaround, remove class method call from mock_calls list del port_mock.mock_calls[0] port_mock.assert_has_calls([ # workaround, comment out class method call from expected list # mock.call.list_by_node_id(task.context, node_id), mock.call(task.context, address=inspected_macs[0], node_id=node_id), mock.call(task.context, address=inspected_macs[1], node_id=node_id) ]) new_port_mock1.create.assert_called_once_with() new_port_mock2.create.assert_called_once_with() self.assertTrue(info_mock.called) task.node.refresh() self.assertEqual(inspected_props, task.node.properties) self.assertEqual(states.MANAGEABLE, result) @mock.patch.object(manager_utils, 'node_power_action', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_set_boot_device', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect.LOG, 'info', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect.objects, 'Port', spec_set=True, 
autospec=True) @mock.patch.object(irmc_inspect, '_inspect_hardware', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_inspect_hardware_with_power_off(self, power_state_mock, _inspect_hardware_mock, port_mock, info_mock, set_boot_device_mock, power_action_mock): inspected_props = { 'memory_mb': '1024', 'local_gb': 10, 'cpus': 2, 'cpu_arch': 'x86_64'} inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] new_traits = ['CUSTOM_CPU_FPGA'] existing_traits = [] power_state_mock.return_value = states.POWER_OFF _inspect_hardware_mock.return_value = (inspected_props, inspected_macs, new_traits) new_port_mock1 = mock.MagicMock(spec=objects.Port) new_port_mock2 = mock.MagicMock(spec=objects.Port) port_mock.side_effect = [new_port_mock1, new_port_mock2] with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: result = task.driver.inspect.inspect_hardware(task) node_id = task.node.id _inspect_hardware_mock.assert_called_once_with(task.node, existing_traits, sleep_flag=True) port_mock.assert_has_calls([ mock.call(task.context, address=inspected_macs[0], node_id=node_id), mock.call(task.context, address=inspected_macs[1], node_id=node_id) ]) new_port_mock1.create.assert_called_once_with() new_port_mock2.create.assert_called_once_with() self.assertTrue(info_mock.called) task.node.refresh() self.assertEqual(inspected_props, task.node.properties) self.assertEqual(states.MANAGEABLE, result) self.assertEqual(power_action_mock.called, True) self.assertEqual(power_action_mock.call_count, 2) @mock.patch('ironic.objects.Port', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, '_inspect_hardware', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_inspect_hardware_inspect_exception( self, power_state_mock, _inspect_hardware_mock, port_mock): side_effect = 
exception.HardwareInspectionFailure("fake exception") _inspect_hardware_mock.side_effect = side_effect power_state_mock.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.HardwareInspectionFailure, task.driver.inspect.inspect_hardware, task) self.assertFalse(port_mock.called) @mock.patch.object(objects.trait.TraitList, 'get_trait_names', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect.LOG, 'warn', spec_set=True, autospec=True) @mock.patch('ironic.objects.Port', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, '_inspect_hardware', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_inspect_hardware_mac_already_exist( self, power_state_mock, _inspect_hardware_mock, port_mock, warn_mock, trait_mock): inspected_props = { 'memory_mb': '1024', 'local_gb': 10, 'cpus': 2, 'cpu_arch': 'x86_64'} inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb'] existing_traits = ['CUSTOM_CPU_FPGA'] new_traits = list(existing_traits) _inspect_hardware_mock.return_value = (inspected_props, inspected_macs, new_traits) power_state_mock.return_value = states.POWER_ON side_effect = exception.MACAlreadyExists("fake exception") new_port_mock = port_mock.return_value new_port_mock.create.side_effect = side_effect trait_mock.return_value = existing_traits with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: result = task.driver.inspect.inspect_hardware(task) _inspect_hardware_mock.assert_called_once_with(task.node, existing_traits) self.assertEqual(2, port_mock.call_count) task.node.refresh() self.assertEqual(inspected_props, task.node.properties) self.assertEqual(states.MANAGEABLE, result) @mock.patch.object(objects.trait.TraitList, 'get_trait_names', spec_set=True, autospec=True) @mock.patch.object(irmc_inspect, '_get_mac_addresses', spec_set=True, autospec=True) 
    @mock.patch.object(irmc_inspect, 'scci',
                       spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC)
    @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True,
                       autospec=True)
    def _test_inspect_hardware_props(self, gpu_ids, fpga_ids,
                                     existed_capabilities,
                                     inspected_capabilities,
                                     expected_capabilities, existed_traits,
                                     expected_traits, get_irmc_report_mock,
                                     scci_mock, _get_mac_addresses_mock,
                                     trait_mock):
        capabilities_props = set(irmc_inspect.CAPABILITIES_PROPERTIES)
        # if gpu_ids = [], pci_gpu_devices will not be inspected
        if len(gpu_ids) == 0:
            capabilities_props.remove('pci_gpu_devices')
        # if fpga_ids = [], cpu_fpga will not be inspected
        if fpga_ids is None or len(fpga_ids) == 0:
            capabilities_props.remove('cpu_fpga')
        self.config(gpu_ids=gpu_ids, group='irmc')
        self.config(fpga_ids=fpga_ids, group='irmc')
        kwargs = {'sleep_flag': False}

        inspected_props = {
            'memory_mb': '1024',
            'local_gb': 10,
            'cpus': 2,
            'cpu_arch': 'x86_64'}
        inspected_macs = ['aa:aa:aa:aa:aa:aa', 'bb:bb:bb:bb:bb:bb']
        report = 'fake_report'
        get_irmc_report_mock.return_value = report
        scci_mock.get_essential_properties.return_value = inspected_props
        scci_mock.get_capabilities_properties.return_value = \
            inspected_capabilities
        _get_mac_addresses_mock.return_value = inspected_macs
        trait_mock.return_value = existed_traits
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.properties[u'capabilities'] = \
                ",".join('%(k)s:%(v)s' % {'k': k, 'v': v}
                         for k, v in existed_capabilities.items())
            result = irmc_inspect._inspect_hardware(task.node,
                                                    existed_traits,
                                                    **kwargs)
            get_irmc_report_mock.assert_called_once_with(task.node)
            scci_mock.get_essential_properties.assert_called_once_with(
                report, irmc_inspect.IRMCInspect.ESSENTIAL_PROPERTIES)
            scci_mock.get_capabilities_properties.assert_called_once_with(
                mock.ANY, capabilities_props, gpu_ids,
                fpga_ids=fpga_ids, **kwargs)
            expected_capabilities = utils.get_updated_capabilities(
                '', expected_capabilities)
            set1 = set(expected_capabilities.split(','))
            set2 = set(result[0]['capabilities'].split(','))
            self.assertEqual(set1, set2)
            self.assertEqual(expected_traits, result[2])

    def test_inspect_hardware_existing_cap_in_props(self):
        # Set config flags
        gpu_ids = ['0x1000/0x0079', '0x2100/0x0080']
        cpu_fpgas = ['0x1000/0x0179', '0x2100/0x0180']
        existed_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1
        }
        inspected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1,
            'cpu_fpga': 1
        }
        expected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1
        }
        existed_traits = []
        expected_traits = ['CUSTOM_CPU_FPGA']

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)

    def test_inspect_hardware_props_empty_gpu_ids_fpga_ids(self):
        # Set config flags
        gpu_ids = []
        cpu_fpgas = []
        existed_capabilities = {}
        inspected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x'}
        expected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x'}
        existed_traits = []
        expected_traits = []

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)

    def test_inspect_hardware_props_pci_gpu_devices_return_zero(self):
        # Set config flags
        gpu_ids = ['0x1000/0x0079', '0x2100/0x0080']
        cpu_fpgas = ['0x1000/0x0179', '0x2100/0x0180']
        existed_capabilities = {}
        inspected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 0,
            'cpu_fpga': 0
        }
        expected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x'}
        existed_traits = []
        expected_traits = []

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)

    def test_inspect_hardware_props_empty_gpu_ids_fpga_id_sand_existing_cap(
            self):
        # Set config flags
        gpu_ids = []
        cpu_fpgas = []
        existed_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1}
        inspected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x'}
        expected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x'}
        existed_traits = []
        expected_traits = []

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)

    def test_inspect_hardware_props_gpu_cpu_fpgas_zero_and_existing_cap(
            self):
        # Set config flags
        gpu_ids = ['0x1000/0x0079', '0x2100/0x0080']
        cpu_fpgas = ['0x1000/0x0179', '0x2100/0x0180']
        existed_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1}
        inspected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 0,
            'cpu_fpga': 0}
        expected_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x'}
        existed_traits = ['CUSTOM_CPU_FPGA']
        expected_traits = []

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)

    def test_inspect_hardware_props_trusted_boot_is_false(self):
        # Set config flags
        gpu_ids = ['0x1000/0x0079', '0x2100/0x0080']
        cpu_fpgas = ['0x1000/0x0179', '0x2100/0x0180']
        existed_capabilities = {}
        inspected_capabilities = {
            'trusted_boot': False,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1,
            'cpu_fpga': 1}
        expected_capabilities = {
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1}
        existed_traits = []
        expected_traits = ['CUSTOM_CPU_FPGA']

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)

    def test_inspect_hardware_props_trusted_boot_is_false_and_existing_cap(
            self):
        # Set config flags
        gpu_ids = ['0x1000/0x0079', '0x2100/0x0080']
        cpu_fpgas = ['0x1000/0x0179', '0x2100/0x0180']
        existed_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1}
        inspected_capabilities = {
            'trusted_boot': False,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1,
            'cpu_fpga': 1}
        expected_capabilities = {
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1}
        existed_traits = ['CUSTOM_CPU_FPGA']
        expected_traits = ['CUSTOM_CPU_FPGA']

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)

    def test_inspect_hardware_props_gpu_and_cpu_fpgas_results_are_different(
            self):
        # Set config flags
        gpu_ids = ['0x1000/0x0079', '0x2100/0x0080']
        cpu_fpgas = ['0x1000/0x0179', '0x2100/0x0180']
        existed_capabilities = {
            'trusted_boot': True,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 1}
        inspected_capabilities = {
            'trusted_boot': False,
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x',
            'pci_gpu_devices': 0,
            'cpu_fpga': 1}
        expected_capabilities = {
            'irmc_firmware_version': 'iRMC S4-7.82F',
            'server_model': 'TX2540M1F5',
            'rom_firmware_version': 'V4.6.5.4 R1.15.0 for D3099-B1x'}
        existed_traits = []
        expected_traits = ['CUSTOM_CPU_FPGA']

        self._test_inspect_hardware_props(gpu_ids,
                                          cpu_fpgas,
                                          existed_capabilities,
                                          inspected_capabilities,
                                          expected_capabilities,
                                          existed_traits,
                                          expected_traits)
ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_raid.py

# Copyright 2018 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing
# permissions and limitations under the License.

"""
Test class for IRMC RAID configuration
"""

import mock

from ironic.common import exception
from ironic.conductor import task_manager
from ironic.drivers.modules.irmc import raid
from ironic.tests.unit.drivers.modules.irmc import test_common


class IRMCRaidConfigurationInternalMethodsTestCase(test_common.BaseIRMCTest):

    def setUp(self):
        super(IRMCRaidConfigurationInternalMethodsTestCase, self).setUp()
        self.raid_adapter_profile = {
            "Server": {
                "HWConfigurationIrmc": {
                    "Adapters": {
                        "RAIDAdapter": [
                            {
                                "@AdapterId": "RAIDAdapter0",
                                "@ConfigurationType": "Addressing",
                                "Arrays": None,
                                "LogicalDrives": None,
                                "PhysicalDisks": {
                                    "PhysicalDisk": [
                                        {
                                            "@Number": "0",
                                            "@Action": "None",
                                            "Slot": 0,
                                        },
                                        {
                                            "@Number": "1",
                                            "@Action": "None",
                                            "Slot": 1
                                        },
                                        {
                                            "@Number": "2",
                                            "@Action": "None",
                                            "Slot": 2
                                        },
                                        {
                                            "@Number": "3",
                                            "@Action": "None",
                                            "Slot": 3
                                        }
                                    ]
                                }
                            }
                        ]
                    }
                }
            }
        }

        self.valid_disk_slots = {
            "PhysicalDisk": [
                {
                    "@Number": "0",
                    "Slot": 0,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                },
                {
                    "@Number": "1",
                    "Slot": 1,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                },
                {
                    "@Number": "2",
                    "Slot": 2,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                },
                {
                    "@Number": "3",
                    "Slot": 3,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                },
                {
                    "@Number": "4",
                    "Slot": 4,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                },
                {
                    "@Number": "5",
                    "Slot": 5,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                },
                {
                    "@Number": "6",
                    "Slot": 6,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                },
                {
                    "@Number": "7",
                    "Slot": 7,
                    "Size": {
                        "@Unit": "GB",
                        "#text": 1000
                    }
                }
            ]
        }

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test___fail_validation_with_none_raid_adapter_profile(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = None
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "0"
                }
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test___fail_validation_without_raid_level(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50"
                }
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test___fail_validation_with_raid_level_is_none(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": ""
                }
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_without_physical_disks(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = {
            "Server": {
                "HWConfigurationIrmc": {
                    "Adapters": {
                        "RAIDAdapter": [
                            {
                                "@AdapterId": "RAIDAdapter0",
                                "@ConfigurationType": "Addressing",
                                "Arrays": None,
                                "LogicalDrives": None,
                                "PhysicalDisks": None
                            }
                        ]
                    }
                }
            }
        }
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "1"
                }
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test___fail_validation_with_raid_level_outside_list(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "2"
                }
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch(
        'ironic.drivers.modules.irmc.raid._validate_logical_drive_capacity')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_not_enough_valid_disks(
            self, get_raid_adapter_mock, get_physical_disk_mock,
            capacity_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "5"
                },
                {
                    "size_gb": "50",
                    "raid_level": "1"
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_physical_disk_insufficient(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "1",
                    "physical_disks": [
                        "0", "1", "2"
                    ]
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_physical_disk_not_enough_disks(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "5",
                    "physical_disks": [
                        "0", "1"
                    ]
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_physical_disk_incorrect_valid_disks(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "10",
                    "physical_disks": [
                        "0", "1", "2", "3", "4"
                    ]
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_physical_disk_outside_valid_disks_1(
            self, get_raid_adapter_mock, get_physical_disk_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "1",
                    "physical_disks": [
                        "4", "5"
                    ]
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch(
        'ironic.drivers.modules.irmc.raid._validate_logical_drive_capacity')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_physical_disk_outside_valid_slots_2(
            self, get_raid_adapter_mock, get_physical_disk_mock,
            capacity_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "5",
                    "physical_disks": [
                        "0", "1", "2"
                    ]
                },
                {
                    "size_gb": "50",
                    "raid_level": "0",
                    "physical_disks": [
                        "4"
                    ]
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch(
        'ironic.drivers.modules.irmc.raid._validate_logical_drive_capacity')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_duplicated_physical_disks(
            self, get_raid_adapter_mock, get_physical_disk_mock,
            capacity_mock):
        get_raid_adapter_mock.return_value = self.raid_adapter_profile
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "1",
                    "physical_disks": [
                        "0", "1"
                    ]
                },
                {
                    "size_gb": "50",
                    "raid_level": "1",
                    "physical_disks": [
                        "1", "2"
                    ]
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    @mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
    def test__fail_validation_with_difference_physical_disks_type(
            self, get_raid_adapter_mock):
        get_raid_adapter_mock.return_value = {
            "Server": {
                "HWConfigurationIrmc": {
                    "Adapters": {
                        "RAIDAdapter": [
                            {
                                "@AdapterId": "RAIDAdapter0",
                                "@ConfigurationType": "Addressing",
                                "Arrays": None,
                                "LogicalDrives": None,
                                "PhysicalDisks": {
                                    "PhysicalDisk": [
                                        {
                                            "@Number": "0",
                                            "Slot": 0,
                                            "Type": "HDD",
                                        },
                                        {
                                            "@Number": "1",
                                            "Slot": 1,
                                            "Type": "SSD",
                                        }
                                    ]
                                }
                            }
                        ]
                    }
                }
            }
        }
        target_raid_config = {
            "logical_disks": [
                {
                    "size_gb": "50",
                    "raid_level": "1",
                    "physical_disks": [
                        "0", "1"
                    ]
                }
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            self.assertRaises(exception.IRMCOperationError,
                              raid._validate_physical_disks,
                              task.node, target_raid_config['logical_disks'])

    def test__fail_validate_capacity_raid_0(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "0"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_1(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "1"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_5(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "5"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_6(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "6"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_10(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "10"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_50(self):
        disk = {
            "size_gb": 5000,
            "raid_level": "50"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_with_physical_disk(self):
        disk = {
            "size_gb": 4000,
            "raid_level": "5",
            "physical_disks": [
                "0", "1", "3", "4"
            ]
        }
self.assertRaises(exception.InvalidParameterValue, raid._validate_logical_drive_capacity, disk, self.valid_disk_slots) @mock.patch('ironic.common.raid.update_raid_info') @mock.patch('ironic.drivers.modules.irmc.raid.client') def test__commit_raid_config_with_logical_drives(self, client_mock, update_raid_info_mock): client_mock.elcm.get_raid_adapter.return_value = { "Server": { "HWConfigurationIrmc": { "Adapters": { "RAIDAdapter": [ { "@AdapterId": "RAIDAdapter0", "@ConfigurationType": "Addressing", "Arrays": { "Array": [ { "@Number": 0, "@ConfigurationType": "Addressing", "PhysicalDiskRefs": { "PhysicalDiskRef": [ { "@Number": "0" }, { "@Number": "1" } ] } } ] }, "LogicalDrives": { "LogicalDrive": [ { "@Number": 0, "@Action": "None", "RaidLevel": "1", "Name": "LogicalDrive_0", "Size": { "@Unit": "GB", "#text": 465 }, "ArrayRefs": { "ArrayRef": [ { "@Number": 0 } ] } } ] }, "PhysicalDisks": { "PhysicalDisk": [ { "@Number": "0", "@Action": "None", "Slot": 0, "PDStatus": "Operational" }, { "@Number": "1", "@Action": "None", "Slot": 1, "PDStatus": "Operational" } ] } } ] } } } } expected_raid_config = [ {'controller': 'RAIDAdapter0'}, {'irmc_raid_info': {' size': {'#text': 465, '@Unit': 'GB'}, 'logical_drive_number': 0, 'name': 'LogicalDrive_0', 'raid_level': '1'}}, {'physical_drives': {'physical_drive': {'@Action': 'None', '@Number': '0', 'PDStatus': 'Operational', 'Slot': 0}}}, {'physical_drives': {'physical_drive': {'@Action': 'None', '@Number': '1', 'PDStatus': 'Operational', 'Slot': 1}}}] with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: raid._commit_raid_config(task) client_mock.elcm.get_raid_adapter.assert_called_once_with( task.node.driver_info) update_raid_info_mock.assert_called_once_with( task.node, task.node.raid_config) self.assertEqual(task.node.raid_config['logical_disks'], expected_raid_config) class IRMCRaidConfigurationTestCase(test_common.BaseIRMCTest): def setUp(self): super(IRMCRaidConfigurationTestCase, self).setUp() 
self.config(enabled_raid_interfaces=['irmc']) self.raid_adapter_profile = { "Server": { "HWConfigurationIrmc": { "Adapters": { "RAIDAdapter": [ { "@AdapterId": "RAIDAdapter0", "@ConfigurationType": "Addressing", "Arrays": None, "LogicalDrives": None, "PhysicalDisks": { "PhysicalDisk": [ { "@Number": "0", "@Action": "None", "Slot": 0, }, { "@Number": "1", "@Action": "None", "Slot": 1 }, { "@Number": "2", "@Action": "None", "Slot": 2 }, { "@Number": "3", "@Action": "None", "Slot": 3 } ] } } ] } } } } def test_fail_create_raid_without_target_raid_config(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.target_raid_config = {} raid_configuration = raid.IRMCRAID() self.assertRaises(exception.MissingParameterValue, raid_configuration.create_configuration, task) @mock.patch('ironic.drivers.modules.irmc.raid._validate_physical_disks') @mock.patch('ironic.drivers.modules.irmc.raid._create_raid_adapter') @mock.patch('ironic.drivers.modules.irmc.raid._commit_raid_config') def test_create_raid_with_raid_1_and_0(self, commit_mock, create_raid_mock, validation_mock): expected_input = { "logical_disks": [ { "raid_level": "10" }, ] } with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.target_raid_config = { "logical_disks": [ { "raid_level": "1+0" }, ] } task.driver.raid.create_configuration(task) create_raid_mock.assert_called_once_with(task.node) validation_mock.assert_called_once_with( task.node, expected_input['logical_disks']) commit_mock.assert_called_once_with(task) @mock.patch('ironic.drivers.modules.irmc.raid._validate_physical_disks') @mock.patch('ironic.drivers.modules.irmc.raid._create_raid_adapter') @mock.patch('ironic.drivers.modules.irmc.raid._commit_raid_config') def test_create_raid_with_raid_5_and_0(self, commit_mock, create_raid_mock, validation_mock): expected_input = { "logical_disks": [ { "raid_level": "50" }, ] } with task_manager.acquire(self.context, self.node.uuid, 
                                  shared=True) as task:
            task.node.target_raid_config = {
                "logical_disks": [
                    {
                        "raid_level": "5+0"
                    },
                ]
            }
            task.driver.raid.create_configuration(task)
            create_raid_mock.assert_called_once_with(task.node)
            validation_mock.assert_called_once_with(
                task.node, expected_input['logical_disks'])
            commit_mock.assert_called_once_with(task)

    @mock.patch('ironic.drivers.modules.irmc.raid._delete_raid_adapter')
    def test_delete_raid_configuration(self, delete_raid_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.raid.delete_configuration(task)
            delete_raid_mock.assert_called_once_with(task.node)

    @mock.patch('ironic.drivers.modules.irmc.raid._delete_raid_adapter')
    def test_delete_raid_configuration_return_cleared_raid_config(
            self, delete_raid_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            expected_raid_config = {}
            task.driver.raid.delete_configuration(task)
            self.assertEqual(expected_raid_config, task.node.raid_config)
            delete_raid_mock.assert_called_once_with(task.node)

ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_common.py

# Copyright 2015 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Test class for common methods used by iRMC modules.
""" import mock from oslo_config import cfg from oslo_utils import uuidutils from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.irmc import common as irmc_common from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs from ironic.tests.unit.objects import utils as obj_utils class BaseIRMCTest(db_base.DbTestCase): boot_interface = 'irmc-pxe' def setUp(self): super(BaseIRMCTest, self).setUp() self.config(enabled_hardware_types=['irmc', 'fake-hardware'], enabled_power_interfaces=['irmc', 'fake'], enabled_management_interfaces=['irmc', 'fake'], enabled_bios_interfaces=['irmc', 'no-bios', 'fake'], enabled_boot_interfaces=[self.boot_interface, 'fake'], enabled_inspect_interfaces=['irmc', 'no-inspect', 'fake']) self.info = db_utils.get_test_irmc_info() self.node = obj_utils.create_test_node( self.context, driver='irmc', boot_interface=self.boot_interface, driver_info=self.info, uuid=uuidutils.generate_uuid()) class IRMCValidateParametersTestCase(BaseIRMCTest): def test_parse_driver_info(self): info = irmc_common.parse_driver_info(self.node) self.assertEqual('1.2.3.4', info['irmc_address']) self.assertEqual('admin0', info['irmc_username']) self.assertEqual('fake0', info['irmc_password']) self.assertEqual(60, info['irmc_client_timeout']) self.assertEqual(80, info['irmc_port']) self.assertEqual('digest', info['irmc_auth_method']) self.assertEqual('ipmitool', info['irmc_sensor_method']) self.assertEqual('v2c', info['irmc_snmp_version']) self.assertEqual(161, info['irmc_snmp_port']) self.assertEqual('public', info['irmc_snmp_community']) self.assertFalse(info['irmc_snmp_security']) def test_parse_driver_option_default(self): self.node.driver_info = { "irmc_address": "1.2.3.4", "irmc_username": "admin0", "irmc_password": "fake0", } info = irmc_common.parse_driver_info(self.node) 
self.assertEqual('basic', info['irmc_auth_method']) self.assertEqual(443, info['irmc_port']) self.assertEqual(60, info['irmc_client_timeout']) self.assertEqual('ipmitool', info['irmc_sensor_method']) def test_parse_driver_info_missing_address(self): del self.node.driver_info['irmc_address'] self.assertRaises(exception.MissingParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_username(self): del self.node.driver_info['irmc_username'] self.assertRaises(exception.MissingParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_password(self): del self.node.driver_info['irmc_password'] self.assertRaises(exception.MissingParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_timeout(self): self.node.driver_info['irmc_client_timeout'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_port(self): self.node.driver_info['irmc_port'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_auth_method(self): self.node.driver_info['irmc_auth_method'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_sensor_method(self): self.node.driver_info['irmc_sensor_method'] = 'qwe' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_missing_multiple_params(self): del self.node.driver_info['irmc_password'] del self.node.driver_info['irmc_address'] e = self.assertRaises(exception.MissingParameterValue, irmc_common.parse_driver_info, self.node) self.assertIn('irmc_password', str(e)) self.assertIn('irmc_address', str(e)) def test_parse_driver_info_invalid_snmp_version(self): self.node.driver_info['irmc_snmp_version'] = 'v3x' 
self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_snmp_port(self): self.node.driver_info['irmc_snmp_port'] = '161' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_snmp_community(self): self.node.driver_info['irmc_snmp_version'] = 'v2c' self.node.driver_info['irmc_snmp_community'] = 100 self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_invalid_snmp_security(self): self.node.driver_info['irmc_snmp_version'] = 'v3' self.node.driver_info['irmc_snmp_security'] = 100 self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) def test_parse_driver_info_empty_snmp_security(self): self.node.driver_info['irmc_snmp_version'] = 'v3' self.node.driver_info['irmc_snmp_security'] = '' self.assertRaises(exception.InvalidParameterValue, irmc_common.parse_driver_info, self.node) class IRMCCommonMethodsTestCase(BaseIRMCTest): @mock.patch.object(irmc_common, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) def test_get_irmc_client(self, mock_scci): self.info['irmc_port'] = 80 self.info['irmc_auth_method'] = 'digest' self.info['irmc_client_timeout'] = 60 mock_scci.get_client.return_value = 'get_client' returned_mock_scci_get_client = irmc_common.get_irmc_client(self.node) mock_scci.get_client.assert_called_with( self.info['irmc_address'], self.info['irmc_username'], self.info['irmc_password'], port=self.info['irmc_port'], auth_method=self.info['irmc_auth_method'], client_timeout=self.info['irmc_client_timeout']) self.assertEqual('get_client', returned_mock_scci_get_client) def test_update_ipmi_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: ipmi_info = { "ipmi_address": "1.2.3.4", "ipmi_username": "admin0", "ipmi_password": "fake0", } task.node.driver_info = self.info 
irmc_common.update_ipmi_properties(task) actual_info = task.node.driver_info expected_info = dict(self.info, **ipmi_info) self.assertEqual(expected_info, actual_info) @mock.patch.object(irmc_common, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) def test_get_irmc_report(self, mock_scci): self.info['irmc_port'] = 80 self.info['irmc_auth_method'] = 'digest' self.info['irmc_client_timeout'] = 60 mock_scci.get_report.return_value = 'get_report' returned_mock_scci_get_report = irmc_common.get_irmc_report(self.node) mock_scci.get_report.assert_called_with( self.info['irmc_address'], self.info['irmc_username'], self.info['irmc_password'], port=self.info['irmc_port'], auth_method=self.info['irmc_auth_method'], client_timeout=self.info['irmc_client_timeout']) self.assertEqual('get_report', returned_mock_scci_get_report) def test_out_range_port(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'port', 60, 'irmc') def test_out_range_auth_method(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'auth_method', 'fake', 'irmc') def test_out_range_sensor_method(self): self.assertRaises(ValueError, cfg.CONF.set_override, 'sensor_method', 'fake', 'irmc') @mock.patch.object(irmc_common, 'elcm', spec_set=mock_specs.SCCICLIENT_IRMC_ELCM_SPEC) def test_set_secure_boot_mode_enable(self, mock_elcm): mock_elcm.set_secure_boot_mode.return_value = 'set_secure_boot_mode' info = irmc_common.parse_driver_info(self.node) irmc_common.set_secure_boot_mode(self.node, True) mock_elcm.set_secure_boot_mode.assert_called_once_with( info, True) @mock.patch.object(irmc_common, 'elcm', spec_set=mock_specs.SCCICLIENT_IRMC_ELCM_SPEC) def test_set_secure_boot_mode_disable(self, mock_elcm): mock_elcm.set_secure_boot_mode.return_value = 'set_secure_boot_mode' info = irmc_common.parse_driver_info(self.node) irmc_common.set_secure_boot_mode(self.node, False) mock_elcm.set_secure_boot_mode.assert_called_once_with( info, False) @mock.patch.object(irmc_common, 'elcm', 
                       spec_set=mock_specs.SCCICLIENT_IRMC_ELCM_SPEC)
    @mock.patch.object(irmc_common, 'scci',
                       spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC)
    def test_set_secure_boot_mode_fail(self, mock_scci, mock_elcm):
        irmc_common.scci.SCCIError = Exception
        mock_elcm.set_secure_boot_mode.side_effect = Exception
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IRMCOperationError,
                              irmc_common.set_secure_boot_mode,
                              task.node, True)
            info = irmc_common.parse_driver_info(task.node)
            mock_elcm.set_secure_boot_mode.assert_called_once_with(
                info, True)

ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/fake_sensors_data_ng.xml

[XML sensor-data fixture; markup lost in extraction. Recoverable fields show
SDR entries for "Ambient" and "Systemboard 1" temperature sensors (degree C)
and fan sensors including "FAN2 SYS" (RPM).]

ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_management.py

# Copyright 2015 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Test class for iRMC Management Driver """ import os import xml.etree.ElementTree as ET import mock from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.conductor import utils as manager_utils from ironic.drivers.modules import fake from ironic.drivers.modules import ipmitool from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules.irmc import management as irmc_management from ironic.drivers.modules.irmc import power as irmc_power from ironic.drivers import utils as driver_utils from ironic.tests.unit.drivers.modules.irmc import test_common from ironic.tests.unit.drivers import third_party_driver_mock_specs \ as mock_specs @mock.patch.object(irmc_management.irmc, 'elcm', spec_set=mock_specs.SCCICLIENT_IRMC_ELCM_SPEC) @mock.patch.object(manager_utils, 'node_power_action', specset=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', return_value=states.POWER_ON, specset=True, autospec=True) class IRMCManagementFunctionsTestCase(test_common.BaseIRMCTest): def setUp(self): super(IRMCManagementFunctionsTestCase, self).setUp() self.info = irmc_common.parse_driver_info(self.node) irmc_management.irmc.scci.SCCIError = Exception irmc_management.irmc.scci.SCCIInvalidInputError = ValueError def test_backup_bios_config(self, mock_get_power, mock_power_action, mock_elcm): self.config(clean_priority_restore_irmc_bios_config=10, group='irmc') bios_config = {'Server': {'System': {'BiosConfig': {'key1': 'val1'}}}} mock_elcm.backup_bios_config.return_value = { 'bios_config': bios_config} with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_management.backup_bios_config(task) self.assertEqual(bios_config, task.node.driver_internal_info[ 'irmc_bios_config']) self.assertEqual(1, mock_elcm.backup_bios_config.call_count) def test_backup_bios_config_skipped(self, mock_get_power, 
mock_power_action, mock_elcm): self.config(clean_priority_restore_irmc_bios_config=0, group='irmc') with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: irmc_management.backup_bios_config(task) self.assertNotIn('irmc_bios_config', task.node.driver_internal_info) self.assertFalse(mock_elcm.backup_bios_config.called) def test_backup_bios_config_failed(self, mock_get_power, mock_power_action, mock_elcm): self.config(clean_priority_restore_irmc_bios_config=10, group='irmc') mock_elcm.backup_bios_config.side_effect = Exception with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.IRMCOperationError, irmc_management.backup_bios_config, task) self.assertNotIn('irmc_bios_config', task.node.driver_internal_info) self.assertEqual(1, mock_elcm.backup_bios_config.call_count) def test__restore_bios_config(self, mock_get_power, mock_power_action, mock_elcm): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # Set bios data for the node info task.node.driver_internal_info['irmc_bios_config'] = 'data' irmc_management._restore_bios_config(task) self.assertEqual(1, mock_elcm.restore_bios_config.call_count) def test__restore_bios_config_failed(self, mock_get_power, mock_power_action, mock_elcm): mock_elcm.restore_bios_config.side_effect = Exception with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # Set bios data for the node info task.node.driver_internal_info['irmc_bios_config'] = 'data' self.assertRaises(exception.IRMCOperationError, irmc_management._restore_bios_config, task) # Backed up BIOS config is still in the node object self.assertEqual('data', task.node.driver_internal_info[ 'irmc_bios_config']) self.assertTrue(mock_elcm.restore_bios_config.called) def test__restore_bios_config_corrupted(self, mock_get_power, mock_power_action, mock_elcm): mock_elcm.restore_bios_config.side_effect = \ irmc_management.irmc.scci.SCCIInvalidInputError with 
task_manager.acquire(self.context, self.node.uuid, shared=False) as task: # Set bios data for the node info task.node.driver_internal_info['irmc_bios_config'] = 'data' self.assertRaises(exception.IRMCOperationError, irmc_management._restore_bios_config, task) # Backed up BIOS config is removed from the node object self.assertNotIn('irmc_bios_config', task.node.driver_internal_info) self.assertTrue(mock_elcm.restore_bios_config.called) class IRMCManagementTestCase(test_common.BaseIRMCTest): def setUp(self): super(IRMCManagementTestCase, self).setUp() self.info = irmc_common.parse_driver_info(self.node) def test_get_properties(self): expected = irmc_common.COMMON_PROPERTIES expected.update(ipmitool.COMMON_PROPERTIES) expected.update(ipmitool.CONSOLE_PROPERTIES) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: # Remove the boot and deploy interfaces properties task.driver.boot = fake.FakeBoot() task.driver.deploy = fake.FakeDeploy() self.assertEqual(expected, task.driver.get_properties()) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.management.validate(task) mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, mock_drvinfo): side_effect = exception.InvalidParameterValue("Invalid Input") mock_drvinfo.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.management.validate, task) def test_management_interface_get_supported_boot_devices(self): with task_manager.acquire(self.context, self.node.uuid) as task: expected = [boot_devices.PXE, boot_devices.DISK, boot_devices.CDROM, boot_devices.BIOS, boot_devices.SAFE] self.assertEqual(sorted(expected), 
sorted(task.driver.management. get_supported_boot_devices(task))) @mock.patch.object(irmc_management.ipmitool, "send_raw", spec_set=True, autospec=True) def _test_management_interface_set_boot_device_ok( self, boot_mode, params, expected_raw_code, send_raw_mock): send_raw_mock.return_value = [None, None] with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties['capabilities'] = '' if boot_mode: driver_utils.add_node_capability(task, 'boot_mode', boot_mode) irmc_management.IRMCManagement().set_boot_device(task, **params) send_raw_mock.assert_has_calls([ mock.call(task, "0x00 0x08 0x03 0x08"), mock.call(task, expected_raw_code)]) def test_management_interface_set_boot_device_ok_pxe(self): params = {'device': boot_devices.PXE, 'persistent': False} self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0x80 0x04 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0x80 0x04 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xa0 0x04 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0xc0 0x04 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0xc0 0x04 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xe0 0x04 0x00 0x00 0x00") def test_management_interface_set_boot_device_ok_disk(self): params = {'device': boot_devices.DISK, 'persistent': False} self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0x80 0x08 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0x80 0x08 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xa0 0x08 0x00 0x00 0x00") params['persistent'] = True 
self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0xc0 0x08 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0xc0 0x08 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xe0 0x08 0x00 0x00 0x00") def test_management_interface_set_boot_device_ok_cdrom(self): params = {'device': boot_devices.CDROM, 'persistent': False} self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0x80 0x20 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0x80 0x20 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xa0 0x20 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0xc0 0x20 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0xc0 0x20 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xe0 0x20 0x00 0x00 0x00") def test_management_interface_set_boot_device_ok_bios(self): params = {'device': boot_devices.BIOS, 'persistent': False} self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0x80 0x18 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0x80 0x18 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xa0 0x18 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0xc0 0x18 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0xc0 0x18 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xe0 0x18 0x00 0x00 0x00") def 
test_management_interface_set_boot_device_ok_safe(self): params = {'device': boot_devices.SAFE, 'persistent': False} self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0x80 0x0c 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0x80 0x0c 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xa0 0x0c 0x00 0x00 0x00") params['persistent'] = True self._test_management_interface_set_boot_device_ok( None, params, "0x00 0x08 0x05 0xc0 0x0c 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'bios', params, "0x00 0x08 0x05 0xc0 0x0c 0x00 0x00 0x00") self._test_management_interface_set_boot_device_ok( 'uefi', params, "0x00 0x08 0x05 0xe0 0x0c 0x00 0x00 0x00") @mock.patch.object(irmc_management.ipmitool, "send_raw", spec_set=True, autospec=True) def test_management_interface_set_boot_device_ng(self, send_raw_mock): """uefi mode, next boot only, unknown device.""" send_raw_mock.return_value = [None, None] with task_manager.acquire(self.context, self.node.uuid) as task: driver_utils.add_node_capability(task, 'boot_mode', 'uefi') self.assertRaises(exception.InvalidParameterValue, irmc_management.IRMCManagement().set_boot_device, task, "unknown") @mock.patch.object(irmc_management.irmc, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_scci_ok( self, mock_get_irmc_report, mock_scci): """'irmc_sensor_method' = 'scci' specified and OK data.""" with open(os.path.join(os.path.dirname(__file__), 'fake_sensors_data_ok.xml'), "r") as report: fake_txt = report.read() fake_xml = ET.fromstring(fake_txt) mock_get_irmc_report.return_value = fake_xml mock_scci.get_sensor_data.return_value = fake_xml.find( "./System/SensorDataRecords") with task_manager.acquire(self.context, self.node.uuid) as task: 
task.node.driver_info['irmc_sensor_method'] = 'scci' sensor_dict = irmc_management.IRMCManagement().get_sensors_data( task) expected = { 'Fan (4)': { 'FAN1 SYS (29)': { 'Units': 'RPM', 'Sensor ID': 'FAN1 SYS (29)', 'Sensor Reading': '600 RPM' }, 'FAN2 SYS (29)': { 'Units': 'None', 'Sensor ID': 'FAN2 SYS (29)', 'Sensor Reading': 'None None' } }, 'Temperature (1)': { 'Systemboard 1 (7)': { 'Units': 'degree C', 'Sensor ID': 'Systemboard 1 (7)', 'Sensor Reading': '80 degree C' }, 'Ambient (55)': { 'Units': 'degree C', 'Sensor ID': 'Ambient (55)', 'Sensor Reading': '42 degree C' } } } self.assertEqual(expected, sensor_dict) @mock.patch.object(irmc_management.irmc, 'scci', spec_set=mock_specs.SCCICLIENT_IRMC_SCCI_SPEC) @mock.patch.object(irmc_common, 'get_irmc_report', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_scci_ng( self, mock_get_irmc_report, mock_scci): """'irmc_sensor_method' = 'scci' specified and NG data.""" with open(os.path.join(os.path.dirname(__file__), 'fake_sensors_data_ng.xml'), "r") as report: fake_txt = report.read() fake_xml = ET.fromstring(fake_txt) mock_get_irmc_report.return_value = fake_xml mock_scci.get_sensor_data.return_value = fake_xml.find( "./System/SensorDataRecords") with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_sensor_method'] = 'scci' sensor_dict = irmc_management.IRMCManagement().get_sensors_data( task) self.assertEqual(len(sensor_dict), 0) @mock.patch.object(ipmitool.IPMIManagement, 'get_sensors_data', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_ipmitool_ok( self, get_sensors_data_mock): """'irmc_sensor_method' = 'ipmitool' specified.""" with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_sensor_method'] = 'ipmitool' task.driver.management.get_sensors_data(task) get_sensors_data_mock.assert_called_once_with( task.driver.management, task) @mock.patch.object(irmc_common, 
'get_irmc_report', spec_set=True, autospec=True) def test_management_interface_get_sensors_data_exception( self, get_irmc_report_mock): """FailedToGetSensorData exception.""" get_irmc_report_mock.side_effect = exception.InvalidParameterValue( "Fake Error") irmc_management.irmc.scci.SCCIInvalidInputError = Exception irmc_management.irmc.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid) as task: task.node.driver_info['irmc_sensor_method'] = 'scci' e = self.assertRaises( exception.FailedToGetSensorData, irmc_management.IRMCManagement().get_sensors_data, task) self.assertEqual("Failed to get sensor data for node %s. " "Error: Fake Error" % self.node.uuid, str(e)) @mock.patch.object(irmc_management.LOG, 'error', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test_management_interface_inject_nmi_ok(self, mock_get_irmc_client, mock_log): irmc_client = mock_get_irmc_client.return_value with task_manager.acquire(self.context, self.node.uuid) as task: irmc_management.IRMCManagement().inject_nmi(task) irmc_client.assert_called_once_with( irmc_management.irmc.scci.POWER_RAISE_NMI) self.assertFalse(mock_log.called) @mock.patch.object(irmc_management.LOG, 'error', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test_management_interface_inject_nmi_fail(self, mock_get_irmc_client, mock_log): irmc_client = mock_get_irmc_client.return_value irmc_client.side_effect = Exception() irmc_management.irmc.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IRMCOperationError, irmc_management.IRMCManagement().inject_nmi, task) irmc_client.assert_called_once_with( irmc_management.irmc.scci.POWER_RAISE_NMI) self.assertTrue(mock_log.called) @mock.patch.object(irmc_management, '_restore_bios_config', spec_set=True, autospec=True) def
test_management_interface_restore_irmc_bios_config(self, mock_restore_bios): with task_manager.acquire(self.context, self.node.uuid) as task: result = task.driver.management.restore_irmc_bios_config(task) self.assertIsNone(result) mock_restore_bios.assert_called_once_with(task) ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_bios.py0000664000175000017500000001462113652514273025147 0ustar zuulzuul00000000000000# Copyright 2018 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test class for IRMC BIOS configuration """ import mock from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.irmc import bios as irmc_bios from ironic.drivers.modules.irmc import common as irmc_common from ironic import objects from ironic.tests.unit.drivers.modules.irmc import test_common class IRMCBIOSTestCase(test_common.BaseIRMCTest): def setUp(self): super(IRMCBIOSTestCase, self).setUp() @mock.patch.object(irmc_common, 'parse_driver_info', autospec=True) def test_validate(self, parse_driver_info_mock): with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.bios.validate(task) parse_driver_info_mock.assert_called_once_with(task.node) @mock.patch.object(irmc_bios.irmc.elcm, 'set_bios_configuration', autospec=True) @mock.patch.object(irmc_bios.irmc.elcm, 'get_bios_settings', autospec=True) def test_apply_configuration(self, get_bios_settings_mock, set_bios_configuration_mock): settings = [{ "name": 
"launch_csm_enabled", "value": True }, { "name": "hyper_threading_enabled", "value": True }, { "name": "cpu_vt_enabled", "value": True }] with task_manager.acquire(self.context, self.node.uuid) as task: irmc_info = irmc_common.parse_driver_info(task.node) task.node.save = mock.Mock() get_bios_settings_mock.return_value = settings task.driver.bios.apply_configuration(task, settings) set_bios_configuration_mock.assert_called_once_with(irmc_info, settings) @mock.patch.object(irmc_bios.irmc.elcm, 'set_bios_configuration', autospec=True) def test_apply_configuration_failed(self, set_bios_configuration_mock): settings = [{ "name": "launch_csm_enabled", "value": True }, { "name": "hyper_threading_enabled", "value": True }, { "name": "setting", "value": True }] irmc_bios.irmc.scci.SCCIError = Exception set_bios_configuration_mock.side_effect = Exception with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IRMCOperationError, task.driver.bios.apply_configuration, task, settings) def test_factory_reset(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.UnsupportedDriverExtension, task.driver.bios.factory_reset, task) @mock.patch.object(objects.BIOSSettingList, 'sync_node_setting') @mock.patch.object(objects.BIOSSettingList, 'create') @mock.patch.object(objects.BIOSSettingList, 'save') @mock.patch.object(objects.BIOSSettingList, 'delete') @mock.patch.object(irmc_bios.irmc.elcm, 'get_bios_settings', autospec=True) def test_cache_bios_settings(self, get_bios_settings_mock, delete_mock, save_mock, create_mock, sync_node_setting_mock): settings = [{ "name": "launch_csm_enabled", "value": True }, { "name": "hyper_threading_enabled", "value": True }, { "name": "cpu_vt_enabled", "value": True }] with task_manager.acquire(self.context, self.node.uuid) as task: irmc_info = irmc_common.parse_driver_info(task.node) get_bios_settings_mock.return_value = settings sync_node_setting_mock.return_value = 
\ ( [ { "name": "launch_csm_enabled", "value": True }], [ { "name": "hyper_threading_enabled", "value": True }], [ { "name": "cpu_vt_enabled", "value": True }], [] ) task.driver.bios.cache_bios_settings(task) get_bios_settings_mock.assert_called_once_with(irmc_info) sync_node_setting_mock.assert_called_once_with(task.context, task.node.id, settings) create_mock.assert_called_once_with( task.context, task.node.id, sync_node_setting_mock.return_value[0]) save_mock.assert_called_once_with( task.context, task.node.id, sync_node_setting_mock.return_value[1]) delete_names = \ [setting['name'] for setting in sync_node_setting_mock.return_value[2]] delete_mock.assert_called_once_with(task.context, task.node.id, delete_names) @mock.patch.object(irmc_bios.irmc.elcm, 'get_bios_settings', autospec=True) def test_cache_bios_settings_failed(self, get_bios_settings_mock): irmc_bios.irmc.scci.SCCIError = Exception get_bios_settings_mock.side_effect = Exception with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.IRMCOperationError, task.driver.bios.cache_bios_settings, task) ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/test_power.py0000664000175000017500000004543113652514273025352 0ustar zuulzuul00000000000000# Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Test class for iRMC Power Driver """ import mock from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.irmc import boot as irmc_boot from ironic.drivers.modules.irmc import common as irmc_common from ironic.drivers.modules.irmc import power as irmc_power from ironic.tests.unit.drivers.modules.irmc import test_common class IRMCPowerInternalMethodsTestCase(test_common.BaseIRMCTest): def test__is_expected_power_state(self): target_state = states.SOFT_POWER_OFF boot_status_value = irmc_power.BOOT_STATUS_VALUE['unknown'] self.assertTrue(irmc_power._is_expected_power_state( target_state, boot_status_value)) target_state = states.SOFT_POWER_OFF boot_status_value = irmc_power.BOOT_STATUS_VALUE['off'] self.assertTrue(irmc_power._is_expected_power_state( target_state, boot_status_value)) target_state = states.SOFT_REBOOT boot_status_value = irmc_power.BOOT_STATUS_VALUE['os-running'] self.assertTrue(irmc_power._is_expected_power_state( target_state, boot_status_value)) target_state = states.SOFT_POWER_OFF boot_status_value = irmc_power.BOOT_STATUS_VALUE['os-running'] self.assertFalse(irmc_power._is_expected_power_state( target_state, boot_status_value)) @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) @mock.patch('ironic.drivers.modules.irmc.power.snmp.SNMPClient', spec_set=True, autospec=True) def test__wait_power_state_soft_power_off(self, snmpclient_mock): target_state = states.SOFT_POWER_OFF self.config(snmp_polling_interval=1, group='irmc') self.config(soft_power_off_timeout=3, group='conductor') snmpclient_mock.return_value = mock.Mock( **{'get.side_effect': [8, 8, 2]}) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._wait_power_state(task, target_state) task.node.refresh() self.assertIsNone(task.node.last_error) self.assertEqual(states.POWER_OFF, task.node.power_state) self.assertEqual(states.NOSTATE, 
task.node.target_power_state) @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) @mock.patch('ironic.drivers.modules.irmc.power.snmp.SNMPClient', spec_set=True, autospec=True) def test__wait_power_state_soft_reboot(self, snmpclient_mock): target_state = states.SOFT_REBOOT self.config(snmp_polling_interval=1, group='irmc') self.config(soft_power_off_timeout=3, group='conductor') snmpclient_mock.return_value = mock.Mock( **{'get.side_effect': [10, 6, 8]}) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._wait_power_state(task, target_state) task.node.refresh() self.assertIsNone(task.node.last_error) self.assertEqual(states.POWER_ON, task.node.power_state) self.assertEqual(states.NOSTATE, task.node.target_power_state) @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', lambda *args, **kwargs: None) @mock.patch('ironic.drivers.modules.irmc.power.snmp.SNMPClient', spec_set=True, autospec=True) def test__wait_power_state_timeout(self, snmpclient_mock): target_state = states.SOFT_POWER_OFF self.config(snmp_polling_interval=1, group='irmc') self.config(soft_power_off_timeout=2, group='conductor') snmpclient_mock.return_value = mock.Mock( **{'get.side_effect': [8, 8, 8]}) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IRMCOperationError, irmc_power._wait_power_state, task, target_state, timeout=None) task.node.refresh() self.assertIsNotNone(task.node.last_error) self.assertEqual(states.ERROR, task.node.power_state) self.assertEqual(states.NOSTATE, task.node.target_power_state) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_power_on_ok( self, attach_boot_iso_if_needed_mock, get_irmc_client_mock, _wait_power_state_mock): irmc_client = 
get_irmc_client_mock.return_value target_state = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) attach_boot_iso_if_needed_mock.assert_called_once_with(task) irmc_client.assert_called_once_with(irmc_power.scci.POWER_ON) self.assertFalse(_wait_power_state_mock.called) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) def test__set_power_state_power_off_ok(self, get_irmc_client_mock, _wait_power_state_mock): irmc_client = get_irmc_client_mock.return_value target_state = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) irmc_client.assert_called_once_with(irmc_power.scci.POWER_OFF) self.assertFalse(_wait_power_state_mock.called) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_reboot_ok( self, attach_boot_iso_if_needed_mock, get_irmc_client_mock, _wait_power_state_mock): irmc_client = get_irmc_client_mock.return_value target_state = states.REBOOT with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) attach_boot_iso_if_needed_mock.assert_called_once_with(task) irmc_client.assert_called_once_with(irmc_power.scci.POWER_RESET) self.assertFalse(_wait_power_state_mock.called) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_soft_reboot_ok( self, attach_boot_iso_if_needed_mock, get_irmc_client_mock, 
_wait_power_state_mock): irmc_client = get_irmc_client_mock.return_value target_state = states.SOFT_REBOOT with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) attach_boot_iso_if_needed_mock.assert_called_once_with(task) irmc_client.assert_called_once_with(irmc_power.scci.POWER_SOFT_CYCLE) _wait_power_state_mock.assert_has_calls( [mock.call(task, states.SOFT_POWER_OFF, timeout=None), mock.call(task, states.SOFT_REBOOT, timeout=None)]) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_soft_power_off_ok(self, attach_boot_iso_if_needed_mock, get_irmc_client_mock, _wait_power_state_mock): irmc_client = get_irmc_client_mock.return_value target_state = states.SOFT_POWER_OFF with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: irmc_power._set_power_state(task, target_state) self.assertFalse(attach_boot_iso_if_needed_mock.called) irmc_client.assert_called_once_with(irmc_power.scci.POWER_SOFT_OFF) _wait_power_state_mock.assert_called_once_with(task, target_state, timeout=None) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_invalid_target_state( self, attach_boot_iso_if_needed_mock, _wait_power_state_mock): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, irmc_power._set_power_state, task, states.ERROR) self.assertFalse(attach_boot_iso_if_needed_mock.called) self.assertFalse(_wait_power_state_mock.called) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) 
@mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_scci_exception(self, attach_boot_iso_if_needed_mock, get_irmc_client_mock, _wait_power_state_mock): irmc_client = get_irmc_client_mock.return_value irmc_client.side_effect = Exception() irmc_power.scci.SCCIClientError = Exception with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IRMCOperationError, irmc_power._set_power_state, task, states.POWER_ON) attach_boot_iso_if_needed_mock.assert_called_once_with( task) self.assertFalse(_wait_power_state_mock.called) @mock.patch.object(irmc_power, '_wait_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_common, 'get_irmc_client', spec_set=True, autospec=True) @mock.patch.object(irmc_boot, 'attach_boot_iso_if_needed') def test__set_power_state_snmp_exception(self, attach_boot_iso_if_needed_mock, get_irmc_client_mock, _wait_power_state_mock): target_state = states.SOFT_REBOOT _wait_power_state_mock.side_effect = exception.SNMPFailure( "fake exception") with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.IRMCOperationError, irmc_power._set_power_state, task, target_state) attach_boot_iso_if_needed_mock.assert_called_once_with( task) get_irmc_client_mock.return_value.assert_called_once_with( irmc_power.STATES_MAP[target_state]) _wait_power_state_mock.assert_called_once_with( task, states.SOFT_POWER_OFF, timeout=None) class IRMCPowerTestCase(test_common.BaseIRMCTest): def test_get_properties(self): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: properties = task.driver.get_properties() for prop in irmc_common.COMMON_PROPERTIES: self.assertIn(prop, properties) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate(self, mock_drvinfo): with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.driver.power.validate(task) 
mock_drvinfo.assert_called_once_with(task.node) @mock.patch.object(irmc_common, 'parse_driver_info', spec_set=True, autospec=True) def test_validate_fail(self, mock_drvinfo): side_effect = exception.InvalidParameterValue("Invalid Input") mock_drvinfo.side_effect = side_effect with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.validate, task) @mock.patch('ironic.drivers.modules.irmc.power.ipmitool.IPMIPower', spec_set=True, autospec=True) def test_get_power_state(self, mock_IPMIPower): ipmi_power = mock_IPMIPower.return_value ipmi_power.get_power_state.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.assertEqual(states.POWER_ON, task.driver.power.get_power_state(task)) ipmi_power.get_power_state.assert_called_once_with(task) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) def test_set_power_state(self, mock_set_power): mock_set_power.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_ON) mock_set_power.assert_called_once_with(task, states.POWER_ON, timeout=None) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) def test_set_power_state_timeout(self, mock_set_power): mock_set_power.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: task.driver.power.set_power_state(task, states.POWER_ON, timeout=2) mock_set_power.assert_called_once_with(task, states.POWER_ON, timeout=2) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_reboot_reboot(self, mock_get_power, mock_set_power): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: 
mock_get_power.return_value = states.POWER_ON task.driver.power.reboot(task) mock_get_power.assert_called_once_with( task.driver.power, task) mock_set_power.assert_called_once_with(task, states.REBOOT, timeout=None) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_reboot_reboot_timeout(self, mock_get_power, mock_set_power): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_power.return_value = states.POWER_ON task.driver.power.reboot(task, timeout=2) mock_get_power.assert_called_once_with( task.driver.power, task) mock_set_power.assert_called_once_with(task, states.REBOOT, timeout=2) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_reboot_power_on(self, mock_get_power, mock_set_power): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_power.return_value = states.POWER_OFF task.driver.power.reboot(task) mock_get_power.assert_called_once_with( task.driver.power, task) mock_set_power.assert_called_once_with(task, states.POWER_ON, timeout=None) @mock.patch.object(irmc_power, '_set_power_state', spec_set=True, autospec=True) @mock.patch.object(irmc_power.IRMCPower, 'get_power_state', spec_set=True, autospec=True) def test_reboot_power_on_timeout(self, mock_get_power, mock_set_power): with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: mock_get_power.return_value = states.POWER_OFF task.driver.power.reboot(task, timeout=2) mock_get_power.assert_called_once_with( task.driver.power, task) mock_set_power.assert_called_once_with(task, states.POWER_ON, timeout=2) ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/fake_sensors_data_ok.xml0000664000175000017500000001022113652514273027460 0ustar zuulzuul00000000000000 
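The `_wait_power_state` tests above drive an SNMP getter with `get.side_effect` lists (for example `[8, 8, 2]`) and check that the driver keeps polling until the reading matches the target power state, or times out. A minimal sketch of that polling pattern, with illustrative names modelled on `irmc_power.BOOT_STATUS_VALUE` (not the driver's actual API), might look like:

```python
# Illustrative status mapping, assumed from the tests above:
# SNMP integer reading -> symbolic boot status.
BOOT_STATUS_VALUE = {'unknown': 0, 'off': 2, 'os-running': 8}


def wait_power_state(get_snmp_value, expected_values, max_polls,
                     sleep=lambda seconds: None, interval=1):
    """Poll an SNMP getter until it returns one of expected_values.

    Raises TimeoutError after max_polls readings, mirroring how the
    timeout test feeds three identical readings before failing.
    """
    for _ in range(max_polls):
        value = get_snmp_value()
        if value in expected_values:
            return value
        sleep(interval)
    raise TimeoutError('power state did not reach the expected value')
```

With readings `[8, 8, 2]` and a soft-power-off target that accepts `unknown` (0) or `off` (2), the third poll succeeds, matching the behaviour asserted in `test__wait_power_state_soft_power_off`.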
20 0 0 1 55 0 Ambient 1 Temperature 1 0 degree C unspecified 168 42 4 1 148 37 24 6 20 0 0 2 7 0 Systemboard 1 1 Temperature 1 0 degree C unspecified 80 80 75 75 20 0 0 35 29 0 FAN1 SYS 4 Fan 18 0 RPM unspecified 10 600 20 0 0 36 29 1 FAN2 SYS 4 Fan 18 0 unspecified 10 ironic-15.0.0/ironic/tests/unit/drivers/modules/irmc/__init__.py0000664000175000017500000000000013652514273024675 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/test_ipmitool.py0000664000175000017500000041307513652514273025123 0ustar zuulzuul00000000000000# coding=utf-8 # Copyright 2012 Hewlett-Packard Development Company, L.P. # Copyright (c) 2012 NTT DOCOMO, INC. # Copyright 2014 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# """Test class for IPMITool driver module.""" import contextlib import os import random import stat import subprocess import tempfile import time import types import fixtures from ironic_lib import utils as ironic_utils import mock from oslo_concurrency import processutils from oslo_utils import uuidutils from ironic.common import boot_devices from ironic.common import exception from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager import ironic.conf from ironic.drivers.modules import boot_mode_utils from ironic.drivers.modules import console_utils from ironic.drivers.modules import ipmitool as ipmi from ironic.drivers import utils as driver_utils from ironic.tests import base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = ironic.conf.CONF INFO_DICT = db_utils.get_test_ipmi_info() # BRIDGE_INFO_DICT will have all the bridging parameters appended BRIDGE_INFO_DICT = INFO_DICT.copy() BRIDGE_INFO_DICT.update(db_utils.get_test_ipmi_bridging_parameters()) class IPMIToolCheckInitTestCase(base.TestCase): @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.IPMIPower() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls_raises_1(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None mock_check_dir.side_effect = exception.PathNotFound(dir="foo_dir") self.assertRaises(exception.PathNotFound, ipmi.IPMIPower) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 
'check_dir', autospec=True) def test_power_init_calls_raises_2(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None mock_check_dir.side_effect = exception.DirectoryNotWritable( dir="foo_dir") self.assertRaises(exception.DirectoryNotWritable, ipmi.IPMIPower) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls_raises_3(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None mock_check_dir.side_effect = exception.InsufficientDiskSpace( path="foo_dir", required=1, actual=0) self.assertRaises(exception.InsufficientDiskSpace, ipmi.IPMIPower) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_power_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = True ipmi.IPMIPower() mock_support.assert_called_with(mock.ANY) self.assertFalse(mock_check_dir.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_management_init_calls(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.IPMIManagement() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_management_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = False ipmi.IPMIManagement() mock_support.assert_called_with(mock.ANY) self.assertFalse(mock_check_dir.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_vendor_passthru_init_calls(self, mock_check_dir, mock_support): 
mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.VendorPassthru() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_vendor_passthru_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = True ipmi.VendorPassthru() mock_support.assert_called_with(mock.ANY) self.assertFalse(mock_check_dir.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_console_init_calls(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.IPMIShellinaboxConsole() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_console_init_calls_already_checked(self, mock_check_dir, mock_support): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = True ipmi.IPMIShellinaboxConsole() mock_support.assert_called_with(mock.ANY) self.assertFalse(mock_check_dir.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_console_init_calls_for_socat(self, mock_check_dir, mock_support): with mock.patch.object(ipmi, 'TMP_DIR_CHECKED'): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = None ipmi.IPMISocatConsole() mock_support.assert_called_with(mock.ANY) mock_check_dir.assert_called_once_with() @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'check_dir', autospec=True) def test_console_init_calls_for_socat_already_checked(self, mock_check_dir, mock_support): with mock.patch.object(ipmi, 'TMP_DIR_CHECKED'): mock_support.return_value = True ipmi.TMP_DIR_CHECKED = True 
ipmi.IPMISocatConsole() mock_support.assert_called_with(mock.ANY) self.assertFalse(mock_check_dir.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(subprocess, 'check_call', autospec=True) class IPMIToolCheckOptionSupportedTestCase(base.TestCase): def test_check_timing_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [mock.call('timing'), mock.call('timing', True)] ipmi._check_option_support(['timing']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_timing_fail(self, mock_chkcall, mock_support): mock_chkcall.side_effect = subprocess.CalledProcessError(1, 'ipmitool') mock_support.return_value = None expected = [mock.call('timing'), mock.call('timing', False)] ipmi._check_option_support(['timing']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_timing_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = OSError() mock_support.return_value = None expected = [mock.call('timing')] self.assertRaises(OSError, ipmi._check_option_support, ['timing']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_single_bridge_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [mock.call('single_bridge'), mock.call('single_bridge', True)] ipmi._check_option_support(['single_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_single_bridge_fail(self, mock_chkcall, mock_support): mock_chkcall.side_effect = subprocess.CalledProcessError(1, 'ipmitool') mock_support.return_value = None expected = [mock.call('single_bridge'), mock.call('single_bridge', False)] ipmi._check_option_support(['single_bridge']) self.assertTrue(mock_chkcall.called) 
self.assertEqual(expected, mock_support.call_args_list) def test_check_single_bridge_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = OSError() mock_support.return_value = None expected = [mock.call('single_bridge')] self.assertRaises(OSError, ipmi._check_option_support, ['single_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_dual_bridge_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [mock.call('dual_bridge'), mock.call('dual_bridge', True)] ipmi._check_option_support(['dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_dual_bridge_fail(self, mock_chkcall, mock_support): mock_chkcall.side_effect = subprocess.CalledProcessError(1, 'ipmitool') mock_support.return_value = None expected = [mock.call('dual_bridge'), mock.call('dual_bridge', False)] ipmi._check_option_support(['dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_dual_bridge_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = OSError() mock_support.return_value = None expected = [mock.call('dual_bridge')] self.assertRaises(OSError, ipmi._check_option_support, ['dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_all_options_pass(self, mock_chkcall, mock_support): mock_chkcall.return_value = (None, None) mock_support.return_value = None expected = [ mock.call('timing'), mock.call('timing', True), mock.call('single_bridge'), mock.call('single_bridge', True), mock.call('dual_bridge'), mock.call('dual_bridge', True)] ipmi._check_option_support(['timing', 'single_bridge', 'dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def 
test_check_all_options_fail(self, mock_chkcall, mock_support): options = ['timing', 'single_bridge', 'dual_bridge'] mock_chkcall.side_effect = [subprocess.CalledProcessError( 1, 'ipmitool')] * len(options) mock_support.return_value = None expected = [ mock.call('timing'), mock.call('timing', False), mock.call('single_bridge'), mock.call('single_bridge', False), mock.call('dual_bridge'), mock.call('dual_bridge', False)] ipmi._check_option_support(options) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) def test_check_all_options_no_ipmitool(self, mock_chkcall, mock_support): mock_chkcall.side_effect = OSError() mock_support.return_value = None # exception is raised once ipmitool was not found for an command expected = [mock.call('timing')] self.assertRaises(OSError, ipmi._check_option_support, ['timing', 'single_bridge', 'dual_bridge']) self.assertTrue(mock_chkcall.called) self.assertEqual(expected, mock_support.call_args_list) awesome_password_filename = 'awesome_password_filename' @contextlib.contextmanager def _make_password_file_stub(password): yield awesome_password_filename class IPMIToolPrivateMethodTestCaseMeta(type): """Generate and inject parametrized test cases""" ipmitool_errors = [ 'insufficient resources for session', 'Node busy', 'Timeout', 'Out of space', 'BMC initialization in progress' ] def __new__(mcs, name, bases, attrs): def gen_test_methods(message): @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def exec_ipmitool_exception_retry( self, mock_exec, mock_support): ipmi.LAST_CMD_TIME = {} mock_support.return_value = False mock_exec.side_effect = [ processutils.ProcessExecutionError( stderr=message ), (None, None) ] # Directly set the configuration values such that # the logic will cause _exec_ipmitool to retry twice. 
self.config(min_command_interval=1, group='ipmi') self.config(command_retry_timeout=2, group='ipmi') ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') self.assertEqual(2, mock_exec.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def exec_ipmitool_exception_retries_exceeded( self, mock_exec, mock_support): ipmi.LAST_CMD_TIME = {} mock_support.return_value = False mock_exec.side_effect = [processutils.ProcessExecutionError( stderr=message )] # Directly set the configuration values such that # the logic will cause _exec_ipmitool to timeout. self.config(min_command_interval=1, group='ipmi') self.config(command_retry_timeout=1, group='ipmi') self.assertRaises(processutils.ProcessExecutionError, ipmi._exec_ipmitool, self.info, 'A B C') mock_support.assert_called_once_with('timing') self.assertEqual(1, mock_exec.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def exec_ipmitool_exception_non_retryable_failure( self, mock_exec, mock_support): ipmi.LAST_CMD_TIME = {} mock_support.return_value = False additional_msg = "RAKP 2 HMAC is invalid" # Return a retryable error, then an error that cannot # be retried thus resulting in a single retry # attempt by _exec_ipmitool. mock_exec.side_effect = [ processutils.ProcessExecutionError( stderr=message ), processutils.ProcessExecutionError( stderr="Some more info: %s" % additional_msg ), processutils.ProcessExecutionError( stderr="Unknown" ), ] # Directly set the configuration values such that # the logic will cause _exec_ipmitool to retry up # to 3 times. 
self.config(min_command_interval=1, group='ipmi') self.config(command_retry_timeout=3, group='ipmi') self.config(additional_retryable_ipmi_errors=[additional_msg], group='ipmi') self.assertRaises(processutils.ProcessExecutionError, ipmi._exec_ipmitool, self.info, 'A B C') mock_support.assert_called_once_with('timing') self.assertEqual(3, mock_exec.call_count) return (exec_ipmitool_exception_retry, exec_ipmitool_exception_retries_exceeded, exec_ipmitool_exception_non_retryable_failure) # NOTE(etingof): this loop will inject some methods into the # class being built to be then picked up by unittest to test # ironic's handling of specific `ipmitool` errors for ipmi_message in mcs.ipmitool_errors: for fun in gen_test_methods(ipmi_message): suffix = ipmi_message.lower().replace(' ', '_') test_name = "test_%s_%s" % (fun.__name__, suffix) attrs[test_name] = fun return type.__new__(mcs, name, bases, attrs) class Base(db_base.DbTestCase): def setUp(self): super(Base, self).setUp() self.config(enabled_power_interfaces=['fake', 'ipmitool'], enabled_management_interfaces=['fake', 'ipmitool'], enabled_vendor_interfaces=['fake', 'ipmitool', 'no-vendor'], enabled_console_interfaces=['fake', 'ipmitool-socat', 'ipmitool-shellinabox', 'no-console']) self.config(debug=True, group="ipmi") self.node = obj_utils.create_test_node( self.context, console_interface='ipmitool-socat', management_interface='ipmitool', power_interface='ipmitool', vendor_interface='ipmitool', driver_info=INFO_DICT) self.info = ipmi._parse_driver_info(self.node) self.console = ipmi.IPMISocatConsole() self.management = ipmi.IPMIManagement() self.power = ipmi.IPMIPower() self.vendor = ipmi.VendorPassthru() class IPMIToolPrivateMethodTestCase( Base, metaclass=IPMIToolPrivateMethodTestCaseMeta): def setUp(self): super(IPMIToolPrivateMethodTestCase, self).setUp() # power actions use oslo_service.BackoffLoopingCall, # mock random.SystemRandom gauss distribution self._mock_system_random_distribution() mock_sleep_fixture 
= self.useFixture( fixtures.MockPatchObject(time, 'sleep', autospec=True)) self.mock_sleep = mock_sleep_fixture.mock # NOTE(etingof): besides the conventional unittest methods that follow, # the metaclass will inject some more `test_` methods aimed at testing # the handling of specific errors potentially returned by the `ipmitool` def _mock_system_random_distribution(self): # random.SystemRandom with gauss distribution is used by oslo_service's # BackoffLoopingCall, it multiplies default interval (equals to 1) by # 2 * return_value, so if you want BackoffLoopingCall to "sleep" for # 1 second, return_value should be 0.5. m = mock.patch.object(random.SystemRandom, 'gauss', return_value=0.5) m.start() self.addCleanup(m.stop) def _test__make_password_file(self, input_password, exception_to_raise=None): pw_file = None try: with ipmi._make_password_file(input_password) as pw_file: if exception_to_raise is not None: raise exception_to_raise self.assertTrue(os.path.isfile(pw_file)) self.assertEqual(0o600, os.stat(pw_file)[stat.ST_MODE] & 0o777) with open(pw_file, "r") as f: password = f.read() self.assertEqual(str(input_password), password) finally: if pw_file is not None: self.assertFalse(os.path.isfile(pw_file)) def test__make_password_file_str_password(self): self._test__make_password_file(self.info['password']) def test__make_password_file_with_numeric_password(self): self._test__make_password_file(12345) def test__make_password_file_caller_exception(self): # Test caller raising exception result = self.assertRaises( ValueError, self._test__make_password_file, 12345, ValueError('we should fail')) self.assertEqual('we should fail', str(result)) @mock.patch.object(tempfile, 'NamedTemporaryFile', new=mock.MagicMock(side_effect=OSError('Test Error'))) def test__make_password_file_tempfile_known_exception(self): # Test OSError exception in _make_password_file for # tempfile.NamedTemporaryFile self.assertRaises( exception.PasswordFileFailedToCreate, 
self._test__make_password_file, 12345) @mock.patch.object( tempfile, 'NamedTemporaryFile', new=mock.MagicMock(side_effect=OverflowError('Test Error'))) def test__make_password_file_tempfile_unknown_exception(self): # Test exception in _make_password_file for tempfile.NamedTemporaryFile result = self.assertRaises( OverflowError, self._test__make_password_file, 12345) self.assertEqual('Test Error', str(result)) def test__make_password_file_write_exception(self): # Test exception in _make_password_file for write() mock_namedtemp = mock.mock_open(mock.MagicMock(name='JLV')) with mock.patch('tempfile.NamedTemporaryFile', mock_namedtemp): mock_filehandle = mock_namedtemp.return_value mock_write = mock_filehandle.write mock_write.side_effect = OSError('Test 2 Error') self.assertRaises( exception.PasswordFileFailedToCreate, self._test__make_password_file, 12345) def test__parse_driver_info(self): # make sure we get back the expected things _OPTIONS = ['address', 'username', 'password', 'uuid'] for option in _OPTIONS: self.assertIsNotNone(self.info[option]) info = dict(INFO_DICT) # test the default value for 'priv_level' node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual('ADMINISTRATOR', ret['priv_level']) # ipmi_username / ipmi_password are not mandatory del info['ipmi_username'] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) del info['ipmi_password'] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) # make sure error is raised when ipmi_address is missing info = dict(INFO_DICT) del info['ipmi_address'] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ipmi._parse_driver_info, node) # test the invalid priv_level value info = dict(INFO_DICT) info['ipmi_priv_level'] = 'ABCD' node = obj_utils.get_test_node(self.context, driver_info=info) 
self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_invalid_bridging_type( self, mock_support): info = BRIDGE_INFO_DICT.copy() # make sure error is raised when ipmi_bridging has unexpected value info['ipmi_bridging'] = 'junk' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) self.assertFalse(mock_support.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_no_bridging( self, mock_support): _OPTIONS = ['address', 'username', 'password', 'uuid'] _BRIDGING_OPTIONS = ['local_address', 'transit_channel', 'transit_address', 'target_channel', 'target_address'] info = BRIDGE_INFO_DICT.copy() info['ipmi_bridging'] = 'no' node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) # ensure that _is_option_supported was not called self.assertFalse(mock_support.called) # check if we got all the required options for option in _OPTIONS: self.assertIsNotNone(ret[option]) # test the default value for 'priv_level' self.assertEqual('ADMINISTRATOR', ret['priv_level']) # check if bridging parameters were set to None for option in _BRIDGING_OPTIONS: self.assertIsNone(ret[option]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_dual_bridging_pass( self, mock_support): _OPTIONS = ['address', 'username', 'password', 'uuid', 'local_address', 'transit_channel', 'transit_address', 'target_channel', 'target_address'] node = obj_utils.get_test_node(self.context, driver_info=BRIDGE_INFO_DICT) expected = [mock.call('dual_bridge')] # test double bridging and make sure we get back expected result mock_support.return_value = True ret = ipmi._parse_driver_info(node) self.assertEqual(expected, mock_support.call_args_list) for option in _OPTIONS: 
self.assertIsNotNone(ret[option]) # test the default value for 'priv_level' self.assertEqual('ADMINISTRATOR', ret['priv_level']) info = BRIDGE_INFO_DICT.copy() # ipmi_local_address / ipmi_username / ipmi_password are not mandatory for optional_arg in ['ipmi_local_address', 'ipmi_username', 'ipmi_password']: del info[optional_arg] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) self.assertEqual(mock.call('dual_bridge'), mock_support.call_args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_dual_bridging_not_supported( self, mock_support): node = obj_utils.get_test_node(self.context, driver_info=BRIDGE_INFO_DICT) # if dual bridge is not supported then check if error is raised mock_support.return_value = False self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) mock_support.assert_called_once_with('dual_bridge') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_dual_bridging_missing_parameters( self, mock_support): info = BRIDGE_INFO_DICT.copy() mock_support.return_value = True # make sure error is raised when dual bridging is selected and the # required parameters for dual bridging are not provided for param in ['ipmi_transit_channel', 'ipmi_target_address', 'ipmi_transit_address', 'ipmi_target_channel']: del info[param] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ipmi._parse_driver_info, node) self.assertEqual(mock.call('dual_bridge'), mock_support.call_args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_single_bridging_pass( self, mock_support): _OPTIONS = ['address', 'username', 'password', 'uuid', 'local_address', 'target_channel', 'target_address'] info = BRIDGE_INFO_DICT.copy() info['ipmi_bridging'] = 'single' node = obj_utils.get_test_node(self.context, driver_info=info) 
expected = [mock.call('single_bridge')] # test single bridging and make sure we get back expected things mock_support.return_value = True ret = ipmi._parse_driver_info(node) self.assertEqual(expected, mock_support.call_args_list) for option in _OPTIONS: self.assertIsNotNone(ret[option]) # test the default value for 'priv_level' self.assertEqual('ADMINISTRATOR', ret['priv_level']) # check if dual bridge params are set to None self.assertIsNone(ret['transit_channel']) self.assertIsNone(ret['transit_address']) # ipmi_local_address / ipmi_username / ipmi_password are not mandatory for optional_arg in ['ipmi_local_address', 'ipmi_username', 'ipmi_password']: del info[optional_arg] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) self.assertEqual(mock.call('single_bridge'), mock_support.call_args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_single_bridging_not_supported( self, mock_support): info = BRIDGE_INFO_DICT.copy() info['ipmi_bridging'] = 'single' node = obj_utils.get_test_node(self.context, driver_info=info) # if single bridge is not supported then check if error is raised mock_support.return_value = False self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) mock_support.assert_called_once_with('single_bridge') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) def test__parse_driver_info_with_single_bridging_missing_parameters( self, mock_support): info = dict(BRIDGE_INFO_DICT) info['ipmi_bridging'] = 'single' mock_support.return_value = True # make sure error is raised when single bridging is selected and the # required parameters for single bridging are not provided for param in ['ipmi_target_channel', 'ipmi_target_address']: del info[param] node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.MissingParameterValue, ipmi._parse_driver_info, node) 
self.assertEqual(mock.call('single_bridge'), mock_support.call_args) def test__parse_driver_info_numeric_password(self): # ipmi_password must not be converted to int / float # even if it includes just numbers. info = dict(INFO_DICT) info['ipmi_password'] = 12345678 node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual(u'12345678', ret['password']) self.assertIsInstance(ret['password'], str) def test__parse_driver_info_ipmi_prot_version_1_5(self): info = dict(INFO_DICT) info['ipmi_protocol_version'] = '1.5' node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual('1.5', ret['protocol_version']) def test__parse_driver_info_invalid_ipmi_prot_version(self): info = dict(INFO_DICT) info['ipmi_protocol_version'] = '9000' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) def test__parse_driver_info_invalid_ipmi_port(self): info = dict(INFO_DICT) info['ipmi_port'] = '700000' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) def test__parse_driver_info_ipmi_hex_kg_key(self): info = dict(INFO_DICT) info['ipmi_hex_kg_key'] = 'A115023E08E23F7F8DC4BB443A1A75F160763A43' node = obj_utils.get_test_node(self.context, driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual(info['ipmi_hex_kg_key'], ret['hex_kg_key']) def test__parse_driver_info_ipmi_hex_kg_key_odd_chars(self): info = dict(INFO_DICT) info['ipmi_hex_kg_key'] = 'A115023E08E23F7F8DC4BB443A1A75F160763A4' node = obj_utils.get_test_node(self.context, driver_info=info) self.assertRaises(exception.InvalidParameterValue, ipmi._parse_driver_info, node) def test__parse_driver_info_ipmi_port_valid(self): info = dict(INFO_DICT) info['ipmi_port'] = '623' node = obj_utils.get_test_node(self.context, 
driver_info=info) ret = ipmi._parse_driver_info(node) self.assertEqual(623, ret['dest_port']) @mock.patch.object(ipmi.LOG, 'warning', spec_set=True, autospec=True) def test__parse_driver_info_undefined_credentials(self, mock_log): info = dict(INFO_DICT) del info['ipmi_username'] del info['ipmi_password'] node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) calls = [ mock.call(u'ipmi_username is not defined or empty for node ' u'%s: NULL user will be utilized.', self.node.uuid), mock.call(u'ipmi_password is not defined or empty for node ' u'%s: NULL password will be utilized.', self.node.uuid), ] mock_log.assert_has_calls(calls) @mock.patch.object(ipmi.LOG, 'warning', spec_set=True, autospec=True) def test__parse_driver_info_have_credentials( self, mock_log): """Ensure no warnings generated if have credentials""" info = dict(INFO_DICT) node = obj_utils.get_test_node(self.context, driver_info=info) ipmi._parse_driver_info(node) self.assertFalse(mock_log.called) def test__parse_driver_info_terminal_port_specified(self): info = dict(INFO_DICT) info['ipmi_terminal_port'] = 10000 node = obj_utils.get_test_node(self.context, driver_info=info) driver_info = ipmi._parse_driver_info(node) self.assertEqual(driver_info['port'], 10000) def test__parse_driver_info_terminal_port_allocated(self): info = dict(INFO_DICT) internal_info = {'allocated_ipmi_terminal_port': 10001} node = obj_utils.get_test_node(self.context, driver_info=info, driver_internal_info=internal_info) driver_info = ipmi._parse_driver_info(node) self.assertEqual(driver_info['port'], 10001) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_first_call_to_address(self, mock_exec, mock_support): ipmi.LAST_CMD_TIME = {} args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], 
'-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) self.assertFalse(self.mock_sleep.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_second_call_to_address_sleep( self, mock_exec, mock_support): ipmi.LAST_CMD_TIME = {} args = [[ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ], [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'D', 'E', 'F', ]] expected = [mock.call('timing'), mock.call('timing')] mock_support.return_value = False mock_exec.side_effect = [(None, None), (None, None)] ipmi._exec_ipmitool(self.info, 'A B C') mock_exec.assert_called_with(*args[0]) ipmi._exec_ipmitool(self.info, 'D E F') self.assertTrue(self.mock_sleep.called) self.assertEqual(expected, mock_support.call_args_list) mock_exec.assert_called_with(*args[1]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_second_call_to_address_no_sleep( self, mock_exec, mock_support): ipmi.LAST_CMD_TIME = {} args = [[ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ], [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', 
awesome_password_filename, 'D', 'E', 'F', ]] expected = [mock.call('timing'), mock.call('timing')] mock_support.return_value = False mock_exec.side_effect = [(None, None), (None, None)] ipmi._exec_ipmitool(self.info, 'A B C') mock_exec.assert_called_with(*args[0]) # act like enough time has passed ipmi.LAST_CMD_TIME[self.info['address']] = ( time.time() - CONF.ipmi.min_command_interval) ipmi._exec_ipmitool(self.info, 'D E F') self.assertFalse(self.mock_sleep.called) self.assertEqual(expected, mock_support.call_args_list) mock_exec.assert_called_with(*args[1]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_two_calls_to_diff_address( self, mock_exec, mock_support): ipmi.LAST_CMD_TIME = {} args = [[ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ], [ 'ipmitool', '-I', 'lanplus', '-H', '127.127.127.127', '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'D', 'E', 'F', ]] expected = [mock.call('timing'), mock.call('timing')] mock_support.return_value = False mock_exec.side_effect = [(None, None), (None, None)] ipmi._exec_ipmitool(self.info, 'A B C') mock_exec.assert_called_with(*args[0]) self.info['address'] = '127.127.127.127' ipmi._exec_ipmitool(self.info, 'D E F') self.assertFalse(self.mock_sleep.called) self.assertEqual(expected, mock_support.call_args_list) mock_exec.assert_called_with(*args[1]) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_without_timing( self, mock_exec, mock_support): args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', 
self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_timing( self, mock_exec, mock_support): args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-R', '12', '-N', '5', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = True mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) def test__exec_ipmitool_wait(self): mock_popen = mock.MagicMock() mock_popen.poll.side_effect = [1, 1, 1, 1, 1] ipmi._exec_ipmitool_wait(1, {'uuid': ''}, mock_popen) self.assertTrue(mock_popen.terminate.called) self.assertTrue(mock_popen.kill.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_without_username( self, mock_exec, mock_support): # An undefined username is treated the same as an empty username and # will cause no user (-U) to be specified. 
self.info['username'] = None args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_empty_username( self, mock_exec, mock_support): # An empty username is treated the same as an undefined username and # will cause no user (-U) to be specified. self.info['username'] = "" args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object( ipmi, '_make_password_file', wraps=_make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_without_password(self, mock_exec, _make_password_file_mock, mock_support): # An undefined password is treated the same as an empty password and # will cause a NULL (\0) password to be used""" self.info['password'] = None args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) 
_make_password_file_mock.assert_called_once_with('\0') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object( ipmi, '_make_password_file', wraps=_make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_empty_password(self, mock_exec, _make_password_file_mock, mock_support): # An empty password is treated the same as an undefined password and # will cause a NULL (\0) password to be used""" self.info['password'] = "" args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) _make_password_file_mock.assert_called_once_with('\0') @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_dual_bridging(self, mock_exec, mock_support): node = obj_utils.get_test_node(self.context, driver_info=BRIDGE_INFO_DICT) # when support for dual bridge command is called returns True mock_support.return_value = True info = ipmi._parse_driver_info(node) args = [ 'ipmitool', '-I', 'lanplus', '-H', info['address'], '-L', info['priv_level'], '-U', info['username'], '-m', info['local_address'], '-B', info['transit_channel'], '-T', info['transit_address'], '-b', info['target_channel'], '-t', info['target_address'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] expected = [mock.call('dual_bridge'), mock.call('timing')] # When support for timing command is called returns False mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(info, 'A B C') self.assertEqual(expected, 
mock_support.call_args_list) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_single_bridging(self, mock_exec, mock_support): single_bridge_info = dict(BRIDGE_INFO_DICT) single_bridge_info['ipmi_bridging'] = 'single' node = obj_utils.get_test_node(self.context, driver_info=single_bridge_info) # when support for single bridge command is called returns True mock_support.return_value = True info = ipmi._parse_driver_info(node) info['transit_channel'] = info['transit_address'] = None args = [ 'ipmitool', '-I', 'lanplus', '-H', info['address'], '-L', info['priv_level'], '-U', info['username'], '-m', info['local_address'], '-b', info['target_channel'], '-t', info['target_address'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] expected = [mock.call('single_bridge'), mock.call('timing')] # When support for timing command is called returns False mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(info, 'A B C') self.assertEqual(expected, mock_support.call_args_list) mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_exception(self, mock_exec, mock_support): args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.side_effect = processutils.ProcessExecutionError("x") self.assertRaises(processutils.ProcessExecutionError, ipmi._exec_ipmitool, self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) 
self.assertEqual(1, mock_exec.call_count) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_IPMI_version_1_5( self, mock_exec, mock_support): self.info['protocol_version'] = '1.5' # Assert it uses "-I lan" (1.5) instead of "-I lanplus" (2.0) args = [ 'ipmitool', '-I', 'lan', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_port(self, mock_exec, mock_support): self.info['dest_port'] = '1623' ipmi.LAST_CMD_TIME = {} args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-p', '1623', '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] mock_support.return_value = False mock_exec.return_value = (None, None) ipmi._exec_ipmitool(self.info, 'A B C') mock_support.assert_called_once_with('timing') mock_exec.assert_called_once_with(*args) self.assertFalse(self.mock_sleep.called) @mock.patch.object(ipmi, '_is_option_supported', autospec=True) @mock.patch.object(ipmi, '_make_password_file', _make_password_file_stub) @mock.patch.object(utils, 'execute', autospec=True) def test__exec_ipmitool_with_check_exit_code(self, mock_exec, mock_support): args = [ 'ipmitool', '-I', 'lanplus', '-H', self.info['address'], '-L', self.info['priv_level'], '-U', self.info['username'], '-v', '-f', awesome_password_filename, 'A', 'B', 'C', ] 
        mock_support.return_value = False
        mock_exec.return_value = (None, None)

        ipmi._exec_ipmitool(self.info, 'A B C', check_exit_code=[0, 1])

        mock_support.assert_called_once_with('timing')
        mock_exec.assert_called_once_with(*args, check_exit_code=[0, 1])

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_on(self, mock_exec):
        mock_exec.return_value = ["Chassis Power is on\n", None]

        state = ipmi._power_status(self.info)

        mock_exec.assert_called_once_with(self.info, "power status",
                                          kill_on_timeout=True)
        self.assertEqual(states.POWER_ON, state)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_off(self, mock_exec):
        mock_exec.return_value = ["Chassis Power is off\n", None]

        state = ipmi._power_status(self.info)

        mock_exec.assert_called_once_with(self.info, "power status",
                                          kill_on_timeout=True)
        self.assertEqual(states.POWER_OFF, state)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_error(self, mock_exec):
        mock_exec.return_value = ["Chassis Power is badstate\n", None]

        state = ipmi._power_status(self.info)

        mock_exec.assert_called_once_with(self.info, "power status",
                                          kill_on_timeout=True)
        self.assertEqual(states.ERROR, state)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__power_status_exception(self, mock_exec):
        mock_exec.side_effect = processutils.ProcessExecutionError("error")
        self.assertRaises(exception.IPMIFailure,
                          ipmi._power_status,
                          self.info)
        mock_exec.assert_called_once_with(self.info, "power status",
                                          kill_on_timeout=True)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', autospec=True)
    def test__power_on_max_retries(self, sleep_mock, mock_exec):
        self.config(command_retry_timeout=2, group='ipmi')

        def side_effect(driver_info, command, **kwargs):
            resp_dict = {"power status": ["Chassis Power is off\n", None],
                         "power on": [None, None]}
            return resp_dict.get(command, ["Bad\n", None])
        mock_exec.side_effect = side_effect

        expected = [mock.call(self.info, "power on"),
                    mock.call(self.info, "power status",
                              kill_on_timeout=True),
                    mock.call(self.info, "power status",
                              kill_on_timeout=True)]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              ipmi._power_on, task, self.info, timeout=2)

        self.assertEqual(expected, mock_exec.call_args_list)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', autospec=True)
    def test__soft_power_off(self, sleep_mock, mock_exec):

        def side_effect(driver_info, command, **kwargs):
            resp_dict = {"power status": ["Chassis Power is off\n", None],
                         "power soft": [None, None]}
            return resp_dict.get(command, ["Bad\n", None])

        mock_exec.side_effect = side_effect

        expected = [mock.call(self.info, "power soft"),
                    mock.call(self.info, "power status",
                              kill_on_timeout=True)]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            state = ipmi._soft_power_off(task, self.info)

        self.assertEqual(expected, mock_exec.call_args_list)
        self.assertEqual(states.POWER_OFF, state)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', autospec=True)
    def test__soft_power_off_max_retries(self, sleep_mock, mock_exec):

        def side_effect(driver_info, command, **kwargs):
            resp_dict = {"power status": ["Chassis Power is on\n", None],
                         "power soft": [None, None]}
            return resp_dict.get(command, ["Bad\n", None])

        mock_exec.side_effect = side_effect

        expected = [mock.call(self.info, "power soft"),
                    mock.call(self.info, "power status",
                              kill_on_timeout=True),
                    mock.call(self.info, "power status",
                              kill_on_timeout=True)]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              ipmi._soft_power_off, task, self.info,
                              timeout=2)

        self.assertEqual(expected, mock_exec.call_args_list)

    @mock.patch.object(ipmi, '_power_status', autospec=True)
    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    @mock.patch('oslo_utils.eventletutils.EventletEvent.wait', autospec=True)
    def test___set_and_wait_no_needless_status_polling(
            self, sleep_mock, mock_exec, mock_status):
        # Check that if the call to power state change fails, it doesn't
        # call power_status().
        self.config(command_retry_timeout=2, group='ipmi')

        mock_exec.side_effect = exception.IPMIFailure(cmd='power on')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.IPMIFailure, ipmi._power_on, task,
                              self.info)
        self.assertFalse(mock_status.called)


class IPMIToolDriverTestCase(Base):

    @mock.patch.object(ipmi, "_parse_driver_info", autospec=True)
    def test_power_validate(self, mock_parse):
        mock_parse.return_value = {}

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.power.validate(task)
            mock_parse.assert_called_once_with(mock.ANY)

    def test_get_properties(self):
        expected = ipmi.COMMON_PROPERTIES
        self.assertEqual(expected, self.power.get_properties())

        expected = list(ipmi.COMMON_PROPERTIES) + list(ipmi.CONSOLE_PROPERTIES)
        self.assertEqual(sorted(expected),
                         sorted(self.console.get_properties()))

        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertEqual(sorted(expected),
                             sorted(task.driver.get_properties()))

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_get_power_state(self, mock_exec):
        returns = iter([["Chassis Power is off\n", None],
                        ["Chassis Power is on\n", None],
                        ["\n", None]])
        expected = [mock.call(self.info, "power status",
                              kill_on_timeout=True),
                    mock.call(self.info, "power status",
                              kill_on_timeout=True),
                    mock.call(self.info, "power status",
                              kill_on_timeout=True)]
        mock_exec.side_effect = returns

        with task_manager.acquire(self.context, self.node.uuid) as task:
            pstate = self.power.get_power_state(task)
            self.assertEqual(states.POWER_OFF, pstate)

            pstate = self.power.get_power_state(task)
            self.assertEqual(states.POWER_ON, pstate)

            pstate = self.power.get_power_state(task)
            self.assertEqual(states.ERROR, pstate)

        self.assertEqual(mock_exec.call_args_list, expected)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_get_power_state_exception(self, mock_exec):
        mock_exec.side_effect = processutils.ProcessExecutionError("error")
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.IPMIFailure,
                              self.power.get_power_state,
                              task)
        mock_exec.assert_called_once_with(self.info, "power status",
                                          kill_on_timeout=True)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_ok(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.power.set_power_state(task, states.POWER_ON)

        mock_on.assert_called_once_with(task, self.info, timeout=None)
        self.assertFalse(mock_off.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_timeout_ok(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.power.set_power_state(task, states.POWER_ON, timeout=2)

        mock_on.assert_called_once_with(task, self.info, timeout=2)
        self.assertFalse(mock_off.called)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_with_next_boot(self, mock_off, mock_on,
                                         mock_next_boot):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.power.set_power_state(task, states.POWER_ON)
            mock_next_boot.assert_called_once_with(task, self.info)
        mock_on.assert_called_once_with(task, self.info, timeout=None)
        self.assertFalse(mock_off.called)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_with_next_boot_timeout(self, mock_off, mock_on,
                                                 mock_next_boot):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.power.set_power_state(task, states.POWER_ON, timeout=2)
            mock_next_boot.assert_called_once_with(task, self.info)

        mock_on.assert_called_once_with(task, self.info, timeout=2)
        self.assertFalse(mock_off.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_off_ok(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_off.return_value = states.POWER_OFF
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.power.set_power_state(task, states.POWER_OFF)

        mock_off.assert_called_once_with(task, self.info, timeout=None)
        self.assertFalse(mock_on.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_off_timeout_ok(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_off.return_value = states.POWER_OFF
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.power.set_power_state(task, states.POWER_OFF, timeout=2)

        mock_off.assert_called_once_with(task, self.info, timeout=2)
        self.assertFalse(mock_on.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_soft_power_off', autospec=True)
    def test_set_soft_power_off_ok(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_off.return_value = states.POWER_OFF
        with task_manager.acquire(self.context,
                                  self.node['uuid']) as task:
            self.power.set_power_state(task, states.SOFT_POWER_OFF)

        mock_off.assert_called_once_with(task, self.info, timeout=None)
        self.assertFalse(mock_on.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_soft_power_off', autospec=True)
    def test_set_soft_power_off_timeout_ok(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_off.return_value = states.POWER_OFF
        with task_manager.acquire(self.context,
                                  self.node['uuid']) as task:
            self.power.set_power_state(task, states.SOFT_POWER_OFF,
                                       timeout=2)

        mock_off.assert_called_once_with(task, self.info, timeout=2)
        self.assertFalse(mock_on.called)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_soft_power_off', autospec=True)
    def test_set_soft_reboot_ok(self, mock_off, mock_on, mock_next_boot):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_off.return_value = states.POWER_OFF
        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context,
                                  self.node['uuid']) as task:
            self.power.set_power_state(task, states.SOFT_REBOOT)
            mock_next_boot.assert_called_once_with(task, self.info)

        mock_off.assert_called_once_with(task, self.info, timeout=None)
        mock_on.assert_called_once_with(task, self.info, timeout=None)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_soft_power_off', autospec=True)
    def test_set_soft_reboot_timeout_ok(self, mock_off, mock_on,
                                        mock_next_boot):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_off.return_value = states.POWER_OFF
        mock_on.return_value = states.POWER_ON
        with task_manager.acquire(self.context,
                                  self.node['uuid']) as task:
            self.power.set_power_state(task, states.SOFT_REBOOT, timeout=2)
            mock_next_boot.assert_called_once_with(task, self.info)

        mock_off.assert_called_once_with(task, self.info, timeout=2)
        mock_on.assert_called_once_with(task, self.info, timeout=2)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_soft_power_off', autospec=True)
    def test_set_soft_reboot_timeout_fail(self, mock_off, mock_on,
                                          mock_next_boot):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_off.side_effect = exception.PowerStateFailure(
            pstate=states.POWER_ON)
        with task_manager.acquire(self.context,
                                  self.node['uuid']) as task:
            self.assertRaises(exception.PowerStateFailure,
                              self.power.set_power_state,
                              task, states.SOFT_REBOOT, timeout=2)

        mock_off.assert_called_once_with(task, self.info, timeout=2)
        self.assertFalse(mock_next_boot.called)
        self.assertFalse(mock_on.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_fail(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_on.side_effect = exception.PowerStateFailure(
            pstate=states.POWER_ON)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              self.power.set_power_state,
                              task, states.POWER_ON)

        mock_on.assert_called_once_with(task, self.info, timeout=None)
        self.assertFalse(mock_off.called)

    @mock.patch.object(ipmi, '_power_on', autospec=True)
    @mock.patch.object(ipmi, '_power_off', autospec=True)
    def test_set_power_on_timeout_fail(self, mock_off, mock_on):
        self.config(command_retry_timeout=0, group='ipmi')

        mock_on.side_effect = exception.PowerStateFailure(pstate=states.ERROR)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PowerStateFailure,
                              self.power.set_power_state,
                              task, states.POWER_ON, timeout=2)

        mock_on.assert_called_once_with(task, self.info, timeout=2)
        self.assertFalse(mock_off.called)

    def test_set_power_invalid_state(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.power.set_power_state,
                              task,
                              "fake state")

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_send_raw_bytes_ok(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.vendor.send_raw(task, http_method='POST',
                                 raw_bytes='0x00 0x01')

        mock_exec.assert_called_once_with(self.info, 'raw 0x00 0x01')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_send_raw_bytes_fail(self, mock_exec):
        mock_exec.side_effect = exception.PasswordFileFailedToCreate('error')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.IPMIFailure,
                              self.vendor.send_raw,
                              task,
                              http_method='POST',
                              raw_bytes='0x00 0x01')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__bmc_reset_ok(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.vendor.bmc_reset(task, 'POST')

        mock_exec.assert_called_once_with(self.info, 'bmc reset warm')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__bmc_reset_cold(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.vendor.bmc_reset(task, 'POST', warm=False)

        mock_exec.assert_called_once_with(self.info, 'bmc reset cold')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test__bmc_reset_fail(self, mock_exec):
        mock_exec.side_effect = processutils.ProcessExecutionError()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.IPMIFailure,
                              self.vendor.bmc_reset,
                              task, 'POST')

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_status',
                       lambda driver_info: states.POWER_ON)
    def test_reboot_ok(self, mock_on, mock_off, mock_next_boot):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        mock_off.return_value = states.POWER_OFF
        mock_on.return_value = states.POWER_ON
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')

        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [mock.call.power_off(task, self.info, timeout=None),
                        mock.call.power_on(task, self.info, timeout=None)]
            self.power.reboot(task)
            mock_next_boot.assert_called_once_with(task, self.info)

        self.assertEqual(expected, manager.mock_calls)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_status',
                       lambda driver_info: states.POWER_OFF)
    def test_reboot_already_off(self, mock_on, mock_off, mock_next_boot):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        mock_off.return_value = states.POWER_OFF
        mock_on.return_value = states.POWER_ON
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')

        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [mock.call.power_on(task, self.info, timeout=None)]
            self.power.reboot(task)
            mock_next_boot.assert_called_once_with(task, self.info)

        self.assertEqual(expected, manager.mock_calls)

    @mock.patch.object(driver_utils, 'ensure_next_boot_device',
                       autospec=True)
    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_status',
                       lambda driver_info: states.POWER_ON)
    def test_reboot_timeout_ok(self, mock_on, mock_off, mock_next_boot):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')

        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [mock.call.power_off(task, self.info, timeout=2),
                        mock.call.power_on(task, self.info, timeout=2)]
            self.power.reboot(task, timeout=2)
            mock_next_boot.assert_called_once_with(task, self.info)

        self.assertEqual(expected, manager.mock_calls)

    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_status',
                       lambda driver_info: states.POWER_ON)
    def test_reboot_fail_power_off(self, mock_on, mock_off):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        mock_off.side_effect = exception.PowerStateFailure(
            pstate=states.POWER_OFF)
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')

        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [mock.call.power_off(task, self.info, timeout=None)]
            self.assertRaises(exception.PowerStateFailure,
                              self.power.reboot,
                              task)

        self.assertEqual(expected, manager.mock_calls)

    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_status',
                       lambda driver_info: states.POWER_ON)
    def test_reboot_fail_power_on(self, mock_on, mock_off):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        mock_off.return_value = states.POWER_OFF
        mock_on.side_effect = exception.PowerStateFailure(
            pstate=states.POWER_ON)
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')

        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [mock.call.power_off(task, self.info, timeout=None),
                        mock.call.power_on(task, self.info, timeout=None)]
            self.assertRaises(exception.PowerStateFailure,
                              self.power.reboot,
                              task)

        self.assertEqual(expected, manager.mock_calls)
    @mock.patch.object(ipmi, '_power_off', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_on', spec_set=types.FunctionType)
    @mock.patch.object(ipmi, '_power_status',
                       lambda driver_info: states.POWER_ON)
    def test_reboot_timeout_fail(self, mock_on, mock_off):
        manager = mock.MagicMock()
        # NOTE(rloo): if autospec is True, then manager.mock_calls is empty
        mock_on.side_effect = exception.PowerStateFailure(
            pstate=states.POWER_ON)
        manager.attach_mock(mock_off, 'power_off')
        manager.attach_mock(mock_on, 'power_on')

        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [mock.call.power_off(task, self.info, timeout=2),
                        mock.call.power_on(task, self.info, timeout=2)]
            self.assertRaises(exception.PowerStateFailure,
                              self.power.reboot,
                              task, timeout=2)

        self.assertEqual(expected, manager.mock_calls)

    @mock.patch.object(ipmi, '_parse_driver_info', autospec=True)
    def test_vendor_passthru_validate__parse_driver_info_fail(self,
                                                              info_mock):
        info_mock.side_effect = exception.InvalidParameterValue("bad")
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.vendor.validate,
                              task, method='send_raw', raw_bytes='0x00 0x01')
            info_mock.assert_called_once_with(task.node)

    def test_vendor_passthru_validate__send_raw_bytes_good(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.vendor.validate(task,
                                 method='send_raw',
                                 http_method='POST',
                                 raw_bytes='0x00 0x01')

    def test_vendor_passthru_validate__send_raw_bytes_fail(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.MissingParameterValue,
                              self.vendor.validate,
                              task, method='send_raw')

    @mock.patch.object(ipmi.VendorPassthru, 'send_raw', autospec=True)
    def test_vendor_passthru_call_send_raw_bytes(self, raw_bytes_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.vendor.send_raw(task, http_method='POST',
                                 raw_bytes='0x00 0x01')
            raw_bytes_mock.assert_called_once_with(
                self.vendor, task, http_method='POST',
                raw_bytes='0x00 0x01')

    def test_vendor_passthru_validate__bmc_reset_good(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.vendor.validate(task, method='bmc_reset')

    def test_vendor_passthru_validate__bmc_reset_warm_good(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.vendor.validate(task, method='bmc_reset', warm=True)

    def test_vendor_passthru_validate__bmc_reset_cold_good(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.vendor.validate(task, method='bmc_reset', warm=False)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def _vendor_passthru_call_bmc_reset(self, warm, expected,
                                        mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.vendor.bmc_reset(task, 'POST', warm=warm)
            mock_exec.assert_called_once_with(
                mock.ANY, 'bmc reset %s' % expected)

    def test_vendor_passthru_call_bmc_reset_warm(self):
        for param in (True, 'true', 'on', 'y', 'yes'):
            self._vendor_passthru_call_bmc_reset(param, 'warm')

    def test_vendor_passthru_call_bmc_reset_cold(self):
        for param in (False, 'false', 'off', 'n', 'no'):
            self._vendor_passthru_call_bmc_reset(param, 'cold')

    def test_vendor_passthru_vendor_routes(self):
        expected = ['send_raw', 'bmc_reset']
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            vendor_routes = task.driver.vendor.vendor_routes
            self.assertIsInstance(vendor_routes, dict)
            self.assertEqual(sorted(expected), sorted(vendor_routes))

    def test_vendor_passthru_driver_routes(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            driver_routes = task.driver.vendor.driver_routes
            self.assertIsInstance(driver_routes, dict)
            self.assertEqual({}, driver_routes)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_ok(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.management.set_boot_device(task, boot_devices.PXE)
            mock_calls = [mock.call(self.info, "raw 0x00 0x08 0x03 0x08"),
                          mock.call(self.info, "chassis bootdev pxe")]
            mock_exec.assert_has_calls(mock_calls)

    @mock.patch.object(driver_utils, 'force_persistent_boot', autospec=True)
    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_no_force_set_boot_device(self,
                                                           mock_exec,
                                                           mock_force_boot):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_info = task.node.driver_info
            driver_info['ipmi_force_boot_device'] = 'False'
            task.node.driver_info = driver_info
            self.info['force_boot_device'] = 'False'
            self.management.set_boot_device(task, boot_devices.PXE)
            mock_calls = [mock.call(self.info, "raw 0x00 0x08 0x03 0x08"),
                          mock.call(self.info, "chassis bootdev pxe")]
            mock_exec.assert_has_calls(mock_calls)
            self.assertFalse(mock_force_boot.called)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_force_set_boot_device_ok(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_info = task.node.driver_info
            driver_info['ipmi_force_boot_device'] = 'True'
            task.node.driver_info = driver_info
            self.info['force_boot_device'] = 'True'
            self.management.set_boot_device(task, boot_devices.PXE)
            task.node.refresh()
            self.assertIs(
                False,
                task.node.driver_internal_info['is_next_boot_persistent']
            )
            mock_calls = [mock.call(self.info, "raw 0x00 0x08 0x03 0x08"),
                          mock.call(self.info, "chassis bootdev pxe")]
            mock_exec.assert_has_calls(mock_calls)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_persistent(self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_info = task.node.driver_info
            driver_info['ipmi_force_boot_device'] = 'True'
            task.node.driver_info = driver_info
            self.info['force_boot_device'] = 'True'
            self.management.set_boot_device(task, boot_devices.PXE, True)
            self.assertEqual(
                boot_devices.PXE,
                task.node.driver_internal_info['persistent_boot_device'])
            mock_calls = [mock.call(self.info, "raw 0x00 0x08 0x03 0x08"),
                          mock.call(self.info, "chassis bootdev pxe")]
            mock_exec.assert_has_calls(mock_calls)

    def test_management_interface_set_boot_device_bad_device(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.management.set_boot_device,
                              task, 'fake-device')

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_without_timeout_1(
            self, mock_exec):
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_info = task.node.driver_info
            driver_info['ipmi_disable_boot_timeout'] = 'False'
            task.node.driver_info = driver_info
            self.management.set_boot_device(task, boot_devices.PXE)
            mock_calls = [mock.call(self.info, "chassis bootdev pxe")]
            mock_exec.assert_has_calls(mock_calls)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_without_timeout_2(
            self, mock_exec):
        CONF.set_override('disable_boot_timeout', False, 'ipmi')
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.management.set_boot_device(task, boot_devices.PXE)
            mock_calls = [mock.call(self.info, "chassis bootdev pxe")]
            mock_exec.assert_has_calls(mock_calls)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_exec_failed(self,
                                                              mock_exec):
        mock_exec.side_effect = processutils.ProcessExecutionError()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.IPMIFailure,
                              self.management.set_boot_device,
                              task, boot_devices.PXE)

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_unknown_exception(
            self, mock_exec):

        class FakeException(Exception):
            pass

        mock_exec.side_effect = FakeException('boom')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(FakeException,
                              self.management.set_boot_device,
                              task, boot_devices.PXE)

    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy')
    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_uefi(self, mock_exec,
                                                       mock_boot_mode):
        mock_boot_mode.return_value = 'uefi'
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.management.set_boot_device(task, boot_devices.PXE)
            mock_calls = [
                mock.call(self.info, "raw 0x00 0x08 0x03 0x08"),
                mock.call(self.info, "chassis bootdev pxe options=efiboot")
            ]
            mock_exec.assert_has_calls(mock_calls)

    @mock.patch.object(boot_mode_utils, 'get_boot_mode_for_deploy')
    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_set_boot_device_uefi_and_persistent(
            self, mock_exec, mock_boot_mode):
        mock_boot_mode.return_value = 'uefi'
        mock_exec.return_value = [None, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.management.set_boot_device(task, boot_devices.PXE,
                                            persistent=True)
            mock_calls = [
                mock.call(self.info, "raw 0x00 0x08 0x03 0x08"),
                mock.call(self.info,
                          "raw 0x00 0x08 0x05 0xe0 0x04 0x00 0x00 0x00")
            ]
            mock_exec.assert_has_calls(mock_calls)

    def test_management_interface_get_supported_boot_devices(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            expected = [boot_devices.PXE, boot_devices.DISK,
                        boot_devices.CDROM, boot_devices.BIOS,
                        boot_devices.SAFE]
            self.assertEqual(sorted(expected),
                             sorted(task.driver.management.
                                    get_supported_boot_devices(task)))

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_get_boot_device(self, mock_exec):
        # output, expected boot device
        bootdevs = [('Boot Device Selector : '
                     'Force Boot from default Hard-Drive\n',
                     boot_devices.DISK),
                    ('Boot Device Selector : '
                     'Force Boot from default Hard-Drive, request Safe-Mode\n',
                     boot_devices.SAFE),
                    ('Boot Device Selector : '
                     'Force Boot into BIOS Setup\n',
                     boot_devices.BIOS),
                    ('Boot Device Selector : '
                     'Force PXE\n',
                     boot_devices.PXE),
                    ('Boot Device Selector : '
                     'Force Boot from CD/DVD\n',
                     boot_devices.CDROM)]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            for out, expected_device in bootdevs:
                mock_exec.return_value = (out, '')
                expected_response = {'boot_device': expected_device,
                                     'persistent': False}
                self.assertEqual(expected_response,
                                 task.driver.management.get_boot_device(task))
                mock_exec.assert_called_with(mock.ANY,
                                             "chassis bootparam get 5")

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_get_boot_device_unknown_dev(self,
                                                              mock_exec):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_exec.return_value = ('Boot Device Selector : Fake\n', '')
            response = task.driver.management.get_boot_device(task)
            self.assertIsNone(response['boot_device'])
            mock_exec.assert_called_with(mock.ANY,
                                         "chassis bootparam get 5")

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_get_boot_device_fail(self, mock_exec):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            mock_exec.side_effect = processutils.ProcessExecutionError()
            self.assertRaises(exception.IPMIFailure,
                              task.driver.management.get_boot_device,
                              task)
        mock_exec.assert_called_with(mock.ANY, "chassis bootparam get 5")

    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_get_boot_device_persistent(self,
                                                             mock_exec):
        outputs = [('Options apply to only next boot\n'
                    'Boot Device Selector : Force PXE\n',
                    False),
                   ('Options apply to all future boots\n'
                    'Boot Device Selector : Force PXE\n',
                    True)]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            for out, expected_persistent in outputs:
                mock_exec.return_value = (out, '')
                expected_response = {'boot_device': boot_devices.PXE,
                                     'persistent': expected_persistent}
                self.assertEqual(expected_response,
                                 task.driver.management.get_boot_device(task))
                mock_exec.assert_called_with(mock.ANY,
                                             "chassis bootparam get 5")

    def test_get_force_boot_device_persistent(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.driver_info['ipmi_force_boot_device'] = 'True'
            task.node.driver_internal_info['persistent_boot_device'] = 'pxe'
            bootdev = self.management.get_boot_device(task)
            self.assertEqual('pxe', bootdev['boot_device'])
            self.assertTrue(bootdev['persistent'])

    def test_management_interface_validate_good(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.driver.management.validate(task)

    def test_management_interface_validate_fail(self):
        # Missing IPMI driver_info information
        node = obj_utils.create_test_node(self.context,
                                          uuid=uuidutils.generate_uuid(),
                                          management_interface='ipmitool')
        with task_manager.acquire(self.context, node.uuid) as task:
            self.assertRaises(exception.MissingParameterValue,
                              task.driver.management.validate, task)

    @mock.patch.object(ipmi.LOG, 'error', spec_set=True, autospec=True)
    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_inject_nmi_ok(self, mock_exec, mock_log):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_info = ipmi._parse_driver_info(task.node)
            self.management.inject_nmi(task)

            mock_exec.assert_called_once_with(driver_info, "power diag")
            self.assertFalse(mock_log.called)

    @mock.patch.object(ipmi.LOG, 'error', spec_set=True, autospec=True)
    @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True)
    def test_management_interface_inject_nmi_fail(self, mock_exec, mock_log):
        mock_exec.side_effect = exception.PasswordFileFailedToCreate('error')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            driver_info = ipmi._parse_driver_info(task.node)
            self.assertRaises(exception.IPMIFailure,
                              self.management.inject_nmi,
                              task)

            mock_exec.assert_called_once_with(driver_info, "power diag")
            self.assertTrue(mock_log.called)

    def test__parse_ipmi_sensor_data_ok(self):
        fake_sensors_data = """
                            Sensor ID              : Temp (0x1)
                            Entity ID              : 3.1 (Processor)
                            Sensor Type (Analog)   : Temperature
                            Sensor Reading         : -58 (+/- 1) degrees C
                            Status                 : ok
                            Nominal Reading        : 50.000
                            Normal Minimum         : 11.000
                            Normal Maximum         : 69.000
                            Upper critical         : 90.000
                            Upper non-critical     : 85.000
                            Positive Hysteresis    : 1.000
                            Negative Hysteresis    : 1.000

                            Sensor ID              : Temp (0x2)
                            Entity ID              : 3.2 (Processor)
                            Sensor Type (Analog)   : Temperature
                            Sensor Reading         : 50 (+/- 1) degrees C
                            Status                 : ok
                            Nominal Reading        : 50.000
                            Normal Minimum         : 11.000
                            Normal Maximum         : 69.000
                            Upper critical         : 90.000
                            Upper non-critical     : 85.000
                            Positive Hysteresis    : 1.000
                            Negative Hysteresis    : 1.000

                            Sensor ID              : FAN MOD 1A RPM (0x30)
                            Entity ID              : 7.1 (System Board)
                            Sensor Type (Analog)   : Fan
                            Sensor Reading         : 8400 (+/- 75) RPM
                            Status                 : ok
                            Nominal Reading        : 5325.000
                            Normal Minimum         : 10425.000
                            Normal Maximum         : 14775.000
                            Lower critical         : 4275.000
                            Positive Hysteresis    : 375.000
                            Negative Hysteresis    : 375.000

                            Sensor ID              : FAN MOD 1B RPM (0x31)
                            Entity ID              : 7.1 (System Board)
                            Sensor Type (Analog)   : Fan
                            Sensor Reading         : 8550 (+/- 75) RPM
                            Status                 : ok
                            Nominal Reading        : 7800.000
                            Normal Minimum         : 10425.000
                            Normal Maximum         : 14775.000
                            Lower critical         : 4275.000
                            Positive Hysteresis    : 375.000
                            Negative Hysteresis    : 375.000
                            """
        expected_return = {
            'Fan': {
                'FAN MOD 1A RPM (0x30)': {
                    'Status': 'ok',
                    'Sensor Reading': '8400 (+/- 75) RPM',
                    'Entity ID': '7.1 (System Board)',
                    'Normal Minimum': '10425.000',
                    'Positive Hysteresis': '375.000',
                    'Normal Maximum': '14775.000',
                    'Sensor Type (Analog)': 'Fan',
                    'Lower critical': '4275.000',
                    'Negative Hysteresis': '375.000',
                    'Sensor ID': 'FAN MOD 1A RPM (0x30)',
                    'Nominal Reading': '5325.000'
                },
                'FAN MOD 1B RPM (0x31)': {
                    'Status': 'ok',
                    'Sensor Reading': '8550 (+/- 75) RPM',
                    'Entity ID': '7.1 (System Board)',
                    'Normal Minimum': '10425.000',
                    'Positive Hysteresis': '375.000',
                    'Normal Maximum': '14775.000',
                    'Sensor Type (Analog)': 'Fan',
                    'Lower critical': '4275.000',
                    'Negative Hysteresis': '375.000',
                    'Sensor ID': 'FAN MOD 1B RPM (0x31)',
                    'Nominal Reading': '7800.000'
                }
            },
            'Temperature': {
                'Temp (0x1)': {
                    'Status': 'ok',
                    'Sensor Reading': '-58 (+/- 1) degrees C',
                    'Entity ID': '3.1 (Processor)',
                    'Normal Minimum': '11.000',
                    'Positive Hysteresis': '1.000',
                    'Upper non-critical': '85.000',
                    'Normal Maximum': '69.000',
                    'Sensor Type (Analog)': 'Temperature',
                    'Negative Hysteresis': '1.000',
                    'Upper critical': '90.000',
                    'Sensor ID': 'Temp (0x1)',
                    'Nominal Reading': '50.000'
                },
                'Temp (0x2)': {
                    'Status': 'ok',
                    'Sensor Reading': '50 (+/- 1) degrees C',
                    'Entity ID': '3.2 (Processor)',
                    'Normal Minimum': '11.000',
                    'Positive Hysteresis': '1.000',
                    'Upper non-critical': '85.000',
                    'Normal Maximum': '69.000',
                    'Sensor Type (Analog)': 'Temperature',
                    'Negative Hysteresis': '1.000',
                    'Upper critical': '90.000',
                    'Sensor ID': 'Temp (0x2)',
                    'Nominal Reading': '50.000'
                }
            }
        }
        ret = ipmi._parse_ipmi_sensors_data(self.node, fake_sensors_data)

        self.assertEqual(expected_return, ret)

    def test__parse_ipmi_sensor_data_missing_sensor_reading(self):
        fake_sensors_data = """
                            Sensor ID              : Temp (0x1)
                            Entity ID              : 3.1 (Processor)
                            Sensor Type (Analog)   : Temperature
                            Status                 : ok
                            Nominal Reading        : 50.000
                            Normal Minimum         : 11.000
                            Normal Maximum         : 69.000
                            Upper critical         : 90.000
                            Upper non-critical     : 85.000
                            Positive Hysteresis    : 1.000
                            Negative Hysteresis    : 1.000

                            Sensor ID              : Temp (0x2)
                            Entity ID              : 3.2 (Processor)
                            Sensor Type (Analog)   : Temperature
                            Sensor Reading         : 50 (+/- 1) degrees C
                            Status                 : ok
                            Nominal Reading        : 50.000
                            Normal Minimum         : 11.000
                            Normal Maximum         : 69.000
                            Upper critical         : 90.000
                            Upper non-critical     :
85.000 Positive Hysteresis : 1.000 Negative Hysteresis : 1.000 Sensor ID : FAN MOD 1A RPM (0x30) Entity ID : 7.1 (System Board) Sensor Type (Analog) : Fan Sensor Reading : 8400 (+/- 75) RPM Status : ok Nominal Reading : 5325.000 Normal Minimum : 10425.000 Normal Maximum : 14775.000 Lower critical : 4275.000 Positive Hysteresis : 375.000 Negative Hysteresis : 375.000 """ expected_return = { 'Fan': { 'FAN MOD 1A RPM (0x30)': { 'Status': 'ok', 'Sensor Reading': '8400 (+/- 75) RPM', 'Entity ID': '7.1 (System Board)', 'Normal Minimum': '10425.000', 'Positive Hysteresis': '375.000', 'Normal Maximum': '14775.000', 'Sensor Type (Analog)': 'Fan', 'Lower critical': '4275.000', 'Negative Hysteresis': '375.000', 'Sensor ID': 'FAN MOD 1A RPM (0x30)', 'Nominal Reading': '5325.000' } }, 'Temperature': { 'Temp (0x2)': { 'Status': 'ok', 'Sensor Reading': '50 (+/- 1) degrees C', 'Entity ID': '3.2 (Processor)', 'Normal Minimum': '11.000', 'Positive Hysteresis': '1.000', 'Upper non-critical': '85.000', 'Normal Maximum': '69.000', 'Sensor Type (Analog)': 'Temperature', 'Negative Hysteresis': '1.000', 'Upper critical': '90.000', 'Sensor ID': 'Temp (0x2)', 'Nominal Reading': '50.000' } } } ret = ipmi._parse_ipmi_sensors_data(self.node, fake_sensors_data) self.assertEqual(expected_return, ret) def test__parse_ipmi_sensor_data_debug(self): fake_sensors_data = """ << Message tag : 0x00 << RMCP+ status : no errors << Maximum privilege level : admin << Console Session ID : 0xa0a2a3a4 << BMC Session ID : 0x02006a01 << Negotiated authenticatin algorithm : hmac_sha1 << Negotiated integrity algorithm : hmac_sha1_96 << Negotiated encryption algorithm : aes_cbc_128 Sensor ID : Temp (0x2) Entity ID : 3.2 (Processor) Sensor Type (Analog) : Temperature Sensor Reading : 50 (+/- 1) degrees C Status : ok Nominal Reading : 50.000 Normal Minimum : 11.000 Normal Maximum : 69.000 Upper critical : 90.000 Upper non-critical : 85.000 Positive Hysteresis : 1.000 Negative Hysteresis : 1.000 Sensor ID : FAN MOD 1A 
RPM (0x30) Entity ID : 7.1 (System Board) Sensor Type (Analog) : Fan Sensor Reading : 8400 (+/- 75) RPM Status : ok Nominal Reading : 5325.000 Normal Minimum : 10425.000 Normal Maximum : 14775.000 Lower critical : 4275.000 Positive Hysteresis : 375.000 Negative Hysteresis : 375.000 """ expected_return = { 'Fan': { 'FAN MOD 1A RPM (0x30)': { 'Status': 'ok', 'Sensor Reading': '8400 (+/- 75) RPM', 'Entity ID': '7.1 (System Board)', 'Normal Minimum': '10425.000', 'Positive Hysteresis': '375.000', 'Normal Maximum': '14775.000', 'Sensor Type (Analog)': 'Fan', 'Lower critical': '4275.000', 'Negative Hysteresis': '375.000', 'Sensor ID': 'FAN MOD 1A RPM (0x30)', 'Nominal Reading': '5325.000' } }, 'Temperature': { 'Temp (0x2)': { 'Status': 'ok', 'Sensor Reading': '50 (+/- 1) degrees C', 'Entity ID': '3.2 (Processor)', 'Normal Minimum': '11.000', 'Positive Hysteresis': '1.000', 'Upper non-critical': '85.000', 'Normal Maximum': '69.000', 'Sensor Type (Analog)': 'Temperature', 'Negative Hysteresis': '1.000', 'Upper critical': '90.000', 'Sensor ID': 'Temp (0x2)', 'Nominal Reading': '50.000' } } } ret = ipmi._parse_ipmi_sensors_data(self.node, fake_sensors_data) self.assertEqual(expected_return, ret) def test__parse_ipmi_sensor_data_failed(self): fake_sensors_data = "abcdef" self.assertRaises(exception.FailedToParseSensorData, ipmi._parse_ipmi_sensors_data, self.node, fake_sensors_data) fake_sensors_data = "abc:def:ghi" self.assertRaises(exception.FailedToParseSensorData, ipmi._parse_ipmi_sensors_data, self.node, fake_sensors_data) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_dump_sdr_ok(self, mock_exec): mock_exec.return_value = (None, None) with task_manager.acquire(self.context, self.node.uuid) as task: ipmi.dump_sdr(task, 'foo_file') mock_exec.assert_called_once_with(self.info, 'sdr dump foo_file') @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_dump_sdr_fail(self, mock_exec): with task_manager.acquire(self.context, self.node.uuid) as 
task: mock_exec.side_effect = processutils.ProcessExecutionError() self.assertRaises(exception.IPMIFailure, ipmi.dump_sdr, task, 'foo_file') mock_exec.assert_called_once_with(self.info, 'sdr dump foo_file') @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test_send_raw_bytes_returns(self, mock_exec): fake_ret = ('foo', 'bar') mock_exec.return_value = fake_ret with task_manager.acquire(self.context, self.node.uuid) as task: ret = ipmi.send_raw(task, 'fake raw') self.assertEqual(fake_ret, ret) @mock.patch.object(console_utils, 'acquire_port', autospec=True) def test__allocate_port(self, mock_acquire): mock_acquire.return_value = 1234 with task_manager.acquire(self.context, self.node.uuid) as task: port = ipmi._allocate_port(task) mock_acquire.assert_called_once_with() self.assertEqual(port, 1234) info = task.node.driver_internal_info self.assertEqual(info['allocated_ipmi_terminal_port'], 1234) @mock.patch.object(console_utils, 'release_port', autospec=True) def test__release_allocated_port(self, mock_release): info = self.node.driver_internal_info info['allocated_ipmi_terminal_port'] = 1234 self.node.driver_internal_info = info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: ipmi._release_allocated_port(task) mock_release.assert_called_once_with(1234) info = task.node.driver_internal_info self.assertIsNone(info.get('allocated_ipmi_terminal_port')) class IPMIToolShellinaboxTestCase(db_base.DbTestCase): console_interface = 'ipmitool-shellinabox' console_class = ipmi.IPMIShellinaboxConsole def setUp(self): super(IPMIToolShellinaboxTestCase, self).setUp() self.config(enabled_console_interfaces=[self.console_interface, 'no-console']) self.node = obj_utils.create_test_node( self.context, console_interface=self.console_interface, driver_info=INFO_DICT) self.info = ipmi._parse_driver_info(self.node) self.console = self.console_class() def test_console_validate(self): with task_manager.acquire( self.context, self.node.uuid, 
shared=True) as task: task.node.driver_info['ipmi_terminal_port'] = 123 task.driver.console.validate(task) def test_console_validate_missing_port(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info.pop('ipmi_terminal_port', None) self.assertRaises(exception.MissingParameterValue, task.driver.console.validate, task) def test_console_validate_missing_port_auto_allocate(self): self.config(port_range='10000:20000', group='console') with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info.pop('ipmi_terminal_port', None) task.driver.console.validate(task) def test_console_validate_invalid_port(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info['ipmi_terminal_port'] = '' self.assertRaises(exception.InvalidParameterValue, task.driver.console.validate, task) def test_console_validate_wrong_ipmi_protocol_version(self): with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info['ipmi_terminal_port'] = 123 task.node.driver_info['ipmi_protocol_version'] = '1.5' self.assertRaises(exception.InvalidParameterValue, task.driver.console.validate, task) def test__get_ipmi_cmd(self): with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) ipmi_cmd = self.console._get_ipmi_cmd(driver_info, 'pw_file') expected_ipmi_cmd = ("/:%(uid)s:%(gid)s:HOME:ipmitool " "-I lanplus -H %(address)s -L ADMINISTRATOR " "-U %(user)s -f pw_file" % {'uid': os.getuid(), 'gid': os.getgid(), 'address': driver_info['address'], 'user': driver_info['username']}) self.assertEqual(expected_ipmi_cmd, ipmi_cmd) def test__get_ipmi_cmd_without_user(self): with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) driver_info['username'] = None ipmi_cmd = self.console._get_ipmi_cmd(driver_info, 'pw_file') 
expected_ipmi_cmd = ("/:%(uid)s:%(gid)s:HOME:ipmitool " "-I lanplus -H %(address)s -L ADMINISTRATOR " "-f pw_file" % {'uid': os.getuid(), 'gid': os.getgid(), 'address': driver_info['address']}) self.assertEqual(expected_ipmi_cmd, ipmi_cmd) @mock.patch.object(ipmi, '_allocate_port', autospec=True) @mock.patch.object(ipmi.IPMIConsole, '_start_console', autospec=True) def test_start_console(self, mock_start, mock_alloc): mock_start.return_value = None mock_alloc.return_value = 10000 with task_manager.acquire(self.context, self.node.uuid) as task: self.console.start_console(task) driver_info = ipmi._parse_driver_info(task.node) driver_info.update(port=10000) mock_start.assert_called_once_with( self.console, driver_info, console_utils.start_shellinabox_console) @mock.patch.object(ipmi, '_allocate_port', autospec=True) @mock.patch.object(ipmi, '_parse_driver_info', autospec=True) @mock.patch.object(ipmi.IPMIConsole, '_start_console', autospec=True) def test_start_console_with_port(self, mock_start, mock_info, mock_alloc): mock_start.return_value = None mock_info.return_value = {'port': 10000} with task_manager.acquire(self.context, self.node.uuid) as task: self.console.start_console(task) mock_start.assert_called_once_with( self.console, {'port': 10000}, console_utils.start_shellinabox_console) mock_alloc.assert_not_called() @mock.patch.object(ipmi, '_allocate_port', autospec=True) @mock.patch.object(ipmi, '_parse_driver_info', autospec=True) @mock.patch.object(ipmi.IPMIConsole, '_start_console', autospec=True) def test_start_console_alloc_port(self, mock_start, mock_info, mock_alloc): mock_start.return_value = None mock_info.return_value = {'port': None} mock_alloc.return_value = 1234 with task_manager.acquire(self.context, self.node.uuid) as task: self.console.start_console(task) mock_start.assert_called_once_with( self.console, {'port': 1234}, console_utils.start_shellinabox_console) mock_alloc.assert_called_once_with(mock.ANY) @mock.patch.object(ipmi.IPMIConsole, 
'_get_ipmi_cmd', autospec=True) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test__start_console(self, mock_start, mock_ipmi_cmd): mock_start.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.console._start_console( driver_info, console_utils.start_shellinabox_console) mock_start.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) mock_ipmi_cmd.assert_called_once_with(self.console, driver_info, mock.ANY) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test__start_console_fail(self, mock_start): mock_start.side_effect = exception.ConsoleSubprocessFailed( error='error') with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.assertRaises(exception.ConsoleSubprocessFailed, self.console._start_console, driver_info, console_utils.start_shellinabox_console) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test__start_console_fail_nodir(self, mock_start): mock_start.side_effect = exception.ConsoleError() with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.assertRaises(exception.ConsoleError, self.console._start_console, driver_info, console_utils.start_shellinabox_console) mock_start.assert_called_once_with(self.node.uuid, mock.ANY, mock.ANY) @mock.patch.object(console_utils, 'make_persistent_password_file', autospec=True) @mock.patch.object(console_utils, 'start_shellinabox_console', autospec=True) def test__start_console_empty_password(self, mock_start, mock_pass): driver_info = self.node.driver_info del driver_info['ipmi_password'] self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.console._start_console( 
driver_info, console_utils.start_shellinabox_console) mock_pass.assert_called_once_with(mock.ANY, '\0') mock_start.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) @mock.patch.object(ipmi, '_release_allocated_port', autospec=True) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console(self, mock_stop, mock_release): mock_stop.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: self.console.stop_console(task) mock_stop.assert_called_once_with(self.info['uuid']) mock_release.assert_called_once_with(mock.ANY) @mock.patch.object(console_utils, 'stop_shellinabox_console', autospec=True) def test_stop_console_fail(self, mock_stop): mock_stop.side_effect = exception.ConsoleError() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.console.stop_console, task) mock_stop.assert_called_once_with(self.node.uuid) @mock.patch.object(console_utils, 'get_shellinabox_console_url', autospec=True) def test_get_console(self, mock_get): url = 'http://localhost:4201' mock_get.return_value = url expected = {'type': 'shellinabox', 'url': url} with task_manager.acquire(self.context, self.node.uuid) as task: console_info = self.console.get_console(task) self.assertEqual(expected, console_info) mock_get.assert_called_once_with(self.info['port']) class IPMIToolSocatDriverTestCase(IPMIToolShellinaboxTestCase): console_interface = 'ipmitool-socat' console_class = ipmi.IPMISocatConsole def test__get_ipmi_cmd(self): with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) ipmi_cmd = self.console._get_ipmi_cmd(driver_info, 'pw_file') expected_ipmi_cmd = ("ipmitool -I lanplus -H %(address)s " "-L ADMINISTRATOR -U %(user)s " "-f pw_file" % {'address': driver_info['address'], 'user': driver_info['username']}) self.assertEqual(expected_ipmi_cmd, ipmi_cmd) def
test__get_ipmi_cmd_without_user(self): with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) driver_info['username'] = None ipmi_cmd = self.console._get_ipmi_cmd(driver_info, 'pw_file') expected_ipmi_cmd = ("ipmitool -I lanplus -H %(address)s " "-L ADMINISTRATOR " "-f pw_file" % {'address': driver_info['address']}) self.assertEqual(expected_ipmi_cmd, ipmi_cmd) def test_console_validate_missing_port_auto_allocate(self): self.config(port_range='10000:20000', group='console') with task_manager.acquire( self.context, self.node.uuid, shared=True) as task: task.node.driver_info.pop('ipmi_terminal_port', None) task.driver.console.validate(task) @mock.patch.object(ipmi, '_allocate_port', autospec=True) @mock.patch.object(ipmi.IPMIConsole, '_start_console', autospec=True) @mock.patch.object(ipmi.IPMISocatConsole, '_exec_stop_console', autospec=True) def test_start_console(self, mock_stop, mock_start, mock_alloc): mock_start.return_value = None mock_stop.return_value = None mock_alloc.return_value = 10000 with task_manager.acquire(self.context, self.node.uuid) as task: self.console.start_console(task) driver_info = ipmi._parse_driver_info(task.node) driver_info.update(port=10000) mock_stop.assert_called_once_with(self.console, driver_info) mock_start.assert_called_once_with( self.console, driver_info, console_utils.start_socat_console) @mock.patch.object(ipmi, '_allocate_port', autospec=True) @mock.patch.object(ipmi, '_parse_driver_info', autospec=True) @mock.patch.object(ipmi.IPMIConsole, '_start_console', autospec=True) @mock.patch.object(ipmi.IPMISocatConsole, '_exec_stop_console', autospec=True) def test_start_console_with_port(self, mock_stop, mock_start, mock_info, mock_alloc): mock_start.return_value = None mock_info.return_value = {'port': 10000} with task_manager.acquire(self.context, self.node.uuid) as task: self.console.start_console(task) mock_stop.assert_called_once_with(self.console, mock.ANY) 
mock_start.assert_called_once_with( self.console, {'port': 10000}, console_utils.start_socat_console) mock_alloc.assert_not_called() @mock.patch.object(ipmi, '_allocate_port', autospec=True) @mock.patch.object(ipmi, '_parse_driver_info', autospec=True) @mock.patch.object(ipmi.IPMIConsole, '_start_console', autospec=True) @mock.patch.object(ipmi.IPMISocatConsole, '_exec_stop_console', autospec=True) def test_start_console_alloc_port(self, mock_stop, mock_start, mock_info, mock_alloc): mock_start.return_value = None mock_info.return_value = {'port': None} mock_alloc.return_value = 1234 with task_manager.acquire(self.context, self.node.uuid) as task: self.console.start_console(task) mock_stop.assert_called_once_with(self.console, mock.ANY) mock_start.assert_called_once_with( self.console, {'port': 1234}, console_utils.start_socat_console) mock_alloc.assert_called_once_with(mock.ANY) @mock.patch.object(ipmi.IPMISocatConsole, '_get_ipmi_cmd', autospec=True) @mock.patch.object(console_utils, 'start_socat_console', autospec=True) def test__start_console(self, mock_start, mock_ipmi_cmd): mock_start.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.console._start_console( driver_info, console_utils.start_socat_console) mock_start.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) mock_ipmi_cmd.assert_called_once_with(self.console, driver_info, mock.ANY) @mock.patch.object(console_utils, 'start_socat_console', autospec=True) def test__start_console_fail(self, mock_start): mock_start.side_effect = exception.ConsoleSubprocessFailed( error='error') with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.assertRaises(exception.ConsoleSubprocessFailed, self.console._start_console, driver_info, console_utils.start_socat_console) mock_start.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) 
@mock.patch.object(console_utils, 'start_socat_console', autospec=True) def test__start_console_fail_nodir(self, mock_start): mock_start.side_effect = exception.ConsoleError() with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.assertRaises(exception.ConsoleError, self.console._start_console, driver_info, console_utils.start_socat_console) mock_start.assert_called_once_with(self.node.uuid, mock.ANY, mock.ANY) @mock.patch.object(console_utils, 'make_persistent_password_file', autospec=True) @mock.patch.object(console_utils, 'start_socat_console', autospec=True) def test__start_console_empty_password(self, mock_start, mock_pass): driver_info = self.node.driver_info del driver_info['ipmi_password'] self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.console._start_console( driver_info, console_utils.start_socat_console) mock_pass.assert_called_once_with(mock.ANY, '\0') mock_start.assert_called_once_with(self.info['uuid'], self.info['port'], mock.ANY) @mock.patch.object(ipmi, '_release_allocated_port', autospec=True) @mock.patch.object(ipmi.IPMISocatConsole, '_exec_stop_console', autospec=True) @mock.patch.object(console_utils, 'stop_socat_console', autospec=True) def test_stop_console(self, mock_stop, mock_exec_stop, mock_release): mock_stop.return_value = None with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.console.stop_console(task) mock_stop.assert_called_once_with(self.info['uuid']) mock_exec_stop.assert_called_once_with(self.console, driver_info) mock_release.assert_called_once_with(mock.ANY) @mock.patch.object(ipmi.IPMISocatConsole, '_exec_stop_console', autospec=True) @mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True) @mock.patch.object(console_utils, 'stop_socat_console', autospec=True) 
def test_stop_console_fail(self, mock_stop, mock_unlink, mock_exec_stop): mock_stop.side_effect = exception.ConsoleError() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.ConsoleError, self.console.stop_console, task) mock_stop.assert_called_once_with(self.node.uuid) mock_unlink.assert_called_once_with( ipmi._console_pwfile_path(self.node.uuid)) self.assertFalse(mock_exec_stop.called) @mock.patch.object(ipmi, '_exec_ipmitool', autospec=True) def test__exec_stop_console(self, mock_exec): with task_manager.acquire(self.context, self.node.uuid) as task: driver_info = ipmi._parse_driver_info(task.node) self.console._exec_stop_console(driver_info) mock_exec.assert_called_once_with( driver_info, 'sol deactivate', check_exit_code=[0, 1]) @mock.patch.object(console_utils, 'get_socat_console_url', autospec=True) def test_get_console(self, mock_get_url): url = 'tcp://localhost:4201' mock_get_url.return_value = url expected = {'type': 'socat', 'url': url} with task_manager.acquire(self.context, self.node.uuid) as task: console_info = self.console.get_console(task) self.assertEqual(expected, console_info) mock_get_url.assert_called_once_with(self.info['port']) ironic-15.0.0/ironic/tests/unit/drivers/modules/test_snmp.py # Copyright 2013,2014 Cray Inc # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
"""Test class for SNMP power driver module.""" import time import mock from oslo_config import cfg from pysnmp import error as snmp_error from pysnmp import hlapi as pysnmp from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules import snmp from ironic.drivers.modules.snmp import SNMPDriverAuto from ironic.tests import base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF INFO_DICT = db_utils.get_test_snmp_info() class SNMPClientTestCase(base.TestCase): def setUp(self): super(SNMPClientTestCase, self).setUp() self.address = '1.2.3.4' self.port = '6700' self.oid = (1, 3, 6, 1, 1, 1, 0) self.value = 'value' @mock.patch.object(pysnmp, 'SnmpEngine', autospec=True) def test___init__(self, mock_snmpengine): client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1) mock_snmpengine.assert_called_once_with() self.assertEqual(self.address, client.address) self.assertEqual(self.port, client.port) self.assertEqual(snmp.SNMP_V1, client.version) self.assertIsNone(client.read_community) self.assertIsNone(client.write_community) self.assertNotIn('user', client.__dict__) self.assertEqual(mock_snmpengine.return_value, client.snmp_engine) @mock.patch.object(pysnmp, 'CommunityData', autospec=True) def test__get_auth_v1_read(self, mock_community): client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1, read_community='public', write_community='private') client._get_auth() mock_community.assert_called_once_with(client.read_community, mpModel=0) @mock.patch.object(pysnmp, 'CommunityData', autospec=True) def test__get_auth_v1_write(self, mock_community): client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1, read_community='public', write_community='private') client._get_auth(write_mode=True) mock_community.assert_called_once_with(client.write_community, 
mpModel=0) @mock.patch.object(pysnmp, 'UsmUserData', autospec=True) def test__get_auth_v3(self, mock_user): client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) client._get_auth() mock_user.assert_called_once_with( client.user, authKey=client.auth_key, authProtocol=client.auth_proto, privKey=client.priv_key, privProtocol=client.priv_proto, ) @mock.patch.object(pysnmp, 'ContextData', autospec=True) def test__get_context(self, mock_context): client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1) client._get_context() mock_context.assert_called_once_with(None, '') @mock.patch.object(pysnmp, 'UdpTransportTarget', autospec=True) def test__get_transport(self, mock_transport): client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) client._get_transport() mock_transport.assert_called_once_with( (client.address, client.port), retries=CONF.snmp.udp_transport_retries, timeout=CONF.snmp.udp_transport_timeout) @mock.patch.object(pysnmp, 'UdpTransportTarget', autospec=True) def test__get_transport_err(self, mock_transport): mock_transport.side_effect = snmp_error.PySnmpError client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) self.assertRaises(snmp_error.PySnmpError, client._get_transport) mock_transport.assert_called_once_with( (client.address, client.port), retries=CONF.snmp.udp_transport_retries, timeout=CONF.snmp.udp_transport_timeout) @mock.patch.object(pysnmp, 'UdpTransportTarget', autospec=True) def test__get_transport_custom_timeout(self, mock_transport): self.config(udp_transport_timeout=2.0, group='snmp') client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) client._get_transport() mock_transport.assert_called_once_with((client.address, client.port), retries=5, timeout=2.0) @mock.patch.object(pysnmp, 'UdpTransportTarget', autospec=True) def test__get_transport_custom_retries(self, mock_transport): self.config(udp_transport_retries=10, group='snmp') client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) 
client._get_transport() mock_transport.assert_called_once_with((client.address, client.port), retries=10, timeout=1.0) @mock.patch.object(pysnmp, 'getCmd', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True) def test_get(self, mock_auth, mock_context, mock_transport, mock_getcmd): var_bind = (self.oid, self.value) mock_getcmd.return_value = iter([("", None, 0, [var_bind])]) client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) val = client.get(self.oid) self.assertEqual(var_bind[1], val) self.assertEqual(1, mock_getcmd.call_count) @mock.patch.object(pysnmp, 'nextCmd', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True) def test_get_next(self, mock_auth, mock_context, mock_transport, mock_nextcmd): var_bind = (self.oid, self.value) mock_nextcmd.return_value = iter([("", None, 0, [var_bind]), ("", None, 0, [var_bind])]) client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) val = client.get_next(self.oid) self.assertEqual([self.value, self.value], val) self.assertEqual(1, mock_nextcmd.call_count) @mock.patch.object(pysnmp, 'getCmd', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True) @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True) def test_get_err_transport(self, mock_auth, mock_context, mock_transport, mock_getcmd): mock_transport.side_effect = snmp_error.PySnmpError var_bind = (self.oid, self.value) mock_getcmd.return_value = iter([("engine error", None, 0, [var_bind])]) client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3) self.assertRaises(exception.SNMPFailure, client.get, self.oid) 
        self.assertFalse(mock_getcmd.called)

    @mock.patch.object(pysnmp, 'nextCmd', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_next_err_transport(self, mock_auth, mock_context,
                                    mock_transport, mock_nextcmd):
        mock_transport.side_effect = snmp_error.PySnmpError
        var_bind = (self.oid, self.value)
        mock_nextcmd.return_value = iter([("engine error", None, 0,
                                           [var_bind])])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.get_next, self.oid)
        self.assertFalse(mock_nextcmd.called)

    @mock.patch.object(pysnmp, 'getCmd', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_err_engine(self, mock_auth, mock_context, mock_transport,
                            mock_getcmd):
        var_bind = (self.oid, self.value)
        mock_getcmd.return_value = iter([("engine error", None, 0,
                                          [var_bind])])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.get, self.oid)
        self.assertEqual(1, mock_getcmd.call_count)

    @mock.patch.object(pysnmp, 'nextCmd', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_get_next_err_engine(self, mock_auth, mock_context, mock_transport,
                                 mock_nextcmd):
        var_bind = (self.oid, self.value)
        mock_nextcmd.return_value = iter([("engine error", None, 0,
                                           [var_bind])])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure, client.get_next, self.oid)
        self.assertEqual(1, mock_nextcmd.call_count)

    @mock.patch.object(pysnmp, 'setCmd', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_set(self, mock_auth, mock_context, mock_transport, mock_setcmd):
        var_bind = (self.oid, self.value)
        mock_setcmd.return_value = iter([("", None, 0, [var_bind])])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        client.set(self.oid, self.value)
        self.assertEqual(1, mock_setcmd.call_count)

    @mock.patch.object(pysnmp, 'setCmd', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_set_err_transport(self, mock_auth, mock_context, mock_transport,
                               mock_setcmd):
        mock_transport.side_effect = snmp_error.PySnmpError
        var_bind = (self.oid, self.value)
        mock_setcmd.return_value = iter([("engine error", None, 0,
                                          [var_bind])])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure,
                          client.set, self.oid, self.value)
        self.assertFalse(mock_setcmd.called)

    @mock.patch.object(pysnmp, 'setCmd', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_context', autospec=True)
    @mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
    def test_set_err_engine(self, mock_auth, mock_context, mock_transport,
                            mock_setcmd):
        var_bind = (self.oid, self.value)
        mock_setcmd.return_value = iter([("engine error", None, 0,
                                          [var_bind])])
        client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
        self.assertRaises(exception.SNMPFailure,
                          client.set, self.oid, self.value)
        self.assertEqual(1, mock_setcmd.call_count)


class SNMPValidateParametersTestCase(db_base.DbTestCase):

    def _get_test_node(self, driver_info):
        return obj_utils.get_test_node(
            self.context,
            driver_info=driver_info)

    def test__parse_driver_info_default(self):
        # Make sure we get back the expected things.
        node = self._get_test_node(INFO_DICT)
        info = snmp._parse_driver_info(node)
        self.assertEqual(INFO_DICT['snmp_driver'], info['driver'])
        self.assertEqual(INFO_DICT['snmp_address'], info['address'])
        self.assertEqual(INFO_DICT['snmp_port'], str(info['port']))
        self.assertEqual(INFO_DICT['snmp_outlet'], str(info['outlet']))
        self.assertEqual(INFO_DICT['snmp_version'], info['version'])
        self.assertEqual(INFO_DICT['snmp_community'], info['read_community'])
        self.assertEqual(INFO_DICT['snmp_community'], info['write_community'])
        self.assertNotIn('user', info)

    def test__parse_driver_info_apc(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc', info['driver'])

    def test__parse_driver_info_apc_masterswitch(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc_masterswitch')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc_masterswitch', info['driver'])

    def test__parse_driver_info_apc_masterswitchplus(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc_masterswitchplus')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc_masterswitchplus', info['driver'])

    def test__parse_driver_info_apc_rackpdu(self):
        # Make sure the APC driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='apc_rackpdu')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('apc_rackpdu', info['driver'])

    def test__parse_driver_info_aten(self):
        # Make sure the Aten driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='aten')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('aten', info['driver'])

    def test__parse_driver_info_cyberpower(self):
        # Make sure the CyberPower driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='cyberpower')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('cyberpower', info['driver'])

    def test__parse_driver_info_eatonpower(self):
        # Make sure the Eaton Power driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='eatonpower')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('eatonpower', info['driver'])

    def test__parse_driver_info_teltronix(self):
        # Make sure the Teltronix driver type is parsed.
        info = db_utils.get_test_snmp_info(snmp_driver='teltronix')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('teltronix', info['driver'])

    def test__parse_driver_info_snmp_v1(self):
        # Make sure SNMPv1 is parsed with a community string.
        info = db_utils.get_test_snmp_info(snmp_version='1',
                                           snmp_community='public')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('1', info['version'])
        self.assertEqual('public', info['read_community'])
        self.assertEqual('public', info['write_community'])

    def test__parse_driver_info_snmp_v2c(self):
        # Make sure SNMPv2c is parsed with a community string.
        info = db_utils.get_test_snmp_info(snmp_version='2c',
                                           snmp_community='private')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('2c', info['version'])
        self.assertEqual('private', info['read_community'])
        self.assertEqual('private', info['write_community'])

    def test__parse_driver_info_read_write_community(self):
        # Make sure separate read/write community names take precedence
        info = db_utils.get_test_snmp_info(snmp_version='1',
                                           snmp_community='impossible',
                                           snmp_community_read='public',
                                           snmp_community_write='private')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('1', info['version'])
        self.assertEqual('public', info['read_community'])
        self.assertEqual('private', info['write_community'])

    def test__parse_driver_info_read_community(self):
        # Make sure a separate read community name takes precedence
        info = db_utils.get_test_snmp_info(snmp_version='1',
                                           snmp_community='foo',
                                           snmp_community_read='bar')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('1', info['version'])
        self.assertEqual('bar', info['read_community'])
        self.assertEqual('foo', info['write_community'])

    def test__parse_driver_info_write_community(self):
        # Make sure a separate write community name takes precedence
        info = db_utils.get_test_snmp_info(snmp_version='1',
                                           snmp_community='foo',
                                           snmp_community_write='bar')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('1', info['version'])
        self.assertEqual('foo', info['read_community'])
        self.assertEqual('bar', info['write_community'])

    def test__parse_driver_info_snmp_v3(self):
        # Make sure SNMPv3 is parsed with a user string.
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('3', info['version'])
        self.assertEqual('pass', info['user'])

    def test__parse_driver_info_snmp_v3_auth_default_proto(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_auth_key='12345678')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('12345678', info['auth_key'])
        self.assertEqual(snmp.snmp_auth_protocols['md5'],
                         info['auth_protocol'])

    def test__parse_driver_info_snmp_v3_auth_key_proto(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_auth_key='12345678',
                                           snmp_auth_protocol='sha')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('12345678', info['auth_key'])
        self.assertEqual(snmp.snmp_auth_protocols['sha'],
                         info['auth_protocol'])

    def test__parse_driver_info_snmp_v3_auth_nokey(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_auth_protocol='sha')
        node = self._get_test_node(info)
        self.assertRaisesRegex(
            exception.InvalidParameterValue,
            'missing.*authentication key',
            snmp._parse_driver_info, node
        )

    def test__parse_driver_info_snmp_v3_auth_badproto(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_auth_key='12345678',
                                           snmp_auth_protocol='whatever')
        node = self._get_test_node(info)
        self.assertRaisesRegex(
            exception.InvalidParameterValue,
            '.*?unknown SNMPv3 authentication protocol.*',
            snmp._parse_driver_info, node
        )

    def test__parse_driver_info_snmp_v3_auth_short_key(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_auth_key='1234567')
        node = self._get_test_node(info)
        self.assertRaisesRegex(
            exception.InvalidParameterValue,
            '.*?short SNMPv3 authentication key.*',
            snmp._parse_driver_info, node
        )

    def test__parse_driver_info_snmp_v3_priv_default_proto(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_auth_key='12345678',
                                           snmp_priv_key='87654321')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('87654321', info['priv_key'])
        self.assertEqual(snmp.snmp_priv_protocols['des'],
                         info['priv_protocol'])

    def test__parse_driver_info_snmp_v3_priv_key_proto(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_auth_key='12345678',
                                           snmp_priv_protocol='3des',
                                           snmp_priv_key='87654321')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('87654321', info['priv_key'])
        self.assertEqual(snmp.snmp_priv_protocols['3des'],
                         info['priv_protocol'])

    def test__parse_driver_info_snmp_v3_priv_nokey(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_priv_protocol='3des')
        node = self._get_test_node(info)
        self.assertRaisesRegex(
            exception.InvalidParameterValue,
            '.*?SNMPv3 privacy requires authentication.*',
            snmp._parse_driver_info, node
        )

    def test__parse_driver_info_snmp_v3_priv_badproto(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_priv_key='12345678',
                                           snmp_priv_protocol='whatever')
        node = self._get_test_node(info)
        self.assertRaisesRegex(
            exception.InvalidParameterValue,
            '.*?unknown SNMPv3 privacy protocol.*',
            snmp._parse_driver_info, node
        )

    def test__parse_driver_info_snmp_v3_priv_short_key(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_priv_key='1234567')
        node = self._get_test_node(info)
        self.assertRaisesRegex(
            exception.InvalidParameterValue,
            '.*?short SNMPv3 privacy key.*',
            snmp._parse_driver_info, node
        )

    def test__parse_driver_info_snmp_v3_compat(self):
        # Make sure SNMPv3 is parsed with a security string.
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_security='pass')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('3', info['version'])
        self.assertEqual('pass', info['user'])

    def test__parse_driver_info_snmp_v3_context_engine_id(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_context_engine_id='whatever')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('whatever', info['context_engine_id'])

    def test__parse_driver_info_snmp_v3_context_name(self):
        info = db_utils.get_test_snmp_info(snmp_version='3',
                                           snmp_user='pass',
                                           snmp_context_name='whatever')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('whatever', info['context_name'])

    def test__parse_driver_info_snmp_port_default(self):
        # Make sure default SNMP UDP port numbers are correct
        info = dict(INFO_DICT)
        del info['snmp_port']
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual(161, info['port'])

    def test__parse_driver_info_snmp_port(self):
        # Make sure non-default SNMP UDP port numbers can be configured
        info = db_utils.get_test_snmp_info(snmp_port='10161')
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual(10161, info['port'])

    def test__parse_driver_info_missing_driver(self):
        # Make sure exception is raised when the driver type is missing.
        info = dict(INFO_DICT)
        del info['snmp_driver']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_invalid_driver(self):
        # Make sure exception is raised when the driver type is invalid.
        info = db_utils.get_test_snmp_info(snmp_driver='invalidpower')
        node = self._get_test_node(info)
        self.assertRaises(exception.InvalidParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_address(self):
        # Make sure exception is raised when the address is missing.
        info = dict(INFO_DICT)
        del info['snmp_address']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_outlet(self):
        # Make sure exception is raised when the outlet is missing.
        info = dict(INFO_DICT)
        del info['snmp_outlet']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_invalid_outlet(self):
        # Make sure exception is raised when the outlet is not an integer.
        info = dict(INFO_DICT)
        info['snmp_outlet'] = 'nn'
        node = self._get_test_node(info)
        self.assertRaises(exception.InvalidParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_default_version(self):
        # Make sure version defaults to 1 when it is missing.
        info = dict(INFO_DICT)
        del info['snmp_version']
        node = self._get_test_node(info)
        info = snmp._parse_driver_info(node)
        self.assertEqual('1', info['version'])
        self.assertEqual(INFO_DICT['snmp_community'], info['read_community'])
        self.assertEqual(INFO_DICT['snmp_community'], info['write_community'])

    def test__parse_driver_info_invalid_version(self):
        # Make sure exception is raised when version is invalid.
        info = db_utils.get_test_snmp_info(snmp_version='42',
                                           snmp_community='public',
                                           snmp_user='pass')
        node = self._get_test_node(info)
        self.assertRaises(exception.InvalidParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_default_version_and_missing_community(self):
        # Make sure exception is raised when version and community are missing.
        info = dict(INFO_DICT)
        del info['snmp_version']
        del info['snmp_community']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_community_snmp_v1(self):
        # Make sure exception is raised when community is missing with SNMPv1.
        info = dict(INFO_DICT)
        del info['snmp_community']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_community_snmp_v2c(self):
        # Make sure exception is raised when community is missing with SNMPv2c.
        info = db_utils.get_test_snmp_info(snmp_version='2c')
        del info['snmp_community']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info,
                          node)

    def test__parse_driver_info_missing_user(self):
        # Make sure exception is raised when user is missing with SNMPv3.
        info = db_utils.get_test_snmp_info(snmp_version='3')
        del info['snmp_user']
        node = self._get_test_node(info)
        self.assertRaises(exception.MissingParameterValue,
                          snmp._parse_driver_info,
                          node)


@mock.patch.object(snmp, '_get_client', autospec=True)
class SNMPDeviceDriverTestCase(db_base.DbTestCase):
    """Tests for the SNMP device-specific driver classes.

    The SNMP client object is mocked to allow various error cases to be
    tested.
    """

    pdus = {
        (1, 3, 6, 1, 4, 1, 318, 1, 1, 4): 'apc_masterswitch',
        # also try longer sysObjectID
        (1, 3, 6, 1, 4, 1, 318, 1, 1, 4, 1, 2, 3, 4): 'apc_masterswitch',
        (1, 3, 6, 1, 4, 1, 318, 1, 1, 6): 'apc_masterswitchplus',
        (1, 3, 6, 1, 4, 1, 318, 1, 1, 12): 'apc_rackpdu',
        (1, 3, 6, 1, 4, 1, 21317): 'aten',
        (1, 3, 6, 1, 4, 1, 3808): 'cyberpower',
        (1, 3, 6, 1, 4, 1, 23620): 'teltronix',
        # TODO(etingof): SNMPDriverEatonPower misses the `.oid` attribute
        # and therefore fails tests
        # (1, 3, 6, 1, 4, 1, 534): 'eatonpower',
    }

    def setUp(self):
        super(SNMPDeviceDriverTestCase, self).setUp()
        self.config(enabled_power_interfaces=['fake', 'snmp'])
        snmp._memoized = {}
        self.node = obj_utils.get_test_node(
            self.context,
            power_interface='snmp',
            driver_info=INFO_DICT)

    def _update_driver_info(self, **kwargs):
        self.node["driver_info"].update(**kwargs)

    def _set_snmp_driver(self, snmp_driver):
        self._update_driver_info(snmp_driver=snmp_driver)

    def _get_snmp_failure(self):
        return exception.SNMPFailure(operation='test-operation',
                                     error='test-error')

    def test_power_state_on(self, mock_get_client):
        # Ensure the power on state is queried correctly
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_ON, pstate)

    def test_power_state_off(self, mock_get_client):
        # Ensure the power off state is queried correctly
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_off
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_OFF, pstate)

    def test_power_state_error(self, mock_get_client):
        # Ensure an unexpected power state returns an error
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = 42
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.ERROR, pstate)

    def test_power_state_snmp_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a query are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_state)
        mock_client.get.assert_called_once_with(driver._snmp_oid())

    def test_power_on(self, mock_get_client):
        # Ensure the device is powered on correctly
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_on)
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_ON, pstate)

    def test_power_off(self, mock_get_client):
        # Ensure the device is powered off correctly
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_off
        pstate = driver.power_off()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        mock_client.get.assert_called_once_with(driver._snmp_oid())
        self.assertEqual(states.POWER_OFF, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_on_delay(self, mock_sleep, mock_get_client):
        # Ensure driver waits for the state to change following a power on
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_on)
        calls = [mock.call(driver._snmp_oid())] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)
    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_off_delay(self, mock_sleep, mock_get_client):
        # Ensure driver waits for the state to change following a power off
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_on,
                                       driver.value_power_off]
        pstate = driver.power_off()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        calls = [mock.call(driver._snmp_oid())] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_OFF, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_on_invalid_state(self, mock_sleep, mock_get_client):
        # Ensure driver retries when querying unexpected states following a
        # power on
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = 42
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_on)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_off_invalid_state(self, mock_sleep, mock_get_client):
        # Ensure driver retries when querying unexpected states following a
        # power off
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = 42
        pstate = driver.power_off()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    def test_power_on_snmp_set_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a power on set
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.set.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_on)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_on)

    def test_power_off_snmp_set_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a power off set
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.set.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_off)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)

    def test_power_on_snmp_get_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a power on get operation
        # are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_on)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_on)
        mock_client.get.assert_called_once_with(driver._snmp_oid())

    def test_power_off_snmp_get_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a power off get
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_off)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        mock_client.get.assert_called_once_with(driver._snmp_oid())

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_on_timeout(self, mock_sleep, mock_get_client):
        # Ensure that a power on consistency poll timeout causes an error
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_off
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_on)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_off_timeout(self, mock_sleep, mock_get_client):
        # Ensure that a power off consistency poll timeout causes an error
        mock_client = mock_get_client.return_value
        CONF.snmp.power_timeout = 5
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_off()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    def test_power_reset(self, mock_get_client):
        # Ensure the device is reset correctly
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_reset_off_delay(self, mock_sleep, mock_get_client):
        # Ensure driver waits for the power off state change following a power
        # reset
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_on,
                                       driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 3
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_reset_on_delay(self, mock_sleep, mock_get_client):
        # Ensure driver waits for the power on state change following a power
        # reset
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 3
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_reset_off_delay_on_delay(self, mock_sleep,
                                            mock_get_client):
        # Ensure driver waits for both state changes following a power reset
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_on,
                                       driver.value_power_off,
                                       driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 4
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_reset_off_invalid_state(self, mock_sleep, mock_get_client):
        # Ensure driver retries when querying unexpected states following a
        # power off during a reset
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = 42
        pstate = driver.power_reset()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_reset_on_invalid_state(self, mock_sleep, mock_get_client):
        # Ensure driver retries when querying unexpected states following a
        # power on during a reset
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        mock_client.get.side_effect = ([driver.value_power_off]
                                       + [42] * attempts)
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * (1 + attempts)
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_reset_off_timeout(self, mock_sleep, mock_get_client):
        # Ensure that a power off consistency poll timeout during a reset
        # causes an error
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_reset()
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        calls = [mock.call(driver._snmp_oid())] * attempts
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    @mock.patch("oslo_utils.eventletutils.EventletEvent.wait", autospec=True)
    def test_power_reset_on_timeout(self, mock_sleep, mock_get_client):
        # Ensure that a power on consistency poll timeout during a reset
        # causes an error
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        attempts = CONF.snmp.power_timeout // driver.retry_interval
        mock_client.get.side_effect = ([driver.value_power_off]
                                       * (1 + attempts))
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * (1 + attempts)
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.ERROR, pstate)

    def test_power_reset_off_snmp_set_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power off set
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.set.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        self.assertFalse(mock_client.get.called)

    def test_power_reset_off_snmp_get_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power off get
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = self._get_snmp_failure()
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        mock_client.set.assert_called_once_with(driver._snmp_oid(),
                                                driver.value_power_off)
        mock_client.get.assert_called_once_with(driver._snmp_oid())

    def test_power_reset_on_snmp_set_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power on set
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.set.side_effect = [None, self._get_snmp_failure()]
        mock_client.get.return_value = driver.value_power_off
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        mock_client.get.assert_called_once_with(driver._snmp_oid())

    @mock.patch.object(time, 'sleep', autospec=True)
    def test_power_reset_delay_option(self, mock_sleep, mock_get_client):
        # Test for 'reboot_delay' config option
        self.config(reboot_delay=5, group='snmp')
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       driver.value_power_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid())] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)
        calls = [mock.call(5)]
        mock_sleep.assert_has_calls(calls)

    def test_power_reset_on_snmp_get_failure(self, mock_get_client):
        # Ensure SNMP failure exceptions raised during a reset power on get
        # operation are propagated
        mock_client = mock_get_client.return_value
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.value_power_off,
                                       self._get_snmp_failure()]
        self.assertRaises(exception.SNMPFailure, driver.power_reset)
        calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
                 mock.call(driver._snmp_oid(), driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid()),
                 mock.call(driver._snmp_oid())]
        mock_client.get.assert_has_calls(calls)

    def _test_simple_device_power_state_on(self, snmp_driver,
                                           mock_get_client):
        # Ensure a simple device driver queries power on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver(snmp_driver)
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.value_power_on
        pstate = driver.power_state()
mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_ON, pstate) def _test_simple_device_power_state_off(self, snmp_driver, mock_get_client): # Ensure a simple device driver queries power off correctly mock_client = mock_get_client.return_value self._set_snmp_driver(snmp_driver) driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_off pstate = driver.power_state() mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_OFF, pstate) def _test_simple_device_power_on(self, snmp_driver, mock_get_client): # Ensure a simple device driver powers on correctly mock_client = mock_get_client.return_value self._set_snmp_driver(snmp_driver) driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_on pstate = driver.power_on() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_on) mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_ON, pstate) def _test_simple_device_power_off(self, snmp_driver, mock_get_client): # Ensure a simple device driver powers off correctly mock_client = mock_get_client.return_value self._set_snmp_driver(snmp_driver) driver = snmp._get_driver(self.node) mock_client.get.return_value = driver.value_power_off pstate = driver.power_off() mock_client.set.assert_called_once_with(driver._snmp_oid(), driver.value_power_off) mock_client.get.assert_called_once_with(driver._snmp_oid()) self.assertEqual(states.POWER_OFF, pstate) def _test_simple_device_power_reset(self, snmp_driver, mock_get_client): # Ensure a simple device driver resets correctly mock_client = mock_get_client.return_value self._set_snmp_driver(snmp_driver) driver = snmp._get_driver(self.node) mock_client.get.side_effect = [driver.value_power_off, driver.value_power_on] pstate = driver.power_reset() calls = [mock.call(driver._snmp_oid(), driver.value_power_off), mock.call(driver._snmp_oid(), 
driver.value_power_on)] mock_client.set.assert_has_calls(calls) calls = [mock.call(driver._snmp_oid())] * 2 mock_client.get.assert_has_calls(calls) self.assertEqual(states.POWER_ON, pstate) def test_apc_snmp_objects(self, mock_get_client): # Ensure the correct SNMP object OIDs and values are used by the APC # driver self._update_driver_info(snmp_driver="apc", snmp_outlet="3") driver = snmp._get_driver(self.node) oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4, 4, 2, 1, 3, 3) self.assertEqual(oid, driver._snmp_oid()) self.assertEqual(1, driver.value_power_on) self.assertEqual(2, driver.value_power_off) def test_apc_power_state_on(self, mock_get_client): self._test_simple_device_power_state_on('apc', mock_get_client) def test_apc_power_state_off(self, mock_get_client): self._test_simple_device_power_state_off('apc', mock_get_client) def test_apc_power_on(self, mock_get_client): self._test_simple_device_power_on('apc', mock_get_client) def test_apc_power_off(self, mock_get_client): self._test_simple_device_power_off('apc', mock_get_client) def test_apc_power_reset(self, mock_get_client): self._test_simple_device_power_reset('apc', mock_get_client) def test_apc_masterswitch_snmp_objects(self, mock_get_client): # Ensure the correct SNMP object OIDs and values are used by the APC # masterswitch driver self._update_driver_info(snmp_driver="apc_masterswitch", snmp_outlet="6") driver = snmp._get_driver(self.node) oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4, 4, 2, 1, 3, 6) self.assertEqual(oid, driver._snmp_oid()) self.assertEqual(1, driver.value_power_on) self.assertEqual(2, driver.value_power_off) def test_apc_masterswitch_power_state_on(self, mock_get_client): self._test_simple_device_power_state_on('apc_masterswitch', mock_get_client) def test_apc_masterswitch_power_state_off(self, mock_get_client): self._test_simple_device_power_state_off('apc_masterswitch', mock_get_client) def test_apc_masterswitch_power_on(self, mock_get_client): self._test_simple_device_power_on('apc_masterswitch', 
mock_get_client) def test_apc_masterswitch_power_off(self, mock_get_client): self._test_simple_device_power_off('apc_masterswitch', mock_get_client) def test_apc_masterswitch_power_reset(self, mock_get_client): self._test_simple_device_power_reset('apc_masterswitch', mock_get_client) def test_apc_masterswitchplus_snmp_objects(self, mock_get_client): # Ensure the correct SNMP object OIDs and values are used by the APC # masterswitchplus driver self._update_driver_info(snmp_driver="apc_masterswitchplus", snmp_outlet="6") driver = snmp._get_driver(self.node) oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 6, 5, 1, 1, 5, 6) self.assertEqual(oid, driver._snmp_oid()) self.assertEqual(1, driver.value_power_on) self.assertEqual(3, driver.value_power_off) def test_apc_masterswitchplus_power_state_on(self, mock_get_client): self._test_simple_device_power_state_on('apc_masterswitchplus', mock_get_client) def test_apc_masterswitchplus_power_state_off(self, mock_get_client): self._test_simple_device_power_state_off('apc_masterswitchplus', mock_get_client) def test_apc_masterswitchplus_power_on(self, mock_get_client): self._test_simple_device_power_on('apc_masterswitchplus', mock_get_client) def test_apc_masterswitchplus_power_off(self, mock_get_client): self._test_simple_device_power_off('apc_masterswitchplus', mock_get_client) def test_apc_masterswitchplus_power_reset(self, mock_get_client): self._test_simple_device_power_reset('apc_masterswitchplus', mock_get_client) def test_apc_rackpdu_snmp_objects(self, mock_get_client): # Ensure the correct SNMP object OIDs and values are used by the APC # rackpdu driver self._update_driver_info(snmp_driver="apc_rackpdu", snmp_outlet="6") driver = snmp._get_driver(self.node) oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 12, 3, 3, 1, 1, 4, 6) self.assertEqual(oid, driver._snmp_oid()) self.assertEqual(1, driver.value_power_on) self.assertEqual(2, driver.value_power_off) def test_apc_rackpdu_power_state_on(self, mock_get_client): 
        self._test_simple_device_power_state_on('apc_rackpdu',
                                                mock_get_client)

    def test_apc_rackpdu_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('apc_rackpdu',
                                                 mock_get_client)

    def test_apc_rackpdu_power_on(self, mock_get_client):
        self._test_simple_device_power_on('apc_rackpdu', mock_get_client)

    def test_apc_rackpdu_power_off(self, mock_get_client):
        self._test_simple_device_power_off('apc_rackpdu', mock_get_client)

    def test_apc_rackpdu_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('apc_rackpdu', mock_get_client)

    def test_aten_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # Aten driver
        self._update_driver_info(snmp_driver="aten", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 21317, 1, 3, 2, 2, 2, 2, 3, 0)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(2, driver.value_power_on)
        self.assertEqual(1, driver.value_power_off)

    def test_aten_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('aten', mock_get_client)

    def test_aten_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('aten', mock_get_client)

    def test_aten_power_on(self, mock_get_client):
        self._test_simple_device_power_on('aten', mock_get_client)

    def test_aten_power_off(self, mock_get_client):
        self._test_simple_device_power_off('aten', mock_get_client)

    def test_aten_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('aten', mock_get_client)

    def test_cyberpower_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # CyberPower driver
        self._update_driver_info(snmp_driver="cyberpower", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 3808, 1, 1, 3, 3, 3, 1, 1, 4, 3)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(1, driver.value_power_on)
        self.assertEqual(2, driver.value_power_off)

    def test_cyberpower_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('cyberpower',
                                                mock_get_client)

    def test_cyberpower_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('cyberpower',
                                                 mock_get_client)

    def test_cyberpower_power_on(self, mock_get_client):
        self._test_simple_device_power_on('cyberpower', mock_get_client)

    def test_cyberpower_power_off(self, mock_get_client):
        self._test_simple_device_power_off('cyberpower', mock_get_client)

    def test_cyberpower_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('cyberpower', mock_get_client)

    def test_teltronix_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # Teltronix driver
        self._update_driver_info(snmp_driver="teltronix", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 23620, 1, 2, 2, 1, 4, 3)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(2, driver.value_power_on)
        self.assertEqual(1, driver.value_power_off)

    def test_teltronix_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('teltronix',
                                                mock_get_client)

    def test_teltronix_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('teltronix',
                                                 mock_get_client)

    def test_teltronix_power_on(self, mock_get_client):
        self._test_simple_device_power_on('teltronix', mock_get_client)

    def test_teltronix_power_off(self, mock_get_client):
        self._test_simple_device_power_off('teltronix', mock_get_client)

    def test_teltronix_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('teltronix', mock_get_client)

    def test_auto_power_state_unknown_pdu(self, mock_get_client):
        mock_client = mock_get_client.return_value
        mock_client.get.return_value = 'unknown'
        self._update_driver_info(snmp_driver="auto")
        self.assertRaises(exception.InvalidParameterValue,
                          snmp._get_driver, self.node)

    def test_auto_power_state_pdu_discovery_failure(self, mock_get_client):
        mock_client = mock_get_client.return_value
        mock_client.get.side_effect = exception.SNMPFailure(operation='get',
                                                            error='')
        self._update_driver_info(snmp_driver="auto")
        self.assertRaises(exception.SNMPFailure, snmp._get_driver, self.node)

    def test_auto_power_state_on(self, mock_get_client):
        for sys_obj_oid, expected_snmp_driver in self.pdus.items():
            mock_client = mock_get_client.return_value
            mock_client.reset_mock()
            mock_client.get.return_value = sys_obj_oid
            snmp._memoized.clear()
            self._update_driver_info(snmp_driver="auto")
            driver = snmp._get_driver(self.node)
            second_node = obj_utils.get_test_node(
                self.context, driver='fake_snmp', driver_info=INFO_DICT)
            second_node["driver_info"].update(
                snmp_driver=expected_snmp_driver)
            second_node_driver = snmp._get_driver(second_node)
            mock_client.get.return_value = second_node_driver.value_power_on
            pstate = driver.power_state()
            mock_client.get.assert_called_with(second_node_driver.oid)
            self.assertEqual(states.POWER_ON, pstate)

    def test_auto_power_state_off(self, mock_get_client):
        for sys_obj_oid, expected_snmp_driver in self.pdus.items():
            mock_client = mock_get_client.return_value
            mock_client.reset_mock()
            mock_client.get.return_value = sys_obj_oid
            snmp._memoized.clear()
            self._update_driver_info(snmp_driver="auto",)
            driver = snmp._get_driver(self.node)
            second_node = obj_utils.get_test_node(
                self.context, driver='fake_snmp', driver_info=INFO_DICT)
            second_node["driver_info"].update(
                snmp_driver=expected_snmp_driver)
            second_node_driver = snmp._get_driver(second_node)
            mock_client.get.return_value = second_node_driver.value_power_off
            pstate = driver.power_state()
            mock_client.get.assert_called_with(second_node_driver.oid)
            self.assertEqual(states.POWER_OFF, pstate)

    def test_auto_power_on(self, mock_get_client):
        for sys_obj_oid, expected_snmp_driver in self.pdus.items():
            mock_client = mock_get_client.return_value
            mock_client.reset_mock()
            mock_client.get.return_value = sys_obj_oid
            snmp._memoized.clear()
            self._update_driver_info(snmp_driver="auto",)
            driver = snmp._get_driver(self.node)
            second_node = obj_utils.get_test_node(
                self.context, driver='fake_snmp', driver_info=INFO_DICT)
            second_node["driver_info"].update(
                snmp_driver=expected_snmp_driver)
            second_node_driver = snmp._get_driver(second_node)
            mock_client.get.return_value = second_node_driver.value_power_on
            pstate = driver.power_on()
            mock_client.set.assert_called_once_with(
                second_node_driver.oid, second_node_driver.value_power_on)
            self.assertEqual(states.POWER_ON, pstate)

    def test_auto_power_off(self, mock_get_client):
        for sys_obj_oid, expected_snmp_driver in self.pdus.items():
            mock_client = mock_get_client.return_value
            mock_client.reset_mock()
            mock_client.get.return_value = sys_obj_oid
            snmp._memoized.clear()
            self._update_driver_info(snmp_driver="auto")
            driver = snmp._get_driver(self.node)
            second_node = obj_utils.get_test_node(
                self.context, driver='fake_snmp', driver_info=INFO_DICT)
            second_node["driver_info"].update(
                snmp_driver=expected_snmp_driver)
            second_node_driver = snmp._get_driver(second_node)
            mock_client.get.return_value = second_node_driver.value_power_off
            pstate = driver.power_off()
            mock_client.set.assert_called_once_with(
                second_node_driver.oid, second_node_driver.value_power_off)
            self.assertEqual(states.POWER_OFF, pstate)

    def test_auto_power_reset(self, mock_get_client):
        for sys_obj_oid, expected_snmp_driver in self.pdus.items():
            mock_client = mock_get_client.return_value
            mock_client.reset_mock()
            mock_client.get.side_effect = [sys_obj_oid, sys_obj_oid]
            snmp._memoized.clear()
            self._update_driver_info(snmp_driver="auto")
            driver = snmp._get_driver(self.node)
            second_node = obj_utils.get_test_node(
                self.context, driver='fake_snmp', driver_info=INFO_DICT)
            second_node["driver_info"].update(
                snmp_driver=expected_snmp_driver)
            second_node_driver = snmp._get_driver(second_node)
            mock_client.get.side_effect = [
                second_node_driver.value_power_off,
                second_node_driver.value_power_on]
            pstate = driver.power_reset()
            calls = [mock.call(second_node_driver.oid,
                               second_node_driver.value_power_off),
                     mock.call(second_node_driver.oid,
                               second_node_driver.value_power_on)]
            mock_client.set.assert_has_calls(calls)
            self.assertEqual(states.POWER_ON, pstate)

    def test_eaton_power_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the Eaton
        # Power driver
        self._update_driver_info(snmp_driver="eatonpower", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        status_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 2, 3)
        poweron_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 3, 3)
        poweroff_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 4, 3)
        self.assertEqual(status_oid, driver._snmp_oid(driver.oid_status))
        self.assertEqual(poweron_oid, driver._snmp_oid(driver.oid_poweron))
        self.assertEqual(poweroff_oid, driver._snmp_oid(driver.oid_poweroff))
        self.assertEqual(0, driver.status_off)
        self.assertEqual(1, driver.status_on)
        self.assertEqual(2, driver.status_pending_off)
        self.assertEqual(3, driver.status_pending_on)

    def test_eaton_power_power_state_on(self, mock_get_client):
        # Ensure the Eaton Power driver queries on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_on
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_ON, pstate)

    def test_eaton_power_power_state_off(self, mock_get_client):
        # Ensure the Eaton Power driver queries off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_off
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_OFF, pstate)

    def test_eaton_power_power_state_pending_off(self, mock_get_client):
        # Ensure the Eaton Power driver queries pending off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_pending_off
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_ON, pstate)

    def test_eaton_power_power_state_pending_on(self, mock_get_client):
        # Ensure the Eaton Power driver queries pending on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_pending_on
        pstate = driver.power_state()
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_OFF, pstate)

    def test_eaton_power_power_on(self, mock_get_client):
        # Ensure the Eaton Power driver powers on correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_on
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(
            driver._snmp_oid(driver.oid_poweron), driver.value_power_on)
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_ON, pstate)

    def test_eaton_power_power_off(self, mock_get_client):
        # Ensure the Eaton Power driver powers off correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.return_value = driver.status_off
        pstate = driver.power_off()
        mock_client.set.assert_called_once_with(
            driver._snmp_oid(driver.oid_poweroff), driver.value_power_off)
        mock_client.get.assert_called_once_with(
            driver._snmp_oid(driver.oid_status))
        self.assertEqual(states.POWER_OFF, pstate)

    def test_eaton_power_power_reset(self, mock_get_client):
        # Ensure the Eaton Power driver resets correctly
        mock_client = mock_get_client.return_value
        self._set_snmp_driver("eatonpower")
        driver = snmp._get_driver(self.node)
        mock_client.get.side_effect = [driver.status_off, driver.status_on]
        pstate = driver.power_reset()
        calls = [mock.call(driver._snmp_oid(driver.oid_poweroff),
                           driver.value_power_off),
                 mock.call(driver._snmp_oid(driver.oid_poweron),
                           driver.value_power_on)]
        mock_client.set.assert_has_calls(calls)
        calls = [mock.call(driver._snmp_oid(driver.oid_status))] * 2
        mock_client.get.assert_has_calls(calls)
        self.assertEqual(states.POWER_ON, pstate)

    def test_baytech_mrp27_power_snmp_objects(self, mock_get_client):
        # Ensure the correct SNMP object OIDs and values are used by the
        # Baytech MRP 27 Power driver
        self._update_driver_info(snmp_driver="baytech_mrp27", snmp_outlet="3")
        driver = snmp._get_driver(self.node)
        oid = (1, 3, 6, 1, 4, 1, 4779, 1, 3, 5, 3, 1, 3, 1, 3)
        self.assertEqual(oid, driver._snmp_oid())
        self.assertEqual(1, driver.value_power_on)
        self.assertEqual(0, driver.value_power_off)

    def test_baytech_mrp27_power_state_on(self, mock_get_client):
        self._test_simple_device_power_state_on('baytech_mrp27',
                                                mock_get_client)

    def test_baytech_mrp27_power_state_off(self, mock_get_client):
        self._test_simple_device_power_state_off('baytech_mrp27',
                                                 mock_get_client)

    def test_baytech_mrp27_power_on(self, mock_get_client):
        self._test_simple_device_power_on('baytech_mrp27', mock_get_client)

    def test_baytech_mrp27_power_off(self, mock_get_client):
        self._test_simple_device_power_off('baytech_mrp27', mock_get_client)

    def test_baytech_mrp27_power_reset(self, mock_get_client):
        self._test_simple_device_power_reset('baytech_mrp27', mock_get_client)

    def test_auto_power_on_cached_driver(self, mock_get_client):
        mock_client = mock_get_client.return_value
        mock_client.reset_mock()
        mock_client.get.return_value = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4)
        self._update_driver_info(snmp_driver="auto")
        for i in range(5):
            snmp._get_driver(self.node)
        mock_client.get.assert_called_once_with(SNMPDriverAuto.SYS_OBJ_OID)

    @mock.patch.object(snmp.SNMPDriverAPCRackPDU, "_snmp_power_on")
    def test_snmp_auto_cache_supports_pdu_replacement(
            self, broken_pdu_power_on_mock, mock_get_client):
        broken_pdu_exception = exception.SNMPFailure(operation=1, error=2)
        broken_pdu_power_on_mock.side_effect = broken_pdu_exception
        broken_pdu_oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 12)
        hashable_node_info = frozenset(
            {('address', '1.2.3.4'), ('port', 161), ('community', 'public'),
             ('version', '1'), ('driver', 'auto')})
        snmp._memoized = {hashable_node_info: broken_pdu_oid}
        self._update_driver_info(snmp_driver="auto")
        mock_client = mock_get_client.return_value
        mock_client.get.return_value = broken_pdu_oid
        driver = snmp._get_driver(self.node)
        mock_client.reset_mock()
        replacement_pdu_oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4)
        mock_client.get.side_effect = [replacement_pdu_oid,
                                       driver.driver.value_power_on]
        pstate = driver.power_on()
        mock_client.set.assert_called_once_with(
            driver.driver.oid, driver.driver.value_power_on)
        self.assertEqual(states.POWER_ON, pstate)


@mock.patch.object(snmp, '_get_driver', autospec=True)
class SNMPDriverTestCase(db_base.DbTestCase):
    """SNMP power driver interface tests.

    In this test case, the SNMP power driver interface is exercised. The
    device-specific SNMP driver is mocked to allow various error cases to
    be tested.
""" def setUp(self): super(SNMPDriverTestCase, self).setUp() self.config(enabled_power_interfaces=['fake', 'snmp']) self.node = obj_utils.create_test_node(self.context, power_interface='snmp', vendor_interface='no-vendor', driver_info=INFO_DICT) def _get_snmp_failure(self): return exception.SNMPFailure(operation='test-operation', error='test-error') def test_get_properties(self, mock_get_driver): expected = snmp.COMMON_PROPERTIES with task_manager.acquire(self.context, self.node.uuid) as task: self.assertEqual(expected, task.driver.get_properties()) def test_get_power_state_on(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: pstate = task.driver.power.get_power_state(task) mock_driver.power_state.assert_called_once_with() self.assertEqual(states.POWER_ON, pstate) def test_get_power_state_off(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid) as task: pstate = task.driver.power.get_power_state(task) mock_driver.power_state.assert_called_once_with() self.assertEqual(states.POWER_OFF, pstate) def test_get_power_state_error(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.return_value = states.ERROR with task_manager.acquire(self.context, self.node.uuid) as task: pstate = task.driver.power.get_power_state(task) mock_driver.power_state.assert_called_once_with() self.assertEqual(states.ERROR, pstate) def test_get_power_state_snmp_failure(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_state.side_effect = self._get_snmp_failure() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.SNMPFailure, task.driver.power.get_power_state, task) mock_driver.power_state.assert_called_once_with() 
@mock.patch.object(snmp.LOG, 'warning') def test_set_power_state_on(self, mock_log, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_on.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.set_power_state(task, states.POWER_ON) mock_driver.power_on.assert_called_once_with() self.assertFalse(mock_log.called) @mock.patch.object(snmp.LOG, 'warning') def test_set_power_state_on_timeout(self, mock_log, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_on.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.set_power_state(task, states.POWER_ON, timeout=222) mock_driver.power_on.assert_called_once_with() self.assertTrue(mock_log.called) def test_set_power_state_off(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_off.return_value = states.POWER_OFF with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.set_power_state(task, states.POWER_OFF) mock_driver.power_off.assert_called_once_with() def test_set_power_state_error(self, mock_get_driver): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.InvalidParameterValue, task.driver.power.set_power_state, task, states.ERROR) def test_set_power_state_on_snmp_failure(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_on.side_effect = self._get_snmp_failure() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.SNMPFailure, task.driver.power.set_power_state, task, states.POWER_ON) mock_driver.power_on.assert_called_once_with() def test_set_power_state_off_snmp_failure(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_off.side_effect = self._get_snmp_failure() with task_manager.acquire(self.context, self.node.uuid) as task: 
self.assertRaises(exception.SNMPFailure, task.driver.power.set_power_state, task, states.POWER_OFF) mock_driver.power_off.assert_called_once_with() def test_set_power_state_on_error(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_on.return_value = states.ERROR with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.PowerStateFailure, task.driver.power.set_power_state, task, states.POWER_ON) mock_driver.power_on.assert_called_once_with() def test_set_power_state_off_error(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_off.return_value = states.ERROR with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.PowerStateFailure, task.driver.power.set_power_state, task, states.POWER_OFF) mock_driver.power_off.assert_called_once_with() @mock.patch.object(snmp.LOG, 'warning') def test_reboot(self, mock_log, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_reset.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.reboot(task) mock_driver.power_reset.assert_called_once_with() self.assertFalse(mock_log.called) @mock.patch.object(snmp.LOG, 'warning') def test_reboot_timeout(self, mock_log, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_reset.return_value = states.POWER_ON with task_manager.acquire(self.context, self.node.uuid) as task: task.driver.power.reboot(task, timeout=1) mock_driver.power_reset.assert_called_once_with() self.assertTrue(mock_log.called) def test_reboot_snmp_failure(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_reset.side_effect = self._get_snmp_failure() with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.SNMPFailure, task.driver.power.reboot, task) mock_driver.power_reset.assert_called_once_with() def 
test_reboot_error(self, mock_get_driver): mock_driver = mock_get_driver.return_value mock_driver.power_reset.return_value = states.ERROR with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.PowerStateFailure, task.driver.power.reboot, task) mock_driver.power_reset.assert_called_once_with() ironic-15.0.0/ironic/tests/unit/drivers/modules/test_inspect_utils.py0000664000175000017500000001025413652514273026144 0ustar zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock
from oslo_utils import importutils

from ironic.common import exception
from ironic.conductor import task_manager
from ironic.drivers.modules import inspect_utils as utils
from ironic import objects
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as obj_utils

sushy = importutils.try_import('sushy')


@mock.patch('time.sleep', lambda sec: None)
class InspectFunctionTestCase(db_base.DbTestCase):

    def setUp(self):
        super(InspectFunctionTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               boot_interface='pxe')

    @mock.patch.object(utils.LOG, 'info', spec_set=True, autospec=True)
    @mock.patch.object(objects, 'Port', spec_set=True, autospec=True)
    def test_create_ports_if_not_exist(self, port_mock, log_mock):
        macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'}
        node_id = self.node.id
        port_dict1 = {'address': 'aa:aa:aa:aa:aa:aa', 'node_id': node_id}
        port_dict2 = {'address': 'bb:bb:bb:bb:bb:bb', 'node_id': node_id}
        port_obj1, port_obj2 = mock.MagicMock(), mock.MagicMock()
        port_mock.side_effect = [port_obj1, port_obj2]
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            utils.create_ports_if_not_exist(task, macs)
            self.assertTrue(log_mock.called)
            expected_calls = [mock.call(task.context, **port_dict1),
                              mock.call(task.context, **port_dict2)]
            port_mock.assert_has_calls(expected_calls, any_order=True)
            port_obj1.create.assert_called_once_with()
            port_obj2.create.assert_called_once_with()

    @mock.patch.object(utils.LOG, 'warning', spec_set=True, autospec=True)
    @mock.patch.object(objects.Port, 'create', spec_set=True, autospec=True)
    def test_create_ports_if_not_exist_mac_exception(self,
                                                     create_mock,
                                                     log_mock):
        create_mock.side_effect = exception.MACAlreadyExists('f')
        macs = {'Port 1': 'aa:aa:aa:aa:aa:aa', 'Port 2': 'bb:bb:bb:bb:bb:bb'}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            utils.create_ports_if_not_exist(task, macs)
        self.assertEqual(2, log_mock.call_count)

    @mock.patch.object(utils.LOG, 'info', spec_set=True, autospec=True)
    @mock.patch.object(objects, 'Port', spec_set=True, autospec=True)
    def test_create_ports_if_not_exist_attempts_port_creation_blindly(
            self, port_mock, log_info_mock):
        macs = {'aa:bb:cc:dd:ee:ff': sushy.STATE_ENABLED,
                'aa:bb:aa:aa:aa:aa': sushy.STATE_DISABLED}
        node_id = self.node.id
        port_dict1 = {'address': 'aa:bb:cc:dd:ee:ff', 'node_id': node_id}
        port_dict2 = {'address': 'aa:bb:aa:aa:aa:aa', 'node_id': node_id}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            utils.create_ports_if_not_exist(
                task, macs, get_mac_address=lambda x: x[0])
            self.assertTrue(log_info_mock.called)
            expected_calls = [mock.call(task.context, **port_dict1),
                              mock.call(task.context, **port_dict2)]
            port_mock.assert_has_calls(expected_calls, any_order=True)
            self.assertEqual(2, port_mock.return_value.create.call_count)
ironic-15.0.0/ironic/tests/unit/drivers/modules/storage/0000775000175000017500000000000013652514443023307 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/storage/test_external.py0000664000175000017500000000557413652514273026556 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP.
# Copyright 2016 IBM Corp
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules.storage import external from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as object_utils class ExternalInterfaceTestCase(db_base.DbTestCase): def setUp(self): super(ExternalInterfaceTestCase, self).setUp() self.config(enabled_storage_interfaces=['noop', 'external'], enabled_boot_interfaces=['fake', 'pxe']) self.interface = external.ExternalStorage() @mock.patch.object(external, 'LOG', autospec=True) def test_validate_fails_with_ipxe_not_enabled(self, mock_log): """Ensure a validation failure is raised when iPXE not enabled.""" self.node = object_utils.create_test_node( self.context, storage_interface='external', boot_interface='pxe') object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='foo.address') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='2345') with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) self.assertTrue(mock_log.error.called) # Prevents creating iPXE boot script @mock.patch('ironic.drivers.modules.ipxe.iPXEBoot.__init__', lambda self: None) def test_should_write_image(self): self.node = object_utils.create_test_node( self.context, storage_interface='external') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: self.assertFalse(self.interface.should_write_image(task)) self.node.instance_info = {'image_source': 'fake-value'} self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertTrue(self.interface.should_write_image(task)) 
ironic-15.0.0/ironic/tests/unit/drivers/modules/storage/test_cinder.py0000664000175000017500000006621113652514273026173 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP. # Copyright 2016 IBM Corp # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import uuidutils from ironic.common import cinder as cinder_common from ironic.common import exception from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.storage import cinder from ironic.drivers import utils as driver_utils from ironic import objects from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as object_utils class CinderInterfaceTestCase(db_base.DbTestCase): def setUp(self): super(CinderInterfaceTestCase, self).setUp() self.config(action_retries=3, action_retry_interval=0, group='cinder') self.config(enabled_boot_interfaces=['fake', 'pxe'], enabled_storage_interfaces=['noop', 'cinder']) self.interface = cinder.CinderStorage() self.node = object_utils.create_test_node(self.context, boot_interface='fake', storage_interface='cinder') @mock.patch.object(cinder, 'LOG', autospec=True) def test__fail_validation(self, mock_log): """Ensure the validate helper logs and raises exceptions.""" fake_error = 'an error!' expected = ("Failed to validate cinder storage interface for node " "%s. an error!" 
                    % self.node.uuid)
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.InvalidParameterValue,
                              self.interface._fail_validation, task,
                              fake_error)
            mock_log.error.assert_called_with(expected)

    @mock.patch.object(cinder, 'LOG', autospec=True)
    def test__generate_connector_raises_with_insufficient_data(self,
                                                               mock_log):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.StorageError,
                              self.interface._generate_connector, task)
        self.assertTrue(mock_log.error.called)

    def test__generate_connector_iscsi(self):
        expected = {
            'initiator': 'iqn.address',
            'ip': 'ip.address',
            'host': self.node.uuid,
            'multipath': True}
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='iqn',
            connector_id='iqn.address')
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='ip',
            connector_id='ip.address', uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, self.node.id) as task:
            return_value = self.interface._generate_connector(task)
        self.assertDictEqual(expected, return_value)

    @mock.patch.object(cinder, 'LOG', autospec=True)
    def test__generate_connector_iscsi_and_unknown(self, mock_log):
        """Validate we return and log with valid and invalid connectors."""
        expected = {
            'initiator': 'iqn.address',
            'host': self.node.uuid,
            'multipath': True}
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='iqn',
            connector_id='iqn.address')
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='foo',
            connector_id='bar', uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, self.node.id) as task:
            return_value = self.interface._generate_connector(task)
        self.assertDictEqual(expected, return_value)
        self.assertEqual(1, mock_log.warning.call_count)

    @mock.patch.object(cinder, 'LOG', autospec=True)
    def test__generate_connector_unknown_raises_exception(self, mock_log):
        """Validate an exception
is raised with only an invalid connector."""
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='foo',
            connector_id='bar')
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(
                exception.StorageError,
                self.interface._generate_connector,
                task)
        self.assertEqual(1, mock_log.warning.call_count)
        self.assertEqual(1, mock_log.error.call_count)

    def test__generate_connector_single_path(self):
        """Validate connector generation without multipath connectors."""
        expected = {
            'initiator': 'iqn.address',
            'host': self.node.uuid}
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='iqn',
            connector_id='iqn.address')
        with task_manager.acquire(self.context, self.node.id) as task:
            return_value = self.interface._generate_connector(task)
        self.assertDictEqual(expected, return_value)

    def test__generate_connector_multiple_fc_wwns(self):
        """Validate handling of WWPNs and WWNNs."""
        expected = {
            'wwpns': ['wwpn1', 'wwpn2'],
            'wwnns': ['wwnn3', 'wwnn4'],
            'host': self.node.uuid,
            'multipath': True}
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='wwpn',
            connector_id='wwpn1', uuid=uuidutils.generate_uuid())
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='wwpn',
            connector_id='wwpn2', uuid=uuidutils.generate_uuid())
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='wwnn',
            connector_id='wwnn3', uuid=uuidutils.generate_uuid())
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='wwnn',
            connector_id='wwnn4', uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, self.node.id) as task:
            return_value = self.interface._generate_connector(task)
        self.assertDictEqual(expected, return_value)

    @mock.patch.object(cinder.CinderStorage, '_fail_validation',
                       autospec=True)
    @mock.patch.object(cinder, 'LOG', autospec=True)
    def test_validate_success_no_settings(self,
mock_log, mock_fail): with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate(task) self.assertFalse(mock_fail.called) self.assertFalse(mock_log.called) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_failure_if_iscsi_boot_no_connectors(self, mock_log): valid_types = ', '.join(cinder.VALID_ISCSI_TYPES) expected_msg = ("Failed to validate cinder storage interface for node " "%(id)s. In order to enable the 'iscsi_boot' " "capability for the node, an associated " "volume_connector type must be valid for " "iSCSI (%(options)s)." % {'id': self.node.uuid, 'options': valid_types}) with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'iscsi_boot', 'True') self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) mock_log.error.assert_called_once_with(expected_msg) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_failure_if_fc_boot_no_connectors(self, mock_log): valid_types = ', '.join(cinder.VALID_FC_TYPES) expected_msg = ("Failed to validate cinder storage interface for node " "%(id)s. In order to enable the 'fibre_channel_boot' " "capability for the node, an associated " "volume_connector type must be valid for " "Fibre Channel (%(options)s)." 
% {'id': self.node.uuid, 'options': valid_types}) with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'fibre_channel_boot', 'True') self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) mock_log.error.assert_called_once_with(expected_msg) @mock.patch.object(cinder.CinderStorage, '_fail_validation', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_success_iscsi_connector(self, mock_log, mock_fail): """Perform validate with only an iSCSI connector in place.""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='iqn.address') with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate(task) self.assertFalse(mock_log.called) self.assertFalse(mock_fail.called) @mock.patch.object(cinder.CinderStorage, '_fail_validation', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_success_fc_connectors(self, mock_log, mock_fail): """Perform validate with only FC connectors in place""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='wwpn', connector_id='wwpn.address', uuid=uuidutils.generate_uuid()) object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='wwnn', connector_id='wwnn.address', uuid=uuidutils.generate_uuid()) with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate(task) self.assertFalse(mock_log.called) self.assertFalse(mock_fail.called) @mock.patch.object(cinder.CinderStorage, '_fail_validation', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_success_connectors_and_boot(self, mock_log, mock_fail): """Perform validate with volume connectors and boot capabilities.""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='iqn.address', uuid=uuidutils.generate_uuid()) 
object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='wwpn', connector_id='wwpn.address', uuid=uuidutils.generate_uuid()) object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='wwnn', connector_id='wwnn.address', uuid=uuidutils.generate_uuid()) with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'fibre_channel_boot', 'True') driver_utils.add_node_capability(task, 'iscsi_boot', 'True') self.interface.validate(task) self.assertFalse(mock_log.called) self.assertFalse(mock_fail.called) @mock.patch.object(cinder.CinderStorage, '_fail_validation', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_success_iscsi_targets(self, mock_log, mock_fail): """Validate success with full iscsi scenario.""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='iqn.address', uuid=uuidutils.generate_uuid()) object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'iscsi_boot', 'True') self.interface.validate(task) self.assertFalse(mock_log.called) self.assertFalse(mock_fail.called) @mock.patch.object(cinder.CinderStorage, '_fail_validation', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_success_fc_targets(self, mock_log, mock_fail): """Validate success with full fc scenario.""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='wwpn', connector_id='fc.address', uuid=uuidutils.generate_uuid()) object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='wwnn', connector_id='fc.address', uuid=uuidutils.generate_uuid()) object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='fibre_channel', boot_index=0, 
volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'fibre_channel_boot', 'True') self.interface.validate(task) self.assertFalse(mock_log.called) self.assertFalse(mock_fail.called) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_fails_with_ipxe_not_enabled(self, mock_log): """Ensure a validation failure is raised when iPXE not enabled.""" self.node.boot_interface = 'pxe' self.node.save() object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='foo.address') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='2345') with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'iscsi_boot', 'True') self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) self.assertTrue(mock_log.error.called) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_fails_when_fc_connectors_unequal(self, mock_log): """Validate should fail with only wwnn FC connector in place""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='wwnn', connector_id='wwnn.address') with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.StorageError, self.interface.validate, task) self.assertTrue(mock_log.error.called) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_fail_on_unknown_volume_types(self, mock_log): """Ensure exception is raised when connector/target do not match.""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='foo.address') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='wetcat', boot_index=0, volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'iscsi_boot', 
'True') self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) self.assertTrue(mock_log.error.called) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_fails_iscsi_conn_fc_target(self, mock_log): """Validate failure of iSCSI connectors with FC target.""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='foo.address') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='fibre_channel', boot_index=0, volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'iscsi_boot', 'True') self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) self.assertTrue(mock_log.error.called) @mock.patch.object(cinder, 'LOG', autospec=True) def test_validate_fails_fc_conn_iscsi_target(self, mock_log): """Validate failure of FC connectors with iSCSI target.""" object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='fibre_channel', connector_id='foo.address') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: driver_utils.add_node_capability(task, 'fibre_channel_boot', 'True') self.assertRaises(exception.InvalidParameterValue, self.interface.validate, task) self.assertTrue(mock_log.error.called) @mock.patch.object(cinder_common, 'detach_volumes', autospec=True) @mock.patch.object(cinder_common, 'attach_volumes', autospec=True) @mock.patch.object(cinder, 'LOG') def test_attach_detach_volumes_no_volumes(self, mock_log, mock_attach, mock_detach): with task_manager.acquire(self.context, self.node.id) as task: self.interface.attach_volumes(task) self.interface.detach_volumes(task) self.assertFalse(mock_attach.called) self.assertFalse(mock_detach.called) self.assertFalse(mock_log.called) 
@mock.patch.object(cinder_common, 'detach_volumes', autospec=True) @mock.patch.object(cinder_common, 'attach_volumes', autospec=True) def test_attach_detach_volumes_fails_without_connectors(self, mock_attach, mock_detach): """Without connectors, attach and detach should fail.""" object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.StorageError, self.interface.attach_volumes, task) self.assertFalse(mock_attach.called) self.assertRaises(exception.StorageError, self.interface.detach_volumes, task) self.assertFalse(mock_detach.called) @mock.patch.object(cinder_common, 'detach_volumes', autospec=True) @mock.patch.object(cinder_common, 'attach_volumes', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) @mock.patch.object(objects.volume_target.VolumeTarget, 'list_by_volume_id') def test_attach_detach_called_with_target_and_connector(self, mock_target_list, mock_log, mock_attach, mock_detach): target_uuid = uuidutils.generate_uuid() test_volume_target = object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=target_uuid) object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='iqn.address') expected_target_properties = { 'volume_id': '1234', 'ironic_volume_uuid': target_uuid, 'new_property': 'foo'} mock_attach.return_value = [{ 'driver_volume_type': 'iscsi', 'data': expected_target_properties}] mock_target_list.return_value = [test_volume_target] with task_manager.acquire(self.context, self.node.id) as task: self.interface.attach_volumes(task) self.assertFalse(mock_log.called) self.assertTrue(mock_attach.called) task.volume_targets[0].refresh() self.assertEqual(expected_target_properties, task.volume_targets[0]['properties']) self.interface.detach_volumes(task) 
self.assertFalse(mock_log.called) self.assertTrue(mock_detach.called) @mock.patch.object(cinder_common, 'detach_volumes', autospec=True) @mock.patch.object(cinder_common, 'attach_volumes', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_attach_volumes_failure(self, mock_log, mock_attach, mock_detach): """Verify detach is called upon attachment failing.""" object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=1, volume_id='5678', uuid=uuidutils.generate_uuid()) object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='iqn.address') mock_attach.side_effect = exception.StorageError('foo') with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.StorageError, self.interface.attach_volumes, task) self.assertTrue(mock_attach.called) self.assertTrue(mock_detach.called) # Replacing the mock to not return an error, should still raise an # exception. mock_attach.reset_mock() mock_detach.reset_mock() @mock.patch.object(cinder_common, 'detach_volumes', autospec=True) @mock.patch.object(cinder_common, 'attach_volumes', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_attach_volumes_failure_no_attach_error(self, mock_log, mock_attach, mock_detach): """Verify that detach is called on volume/connector mismatch. Volume attachment fails if the number of attachments completed does not match the number of configured targets. 
""" object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=1, volume_id='5678', uuid=uuidutils.generate_uuid()) object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='iqn.address') mock_attach.return_value = {'mock_return'} with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.StorageError, self.interface.attach_volumes, task) self.assertTrue(mock_attach.called) self.assertTrue(mock_detach.called) @mock.patch.object(cinder_common, 'detach_volumes', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_detach_volumes_failure(self, mock_log, mock_detach): object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') object_utils.create_test_volume_connector( self.context, node_id=self.node.id, type='iqn', connector_id='iqn.address') with task_manager.acquire(self.context, self.node.id) as task: # The first attempt should succeed. # The second attempt should throw StorageError # Third attempt, should log errors but not raise an exception. 
mock_detach.side_effect = [None,
                                       exception.StorageError('bar'),
                                       None]
            # This should generate 1 mock_detach call and succeed
            self.interface.detach_volumes(task)
            task.node.provision_state = states.DELETED
            # This should generate the other 2 mock_detach calls and warn
            self.interface.detach_volumes(task)
            self.assertEqual(3, mock_detach.call_count)
            self.assertEqual(1, mock_log.warning.call_count)

    @mock.patch.object(cinder_common, 'detach_volumes', autospec=True)
    @mock.patch.object(cinder, 'LOG', autospec=True)
    def test_detach_volumes_failure_raises_exception(self, mock_log,
                                                     mock_detach):
        object_utils.create_test_volume_target(
            self.context, node_id=self.node.id, volume_type='iscsi',
            boot_index=0, volume_id='1234')
        object_utils.create_test_volume_connector(
            self.context, node_id=self.node.id, type='iqn',
            connector_id='iqn.address')
        with task_manager.acquire(self.context, self.node.id) as task:
            mock_detach.side_effect = exception.StorageError('bar')
            self.assertRaises(exception.StorageError,
                              self.interface.detach_volumes,
                              task)
            # Check that we warn every retry except the last one.
            self.assertEqual(3, mock_log.warning.call_count)
            self.assertEqual(1, mock_log.error.call_count)
            # CONF.cinder.action_retries + 1 attempts; retries is set to 3.
            
self.assertEqual(4, mock_detach.call_count) def test_should_write_image(self): object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') with task_manager.acquire(self.context, self.node.id) as task: self.assertFalse(self.interface.should_write_image(task)) self.node.instance_info = {'image_source': 'fake-value'} self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertTrue(self.interface.should_write_image(task)) ironic-15.0.0/ironic/tests/unit/drivers/modules/storage/__init__.py0000664000175000017500000000000013652514273025407 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/drivers/modules/test_image_cache.py0000664000175000017500000010567013652514273025473 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for ImageCache class and helper functions.""" import datetime import os import tempfile import time import uuid import mock from oslo_utils import uuidutils from ironic.common import exception from ironic.common import image_service from ironic.common import images from ironic.common import utils from ironic.drivers.modules import image_cache from ironic.tests import base def touch(filename): open(filename, 'w').close() class TestImageCacheFetch(base.TestCase): def setUp(self): super(TestImageCacheFetch, self).setUp() self.master_dir = tempfile.mkdtemp() self.cache = image_cache.ImageCache(self.master_dir, None, None) self.dest_dir = tempfile.mkdtemp() self.dest_path = os.path.join(self.dest_dir, 'dest') self.uuid = uuidutils.generate_uuid() self.master_path = ''.join([os.path.join(self.master_dir, self.uuid), '.converted']) @mock.patch.object(image_cache, '_fetch', autospec=True) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) def test_fetch_image_no_master_dir(self, mock_download, mock_clean_up, mock_fetch): self.cache.master_dir = None self.cache.fetch_image(self.uuid, self.dest_path) self.assertFalse(mock_download.called) mock_fetch.assert_called_once_with( None, self.uuid, self.dest_path, True) self.assertFalse(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=True, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=True, autospec=True) def test_fetch_image_dest_and_master_uptodate( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, 
None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_link.called) self.assertFalse(mock_download.called) self.assertFalse(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=True, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=True, autospec=True) def test_fetch_image_dest_and_master_uptodate_no_force_raw( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): master_path = os.path.join(self.master_dir, self.uuid) self.cache.fetch_image(self.uuid, self.dest_path, force_raw=False) mock_cache_upd.assert_called_once_with(master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(master_path, self.dest_path) self.assertFalse(mock_link.called) self.assertFalse(mock_download.called) self.assertFalse(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=False, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=True, autospec=True) def test_fetch_image_dest_out_of_date( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) mock_link.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_download.called) self.assertFalse(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', 
autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=True, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=False, autospec=True) def test_fetch_image_master_out_of_date( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_link.called) mock_download.assert_called_once_with( self.cache, self.uuid, self.master_path, self.dest_path, ctx=None, force_raw=True) mock_clean_up.assert_called_once_with(self.cache) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) @mock.patch.object(os, 'link', autospec=True) @mock.patch.object(image_cache, '_delete_dest_path_if_stale', return_value=True, autospec=True) @mock.patch.object(image_cache, '_delete_master_path_if_stale', return_value=False, autospec=True) def test_fetch_image_both_master_and_dest_out_of_date( self, mock_cache_upd, mock_dest_upd, mock_link, mock_download, mock_clean_up): self.cache.fetch_image(self.uuid, self.dest_path) mock_cache_upd.assert_called_once_with(self.master_path, self.uuid, None) mock_dest_upd.assert_called_once_with(self.master_path, self.dest_path) self.assertFalse(mock_link.called) mock_download.assert_called_once_with( self.cache, self.uuid, self.master_path, self.dest_path, ctx=None, force_raw=True) mock_clean_up.assert_called_once_with(self.cache) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) def test_fetch_image_not_uuid(self, mock_download, mock_clean_up): href 
= u'http://abc.com/ubuntu.qcow2' href_converted = str(uuid.uuid5(uuid.NAMESPACE_URL, href)) master_path = ''.join([os.path.join(self.master_dir, href_converted), '.converted']) self.cache.fetch_image(href, self.dest_path) mock_download.assert_called_once_with( self.cache, href, master_path, self.dest_path, ctx=None, force_raw=True) self.assertTrue(mock_clean_up.called) @mock.patch.object(image_cache.ImageCache, 'clean_up', autospec=True) @mock.patch.object(image_cache.ImageCache, '_download_image', autospec=True) def test_fetch_image_not_uuid_no_force_raw(self, mock_download, mock_clean_up): href = u'http://abc.com/ubuntu.qcow2' href_converted = str(uuid.uuid5(uuid.NAMESPACE_URL, href)) master_path = os.path.join(self.master_dir, href_converted) self.cache.fetch_image(href, self.dest_path, force_raw=False) mock_download.assert_called_once_with( self.cache, href, master_path, self.dest_path, ctx=None, force_raw=False) self.assertTrue(mock_clean_up.called) @mock.patch.object(image_cache, '_fetch', autospec=True) def test__download_image(self, mock_fetch): def _fake_fetch(ctx, uuid, tmp_path, *args): self.assertEqual(self.uuid, uuid) self.assertNotEqual(self.dest_path, tmp_path) self.assertNotEqual(os.path.dirname(tmp_path), self.master_dir) with open(tmp_path, 'w') as fp: fp.write("TEST") mock_fetch.side_effect = _fake_fetch self.cache._download_image(self.uuid, self.master_path, self.dest_path) self.assertTrue(os.path.isfile(self.dest_path)) self.assertTrue(os.path.isfile(self.master_path)) self.assertEqual(os.stat(self.dest_path).st_ino, os.stat(self.master_path).st_ino) with open(self.dest_path) as fp: self.assertEqual("TEST", fp.read()) @mock.patch.object(image_cache, '_fetch', autospec=True) @mock.patch.object(image_cache, 'LOG', autospec=True) @mock.patch.object(os, 'link', autospec=True) def test__download_image_linkfail(self, mock_link, mock_log, mock_fetch): mock_link.side_effect = [None, OSError] self.assertRaises(exception.ImageDownloadFailed, 
self.cache._download_image, self.uuid, self.master_path, self.dest_path) self.assertTrue(mock_fetch.called) self.assertEqual(2, mock_link.call_count) self.assertTrue(mock_log.error.called) @mock.patch.object(os, 'unlink', autospec=True) class TestUpdateImages(base.TestCase): def setUp(self): super(TestUpdateImages, self).setUp() self.master_dir = tempfile.mkdtemp() self.dest_dir = tempfile.mkdtemp() self.dest_path = os.path.join(self.dest_dir, 'dest') self.uuid = uuidutils.generate_uuid() self.master_path = os.path.join(self.master_dir, self.uuid) @mock.patch.object(os.path, 'exists', return_value=False, autospec=True) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_glance_img_not_cached( self, mock_gis, mock_path_exists, mock_unlink): res = image_cache._delete_master_path_if_stale(self.master_path, self.uuid, None) self.assertFalse(mock_gis.called) self.assertFalse(mock_unlink.called) mock_path_exists.assert_called_once_with(self.master_path) self.assertFalse(res) @mock.patch.object(os.path, 'exists', return_value=True, autospec=True) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_glance_img( self, mock_gis, mock_path_exists, mock_unlink): res = image_cache._delete_master_path_if_stale(self.master_path, self.uuid, None) self.assertFalse(mock_gis.called) self.assertFalse(mock_unlink.called) mock_path_exists.assert_called_once_with(self.master_path) self.assertTrue(res) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_no_master(self, mock_gis, mock_unlink): res = image_cache._delete_master_path_if_stale(self.master_path, 'http://11', None) self.assertFalse(mock_gis.called) self.assertFalse(mock_unlink.called) self.assertFalse(res) @mock.patch.object(image_service, 'get_image_service', autospec=True) def test__delete_master_path_if_stale_no_updated_at(self, mock_gis, mock_unlink): 
        touch(self.master_path)
        href = 'http://awesomefreeimages.al/img111'
        mock_gis.return_value.show.return_value = {}
        res = image_cache._delete_master_path_if_stale(self.master_path,
                                                       href, None)
        mock_gis.assert_called_once_with(href, context=None)
        self.assertFalse(mock_unlink.called)
        self.assertTrue(res)

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test__delete_master_path_if_stale_master_up_to_date(self, mock_gis,
                                                            mock_unlink):
        touch(self.master_path)
        href = 'http://awesomefreeimages.al/img999'
        mock_gis.return_value.show.return_value = {
            'updated_at': datetime.datetime(1999, 11, 15, 8, 12, 31)
        }
        res = image_cache._delete_master_path_if_stale(self.master_path,
                                                       href, None)
        mock_gis.assert_called_once_with(href, context=None)
        self.assertFalse(mock_unlink.called)
        self.assertTrue(res)

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test__delete_master_path_if_stale_master_same_time(self, mock_gis,
                                                           mock_unlink):
        # When the times are identical, the cached file must not be deleted
        touch(self.master_path)
        mtime = utils.unix_file_modification_datetime(self.master_path)
        href = 'http://awesomefreeimages.al/img999'
        mock_gis.return_value.show.return_value = {
            'updated_at': mtime
        }
        res = image_cache._delete_master_path_if_stale(self.master_path,
                                                       href, None)
        mock_gis.assert_called_once_with(href, context=None)
        self.assertFalse(mock_unlink.called)
        self.assertTrue(res)

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test__delete_master_path_if_stale_out_of_date(self, mock_gis,
                                                      mock_unlink):
        touch(self.master_path)
        href = 'http://awesomefreeimages.al/img999'
        mock_gis.return_value.show.return_value = {
            'updated_at': datetime.datetime(
                (datetime.datetime.utcnow().year + 1), 11, 15, 8, 12, 31)
        }
        res = image_cache._delete_master_path_if_stale(self.master_path,
                                                       href, None)
        mock_gis.assert_called_once_with(href, context=None)
        mock_unlink.assert_called_once_with(self.master_path)
        self.assertFalse(res)

    def test__delete_dest_path_if_stale_no_dest(self, mock_unlink):
        res = image_cache._delete_dest_path_if_stale(self.master_path,
                                                     self.dest_path)
        self.assertFalse(mock_unlink.called)
        self.assertFalse(res)

    def test__delete_dest_path_if_stale_no_master(self, mock_unlink):
        touch(self.dest_path)
        res = image_cache._delete_dest_path_if_stale(self.master_path,
                                                     self.dest_path)
        mock_unlink.assert_called_once_with(self.dest_path)
        self.assertFalse(res)

    def test__delete_dest_path_if_stale_out_of_date(self, mock_unlink):
        touch(self.master_path)
        touch(self.dest_path)
        res = image_cache._delete_dest_path_if_stale(self.master_path,
                                                     self.dest_path)
        mock_unlink.assert_called_once_with(self.dest_path)
        self.assertFalse(res)

    def test__delete_dest_path_if_stale_up_to_date(self, mock_unlink):
        touch(self.master_path)
        os.link(self.master_path, self.dest_path)
        res = image_cache._delete_dest_path_if_stale(self.master_path,
                                                     self.dest_path)
        self.assertFalse(mock_unlink.called)
        self.assertTrue(res)


class TestImageCacheCleanUp(base.TestCase):

    def setUp(self):
        super(TestImageCacheCleanUp, self).setUp()
        self.master_dir = tempfile.mkdtemp()
        self.cache = image_cache.ImageCache(self.master_dir,
                                            cache_size=10,
                                            cache_ttl=600)

    @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size',
                       autospec=True)
    def test_clean_up_old_deleted(self, mock_clean_size):
        mock_clean_size.return_value = None
        files = [os.path.join(self.master_dir, str(i)) for i in range(2)]
        for filename in files:
            touch(filename)
        # NOTE(dtantsur): Can't alter ctime, have to set mtime to the future
        new_current_time = time.time() + 900
        os.utime(files[0], (new_current_time - 100, new_current_time - 100))
        with mock.patch.object(time, 'time', lambda: new_current_time):
            self.cache.clean_up()

        mock_clean_size.assert_called_once_with(self.cache, mock.ANY, None)
        survived = mock_clean_size.call_args[0][1]
        self.assertEqual(1, len(survived))
        self.assertEqual(files[0], survived[0][0])
        # NOTE(dtantsur): do not compare milliseconds
        self.assertEqual(int(new_current_time - 100), int(survived[0][1]))
        self.assertEqual(int(new_current_time - 100),
                         int(survived[0][2].st_mtime))

    @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size',
                       autospec=True)
    def test_clean_up_old_with_amount(self, mock_clean_size):
        files = [os.path.join(self.master_dir, str(i)) for i in range(2)]
        for filename in files:
            with open(filename, 'wb') as f:
                f.write(b'X')
        new_current_time = time.time() + 900
        with mock.patch.object(time, 'time', lambda: new_current_time):
            self.cache.clean_up(amount=1)

        self.assertFalse(mock_clean_size.called)
        # Exactly one file is expected to be deleted
        self.assertTrue(any(os.path.exists(f) for f in files))
        self.assertFalse(all(os.path.exists(f) for f in files))

    @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size',
                       autospec=True)
    def test_clean_up_files_with_links_untouched(self, mock_clean_size):
        mock_clean_size.return_value = None
        files = [os.path.join(self.master_dir, str(i)) for i in range(2)]
        for filename in files:
            touch(filename)
            os.link(filename, filename + 'copy')
        new_current_time = time.time() + 900
        with mock.patch.object(time, 'time', lambda: new_current_time):
            self.cache.clean_up()

        for filename in files:
            self.assertTrue(os.path.exists(filename))
        mock_clean_size.assert_called_once_with(mock.ANY, [], None)

    @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old',
                       autospec=True)
    def test_clean_up_ensure_cache_size(self, mock_clean_ttl):
        mock_clean_ttl.side_effect = lambda *xx: xx[1:]
        # NOTE(dtantsur): Cache size in test is 10 bytes, we create 6 files
        # with 3 bytes each and expect 3 to be deleted
        files = [os.path.join(self.master_dir, str(i)) for i in range(6)]
        for filename in files:
            with open(filename, 'w') as fp:
                fp.write('123')
        # NOTE(dtantsur): Make 3 files 'newer' to check that
        # old ones are deleted first
        new_current_time = time.time() + 100
        for filename in files[:3]:
            os.utime(filename, (new_current_time, new_current_time))
        with mock.patch.object(time, 'time', lambda: new_current_time):
            self.cache.clean_up()

        for filename in files[:3]:
            self.assertTrue(os.path.exists(filename))
        for filename in files[3:]:
            self.assertFalse(os.path.exists(filename))
        mock_clean_ttl.assert_called_once_with(mock.ANY, mock.ANY, None)

    @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old',
                       autospec=True)
    def test_clean_up_ensure_cache_size_with_amount(self, mock_clean_ttl):
        mock_clean_ttl.side_effect = lambda *xx: xx[1:]
        # NOTE(dtantsur): Cache size in test is 10 bytes, we create 6 files
        # with 3 bytes each and set amount to be 15, 5 files are to be deleted
        files = [os.path.join(self.master_dir, str(i)) for i in range(6)]
        for filename in files:
            with open(filename, 'w') as fp:
                fp.write('123')
        # NOTE(dtantsur): Make 1 file 'newer' to check that
        # old ones are deleted first
        new_current_time = time.time() + 100
        os.utime(files[0], (new_current_time, new_current_time))
        with mock.patch.object(time, 'time', lambda: new_current_time):
            self.cache.clean_up(amount=15)

        self.assertTrue(os.path.exists(files[0]))
        for filename in files[5:]:
            self.assertFalse(os.path.exists(filename))
        mock_clean_ttl.assert_called_once_with(mock.ANY, mock.ANY, 15)

    @mock.patch.object(image_cache.LOG, 'info', autospec=True)
    @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old',
                       autospec=True)
    def test_clean_up_cache_still_large(self, mock_clean_ttl, mock_log):
        mock_clean_ttl.side_effect = lambda *xx: xx[1:]
        # NOTE(dtantsur): Cache size in test is 10 bytes, we create 2 files
        # that cannot be deleted and expect this to be logged
        files = [os.path.join(self.master_dir, str(i)) for i in range(2)]
        for filename in files:
            with open(filename, 'w') as fp:
                fp.write('123')
            os.link(filename, filename + 'copy')
        self.cache.clean_up()

        for filename in files:
            self.assertTrue(os.path.exists(filename))
        self.assertTrue(mock_log.called)
        mock_clean_ttl.assert_called_once_with(mock.ANY, mock.ANY, None)

    @mock.patch.object(utils, 'rmtree_without_raise', autospec=True)
    @mock.patch.object(image_cache, '_fetch', autospec=True)
    def test_temp_images_not_cleaned(self, mock_fetch, mock_rmtree):
        def _fake_fetch(ctx, uuid, tmp_path, *args):
            with open(tmp_path, 'w') as fp:
                fp.write("TEST" * 10)
            # assume cleanup from another thread at this moment
            self.cache.clean_up()
            self.assertTrue(os.path.exists(tmp_path))

        mock_fetch.side_effect = _fake_fetch
        master_path = os.path.join(self.master_dir, 'uuid')
        dest_path = os.path.join(tempfile.mkdtemp(), 'dest')
        self.cache._download_image('uuid', master_path, dest_path)
        self.assertTrue(mock_rmtree.called)

    @mock.patch.object(utils, 'rmtree_without_raise', autospec=True)
    @mock.patch.object(image_cache, '_fetch', autospec=True)
    def test_temp_dir_exception(self, mock_fetch, mock_rmtree):
        mock_fetch.side_effect = exception.IronicException
        self.assertRaises(exception.IronicException,
                          self.cache._download_image,
                          'uuid', 'fake', 'fake')
        self.assertTrue(mock_rmtree.called)

    @mock.patch.object(image_cache.LOG, 'warning', autospec=True)
    @mock.patch.object(image_cache.ImageCache, '_clean_up_too_old',
                       autospec=True)
    @mock.patch.object(image_cache.ImageCache, '_clean_up_ensure_cache_size',
                       autospec=True)
    def test_clean_up_amount_not_satisfied(self, mock_clean_size,
                                           mock_clean_ttl, mock_log):
        mock_clean_ttl.side_effect = lambda *xx: xx[1:]
        mock_clean_size.side_effect = lambda self, listing, amount: amount
        self.cache.clean_up(amount=15)
        self.assertTrue(mock_log.called)

    def test_cleanup_ordering(self):

        class ParentCache(image_cache.ImageCache):
            def __init__(self):
                super(ParentCache, self).__init__('a', 1, 1, None)

        @image_cache.cleanup(priority=10000)
        class Cache1(ParentCache):
            pass

        @image_cache.cleanup(priority=20000)
        class Cache2(ParentCache):
            pass

        @image_cache.cleanup(priority=10000)
        class Cache3(ParentCache):
            pass

        self.assertEqual(image_cache._cache_cleanup_list[0][1], Cache2)
        # The order of caches with the same priority is not deterministic.
item_possibilities = [Cache1, Cache3] second_item_actual = image_cache._cache_cleanup_list[1][1] self.assertIn(second_item_actual, item_possibilities) item_possibilities.remove(second_item_actual) third_item_actual = image_cache._cache_cleanup_list[2][1] self.assertEqual(item_possibilities[0], third_item_actual) @mock.patch.object(image_cache, '_cache_cleanup_list', autospec=True) @mock.patch.object(os, 'statvfs', autospec=True) @mock.patch.object(image_service, 'get_image_service', autospec=True) class CleanupImageCacheTestCase(base.TestCase): def setUp(self): super(CleanupImageCacheTestCase, self).setUp() self.mock_first_cache = mock.MagicMock(spec_set=[]) self.mock_second_cache = mock.MagicMock(spec_set=[]) self.cache_cleanup_list = [(50, self.mock_first_cache), (20, self.mock_second_cache)] self.mock_first_cache.return_value.master_dir = 'first_cache_dir' self.mock_second_cache.return_value.master_dir = 'second_cache_dir' def test_no_clean_up(self, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Enough space found - no clean up mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.return_value = mock.MagicMock( spec_set=['f_frsize', 'f_bavail'], f_frsize=1, f_bavail=1024) cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_once_with('master_dir') self.assertFalse(self.mock_first_cache.return_value.clean_up.called) self.assertFalse(self.mock_second_cache.return_value.clean_up.called) mock_statvfs.assert_called_once_with('master_dir') @mock.patch.object(os, 'stat', autospec=True) def test_one_clean_up(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space, first cache clean up is enough mock_stat.return_value.st_dev = 1 mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) 
mock_statvfs.side_effect = [ mock.MagicMock(f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=1024, spec_set=['f_frsize', 'f_bavail']) ] cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(2, mock_statvfs.call_count) self.mock_first_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.assertFalse(self.mock_second_cache.return_value.clean_up.called) # Since we are using generator expression in clean_up_caches, stat on # second cache wouldn't be called if we got enough free space on # cleaning up the first cache. mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) @mock.patch.object(os, 'stat', autospec=True) def test_clean_up_another_fs(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space, need to cleanup second cache mock_stat.side_effect = [mock.MagicMock(st_dev=1, spec_set=['st_dev']), mock.MagicMock(st_dev=2, spec_set=['st_dev']), mock.MagicMock(st_dev=1, spec_set=['st_dev'])] mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.side_effect = [ mock.MagicMock(f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=1024, spec_set=['f_frsize', 'f_bavail']) ] cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(2, mock_statvfs.call_count) 
self.mock_second_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.assertFalse(self.mock_first_cache.return_value.clean_up.called) # Since first cache exists on a different partition, it wouldn't be # considered for cleanup. mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir'), mock.call('second_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) @mock.patch.object(os, 'stat', autospec=True) def test_both_clean_up(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space, clean up of both caches required mock_stat.return_value.st_dev = 1 mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.side_effect = [ mock.MagicMock(f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=2, spec_set=['f_frsize', 'f_bavail']), mock.MagicMock(f_frsize=1, f_bavail=1024, spec_set=['f_frsize', 'f_bavail']) ] cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list image_cache.clean_up_caches(None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(3, mock_statvfs.call_count) self.mock_first_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.mock_second_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 2)) mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir'), mock.call('second_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) @mock.patch.object(os, 'stat', 
autospec=True) def test_clean_up_fail(self, mock_stat, mock_image_service, mock_statvfs, cache_cleanup_list_mock): # Not enough space even after cleaning both caches - failure mock_stat.return_value.st_dev = 1 mock_show = mock_image_service.return_value.show mock_show.return_value = dict(size=42) mock_statvfs.return_value = mock.MagicMock( f_frsize=1, f_bavail=1, spec_set=['f_frsize', 'f_bavail']) cache_cleanup_list_mock.__iter__.return_value = self.cache_cleanup_list self.assertRaises(exception.InsufficientDiskSpace, image_cache.clean_up_caches, None, 'master_dir', [('uuid', 'path')]) mock_show.assert_called_once_with('uuid') mock_statvfs.assert_called_with('master_dir') self.assertEqual(3, mock_statvfs.call_count) self.mock_first_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) self.mock_second_cache.return_value.clean_up.assert_called_once_with( amount=(42 - 1)) mock_stat_calls_expected = [mock.call('master_dir'), mock.call('first_cache_dir'), mock.call('second_cache_dir')] mock_statvfs_calls_expected = [mock.call('master_dir'), mock.call('master_dir'), mock.call('master_dir')] self.assertEqual(mock_stat_calls_expected, mock_stat.mock_calls) self.assertEqual(mock_statvfs_calls_expected, mock_statvfs.mock_calls) class TestFetchCleanup(base.TestCase): @mock.patch.object(images, 'converted_size', autospec=True) @mock.patch.object(images, 'fetch', autospec=True) @mock.patch.object(images, 'image_to_raw', autospec=True) @mock.patch.object(image_cache, '_clean_up_caches', autospec=True) def test__fetch(self, mock_clean, mock_raw, mock_fetch, mock_size): mock_size.return_value = 100 image_cache._fetch('fake', 'fake-uuid', '/foo/bar', force_raw=True) mock_fetch.assert_called_once_with('fake', 'fake-uuid', '/foo/bar.part', force_raw=False) mock_clean.assert_called_once_with('/foo', 100) mock_raw.assert_called_once_with('fake-uuid', '/foo/bar', '/foo/bar.part') 
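The cache tests above all revolve around two OS-level facts: `_download_image` publishes images via `os.link`, so staleness can be detected by comparing inodes, and `clean_up` must skip masters whose link count shows a node still references them. A minimal standalone sketch of this behavior (illustrative paths only, not Ironic code):

```python
import os
import tempfile
import uuid

# Standalone sketch (not Ironic code) of the OS-level behavior the image
# cache tests above rely on. All paths here are illustrative.
master_dir = tempfile.mkdtemp()
dest_dir = tempfile.mkdtemp()
master = os.path.join(master_dir, 'image')
dest = os.path.join(dest_dir, 'dest')

with open(master, 'w') as fp:
    fp.write('IMAGE')
os.link(master, dest)  # "instant copy": both names share one inode

# Up to date: dest and the cached master are literally the same file.
same = os.stat(master).st_ino == os.stat(dest).st_ino
print(same)  # True

# Re-downloading replaces the master and breaks the link; dest is now
# stale -- this is what inode comparison in the stale-path checks detects.
os.unlink(master)
with open(master, 'w') as fp:
    fp.write('NEW IMAGE')
stale = os.stat(master).st_ino != os.stat(dest).st_ino
print(stale)  # True

# Cleanup must not evict in-use files: a link count above 1 would mean some
# node's dest path still references the inode.
print(os.stat(dest).st_nlink)  # 1 -- only dest itself remains

# Non-UUID hrefs get a deterministic cache key, as the
# test_fetch_image_not_uuid tests exercise via uuid.uuid5 over the URL
# namespace.
key = str(uuid.uuid5(uuid.NAMESPACE_URL, 'http://abc.com/ubuntu.qcow2'))
```

Because both temporary directories live on the same filesystem, `os.link` succeeds; the same constraint is why `clean_up_caches` in the code under test compares `st_dev` before counting a cache toward reclaimable space.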
ironic-15.0.0/ironic/tests/unit/drivers/modules/test_agent_base.py
# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time
import types

import mock
from oslo_config import cfg
from testtools import matchers

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common import image_service
from ironic.common import states
from ironic.conductor import steps as conductor_steps
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers import base as drivers_base
from ironic.drivers.modules import agent
from ironic.drivers.modules import agent_base
from ironic.drivers.modules import agent_client
from ironic.drivers.modules import boot_mode_utils
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules import fake
from ironic.drivers.modules import pxe
from ironic.drivers import utils as driver_utils
from ironic import objects
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as object_utils

CONF = cfg.CONF

INSTANCE_INFO = db_utils.get_test_agent_instance_info()
DRIVER_INFO = db_utils.get_test_agent_driver_info()
DRIVER_INTERNAL_INFO = db_utils.get_test_agent_driver_internal_info()


class AgentDeployMixinBaseTest(db_base.DbTestCase):

    def setUp(self):
super(AgentDeployMixinBaseTest, self).setUp() for iface in drivers_base.ALL_INTERFACES: impl = 'fake' if iface == 'deploy': impl = 'direct' if iface == 'boot': impl = 'pxe' if iface == 'rescue': impl = 'agent' if iface == 'network': continue config_kwarg = {'enabled_%s_interfaces' % iface: [impl], 'default_%s_interface' % iface: impl} self.config(**config_kwarg) self.config(enabled_hardware_types=['fake-hardware']) self.deploy = agent_base.AgentDeployMixin() n = { 'driver': 'fake-hardware', 'instance_info': INSTANCE_INFO, 'driver_info': DRIVER_INFO, 'driver_internal_info': DRIVER_INTERNAL_INFO, 'network_interface': 'noop' } self.node = object_utils.create_test_node(self.context, **n) class HeartbeatMixinTest(AgentDeployMixinBaseTest): def setUp(self): super(HeartbeatMixinTest, self).setUp() self.deploy = agent_base.HeartbeatMixin() @mock.patch.object(agent_base.HeartbeatMixin, 'refresh_steps', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) def test_heartbeat_continue_deploy(self, rti_mock, cd_mock, deploy_started_mock, in_deploy_mock, refresh_steps_mock): in_deploy_mock.return_value = True deploy_started_mock.return_value = False self.node.provision_state = states.DEPLOYWAIT self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, 'url', '3.2.0') self.assertFalse(task.shared) self.assertEqual( 'url', task.node.driver_internal_info['agent_url']) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) cd_mock.assert_called_once_with(self.deploy, task) self.assertFalse(rti_mock.called) refresh_steps_mock.assert_called_once_with(self.deploy, task, 'deploy') 
@mock.patch.object(agent_base.HeartbeatMixin, 'refresh_steps', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) def test_heartbeat_continue_deploy_second_run(self, rti_mock, cd_mock, deploy_started_mock, in_deploy_mock, refresh_steps_mock): in_deploy_mock.return_value = True deploy_started_mock.return_value = False dii = self.node.driver_internal_info dii['agent_cached_deploy_steps'] = ['step'] self.node.driver_internal_info = dii self.node.provision_state = states.DEPLOYWAIT self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, 'url', '3.2.0') self.assertFalse(task.shared) self.assertEqual( 'url', task.node.driver_internal_info['agent_url']) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) cd_mock.assert_called_once_with(self.deploy, task) self.assertFalse(rti_mock.called) self.assertFalse(refresh_steps_mock.called) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) def test_heartbeat_reboot_to_instance(self, rti_mock, cd_mock, deploy_is_done_mock, deploy_started_mock, in_deploy_mock): in_deploy_mock.return_value = True deploy_started_mock.return_value = True deploy_is_done_mock.return_value = True self.node.provision_state = states.DEPLOYWAIT self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) 
as task: self.deploy.heartbeat(task, 'url', '3.2.0') self.assertFalse(task.shared) self.assertEqual( 'url', task.node.driver_internal_info['agent_url']) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) self.assertFalse(cd_mock.called) rti_mock.assert_called_once_with(self.deploy, task) @mock.patch.object(agent_base.HeartbeatMixin, 'process_next_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) def test_heartbeat_not_in_core_deploy_step(self, rti_mock, cd_mock, deploy_is_done_mock, deploy_started_mock, in_deploy_mock, process_next_mock): # Check that heartbeats do not trigger deployment actions when not in # the deploy.deploy step. 
in_deploy_mock.return_value = False self.node.provision_state = states.DEPLOYWAIT self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, 'url', '3.2.0') self.assertFalse(task.shared) self.assertEqual( 'url', task.node.driver_internal_info['agent_url']) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) self.assertFalse(deploy_started_mock.called) self.assertFalse(deploy_is_done_mock.called) self.assertFalse(cd_mock.called) self.assertFalse(rti_mock.called) process_next_mock.assert_called_once_with(self.deploy, task, 'deploy') @mock.patch.object(agent_base.HeartbeatMixin, 'refresh_steps', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'process_next_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) def test_heartbeat_not_in_core_deploy_step_refresh(self, rti_mock, cd_mock, deploy_is_done_mock, deploy_started_mock, in_deploy_mock, process_next_mock, refresh_steps_mock): # Check loading in-band deploy steps. 
in_deploy_mock.return_value = False self.node.provision_state = states.DEPLOYWAIT info = self.node.driver_internal_info info.pop('agent_cached_deploy_steps', None) self.node.driver_internal_info = info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, 'url', '3.2.0') self.assertFalse(task.shared) self.assertEqual( 'url', task.node.driver_internal_info['agent_url']) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) self.assertFalse(deploy_started_mock.called) self.assertFalse(deploy_is_done_mock.called) self.assertFalse(cd_mock.called) self.assertFalse(rti_mock.called) refresh_steps_mock.assert_called_once_with(self.deploy, task, 'deploy') process_next_mock.assert_called_once_with(self.deploy, task, 'deploy') @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) def test_heartbeat_not_in_core_deploy_step_polling(self, rti_mock, cd_mock, deploy_is_done_mock, deploy_started_mock, in_deploy_mock, in_resume_deploy_mock): # Check that heartbeats do not trigger deployment actions when not in # the deploy.deploy step. 
in_deploy_mock.return_value = False self.node.provision_state = states.DEPLOYWAIT info = self.node.driver_internal_info info['agent_cached_deploy_steps'] = ['step1'] info['deployment_polling'] = True self.node.driver_internal_info = info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, 'url', '3.2.0') self.assertFalse(task.shared) self.assertEqual( 'url', task.node.driver_internal_info['agent_url']) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) self.assertFalse(deploy_started_mock.called) self.assertFalse(deploy_is_done_mock.called) self.assertFalse(cd_mock.called) self.assertFalse(rti_mock.called) self.assertFalse(in_resume_deploy_mock.called) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) def test_heartbeat_in_maintenance(self, ncrc_mock, rti_mock, cd_mock): # NOTE(pas-ha) checking only for states that are not noop for state in (states.DEPLOYWAIT, states.CLEANWAIT): for m in (ncrc_mock, rti_mock, cd_mock): m.reset_mock() self.node.provision_state = state self.node.maintenance = True self.node.save() agent_url = 'url-%s' % state with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, agent_url, '3.2.0') self.assertFalse(task.shared) self.assertEqual( agent_url, task.node.driver_internal_info['agent_url']) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) self.assertEqual(state, task.node.provision_state) self.assertIsNone(task.node.last_error) self.assertEqual(0, ncrc_mock.call_count) self.assertEqual(0, rti_mock.call_count) self.assertEqual(0, cd_mock.call_count) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 
'reboot_to_instance', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) def test_heartbeat_in_maintenance_abort(self, ncrc_mock, rti_mock, cd_mock): CONF.set_override('allow_provisioning_in_maintenance', False, group='conductor') for state, expected in [(states.DEPLOYWAIT, states.DEPLOYFAIL), (states.CLEANWAIT, states.CLEANFAIL), (states.RESCUEWAIT, states.RESCUEFAIL)]: for m in (ncrc_mock, rti_mock, cd_mock): m.reset_mock() self.node.provision_state = state self.node.maintenance = True self.node.save() agent_url = 'url-%s' % state with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, agent_url, '3.2.0') self.assertFalse(task.shared) self.assertIsNone( task.node.driver_internal_info.get('agent_url', None)) self.assertEqual( '3.2.0', task.node.driver_internal_info['agent_version']) self.node.refresh() self.assertEqual(expected, self.node.provision_state) self.assertIn('aborted', self.node.last_error) self.assertEqual(0, ncrc_mock.call_count) self.assertEqual(0, rti_mock.call_count) self.assertEqual(0, cd_mock.call_count) @mock.patch('time.sleep', lambda _t: None) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) def test_heartbeat_with_reservation(self, ncrc_mock, rti_mock, cd_mock): # NOTE(pas-ha) checking only for states that are not noop for state in (states.DEPLOYWAIT, states.CLEANWAIT): for m in (ncrc_mock, rti_mock, cd_mock): m.reset_mock() self.node.provision_state = state self.node.reservation = 'localhost' self.node.save() old_drv_info = self.node.driver_internal_info.copy() agent_url = 'url-%s' % state with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.heartbeat(task, agent_url, '3.2.0') self.assertTrue(task.shared) 
self.assertEqual(old_drv_info, task.node.driver_internal_info) self.assertIsNone(task.node.last_error) self.assertEqual(0, ncrc_mock.call_count) self.assertEqual(0, rti_mock.call_count) self.assertEqual(0, cd_mock.call_count) @mock.patch.object(agent_base.LOG, 'error', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) def test_heartbeat_noops_in_wrong_state(self, ncrc_mock, rti_mock, cd_mock, log_mock): allowed = {states.DEPLOYWAIT, states.CLEANWAIT, states.RESCUEWAIT, states.DEPLOYING, states.CLEANING, states.RESCUING} for state in set(states.machine.states) - allowed: for m in (ncrc_mock, rti_mock, cd_mock, log_mock): m.reset_mock() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.provision_state = state self.deploy.heartbeat(task, 'url', '1.0.0') self.assertTrue(task.shared) self.assertNotIn('agent_last_heartbeat', task.node.driver_internal_info) self.assertEqual(0, ncrc_mock.call_count) self.assertEqual(0, rti_mock.call_count) self.assertEqual(0, cd_mock.call_count) log_mock.assert_called_once_with(mock.ANY, {'node': self.node.uuid, 'state': state}) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_deploy', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'reboot_to_instance', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) def test_heartbeat_noops_in_wrong_state2(self, ncrc_mock, rti_mock, cd_mock): CONF.set_override('allow_provisioning_in_maintenance', False, group='conductor') allowed = {states.DEPLOYWAIT, states.CLEANWAIT} for state in set(states.machine.states) - allowed: for m in (ncrc_mock, rti_mock, cd_mock): m.reset_mock() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.node.provision_state = state 
self.deploy.heartbeat(task, 'url', '1.0.0') self.assertTrue(task.shared) self.assertEqual(0, ncrc_mock.call_count) self.assertEqual(0, rti_mock.call_count) self.assertEqual(0, cd_mock.call_count) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base.LOG, 'exception', autospec=True) def test_heartbeat_deploy_done_fails(self, log_mock, done_mock, failed_mock, deploy_started_mock, in_deploy_mock): in_deploy_mock.return_value = True deploy_started_mock.return_value = True done_mock.side_effect = Exception('LlamaException') with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') failed_mock.assert_called_once_with( task, mock.ANY, collect_logs=True) log_mock.assert_called_once_with( 'Asynchronous exception for node %(node)s: %(err)s', {'err': 'Failed checking if deploy is done. 
' 'Error: LlamaException', 'node': task.node.uuid}) @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_is_done', autospec=True) @mock.patch.object(agent_base.LOG, 'exception', autospec=True) def test_heartbeat_deploy_done_raises_with_event(self, log_mock, done_mock, failed_mock, deploy_started_mock, in_deploy_mock): in_deploy_mock.return_value = True deploy_started_mock.return_value = True with task_manager.acquire( self.context, self.node['uuid'], shared=False) as task: def driver_failure(*args, **kwargs): # simulate driver failure that both advances the FSM # and raises an exception task.node.provision_state = states.DEPLOYFAIL raise Exception('LlamaException') task.node.provision_state = states.DEPLOYWAIT task.node.target_provision_state = states.ACTIVE done_mock.side_effect = driver_failure self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') # Since task.node.provision_state is set to DEPLOYFAIL # within driver_failure, heartbeat should not call # deploy_utils.set_failed_state anymore self.assertFalse(failed_mock.called) log_mock.assert_called_once_with( 'Asynchronous exception for node %(node)s: %(err)s', {'err': 'Failed checking if deploy is done. 
' 'Error: LlamaException', 'node': task.node.uuid}) @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'refresh_steps', autospec=True) @mock.patch.object(conductor_steps, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) def test_heartbeat_resume_clean(self, mock_notify, mock_set_steps, mock_refresh, mock_touch): self.node.clean_step = {} self.node.provision_state = states.CLEANWAIT self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') mock_touch.assert_called_once_with(mock.ANY) mock_refresh.assert_called_once_with(mock.ANY, task, 'clean') mock_notify.assert_called_once_with(task, 'clean') mock_set_steps.assert_called_once_with(task) @mock.patch.object(manager_utils, 'cleaning_error_handler') @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'refresh_steps', autospec=True) @mock.patch.object(conductor_steps, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) def test_heartbeat_resume_clean_fails(self, mock_notify, mock_set_steps, mock_refresh, mock_touch, mock_handler): mocks = [mock_refresh, mock_set_steps, mock_notify] self.node.clean_step = {} self.node.provision_state = states.CLEANWAIT self.node.save() for i in range(len(mocks)): before_failed_mocks = mocks[:i] failed_mock = mocks[i] after_failed_mocks = mocks[i + 1:] failed_mock.side_effect = Exception() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') mock_touch.assert_called_once_with(mock.ANY) mock_handler.assert_called_once_with(task, mock.ANY) for called in before_failed_mocks + [failed_mock]: 
self.assertTrue(called.called) for not_called in after_failed_mocks: self.assertFalse(not_called.called) # Reset mocks for the next iteration for m in mocks + [mock_touch, mock_handler]: m.reset_mock() failed_mock.side_effect = None @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_cleaning', autospec=True) def test_heartbeat_continue_cleaning(self, mock_continue, mock_touch): self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'foo', 'reboot_requested': False } self.node.provision_state = states.CLEANWAIT self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') mock_touch.assert_called_once_with(mock.ANY) mock_continue.assert_called_once_with(mock.ANY, task) @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'continue_cleaning', autospec=True) def test_heartbeat_continue_cleaning_polling(self, mock_continue, mock_touch): info = self.node.driver_internal_info info['cleaning_polling'] = True self.node.driver_internal_info = info self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'foo', 'reboot_requested': False } self.node.provision_state = states.CLEANWAIT self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') mock_touch.assert_called_once_with(mock.ANY) self.assertFalse(mock_continue.called) @mock.patch.object(manager_utils, 'cleaning_error_handler') @mock.patch.object(agent_base.HeartbeatMixin, 'continue_cleaning', autospec=True) def test_heartbeat_continue_cleaning_fails(self, mock_continue, mock_handler): self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'foo', 'reboot_requested': False } mock_continue.side_effect = Exception() 
self.node.provision_state = states.CLEANWAIT self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') mock_continue.assert_called_once_with(mock.ANY, task) mock_handler.assert_called_once_with(task, mock.ANY) @mock.patch.object(manager_utils, 'rescuing_error_handler') @mock.patch.object(agent_base.HeartbeatMixin, '_finalize_rescue', autospec=True) def test_heartbeat_rescue(self, mock_finalize_rescue, mock_rescue_err_handler): self.node.provision_state = states.RESCUEWAIT self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') mock_finalize_rescue.assert_called_once_with(mock.ANY, task) self.assertFalse(mock_rescue_err_handler.called) @mock.patch.object(manager_utils, 'rescuing_error_handler') @mock.patch.object(agent_base.HeartbeatMixin, '_finalize_rescue', autospec=True) def test_heartbeat_rescue_fails(self, mock_finalize, mock_rescue_err_handler): self.node.provision_state = states.RESCUEWAIT self.node.save() mock_finalize.side_effect = Exception('some failure') with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '1.0.0') mock_finalize.assert_called_once_with(mock.ANY, task) mock_rescue_err_handler.assert_called_once_with( task, 'Node failed to perform ' 'rescue operation. 
Error: some failure') @mock.patch.object(agent_base.HeartbeatMixin, 'in_core_deploy_step', autospec=True) @mock.patch.object(objects.node.Node, 'touch_provisioning', autospec=True) @mock.patch.object(agent_base.HeartbeatMixin, 'deploy_has_started', autospec=True) def test_heartbeat_touch_provisioning_and_url_save(self, mock_deploy_started, mock_touch, mock_in_deploy): mock_in_deploy.return_value = True mock_deploy_started.return_value = True self.node.provision_state = states.DEPLOYWAIT self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '3.2.0') self.assertEqual('http://127.0.0.1:8080', task.node.driver_internal_info['agent_url']) self.assertEqual('3.2.0', task.node.driver_internal_info['agent_version']) self.assertIsNotNone( task.node.driver_internal_info['agent_last_heartbeat']) mock_touch.assert_called_once_with(mock.ANY) @mock.patch.object(agent_base.LOG, 'error', autospec=True) def test_heartbeat_records_cleaning_deploying(self, log_mock): for provision_state in (states.CLEANING, states.DEPLOYING): self.node.driver_internal_info = {} self.node.provision_state = provision_state self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '3.2.0') self.assertEqual('http://127.0.0.1:8080', task.node.driver_internal_info['agent_url']) self.assertEqual('3.2.0', task.node.driver_internal_info[ 'agent_version']) self.assertIsNotNone( task.node.driver_internal_info['agent_last_heartbeat']) self.assertEqual(provision_state, task.node.provision_state) self.assertFalse(log_mock.called) def test_heartbeat_records_fast_track(self): self.config(fast_track=True, group='deploy') for provision_state in [states.ENROLL, states.MANAGEABLE, states.AVAILABLE]: self.node.driver_internal_info = {} self.node.provision_state = provision_state self.node.save() with task_manager.acquire( self.context, 
self.node.uuid, shared=False) as task: self.deploy.heartbeat(task, 'http://127.0.0.1:8080', '3.2.0') self.assertEqual('http://127.0.0.1:8080', task.node.driver_internal_info['agent_url']) self.assertEqual('3.2.0', task.node.driver_internal_info[ 'agent_version']) self.assertIsNotNone( task.node.driver_internal_info['agent_last_heartbeat']) self.assertEqual(provision_state, task.node.provision_state) def test_in_core_deploy_step(self): self.node.deploy_step = { 'interface': 'deploy', 'step': 'deploy', 'priority': 100} info = self.node.driver_internal_info info['deploy_steps'] = [self.node.deploy_step] self.node.driver_internal_info = info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertTrue(self.deploy.in_core_deploy_step(task)) def test_in_core_deploy_step_in_other_step(self): self.node.deploy_step = { 'interface': 'deploy', 'step': 'other-step', 'priority': 100} info = self.node.driver_internal_info info['deploy_steps'] = [self.node.deploy_step] self.node.driver_internal_info = info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertFalse(self.deploy.in_core_deploy_step(task)) class AgentRescueTests(AgentDeployMixinBaseTest): def setUp(self): super(AgentRescueTests, self).setUp() @mock.patch.object(agent.AgentRescue, 'clean_up', spec_set=True, autospec=True) @mock.patch.object(agent_client.AgentClient, 'finalize_rescue', spec=types.FunctionType) def test__finalize_rescue(self, mock_finalize_rescue, mock_clean_up): node = self.node node.provision_state = states.RESCUEWAIT node.save() mock_finalize_rescue.return_value = {'command_status': 'SUCCEEDED'} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.driver.network.configure_tenant_networks = mock.Mock() task.process_event = mock.Mock() self.deploy._finalize_rescue(task) mock_finalize_rescue.assert_called_once_with(task.node) 
task.process_event.assert_has_calls([mock.call('resume'), mock.call('done')]) mock_clean_up.assert_called_once_with(mock.ANY, task) @mock.patch.object(agent_client.AgentClient, 'finalize_rescue', spec=types.FunctionType) def test__finalize_rescue_bad_command_result(self, mock_finalize_rescue): node = self.node node.provision_state = states.RESCUEWAIT node.save() mock_finalize_rescue.return_value = {'command_status': 'FAILED', 'command_error': 'bad'} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.assertRaises(exception.InstanceRescueFailure, self.deploy._finalize_rescue, task) mock_finalize_rescue.assert_called_once_with(task.node) @mock.patch.object(agent_client.AgentClient, 'finalize_rescue', spec=types.FunctionType) def test__finalize_rescue_exc(self, mock_finalize_rescue): node = self.node node.provision_state = states.RESCUEWAIT node.save() mock_finalize_rescue.side_effect = exception.IronicException("No pass") with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.assertRaises(exception.InstanceRescueFailure, self.deploy._finalize_rescue, task) mock_finalize_rescue.assert_called_once_with(task.node) @mock.patch.object(agent_client.AgentClient, 'finalize_rescue', spec=types.FunctionType) def test__finalize_rescue_missing_command_result(self, mock_finalize_rescue): node = self.node node.provision_state = states.RESCUEWAIT node.save() mock_finalize_rescue.return_value = {} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.assertRaises(exception.InstanceRescueFailure, self.deploy._finalize_rescue, task) mock_finalize_rescue.assert_called_once_with(task.node) @mock.patch.object(manager_utils, 'restore_power_state_if_needed', autospec=True) @mock.patch.object(manager_utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(agent.AgentRescue, 'clean_up', spec_set=True, autospec=True) @mock.patch.object(agent_client.AgentClient, 'finalize_rescue', 
spec=types.FunctionType) def test__finalize_rescue_with_smartnic_port( self, mock_finalize_rescue, mock_clean_up, power_on_node_if_needed_mock, restore_power_state_mock): node = self.node node.provision_state = states.RESCUEWAIT node.save() mock_finalize_rescue.return_value = {'command_status': 'SUCCEEDED'} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.driver.network.configure_tenant_networks = mock.Mock() task.process_event = mock.Mock() power_on_node_if_needed_mock.return_value = states.POWER_OFF self.deploy._finalize_rescue(task) mock_finalize_rescue.assert_called_once_with(task.node) task.process_event.assert_has_calls([mock.call('resume'), mock.call('done')]) mock_clean_up.assert_called_once_with(mock.ANY, task) power_on_node_if_needed_mock.assert_called_once_with(task) restore_power_state_mock.assert_called_once_with( task, states.POWER_OFF) class AgentDeployMixinTest(AgentDeployMixinBaseTest): @mock.patch.object(manager_utils, 'power_on_node_if_needed') @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy( self, power_off_mock, get_power_state_mock, node_power_action_mock, collect_mock, resume_mock, power_on_node_if_needed_mock): cfg.CONF.set_override('deploy_logs_collect', 'always', 'agent') self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_power_state_mock.side_effect = [states.POWER_ON, states.POWER_OFF] power_on_node_if_needed_mock.return_value = None 
self.deploy.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(2, get_power_state_mock.call_count) node_power_action_mock.assert_called_once_with( task, states.POWER_ON) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) collect_mock.assert_called_once_with(task.node) resume_mock.assert_called_once_with(task) @mock.patch.object(manager_utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 'remove_provisioning_network', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 
'configure_tenant_networks', spec_set=True, autospec=True) def test_reboot_and_finish_deploy_soft_poweroff_doesnt_complete( self, configure_tenant_net_mock, remove_provisioning_net_mock, power_off_mock, get_power_state_mock, node_power_action_mock, mock_collect, resume_mock, power_on_node_if_needed_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: power_on_node_if_needed_mock.return_value = None get_power_state_mock.return_value = states.POWER_ON self.deploy.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(7, get_power_state_mock.call_count) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) remove_provisioning_net_mock.assert_called_once_with(mock.ANY, task) configure_tenant_net_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertFalse(mock_collect.called) resume_mock.assert_called_once_with(task) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 'remove_provisioning_network', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 
'configure_tenant_networks', spec_set=True, autospec=True) def test_reboot_and_finish_deploy_soft_poweroff_fails( self, configure_tenant_net_mock, remove_provisioning_net_mock, power_off_mock, node_power_action_mock, mock_collect, resume_mock): power_off_mock.side_effect = RuntimeError("boom") self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) remove_provisioning_net_mock.assert_called_once_with(mock.ANY, task) configure_tenant_net_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertFalse(mock_collect.called) @mock.patch.object(manager_utils, 'power_on_node_if_needed') @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 'remove_provisioning_network', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 
'configure_tenant_networks', spec_set=True, autospec=True) def test_reboot_and_finish_deploy_get_power_state_fails( self, configure_tenant_net_mock, remove_provisioning_net_mock, power_off_mock, get_power_state_mock, node_power_action_mock, mock_collect, resume_mock, power_on_node_if_needed_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_power_state_mock.side_effect = RuntimeError("boom") power_on_node_if_needed_mock.return_value = None self.deploy.reboot_and_finish_deploy(task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(7, get_power_state_mock.call_count) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) remove_provisioning_net_mock.assert_called_once_with(mock.ANY, task) configure_tenant_net_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertFalse(mock_collect.called) @mock.patch.object(manager_utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch('ironic.drivers.modules.network.neutron.NeutronNetwork.' 'remove_provisioning_network', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.network.neutron.NeutronNetwork.' 
'configure_tenant_networks', spec_set=True, autospec=True) def test_reboot_and_finish_deploy_configure_tenant_network_exception( self, configure_tenant_net_mock, remove_provisioning_net_mock, power_off_mock, get_power_state_mock, node_power_action_mock, mock_collect, power_on_node_if_needed_mock): self.node.network_interface = 'neutron' self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() power_on_node_if_needed_mock.return_value = None with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: configure_tenant_net_mock.side_effect = exception.NetworkError( "boom") self.assertRaises(exception.InstanceDeployFailure, self.deploy.reboot_and_finish_deploy, task) self.assertEqual(7, get_power_state_mock.call_count) remove_provisioning_net_mock.assert_called_once_with(mock.ANY, task) configure_tenant_net_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertFalse(mock_collect.called) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) def test_reboot_and_finish_deploy_power_off_fails( self, power_off_mock, get_power_state_mock, node_power_action_mock, mock_collect): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: get_power_state_mock.return_value = states.POWER_ON node_power_action_mock.side_effect = RuntimeError("boom") self.assertRaises(exception.InstanceDeployFailure, self.deploy.reboot_and_finish_deploy, task) 
power_off_mock.assert_called_once_with(task.node) self.assertEqual(7, get_power_state_mock.call_count) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF)]) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) mock_collect.assert_called_once_with(task.node) @mock.patch.object(manager_utils, 'power_on_node_if_needed', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(time, 'sleep', lambda seconds: None) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(fake.FakePower, 'get_power_state', spec=types.FunctionType) @mock.patch.object(agent_client.AgentClient, 'power_off', spec=types.FunctionType) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 'remove_provisioning_network', spec_set=True, autospec=True) @mock.patch('ironic.drivers.modules.network.noop.NoopNetwork.' 'configure_tenant_networks', spec_set=True, autospec=True) def test_reboot_and_finish_deploy_power_on_fails( self, configure_tenant_net_mock, remove_provisioning_net_mock, power_off_mock, get_power_state_mock, node_power_action_mock, mock_collect, power_on_node_if_needed_mock): self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: power_on_node_if_needed_mock.return_value = None get_power_state_mock.return_value = states.POWER_ON node_power_action_mock.side_effect = [None, RuntimeError("boom")] self.assertRaises(exception.InstanceDeployFailure, self.deploy.reboot_and_finish_deploy, task) power_off_mock.assert_called_once_with(task.node) self.assertEqual(7, get_power_state_mock.call_count) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON)]) remove_provisioning_net_mock.assert_called_once_with(mock.ANY, task) 
configure_tenant_net_mock.assert_called_once_with(mock.ANY, task) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertFalse(mock_collect.called) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'sync', spec=types.FunctionType) def test_reboot_and_finish_deploy_power_action_oob_power_off( self, sync_mock, node_power_action_mock, mock_collect, resume_mock): # Enable force power off driver_info = self.node.driver_info driver_info['deploy_forces_oob_reboot'] = True self.node.driver_info = driver_info self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: self.deploy.reboot_and_finish_deploy(task) sync_mock.assert_called_once_with(task.node) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON), ]) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) self.assertFalse(mock_collect.called) resume_mock.assert_called_once_with(task) @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy', autospec=True) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) @mock.patch.object(agent_base.LOG, 'warning', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'sync', spec=types.FunctionType) def test_reboot_and_finish_deploy_power_action_oob_power_off_failed( self, sync_mock, node_power_action_mock, log_mock, mock_collect, resume_mock): # Enable force power off driver_info = self.node.driver_info 
driver_info['deploy_forces_oob_reboot'] = True self.node.driver_info = driver_info self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: sync_mock.return_value = {'faultstring': 'Unknown command: blah'} self.deploy.reboot_and_finish_deploy(task) sync_mock.assert_called_once_with(task.node) node_power_action_mock.assert_has_calls([ mock.call(task, states.POWER_OFF), mock.call(task, states.POWER_ON), ]) self.assertEqual(states.DEPLOYWAIT, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) log_error = ('The version of the IPA ramdisk used in the ' 'deployment do not support the command "sync"') log_mock.assert_called_once_with( 'Failed to flush the file system prior to hard rebooting the ' 'node %(node)s. Error: %(error)s', {'node': task.node.uuid, 'error': log_error}) self.assertFalse(mock_collect.called) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode', autospec=True, return_value='whatever') def test_configure_local_boot(self, boot_mode_mock, try_set_boot_device_mock, install_bootloader_mock): install_bootloader_mock.return_value = { 'command_status': 'SUCCESS', 'command_error': None} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.deploy.configure_local_boot(task, root_uuid='some-root-uuid') try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) boot_mode_mock.assert_called_once_with(task.node) install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid=None, prep_boot_part_uuid=None, target_boot_mode='whatever', software_raid=False ) 
@mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode', autospec=True, return_value='whatever') def test_configure_local_boot_with_prep(self, boot_mode_mock, try_set_boot_device_mock, install_bootloader_mock): install_bootloader_mock.return_value = { 'command_status': 'SUCCESS', 'command_error': None} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.deploy.configure_local_boot(task, root_uuid='some-root-uuid', prep_boot_part_uuid='fake-prep') try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) boot_mode_mock.assert_called_once_with(task.node) install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid=None, prep_boot_part_uuid='fake-prep', target_boot_mode='whatever', software_raid=False ) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode', autospec=True, return_value='uefi') def test_configure_local_boot_uefi(self, boot_mode_mock, try_set_boot_device_mock, install_bootloader_mock): install_bootloader_mock.return_value = { 'command_status': 'SUCCESS', 'command_error': None} with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.deploy.configure_local_boot( task, root_uuid='some-root-uuid', efi_system_part_uuid='efi-system-part-uuid') try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) boot_mode_mock.assert_called_once_with(task.node) install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', 
efi_system_part_uuid='efi-system-part-uuid', prep_boot_part_uuid=None, target_boot_mode='uefi', software_raid=False ) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_whole_disk_image( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_no_root_uuid( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.deploy.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_no_root_uuid_whole_disk( self, install_bootloader_mock, try_set_boot_device_mock, boot_mode_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = True boot_mode_mock.return_value = 'uefi' self.deploy.configure_local_boot( task, root_uuid=None, efi_system_part_uuid='efi-system-part-uuid') install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid=None, efi_system_part_uuid='efi-system-part-uuid', 
prep_boot_part_uuid=None, target_boot_mode='uefi', software_raid=False) @mock.patch.object(image_service, 'GlanceImageService', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_on_software_raid( self, install_bootloader_mock, try_set_boot_device_mock, GlanceImageService_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = True task.node.target_raid_config = { "logical_disks": [ { "size_gb": 100, "raid_level": "1", "controller": "software", }, { "size_gb": 'MAX', "raid_level": "0", "controller": "software", } ] } self.deploy.configure_local_boot(task) self.assertTrue(GlanceImageService_mock.called) self.assertTrue(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(image_service, 'GlanceImageService', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_on_software_raid_exception( self, install_bootloader_mock, try_set_boot_device_mock, GlanceImageService_mock): GlanceImageService_mock.side_effect = Exception('Glance not found') with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = True root_uuid = "1efecf88-2b58-4d4e-8fbd-7bef1a40a1b0" task.node.driver_internal_info['root_uuid_or_disk_id'] = root_uuid task.node.target_raid_config = { "logical_disks": [ { "size_gb": 100, "raid_level": "1", "controller": "software", }, { "size_gb": 'MAX', "raid_level": "0", "controller": "software", } ] } self.deploy.configure_local_boot(task) self.assertTrue(GlanceImageService_mock.called) # check if the root_uuid comes from the 
driver_internal_info install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid=root_uuid, efi_system_part_uuid=None, prep_boot_part_uuid=None, target_boot_mode='bios', software_raid=True) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_on_non_software_raid( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False task.node.target_raid_config = { "logical_disks": [ { "size_gb": 100, "raid_level": "1", }, { "size_gb": 'MAX', "raid_level": "0", } ] } self.deploy.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_enforce_persistent_boot_device_default( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: driver_info = task.node.driver_info driver_info['force_persistent_boot_device'] = 'Default' task.node.driver_info = driver_info task.node.driver_internal_info['is_whole_disk_image'] = False self.deploy.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def
test_configure_local_boot_enforce_persistent_boot_device_always( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: driver_info = task.node.driver_info driver_info['force_persistent_boot_device'] = 'Always' task.node.driver_info = driver_info task.node.driver_internal_info['is_whole_disk_image'] = False self.deploy.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) def test_configure_local_boot_enforce_persistent_boot_device_never( self, install_bootloader_mock, try_set_boot_device_mock): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: driver_info = task.node.driver_info driver_info['force_persistent_boot_device'] = 'Never' task.node.driver_info = driver_info task.node.driver_internal_info['is_whole_disk_image'] = False self.deploy.configure_local_boot(task) self.assertFalse(install_bootloader_mock.called) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=False) @mock.patch.object(agent_client.AgentClient, 'collect_system_logs', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode', autospec=True, return_value='whatever') def test_configure_local_boot_boot_loader_install_fail( self, boot_mode_mock, install_bootloader_mock, collect_logs_mock): install_bootloader_mock.return_value = { 'command_status': 'FAILED', 'command_error': 'boom'} self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: 
task.node.driver_internal_info['is_whole_disk_image'] = False self.assertRaises(exception.InstanceDeployFailure, self.deploy.configure_local_boot, task, root_uuid='some-root-uuid') boot_mode_mock.assert_called_once_with(task.node) install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid=None, prep_boot_part_uuid=None, target_boot_mode='whatever', software_raid=False ) collect_logs_mock.assert_called_once_with(mock.ANY, task.node) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(agent_client.AgentClient, 'collect_system_logs', autospec=True) @mock.patch.object(deploy_utils, 'try_set_boot_device', autospec=True) @mock.patch.object(agent_client.AgentClient, 'install_bootloader', autospec=True) @mock.patch.object(boot_mode_utils, 'get_boot_mode', autospec=True, return_value='whatever') def test_configure_local_boot_set_boot_device_fail( self, boot_mode_mock, install_bootloader_mock, try_set_boot_device_mock, collect_logs_mock): install_bootloader_mock.return_value = { 'command_status': 'SUCCESS', 'command_error': None} try_set_boot_device_mock.side_effect = RuntimeError('error') self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: task.node.driver_internal_info['is_whole_disk_image'] = False self.assertRaises(exception.InstanceDeployFailure, self.deploy.configure_local_boot, task, root_uuid='some-root-uuid', prep_boot_part_uuid=None) boot_mode_mock.assert_called_once_with(task.node) install_bootloader_mock.assert_called_once_with( mock.ANY, task.node, root_uuid='some-root-uuid', efi_system_part_uuid=None, prep_boot_part_uuid=None, target_boot_mode='whatever', software_raid=False) try_set_boot_device_mock.assert_called_once_with( task, boot_devices.DISK, persistent=True) 
collect_logs_mock.assert_called_once_with(mock.ANY, task.node) self.assertEqual(states.DEPLOYFAIL, task.node.provision_state) self.assertEqual(states.ACTIVE, task.node.target_provision_state) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'get_boot_option', autospec=True) @mock.patch.object(agent_base.AgentDeployMixin, 'configure_local_boot', autospec=True) def test_prepare_instance_to_boot_netboot(self, configure_mock, boot_option_mock, prepare_instance_mock, failed_state_mock): boot_option_mock.return_value = 'netboot' prepare_instance_mock.return_value = None self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = 'root_uuid' efi_system_part_uuid = 'efi_sys_uuid' with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.prepare_instance_to_boot(task, root_uuid, efi_system_part_uuid) self.assertFalse(configure_mock.called) boot_option_mock.assert_called_once_with(task.node) prepare_instance_mock.assert_called_once_with(task.driver.boot, task) self.assertFalse(failed_state_mock.called) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'get_boot_option', autospec=True) @mock.patch.object(agent_base.AgentDeployMixin, 'configure_local_boot', autospec=True) def test_prepare_instance_to_boot_localboot(self, configure_mock, boot_option_mock, prepare_instance_mock, failed_state_mock): boot_option_mock.return_value = 'local' prepare_instance_mock.return_value = None self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = 'root_uuid' efi_system_part_uuid = 'efi_sys_uuid' with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: 
self.deploy.prepare_instance_to_boot(task, root_uuid, efi_system_part_uuid) configure_mock.assert_called_once_with( self.deploy, task, root_uuid=root_uuid, efi_system_part_uuid=efi_system_part_uuid, prep_boot_part_uuid=None) boot_option_mock.assert_called_once_with(task.node) prepare_instance_mock.assert_called_once_with(task.driver.boot, task) self.assertFalse(failed_state_mock.called) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'get_boot_option', autospec=True) @mock.patch.object(agent_base.AgentDeployMixin, 'configure_local_boot', autospec=True) def test_prepare_instance_to_boot_localboot_prep_partition( self, configure_mock, boot_option_mock, prepare_instance_mock, failed_state_mock): boot_option_mock.return_value = 'local' prepare_instance_mock.return_value = None self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = 'root_uuid' efi_system_part_uuid = 'efi_sys_uuid' prep_boot_part_uuid = 'prep_boot_part_uuid' with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.prepare_instance_to_boot(task, root_uuid, efi_system_part_uuid, prep_boot_part_uuid) configure_mock.assert_called_once_with( self.deploy, task, root_uuid=root_uuid, efi_system_part_uuid=efi_system_part_uuid, prep_boot_part_uuid=prep_boot_part_uuid) boot_option_mock.assert_called_once_with(task.node) prepare_instance_mock.assert_called_once_with(task.driver.boot, task) self.assertFalse(failed_state_mock.called) @mock.patch.object(deploy_utils, 'set_failed_state', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_instance', autospec=True) @mock.patch.object(deploy_utils, 'get_boot_option', autospec=True) @mock.patch.object(agent_base.AgentDeployMixin, 'configure_local_boot', autospec=True) def test_prepare_instance_to_boot_configure_fails(self, configure_mock, 
boot_option_mock, prepare_mock, failed_state_mock): boot_option_mock.return_value = 'local' self.node.provision_state = states.DEPLOYING self.node.target_provision_state = states.ACTIVE self.node.save() root_uuid = 'root_uuid' efi_system_part_uuid = 'efi_sys_uuid' reason = 'reason' configure_mock.side_effect = ( exception.InstanceDeployFailure(reason=reason)) prepare_mock.side_effect = ( exception.InstanceDeployFailure(reason=reason)) with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.assertRaises(exception.InstanceDeployFailure, self.deploy.prepare_instance_to_boot, task, root_uuid, efi_system_part_uuid) configure_mock.assert_called_once_with( self.deploy, task, root_uuid=root_uuid, efi_system_part_uuid=efi_system_part_uuid, prep_boot_part_uuid=None) boot_option_mock.assert_called_once_with(task.node) self.assertFalse(prepare_mock.called) self.assertFalse(failed_state_mock.called) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning(self, status_mock, notify_mock): # Test a successful execute clean step on the agent self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'erase_devices', 'reboot_requested': False } self.node.save() status_mock.return_value = [{ 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': { 'clean_step': self.node.clean_step } }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) notify_mock.assert_called_once_with(task, 'clean') @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test__post_step_reboot(self, mock_reboot, mock_prepare, mock_build_opt): with 
task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: i_info = task.node.driver_internal_info i_info['agent_secret_token'] = 'magicvalue01' task.node.driver_internal_info = i_info agent_base._post_step_reboot(task, 'clean') self.assertTrue(mock_build_opt.called) self.assertTrue(mock_prepare.called) mock_reboot.assert_called_once_with(task, states.REBOOT) self.assertTrue(task.node.driver_internal_info['cleaning_reboot']) self.assertNotIn('agent_secret_token', task.node.driver_internal_info) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test__post_step_reboot_deploy(self, mock_reboot, mock_prepare, mock_build_opt): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: i_info = task.node.driver_internal_info i_info['agent_secret_token'] = 'magicvalue01' task.node.driver_internal_info = i_info agent_base._post_step_reboot(task, 'deploy') self.assertTrue(mock_build_opt.called) self.assertTrue(mock_prepare.called) mock_reboot.assert_called_once_with(task, states.REBOOT) self.assertTrue( task.node.driver_internal_info['deployment_reboot']) self.assertNotIn('agent_secret_token', task.node.driver_internal_info) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test__post_step_reboot_pregenerated_token( self, mock_reboot, mock_prepare, mock_build_opt): with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: i_info = task.node.driver_internal_info i_info['agent_secret_token'] = 'magicvalue01' i_info['agent_secret_token_pregenerated'] = True task.node.driver_internal_info = i_info agent_base._post_step_reboot(task, 'clean') 
self.assertTrue(mock_build_opt.called) self.assertTrue(mock_prepare.called) mock_reboot.assert_called_once_with(task, states.REBOOT) self.assertIn('agent_secret_token', task.node.driver_internal_info) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test__post_step_reboot_fail(self, mock_reboot, mock_handler, mock_prepare, mock_build_opt): mock_reboot.side_effect = RuntimeError("broken") with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: agent_base._post_step_reboot(task, 'clean') mock_reboot.assert_called_once_with(task, states.REBOOT) mock_handler.assert_called_once_with(task, mock.ANY) self.assertNotIn('cleaning_reboot', task.node.driver_internal_info) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'deploying_error_handler', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) def test__post_step_reboot_fail_deploy(self, mock_reboot, mock_handler, mock_prepare, mock_build_opt): mock_reboot.side_effect = RuntimeError("broken") with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: agent_base._post_step_reboot(task, 'deploy') mock_reboot.assert_called_once_with(task, states.REBOOT) mock_handler.assert_called_once_with(task, mock.ANY) self.assertNotIn('deployment_reboot', task.node.driver_internal_info) @mock.patch.object(deploy_utils, 'build_agent_options', autospec=True) @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', spec_set=True, autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', 
autospec=True) def test_continue_cleaning_reboot( self, status_mock, reboot_mock, mock_prepare, mock_build_opt): # Test a successful execute clean step on the agent, with reboot self.node.clean_step = { 'priority': 42, 'interface': 'deploy', 'step': 'reboot_me_afterwards', 'reboot_requested': True } self.node.save() status_mock.return_value = [{ 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': { 'clean_step': self.node.clean_step } }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) reboot_mock.assert_called_once_with(task, states.REBOOT) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_after_reboot(self, status_mock, notify_mock): # Test a successful execute clean step on the agent, with reboot self.node.clean_step = { 'priority': 42, 'interface': 'deploy', 'step': 'reboot_me_afterwards', 'reboot_requested': True } driver_internal_info = self.node.driver_internal_info driver_internal_info['cleaning_reboot'] = True self.node.driver_internal_info = driver_internal_info self.node.save() # Represents a freshly booted agent with no commands status_mock.return_value = [] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) notify_mock.assert_called_once_with(task, 'clean') self.assertNotIn('cleaning_reboot', task.node.driver_internal_info) @mock.patch.object(agent_base, '_get_post_step_hook', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_with_hook( self, status_mock, notify_mock, get_hook_mock): self.node.clean_step = { 'priority': 10, 'interface': 'raid', 'step': 'create_configuration', } 
self.node.save() command_status = { 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': {'clean_step': self.node.clean_step}} status_mock.return_value = [command_status] hook_mock = mock.MagicMock(spec=types.FunctionType, __name__='foo') get_hook_mock.return_value = hook_mock with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.deploy.continue_cleaning(task) get_hook_mock.assert_called_once_with(task.node, 'clean') hook_mock.assert_called_once_with(task, command_status) notify_mock.assert_called_once_with(task, 'clean') @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_base, '_get_post_step_hook', autospec=True) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_with_hook_fails( self, status_mock, error_handler_mock, get_hook_mock, notify_mock): self.node.clean_step = { 'priority': 10, 'interface': 'raid', 'step': 'create_configuration', } self.node.save() command_status = { 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': {'clean_step': self.node.clean_step}} status_mock.return_value = [command_status] hook_mock = mock.MagicMock(spec=types.FunctionType, __name__='foo') hook_mock.side_effect = RuntimeError('error') get_hook_mock.return_value = hook_mock with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.deploy.continue_cleaning(task) get_hook_mock.assert_called_once_with(task.node, 'clean') hook_mock.assert_called_once_with(task, command_status) error_handler_mock.assert_called_once_with(task, mock.ANY) self.assertFalse(notify_mock.called) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def 
test_continue_cleaning_old_command(self, status_mock, notify_mock): # Test when a second execute_clean_step happens to the agent, but # the new step hasn't started yet. self.node.clean_step = { 'priority': 10, 'interface': 'deploy', 'step': 'erase_devices', 'reboot_requested': False } self.node.save() status_mock.return_value = [{ 'command_status': 'SUCCEEDED', 'command_name': 'execute_clean_step', 'command_result': { 'priority': 20, 'interface': 'deploy', 'step': 'update_firmware', 'reboot_requested': False } }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) self.assertFalse(notify_mock.called) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_running(self, status_mock, notify_mock): # Test that no action is taken while a clean step is executing status_mock.return_value = [{ 'command_status': 'RUNNING', 'command_name': 'execute_clean_step', 'command_result': None }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) self.assertFalse(notify_mock.called) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_no_step_running(self, status_mock, notify_mock): status_mock.return_value = [{ 'command_status': 'SUCCEEDED', 'command_name': 'get_clean_steps', 'command_result': [] }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) notify_mock.assert_called_once_with(task, 'clean') @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_fail(self, status_mock, 
error_mock): # Test that a failure puts the node in CLEANFAIL status_mock.return_value = [{ 'command_status': 'FAILED', 'command_name': 'execute_clean_step', 'command_result': {} }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) error_mock.assert_called_once_with(task, mock.ANY) @mock.patch.object(conductor_steps, 'set_node_cleaning_steps', autospec=True) @mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_base.AgentDeployMixin, 'refresh_steps', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def _test_continue_cleaning_clean_version_mismatch( self, status_mock, refresh_steps_mock, notify_mock, steps_mock, manual=False): status_mock.return_value = [{ 'command_status': 'CLEAN_VERSION_MISMATCH', 'command_name': 'execute_clean_step', }] tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE self.node.provision_state = states.CLEANWAIT self.node.target_provision_state = tgt_prov_state self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.deploy.continue_cleaning(task) notify_mock.assert_called_once_with(task, 'clean') refresh_steps_mock.assert_called_once_with(mock.ANY, task, 'clean') if manual: self.assertFalse( task.node.driver_internal_info['skip_current_clean_step']) self.assertFalse(steps_mock.called) else: steps_mock.assert_called_once_with(task) self.assertNotIn('skip_current_clean_step', task.node.driver_internal_info) def test_continue_cleaning_automated_clean_version_mismatch(self): self._test_continue_cleaning_clean_version_mismatch() def test_continue_cleaning_manual_clean_version_mismatch(self): self._test_continue_cleaning_clean_version_mismatch(manual=True) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(conductor_steps, 'set_node_cleaning_steps', autospec=True) 
@mock.patch.object(manager_utils, 'notify_conductor_resume_operation', autospec=True) @mock.patch.object(agent_base.AgentDeployMixin, 'refresh_steps', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_clean_version_mismatch_fail( self, status_mock, refresh_steps_mock, notify_mock, steps_mock, error_mock, manual=False): status_mock.return_value = [{ 'command_status': 'CLEAN_VERSION_MISMATCH', 'command_name': 'execute_clean_step', 'command_result': {'hardware_manager_version': {'Generic': '1'}} }] refresh_steps_mock.side_effect = exception.NodeCleaningFailure("boo") tgt_prov_state = states.MANAGEABLE if manual else states.AVAILABLE self.node.provision_state = states.CLEANWAIT self.node.target_provision_state = tgt_prov_state self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: self.deploy.continue_cleaning(task) status_mock.assert_called_once_with(mock.ANY, task.node) refresh_steps_mock.assert_called_once_with(mock.ANY, task, 'clean') error_mock.assert_called_once_with(task, mock.ANY) self.assertFalse(notify_mock.called) self.assertFalse(steps_mock.called) @mock.patch.object(manager_utils, 'cleaning_error_handler', autospec=True) @mock.patch.object(agent_client.AgentClient, 'get_commands_status', autospec=True) def test_continue_cleaning_unknown(self, status_mock, error_mock): # Test that unknown commands are treated as failures status_mock.return_value = [{ 'command_status': 'UNKNOWN', 'command_name': 'execute_clean_step', 'command_result': {} }] with task_manager.acquire(self.context, self.node['uuid'], shared=False) as task: self.deploy.continue_cleaning(task) error_mock.assert_called_once_with(task, mock.ANY) def _test_clean_step_hook(self): """Helper method for unit tests related to clean step hooks.""" some_function_mock = mock.MagicMock() @agent_base.post_clean_step_hook( interface='raid', step='delete_configuration') 
@agent_base.post_clean_step_hook( interface='raid', step='create_configuration') def hook_method(): some_function_mock('some-arguments') return hook_method @mock.patch.object(agent_base, '_POST_STEP_HOOKS', {'clean': {}, 'deploy': {}}) def test_post_clean_step_hook(self): # This unit test makes sure that hook methods are registered # properly and entries are made in # agent_base._POST_STEP_HOOKS hook_method = self._test_clean_step_hook() hooks = agent_base._POST_STEP_HOOKS['clean'] self.assertEqual(hook_method, hooks['raid']['create_configuration']) self.assertEqual(hook_method, hooks['raid']['delete_configuration']) @mock.patch.object(agent_base, '_POST_STEP_HOOKS', {'clean': {}, 'deploy': {}}) def test__get_post_step_hook(self): # Check that agent_base._get_post_step_hook can retrieve the # hook registered for a clean step. hook_method = self._test_clean_step_hook() self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() hook_returned = agent_base._get_post_step_hook(self.node, 'clean') self.assertEqual(hook_method, hook_returned) @mock.patch.object(agent_base, '_POST_STEP_HOOKS', {'clean': {}, 'deploy': {}}) def test__get_post_step_hook_no_hook_registered(self): # Make sure agent_base._get_post_step_hook returns # None when no clean step hook is registered for the clean step.
        self._test_clean_step_hook()
        self.node.clean_step = {'step': 'some-clean-step',
                                'interface': 'some-other-interface'}
        self.node.save()
        hook_returned = agent_base._get_post_step_hook(self.node, 'clean')
        self.assertIsNone(hook_returned)

    @mock.patch.object(manager_utils, 'restore_power_state_if_needed',
                       autospec=True)
    @mock.patch.object(manager_utils, 'power_on_node_if_needed')
    @mock.patch.object(manager_utils, 'notify_conductor_resume_deploy',
                       autospec=True)
    @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True)
    @mock.patch.object(time, 'sleep', lambda seconds: None)
    @mock.patch.object(manager_utils, 'node_power_action', autospec=True)
    @mock.patch.object(fake.FakePower, 'get_power_state',
                       spec=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'power_off',
                       spec=types.FunctionType)
    def test_reboot_and_finish_deploy_with_smartnic_port(
            self, power_off_mock, get_power_state_mock,
            node_power_action_mock, collect_mock, resume_mock,
            power_on_node_if_needed_mock, restore_power_state_mock):
        cfg.CONF.set_override('deploy_logs_collect', 'always', 'agent')
        self.node.provision_state = states.DEPLOYING
        self.node.target_provision_state = states.ACTIVE
        self.node.deploy_step = {
            'step': 'deploy', 'priority': 50, 'interface': 'deploy'}
        self.node.save()
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            get_power_state_mock.side_effect = [states.POWER_ON,
                                                states.POWER_OFF]
            power_on_node_if_needed_mock.return_value = states.POWER_OFF
            self.deploy.reboot_and_finish_deploy(task)
            power_off_mock.assert_called_once_with(task.node)
            self.assertEqual(2, get_power_state_mock.call_count)
            node_power_action_mock.assert_called_once_with(
                task, states.POWER_ON)
            self.assertEqual(states.DEPLOYWAIT, task.node.provision_state)
            self.assertEqual(states.ACTIVE, task.node.target_provision_state)
            collect_mock.assert_called_once_with(task.node)
            resume_mock.assert_called_once_with(task)
            power_on_node_if_needed_mock.assert_called_once_with(task)
            restore_power_state_mock.assert_called_once_with(
                task, states.POWER_OFF)


class TestRefreshCleanSteps(AgentDeployMixinBaseTest):

    def setUp(self):
        super(TestRefreshCleanSteps, self).setUp()
        self.node.driver_internal_info['agent_url'] = 'http://127.0.0.1:9999'
        self.ports = [object_utils.create_test_port(self.context,
                                                    node_id=self.node.id)]
        self.clean_steps = {
            'hardware_manager_version': '1',
            'clean_steps': {
                'GenericHardwareManager': [
                    {'interface': 'deploy',
                     'step': 'erase_devices',
                     'priority': 20},
                ],
                'SpecificHardwareManager': [
                    {'interface': 'deploy',
                     'step': 'update_firmware',
                     'priority': 30},
                    {'interface': 'raid',
                     'step': 'create_configuration',
                     'priority': 10},
                ]
            }
        }
        # NOTE(dtantsur): deploy steps are structurally identical to clean
        # steps, reusing self.clean_steps for simplicity
        self.deploy_steps = {
            'hardware_manager_version': '1',
            'deploy_steps': self.clean_steps['clean_steps'],
        }

    @mock.patch.object(agent_client.AgentClient, 'get_clean_steps',
                       autospec=True)
    def test_refresh_steps(self, client_mock):
        client_mock.return_value = {
            'command_result': self.clean_steps}

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.deploy.refresh_steps(task, 'clean')

            client_mock.assert_called_once_with(mock.ANY, task.node,
                                                task.ports)
            self.assertEqual('1', task.node.driver_internal_info[
                'hardware_manager_version'])
            self.assertIn('agent_cached_clean_steps_refreshed',
                          task.node.driver_internal_info)
            steps = task.node.driver_internal_info['agent_cached_clean_steps']
            # Since steps are returned in dicts, they have non-deterministic
            # ordering
            self.assertEqual(2, len(steps))
            self.assertIn(self.clean_steps['clean_steps'][
                'GenericHardwareManager'][0], steps['deploy'])
            self.assertIn(self.clean_steps['clean_steps'][
                'SpecificHardwareManager'][0], steps['deploy'])
            self.assertEqual([self.clean_steps['clean_steps'][
                'SpecificHardwareManager'][1]], steps['raid'])

    @mock.patch.object(agent_client.AgentClient, 'get_deploy_steps',
                       autospec=True)
    def test_refresh_steps_deploy(self, client_mock):
        client_mock.return_value = {
            'command_result': self.deploy_steps}

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.deploy.refresh_steps(task, 'deploy')

            client_mock.assert_called_once_with(mock.ANY, task.node,
                                                task.ports)
            self.assertEqual('1', task.node.driver_internal_info[
                'hardware_manager_version'])
            self.assertIn('agent_cached_deploy_steps_refreshed',
                          task.node.driver_internal_info)
            steps = task.node.driver_internal_info['agent_cached_deploy_steps']
            self.assertEqual({'deploy', 'raid'}, set(steps))
            # Since steps are returned in dicts, they have non-deterministic
            # ordering
            self.assertIn(self.clean_steps['clean_steps'][
                'GenericHardwareManager'][0], steps['deploy'])
            self.assertIn(self.clean_steps['clean_steps'][
                'SpecificHardwareManager'][0], steps['deploy'])
            self.assertEqual([self.clean_steps['clean_steps'][
                'SpecificHardwareManager'][1]], steps['raid'])

    @mock.patch.object(agent_client.AgentClient, 'get_clean_steps',
                       autospec=True)
    def test_refresh_steps_missing_steps(self, client_mock):
        del self.clean_steps['clean_steps']
        client_mock.return_value = {
            'command_result': self.clean_steps}

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.assertRaisesRegex(exception.NodeCleaningFailure,
                                   'invalid result',
                                   self.deploy.refresh_steps,
                                   task, 'clean')
            client_mock.assert_called_once_with(mock.ANY, task.node,
                                                task.ports)

    @mock.patch.object(agent_client.AgentClient, 'get_clean_steps',
                       autospec=True)
    def test_refresh_steps_missing_interface(self, client_mock):
        step = self.clean_steps['clean_steps']['SpecificHardwareManager'][1]
        del step['interface']
        client_mock.return_value = {
            'command_result': self.clean_steps}

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.assertRaisesRegex(exception.NodeCleaningFailure,
                                   'invalid clean step',
                                   self.deploy.refresh_steps,
                                   task, 'clean')
            client_mock.assert_called_once_with(mock.ANY, task.node,
                                                task.ports)


class FakeAgentDeploy(agent_base.AgentDeployMixin, fake.FakeDeploy):
    pass


class StepMethodsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(StepMethodsTestCase, self).setUp()

        self.clean_steps = {
            'deploy': [
                {'interface': 'deploy',
                 'step': 'erase_devices',
                 'priority': 20},
                {'interface': 'deploy',
                 'step': 'update_firmware',
                 'priority': 30}
            ],
            'raid': [
                {'interface': 'raid',
                 'step': 'create_configuration',
                 'priority': 10}
            ]
        }
        n = {'boot_interface': 'pxe',
             'deploy_interface': 'direct',
             'driver_internal_info': {
                 'agent_cached_clean_steps': self.clean_steps}}
        self.node = object_utils.create_test_node(self.context, **n)
        self.ports = [object_utils.create_test_port(self.context,
                                                    node_id=self.node.id)]
        self.deploy = FakeAgentDeploy()

    def test_agent_get_steps(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            response = agent_base.get_steps(task, 'clean')

            # Since steps are returned in dicts, they have non-deterministic
            # ordering
            self.assertThat(response, matchers.HasLength(3))
            self.assertIn(self.clean_steps['deploy'][0], response)
            self.assertIn(self.clean_steps['deploy'][1], response)
            self.assertIn(self.clean_steps['raid'][0], response)

    def test_agent_get_steps_deploy(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            task.node.driver_internal_info = {
                'agent_cached_deploy_steps': self.clean_steps
            }
            response = agent_base.get_steps(task, 'deploy')

            # Since steps are returned in dicts, they have non-deterministic
            # ordering
            self.assertThat(response, matchers.HasLength(3))
            self.assertIn(self.clean_steps['deploy'][0], response)
            self.assertIn(self.clean_steps['deploy'][1], response)
            self.assertIn(self.clean_steps['raid'][0], response)

    def test_get_steps_custom_interface(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            response = agent_base.get_steps(task, 'clean', interface='raid')
            self.assertThat(response, matchers.HasLength(1))
            self.assertEqual(self.clean_steps['raid'], response)

    def test_get_steps_override_priorities(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            new_priorities = {'create_configuration': 42}
            response = agent_base.get_steps(
                task, 'clean', interface='raid',
                override_priorities=new_priorities)
            self.assertEqual(42, response[0]['priority'])

    def test_get_steps_override_priorities_none(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            # this is simulating the default value of a configuration option
            new_priorities = {'create_configuration': None}
            response = agent_base.get_steps(
                task, 'clean', interface='raid',
                override_priorities=new_priorities)
            self.assertEqual(10, response[0]['priority'])

    def test_get_steps_missing_steps(self):
        info = self.node.driver_internal_info
        del info['agent_cached_clean_steps']
        self.node.driver_internal_info = info
        self.node.save()
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.assertEqual([], agent_base.get_steps(task, 'clean'))

    def test_get_deploy_steps(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            task.node.driver_internal_info = {
                'agent_cached_deploy_steps': self.clean_steps
            }
            steps = self.deploy.get_deploy_steps(task)
            # 2 in-band steps + one out-of-band
            self.assertEqual(3, len(steps))
            self.assertIn(self.clean_steps['deploy'][0], steps)
            self.assertIn(self.clean_steps['deploy'][1], steps)
            self.assertNotIn(self.clean_steps['raid'][0], steps)

    def test_get_deploy_steps_only_oob(self):
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            steps = self.deploy.get_deploy_steps(task)
            # one out-of-band step
            self.assertEqual(1, len(steps))

    @mock.patch('ironic.objects.Port.list_by_node_id',
                spec_set=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'execute_clean_step',
                       autospec=True)
    def test_execute_clean_step(self, client_mock, list_ports_mock):
        client_mock.return_value = {
            'command_status': 'SUCCEEDED'}
        list_ports_mock.return_value = self.ports

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            response = agent_base.execute_step(
                task, self.clean_steps['deploy'][0], 'clean')
            self.assertEqual(states.CLEANWAIT, response)

    @mock.patch('ironic.objects.Port.list_by_node_id',
                spec_set=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'execute_deploy_step',
                       autospec=True)
    def test_execute_deploy_step(self, client_mock, list_ports_mock):
        client_mock.return_value = {
            'command_status': 'SUCCEEDED'}
        list_ports_mock.return_value = self.ports

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            response = agent_base.execute_step(
                task, self.clean_steps['deploy'][0], 'deploy')
            self.assertEqual(states.DEPLOYWAIT, response)

    @mock.patch('ironic.objects.Port.list_by_node_id',
                spec_set=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'execute_clean_step',
                       autospec=True)
    def test_execute_clean_step_running(self, client_mock, list_ports_mock):
        client_mock.return_value = {
            'command_status': 'RUNNING'}
        list_ports_mock.return_value = self.ports

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            response = agent_base.execute_step(
                task, self.clean_steps['deploy'][0], 'clean')
            self.assertEqual(states.CLEANWAIT, response)

    @mock.patch('ironic.objects.Port.list_by_node_id',
                spec_set=types.FunctionType)
    @mock.patch.object(agent_client.AgentClient, 'execute_clean_step',
                       autospec=True)
    def test_execute_clean_step_version_mismatch(
            self, client_mock, list_ports_mock):
        client_mock.return_value = {
            'command_status': 'RUNNING'}
        list_ports_mock.return_value = self.ports

        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            response = agent_base.execute_step(
                task, self.clean_steps['deploy'][0], 'clean')
            self.assertEqual(states.CLEANWAIT, response)
ironic-15.0.0/ironic/tests/unit/drivers/modules/test_deploy_utils.py

# Copyright (c) 2012 NTT DOCOMO, INC.
# Copyright 2011 OpenStack Foundation
# Copyright 2011 Ilya Alekseyev
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import tempfile

import fixtures
import mock
from oslo_config import cfg
from oslo_utils import fileutils
from oslo_utils import uuidutils

from ironic.common import boot_devices
from ironic.common import exception
from ironic.common import faults
from ironic.common import image_service
from ironic.common import states
from ironic.common import utils as common_utils
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers.modules import boot_mode_utils
from ironic.drivers.modules import deploy_utils as utils
from ironic.drivers.modules import fake
from ironic.drivers.modules import image_cache
from ironic.drivers.modules import pxe
from ironic.drivers.modules.storage import cinder
from ironic.drivers import utils as driver_utils
from ironic.tests import base as tests_base
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils

INST_INFO_DICT = db_utils.get_test_pxe_instance_info()
DRV_INFO_DICT = db_utils.get_test_pxe_driver_info()
DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info()

_PXECONF_DEPLOY =
b""" default deploy label deploy kernel deploy_kernel append initrd=deploy_ramdisk ipappend 3 label boot_partition kernel kernel append initrd=ramdisk root={{ ROOT }} label boot_whole_disk COM32 chain.c32 append mbr:{{ DISK_IDENTIFIER }} label trusted_boot kernel mboot append tboot.gz --- kernel root={{ ROOT }} --- ramdisk """ _PXECONF_BOOT_PARTITION = """ default boot_partition label deploy kernel deploy_kernel append initrd=deploy_ramdisk ipappend 3 label boot_partition kernel kernel append initrd=ramdisk root=UUID=12345678-1234-1234-1234-1234567890abcdef label boot_whole_disk COM32 chain.c32 append mbr:{{ DISK_IDENTIFIER }} label trusted_boot kernel mboot append tboot.gz --- kernel root=UUID=12345678-1234-1234-1234-1234567890abcdef \ --- ramdisk """ _PXECONF_BOOT_WHOLE_DISK = """ default boot_whole_disk label deploy kernel deploy_kernel append initrd=deploy_ramdisk ipappend 3 label boot_partition kernel kernel append initrd=ramdisk root={{ ROOT }} label boot_whole_disk COM32 chain.c32 append mbr:0x12345678 label trusted_boot kernel mboot append tboot.gz --- kernel root={{ ROOT }} --- ramdisk """ _PXECONF_TRUSTED_BOOT = """ default trusted_boot label deploy kernel deploy_kernel append initrd=deploy_ramdisk ipappend 3 label boot_partition kernel kernel append initrd=ramdisk root=UUID=12345678-1234-1234-1234-1234567890abcdef label boot_whole_disk COM32 chain.c32 append mbr:{{ DISK_IDENTIFIER }} label trusted_boot kernel mboot append tboot.gz --- kernel root=UUID=12345678-1234-1234-1234-1234567890abcdef \ --- ramdisk """ _IPXECONF_DEPLOY = b""" #!ipxe dhcp goto deploy :deploy kernel deploy_kernel initrd deploy_ramdisk boot :boot_partition kernel kernel append initrd=ramdisk root={{ ROOT }} boot :boot_whole_disk kernel chain.c32 append mbr:{{ DISK_IDENTIFIER }} boot """ _IPXECONF_BOOT_PARTITION = """ #!ipxe dhcp goto boot_partition :deploy kernel deploy_kernel initrd deploy_ramdisk boot :boot_partition kernel kernel append initrd=ramdisk 
root=UUID=12345678-1234-1234-1234-1234567890abcdef boot :boot_whole_disk kernel chain.c32 append mbr:{{ DISK_IDENTIFIER }} boot """ _IPXECONF_BOOT_WHOLE_DISK = """ #!ipxe dhcp goto boot_whole_disk :deploy kernel deploy_kernel initrd deploy_ramdisk boot :boot_partition kernel kernel append initrd=ramdisk root={{ ROOT }} boot :boot_whole_disk kernel chain.c32 append mbr:0x12345678 boot """ _IPXECONF_BOOT_ISCSI_NO_CONFIG = """ #!ipxe dhcp goto boot_iscsi :deploy kernel deploy_kernel initrd deploy_ramdisk boot :boot_partition kernel kernel append initrd=ramdisk root=UUID=0x12345678 boot :boot_whole_disk kernel chain.c32 append mbr:{{ DISK_IDENTIFIER }} boot """ _UEFI_PXECONF_DEPLOY = b""" default=deploy image=deploy_kernel label=deploy initrd=deploy_ramdisk append="ro text" image=kernel label=boot_partition initrd=ramdisk append="root={{ ROOT }}" image=chain.c32 label=boot_whole_disk append="mbr:{{ DISK_IDENTIFIER }}" """ _UEFI_PXECONF_BOOT_PARTITION = """ default=boot_partition image=deploy_kernel label=deploy initrd=deploy_ramdisk append="ro text" image=kernel label=boot_partition initrd=ramdisk append="root=UUID=12345678-1234-1234-1234-1234567890abcdef" image=chain.c32 label=boot_whole_disk append="mbr:{{ DISK_IDENTIFIER }}" """ _UEFI_PXECONF_BOOT_WHOLE_DISK = """ default=boot_whole_disk image=deploy_kernel label=deploy initrd=deploy_ramdisk append="ro text" image=kernel label=boot_partition initrd=ramdisk append="root={{ ROOT }}" image=chain.c32 label=boot_whole_disk append="mbr:0x12345678" """ _UEFI_PXECONF_DEPLOY_GRUB = b""" set default=deploy set timeout=5 set hidden_timeout_quiet=false menuentry "deploy" { linuxefi deploy_kernel "ro text" initrdefi deploy_ramdisk } menuentry "boot_partition" { linuxefi kernel "root=(( ROOT ))" initrdefi ramdisk } menuentry "boot_whole_disk" { linuxefi chain.c32 mbr:(( DISK_IDENTIFIER )) } """ _UEFI_PXECONF_BOOT_PARTITION_GRUB = """ set default=boot_partition set timeout=5 set hidden_timeout_quiet=false menuentry "deploy" { 
linuxefi deploy_kernel "ro text" initrdefi deploy_ramdisk } menuentry "boot_partition" { linuxefi kernel "root=UUID=12345678-1234-1234-1234-1234567890abcdef" initrdefi ramdisk } menuentry "boot_whole_disk" { linuxefi chain.c32 mbr:(( DISK_IDENTIFIER )) } """ _UEFI_PXECONF_BOOT_WHOLE_DISK_GRUB = """ set default=boot_whole_disk set timeout=5 set hidden_timeout_quiet=false menuentry "deploy" { linuxefi deploy_kernel "ro text" initrdefi deploy_ramdisk } menuentry "boot_partition" { linuxefi kernel "root=(( ROOT ))" initrdefi ramdisk } menuentry "boot_whole_disk" { linuxefi chain.c32 mbr:0x12345678 } """ class SwitchPxeConfigTestCase(tests_base.TestCase): # NOTE(TheJulia): Remove elilo support after the deprecation period, # in the Queens release. def _create_config(self, ipxe=False, boot_mode=None, boot_loader='elilo'): (fd, fname) = tempfile.mkstemp() if boot_mode == 'uefi' and not ipxe: if boot_loader == 'grub': pxe_cfg = _UEFI_PXECONF_DEPLOY_GRUB else: pxe_cfg = _UEFI_PXECONF_DEPLOY else: pxe_cfg = _IPXECONF_DEPLOY if ipxe else _PXECONF_DEPLOY os.write(fd, pxe_cfg) os.close(fd) self.addCleanup(os.unlink, fname) return fname def test_switch_pxe_config_partition_image(self): boot_mode = 'bios' fname = self._create_config() utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_PXECONF_BOOT_PARTITION, pxeconf) def test_switch_pxe_config_whole_disk_image(self): boot_mode = 'bios' fname = self._create_config() utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_PXECONF_BOOT_WHOLE_DISK, pxeconf) def test_switch_pxe_config_trusted_boot(self): boot_mode = 'bios' fname = self._create_config() utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_PXECONF_TRUSTED_BOOT, pxeconf) def 
test_switch_ipxe_config_partition_image(self): boot_mode = 'bios' fname = self._create_config(ipxe=True) utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False, ipxe_enabled=True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_PARTITION, pxeconf) def test_switch_ipxe_config_whole_disk_image(self): boot_mode = 'bios' fname = self._create_config(ipxe=True) utils.switch_pxe_config(fname, '0x12345678', boot_mode, True, ipxe_enabled=True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_WHOLE_DISK, pxeconf) # NOTE(TheJulia): Remove elilo support after the deprecation period, # in the Queens release. def test_switch_uefi_elilo_pxe_config_partition_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode) utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_PARTITION, pxeconf) # NOTE(TheJulia): Remove elilo support after the deprecation period, # in the Queens release. 
def test_switch_uefi_elilo_config_whole_disk_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode) utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_WHOLE_DISK, pxeconf) def test_switch_uefi_grub_pxe_config_partition_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode, boot_loader='grub') utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_PARTITION_GRUB, pxeconf) def test_switch_uefi_grub_config_whole_disk_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode, boot_loader='grub') utils.switch_pxe_config(fname, '0x12345678', boot_mode, True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_UEFI_PXECONF_BOOT_WHOLE_DISK_GRUB, pxeconf) def test_switch_uefi_ipxe_config_partition_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode, ipxe=True) utils.switch_pxe_config(fname, '12345678-1234-1234-1234-1234567890abcdef', boot_mode, False, ipxe_enabled=True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_PARTITION, pxeconf) def test_switch_uefi_ipxe_config_whole_disk_image(self): boot_mode = 'uefi' fname = self._create_config(boot_mode=boot_mode, ipxe=True) utils.switch_pxe_config(fname, '0x12345678', boot_mode, True, ipxe_enabled=True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_WHOLE_DISK, pxeconf) def test_switch_ipxe_iscsi_boot(self): boot_mode = 'iscsi' fname = self._create_config(boot_mode=boot_mode, ipxe=True) utils.switch_pxe_config(fname, '0x12345678', boot_mode, False, False, True, ipxe_enabled=True) with open(fname, 'r') as f: pxeconf = f.read() self.assertEqual(_IPXECONF_BOOT_ISCSI_NO_CONFIG, pxeconf) class GetPxeBootConfigTestCase(db_base.DbTestCase): def 
setUp(self): super(GetPxeBootConfigTestCase, self).setUp() self.node = obj_utils.get_test_node(self.context, driver='fake-hardware') self.config(pxe_bootfile_name='bios-bootfile', group='pxe') self.config(uefi_pxe_bootfile_name='uefi-bootfile', group='pxe') self.config(pxe_config_template='bios-template', group='pxe') self.config(uefi_pxe_config_template='uefi-template', group='pxe') self.bootfile_by_arch = {'aarch64': 'aarch64-bootfile', 'ppc64': 'ppc64-bootfile'} self.template_by_arch = {'aarch64': 'aarch64-template', 'ppc64': 'ppc64-template'} def test_get_pxe_boot_file_bios_without_by_arch(self): properties = {'cpu_arch': 'x86', 'capabilities': 'boot_mode:bios'} self.node.properties = properties self.config(pxe_bootfile_name_by_arch={}, group='pxe') result = utils.get_pxe_boot_file(self.node) self.assertEqual('bios-bootfile', result) def test_get_pxe_config_template_bios_without_by_arch(self): properties = {'cpu_arch': 'x86', 'capabilities': 'boot_mode:bios'} self.node.properties = properties self.config(pxe_config_template_by_arch={}, group='pxe') result = utils.get_pxe_config_template(self.node) self.assertEqual('bios-template', result) def test_get_pxe_boot_file_uefi_without_by_arch(self): properties = {'cpu_arch': 'x86_64', 'capabilities': 'boot_mode:uefi'} self.node.properties = properties self.config(pxe_bootfile_name_by_arch={}, group='pxe') result = utils.get_pxe_boot_file(self.node) self.assertEqual('uefi-bootfile', result) def test_get_pxe_config_template_uefi_without_by_arch(self): properties = {'cpu_arch': 'x86_64', 'capabilities': 'boot_mode:uefi'} self.node.properties = properties self.config(pxe_config_template_by_arch={}, group='pxe') result = utils.get_pxe_config_template(self.node) self.assertEqual('uefi-template', result) def test_get_pxe_boot_file_cpu_not_in_by_arch(self): properties = {'cpu_arch': 'x86', 'capabilities': 'boot_mode:bios'} self.node.properties = properties self.config(pxe_bootfile_name_by_arch=self.bootfile_by_arch, 
group='pxe') result = utils.get_pxe_boot_file(self.node) self.assertEqual('bios-bootfile', result) def test_get_pxe_config_template_cpu_not_in_by_arch(self): properties = {'cpu_arch': 'x86', 'capabilities': 'boot_mode:bios'} self.node.properties = properties self.config(pxe_config_template_by_arch=self.template_by_arch, group='pxe') result = utils.get_pxe_config_template(self.node) self.assertEqual('bios-template', result) def test_get_pxe_boot_file_cpu_in_by_arch(self): properties = {'cpu_arch': 'aarch64', 'capabilities': 'boot_mode:uefi'} self.node.properties = properties self.config(pxe_bootfile_name_by_arch=self.bootfile_by_arch, group='pxe') result = utils.get_pxe_boot_file(self.node) self.assertEqual('aarch64-bootfile', result) def test_get_pxe_config_template_cpu_in_by_arch(self): properties = {'cpu_arch': 'aarch64', 'capabilities': 'boot_mode:uefi'} self.node.properties = properties self.config(pxe_config_template_by_arch=self.template_by_arch, group='pxe') result = utils.get_pxe_config_template(self.node) self.assertEqual('aarch64-template', result) def test_get_pxe_boot_file_emtpy_property(self): self.node.properties = {} self.config(pxe_bootfile_name_by_arch=self.bootfile_by_arch, group='pxe') result = utils.get_pxe_boot_file(self.node) self.assertEqual('bios-bootfile', result) def test_get_pxe_config_template_emtpy_property(self): self.node.properties = {} self.config(pxe_config_template_by_arch=self.template_by_arch, group='pxe') result = utils.get_pxe_config_template(self.node) self.assertEqual('bios-template', result) def test_get_pxe_config_template_per_node(self): node = obj_utils.create_test_node( self.context, driver='fake-hardware', driver_info={"pxe_template": "fake-template"}, ) result = utils.get_pxe_config_template(node) self.assertEqual('fake-template', result) @mock.patch('time.sleep', lambda sec: None) class OtherFunctionTestCase(db_base.DbTestCase): def setUp(self): super(OtherFunctionTestCase, self).setUp() self.node = 
obj_utils.create_test_node(self.context, boot_interface='pxe') @mock.patch.object(utils, 'LOG', autospec=True) @mock.patch.object(manager_utils, 'node_power_action', autospec=True) @mock.patch.object(manager_utils, 'deploying_error_handler', autospec=True) def _test_set_failed_state(self, mock_error, mock_power, mock_log, event_value=None, power_value=None, log_calls=None, poweroff=True, collect_logs=True): err_msg = 'some failure' mock_error.side_effect = event_value mock_power.side_effect = power_value with task_manager.acquire(self.context, self.node.uuid, shared=False) as task: if collect_logs: utils.set_failed_state(task, err_msg) else: utils.set_failed_state(task, err_msg, collect_logs=collect_logs) mock_error.assert_called_once_with(task, err_msg, err_msg, clean_up=False) if poweroff: mock_power.assert_called_once_with(task, states.POWER_OFF) else: self.assertFalse(mock_power.called) self.assertEqual(err_msg, task.node.last_error) if (log_calls and poweroff): mock_log.exception.assert_has_calls(log_calls) else: self.assertFalse(mock_log.called) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) def test_set_failed_state(self, mock_collect): exc_state = exception.InvalidState('invalid state') exc_param = exception.InvalidParameterValue('invalid parameter') mock_call = mock.call(mock.ANY) self._test_set_failed_state() calls = [mock_call] self._test_set_failed_state(event_value=iter([exc_state] * len(calls)), log_calls=calls) calls = [mock_call] self._test_set_failed_state(power_value=iter([exc_param] * len(calls)), log_calls=calls) calls = [mock_call, mock_call] self._test_set_failed_state(event_value=iter([exc_state] * len(calls)), power_value=iter([exc_param] * len(calls)), log_calls=calls) self.assertEqual(4, mock_collect.call_count) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) def test_set_failed_state_no_poweroff(self, mock_collect): cfg.CONF.set_override('power_off_after_deploy_failure', False, 'deploy') 
exc_state = exception.InvalidState('invalid state') exc_param = exception.InvalidParameterValue('invalid parameter') mock_call = mock.call(mock.ANY) self._test_set_failed_state(poweroff=False) calls = [mock_call] self._test_set_failed_state(event_value=iter([exc_state] * len(calls)), log_calls=calls, poweroff=False) calls = [mock_call] self._test_set_failed_state(power_value=iter([exc_param] * len(calls)), log_calls=calls, poweroff=False) calls = [mock_call, mock_call] self._test_set_failed_state(event_value=iter([exc_state] * len(calls)), power_value=iter([exc_param] * len(calls)), log_calls=calls, poweroff=False) self.assertEqual(4, mock_collect.call_count) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) def test_set_failed_state_collect_deploy_logs(self, mock_collect): for opt in ('always', 'on_failure'): cfg.CONF.set_override('deploy_logs_collect', opt, 'agent') self._test_set_failed_state() mock_collect.assert_called_once_with(mock.ANY) mock_collect.reset_mock() @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) def test_set_failed_state_collect_deploy_logs_never(self, mock_collect): cfg.CONF.set_override('deploy_logs_collect', 'never', 'agent') self._test_set_failed_state() self.assertFalse(mock_collect.called) @mock.patch.object(driver_utils, 'collect_ramdisk_logs', autospec=True) def test_set_failed_state_collect_deploy_logs_overide(self, mock_collect): cfg.CONF.set_override('deploy_logs_collect', 'always', 'agent') self._test_set_failed_state(collect_logs=False) self.assertFalse(mock_collect.called) def test_get_boot_option(self): self.node.instance_info = {'capabilities': '{"boot_option": "local"}'} result = utils.get_boot_option(self.node) self.assertEqual("local", result) def test_get_boot_option_default_value(self): self.node.instance_info = {} result = utils.get_boot_option(self.node) self.assertEqual("local", result) def test_get_boot_option_overridden_default_value(self): 
cfg.CONF.set_override('default_boot_option', 'local', 'deploy') self.node.instance_info = {} result = utils.get_boot_option(self.node) self.assertEqual("local", result) def test_get_boot_option_instance_info_priority(self): cfg.CONF.set_override('default_boot_option', 'local', 'deploy') self.node.instance_info = {'capabilities': '{"boot_option": "netboot"}'} result = utils.get_boot_option(self.node) self.assertEqual("netboot", result) @mock.patch.object(utils, 'is_software_raid', autospec=True) def test_get_boot_option_software_raid(self, mock_is_software_raid): mock_is_software_raid.return_value = True cfg.CONF.set_override('default_boot_option', 'netboot', 'deploy') result = utils.get_boot_option(self.node) self.assertEqual("local", result) def test_is_software_raid(self): self.node.target_raid_config = { "logical_disks": [ { "size_gb": 100, "raid_level": "1", "controller": "software", } ] } result = utils.is_software_raid(self.node) self.assertTrue(result) def test_is_software_raid_false(self): self.node.target_raid_config = {} result = utils.is_software_raid(self.node) self.assertFalse(result) @mock.patch.object(image_cache, 'clean_up_caches', autospec=True) def test_fetch_images(self, mock_clean_up_caches): mock_cache = mock.MagicMock( spec_set=['fetch_image', 'master_dir'], master_dir='master_dir') utils.fetch_images(None, mock_cache, [('uuid', 'path')]) mock_clean_up_caches.assert_called_once_with(None, 'master_dir', [('uuid', 'path')]) mock_cache.fetch_image.assert_called_once_with('uuid', 'path', ctx=None, force_raw=True) @mock.patch.object(image_cache, 'clean_up_caches', autospec=True) def test_fetch_images_fail(self, mock_clean_up_caches): exc = exception.InsufficientDiskSpace(path='a', required=2, actual=1) mock_cache = mock.MagicMock( spec_set=['master_dir'], master_dir='master_dir') mock_clean_up_caches.side_effect = [exc] self.assertRaises(exception.InstanceDeployFailure, utils.fetch_images, None, mock_cache, [('uuid', 'path')]) 
        mock_clean_up_caches.assert_called_once_with(None, 'master_dir',
                                                     [('uuid', 'path')])

    @mock.patch('ironic.common.keystone.get_auth')
    @mock.patch.object(utils, '_get_ironic_session')
    def test_get_ironic_api_url_from_config(self, mock_ks, mock_auth):
        mock_sess = mock.Mock()
        mock_ks.return_value = mock_sess
        fake_api_url = 'http://foo/'
        self.config(api_url=fake_api_url, group='conductor')
        # also checking for stripped trailing slash
        self.assertEqual(fake_api_url[:-1], utils.get_ironic_api_url())

    @mock.patch('ironic.common.keystone.get_auth')
    @mock.patch.object(utils, '_get_ironic_session')
    @mock.patch('ironic.common.keystone.get_adapter')
    def test_get_ironic_api_url_from_keystone(self, mock_ka, mock_ks,
                                              mock_auth):
        mock_sess = mock.Mock()
        mock_ks.return_value = mock_sess
        fake_api_url = 'http://foo/'
        mock_ka.return_value.get_endpoint.return_value = fake_api_url
        # NOTE(pas-ha) endpoint_override is None by default
        self.config(api_url=None, group='conductor')
        url = utils.get_ironic_api_url()
        # also checking for stripped trailing slash
        self.assertEqual(fake_api_url[:-1], url)
        mock_ka.assert_called_with('service_catalog', session=mock_sess,
                                   auth=mock_auth.return_value)
        mock_ka.return_value.get_endpoint.assert_called_once_with()

    @mock.patch('ironic.common.keystone.get_auth')
    @mock.patch.object(utils, '_get_ironic_session')
    @mock.patch('ironic.common.keystone.get_adapter')
    def test_get_ironic_api_url_fail(self, mock_ka, mock_ks, mock_auth):
        mock_sess = mock.Mock()
        mock_ks.return_value = mock_sess
        mock_ka.return_value.get_endpoint.side_effect = (
            exception.KeystoneFailure())
        self.config(api_url=None, group='conductor')
        self.assertRaises(exception.InvalidParameterValue,
                          utils.get_ironic_api_url)

    @mock.patch('ironic.common.keystone.get_auth')
    @mock.patch.object(utils, '_get_ironic_session')
    @mock.patch('ironic.common.keystone.get_adapter')
    def test_get_ironic_api_url_none(self, mock_ka, mock_ks, mock_auth):
        mock_sess = mock.Mock()
        mock_ks.return_value = mock_sess
        mock_ka.return_value.get_endpoint.return_value = None
        self.config(api_url=None, group='conductor')
        self.assertRaises(exception.InvalidParameterValue,
                          utils.get_ironic_api_url)


class GetSingleNicTestCase(db_base.DbTestCase):

    def setUp(self):
        super(GetSingleNicTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context)

    def test_get_single_nic_with_vif_port_id(self):
        obj_utils.create_test_port(
            self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            uuid=uuidutils.generate_uuid(),
            internal_info={'tenant_vif_port_id': 'test-vif-A'})
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            address = utils.get_single_nic_with_vif_port_id(task)
            self.assertEqual('aa:bb:cc:dd:ee:ff', address)

    def test_get_single_nic_with_vif_port_id_extra(self):
        obj_utils.create_test_port(
            self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            uuid=uuidutils.generate_uuid(),
            extra={'vif_port_id': 'test-vif-A'})
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            address = utils.get_single_nic_with_vif_port_id(task)
            self.assertEqual('aa:bb:cc:dd:ee:ff', address)

    def test_get_single_nic_with_cleaning_vif_port_id(self):
        obj_utils.create_test_port(
            self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            uuid=uuidutils.generate_uuid(),
            internal_info={'cleaning_vif_port_id': 'test-vif-A'})
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            address = utils.get_single_nic_with_vif_port_id(task)
            self.assertEqual('aa:bb:cc:dd:ee:ff', address)

    def test_get_single_nic_with_provisioning_vif_port_id(self):
        obj_utils.create_test_port(
            self.context, node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            uuid=uuidutils.generate_uuid(),
            internal_info={'provisioning_vif_port_id': 'test-vif-A'})
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            address = utils.get_single_nic_with_vif_port_id(task)
            self.assertEqual('aa:bb:cc:dd:ee:ff', address)


class ParseInstanceInfoCapabilitiesTestCase(tests_base.TestCase):

    def setUp(self):
        super(ParseInstanceInfoCapabilitiesTestCase, self).setUp()
        self.node = obj_utils.get_test_node(self.context,
                                            driver='fake-hardware')

    def test_parse_instance_info_capabilities_string(self):
        self.node.instance_info = {'capabilities': '{"cat": "meow"}'}
        expected_result = {"cat": "meow"}
        result = utils.parse_instance_info_capabilities(self.node)
        self.assertEqual(expected_result, result)

    def test_parse_instance_info_capabilities(self):
        self.node.instance_info = {'capabilities': {"dog": "wuff"}}
        expected_result = {"dog": "wuff"}
        result = utils.parse_instance_info_capabilities(self.node)
        self.assertEqual(expected_result, result)

    def test_parse_instance_info_invalid_type(self):
        self.node.instance_info = {'capabilities': 'not-a-dict'}
        self.assertRaises(exception.InvalidParameterValue,
                          utils.parse_instance_info_capabilities,
                          self.node)

    def test_is_secure_boot_requested_true(self):
        self.node.instance_info = {'capabilities': {"secure_boot": "tRue"}}
        self.assertTrue(utils.is_secure_boot_requested(self.node))

    def test_is_secure_boot_requested_false(self):
        self.node.instance_info = {'capabilities': {"secure_boot": "false"}}
        self.assertFalse(utils.is_secure_boot_requested(self.node))

    def test_is_secure_boot_requested_invalid(self):
        self.node.instance_info = {'capabilities': {"secure_boot": "invalid"}}
        self.assertFalse(utils.is_secure_boot_requested(self.node))

    def test_is_trusted_boot_requested_true(self):
        self.node.instance_info = {'capabilities': {"trusted_boot": "true"}}
        self.assertTrue(utils.is_trusted_boot_requested(self.node))

    def test_is_trusted_boot_requested_false(self):
        self.node.instance_info = {'capabilities': {"trusted_boot": "false"}}
        self.assertFalse(utils.is_trusted_boot_requested(self.node))

    def test_is_trusted_boot_requested_invalid(self):
        self.node.instance_info = {'capabilities': {"trusted_boot": "invalid"}}
        self.assertFalse(utils.is_trusted_boot_requested(self.node))

    def test_get_boot_mode_for_deploy_using_capabilities(self):
        properties = {'capabilities': 'boot_mode:uefi,cap2:value2'}
        self.node.properties = properties
        result = boot_mode_utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('uefi', result)

    def test_get_boot_mode_for_deploy_using_instance_info_cap(self):
        instance_info = {'capabilities': {'secure_boot': 'True'}}
        self.node.instance_info = instance_info
        result = boot_mode_utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('uefi', result)

        instance_info = {'capabilities': {'trusted_boot': 'True'}}
        self.node.instance_info = instance_info
        result = boot_mode_utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('bios', result)

        instance_info = {'capabilities': {'trusted_boot': 'True',
                                          'secure_boot': 'True'}}
        self.node.instance_info = instance_info
        result = boot_mode_utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('uefi', result)

    def test_get_boot_mode_for_deploy_using_instance_info(self):
        instance_info = {'deploy_boot_mode': 'bios'}
        self.node.instance_info = instance_info
        result = boot_mode_utils.get_boot_mode_for_deploy(self.node)
        self.assertEqual('bios', result)

    def test_validate_boot_mode_capability(self):
        prop = {'capabilities': 'boot_mode:uefi,cap2:value2'}
        self.node.properties = prop
        result = utils.validate_capabilities(self.node)
        self.assertIsNone(result)

    def test_validate_boot_mode_capability_with_exc(self):
        prop = {'capabilities': 'boot_mode:UEFI,cap2:value2'}
        self.node.properties = prop
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_capabilities, self.node)

    def test_validate_boot_mode_capability_instance_info(self):
        inst_info = {'capabilities': {"boot_mode": "uefi", "cap2": "value2"}}
        self.node.instance_info = inst_info
        result = utils.validate_capabilities(self.node)
        self.assertIsNone(result)

    def test_validate_boot_mode_capability_instance_info_with_exc(self):
        inst_info = {'capabilities': {"boot_mode": "UEFI", "cap2": "value2"}}
        self.node.instance_info = inst_info
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_capabilities, self.node)

    def test_validate_trusted_boot_capability(self):
        properties = {'capabilities': 'trusted_boot:value'}
        self.node.properties = properties
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_capabilities, self.node)

    def test_all_supported_capabilities(self):
        self.assertEqual(('local', 'netboot', 'ramdisk'),
                         utils.SUPPORTED_CAPABILITIES['boot_option'])
        self.assertEqual(('bios', 'uefi'),
                         utils.SUPPORTED_CAPABILITIES['boot_mode'])
        self.assertEqual(('true', 'false'),
                         utils.SUPPORTED_CAPABILITIES['secure_boot'])
        self.assertEqual(('true', 'false'),
                         utils.SUPPORTED_CAPABILITIES['trusted_boot'])

    def test_get_disk_label(self):
        inst_info = {'capabilities': {'disk_label': 'gpt', 'foo': 'bar'}}
        self.node.instance_info = inst_info
        result = utils.get_disk_label(self.node)
        self.assertEqual('gpt', result)


class TrySetBootDeviceTestCase(db_base.DbTestCase):

    def setUp(self):
        super(TrySetBootDeviceTestCase, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               driver="fake-hardware")

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    def test_try_set_boot_device_okay(self, node_set_boot_device_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            utils.try_set_boot_device(task, boot_devices.DISK,
                                      persistent=True)
            node_set_boot_device_mock.assert_called_once_with(
                task, boot_devices.DISK, persistent=True)

    @mock.patch.object(utils, 'LOG', autospec=True)
    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    def test_try_set_boot_device_ipmifailure_uefi(
            self, node_set_boot_device_mock, log_mock):
        self.node.properties = {'capabilities': 'boot_mode:uefi'}
        self.node.save()
        node_set_boot_device_mock.side_effect = exception.IPMIFailure(cmd='a')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            utils.try_set_boot_device(task, boot_devices.DISK,
                                      persistent=True)
            node_set_boot_device_mock.assert_called_once_with(
                task, boot_devices.DISK, persistent=True)
            log_mock.warning.assert_called_once_with(mock.ANY, self.node.uuid)

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    def test_try_set_boot_device_ipmifailure_bios(
            self, node_set_boot_device_mock):
        node_set_boot_device_mock.side_effect = exception.IPMIFailure(cmd='a')
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IPMIFailure,
                              utils.try_set_boot_device,
                              task, boot_devices.DISK, persistent=True)
            node_set_boot_device_mock.assert_called_once_with(
                task, boot_devices.DISK, persistent=True)

    @mock.patch.object(manager_utils, 'node_set_boot_device', autospec=True)
    def test_try_set_boot_device_some_other_exception(
            self, node_set_boot_device_mock):
        exc = exception.IloOperationError(operation="qwe", error="error")
        node_set_boot_device_mock.side_effect = exc
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=False) as task:
            self.assertRaises(exception.IloOperationError,
                              utils.try_set_boot_device,
                              task, boot_devices.DISK, persistent=True)
            node_set_boot_device_mock.assert_called_once_with(
                task, boot_devices.DISK, persistent=True)


class AgentMethodsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(AgentMethodsTestCase, self).setUp()
        self.clean_steps = {
            'deploy': [
                {'interface': 'deploy',
                 'step': 'erase_devices',
                 'priority': 20},
                {'interface': 'deploy',
                 'step': 'update_firmware',
                 'priority': 30}
            ],
            'raid': [
                {'interface': 'raid',
                 'step': 'create_configuration',
                 'priority': 10}
            ]
        }
        n = {'boot_interface': 'pxe',
             'deploy_interface': 'direct',
             'driver_internal_info': {
                 'agent_cached_clean_steps': self.clean_steps}}
        self.node = obj_utils.create_test_node(self.context, **n)
        self.ports = [obj_utils.create_test_port(self.context,
                                                 node_id=self.node.id)]

    def test_agent_add_clean_params(self):
        cfg.CONF.set_override('shred_random_overwrite_iterations', 2, 'deploy')
        cfg.CONF.set_override('shred_final_overwrite_with_zeros', False,
                              'deploy')
        cfg.CONF.set_override('continue_if_disk_secure_erase_fails', True,
                              'deploy')
        cfg.CONF.set_override('enable_ata_secure_erase', False, 'deploy')
        cfg.CONF.set_override('disk_erasure_concurrency', 8, 'deploy')
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            utils.agent_add_clean_params(task)
            self.assertEqual(2, task.node.driver_internal_info[
                'agent_erase_devices_iterations'])
            self.assertIs(False, task.node.driver_internal_info[
                'agent_erase_devices_zeroize'])
            self.assertIs(True, task.node.driver_internal_info[
                'agent_continue_if_ata_erase_failed'])
            self.assertIs(False, task.node.driver_internal_info[
                'agent_enable_ata_secure_erase'])
            self.assertEqual(8, task.node.driver_internal_info[
                'disk_erasure_concurrency'])

    @mock.patch('ironic.conductor.utils.is_fast_track', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'prepare_ramdisk', autospec=True)
    @mock.patch('ironic.conductor.utils.node_power_action', autospec=True)
    @mock.patch.object(utils, 'build_agent_options', autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.'
                'add_cleaning_network')
    def _test_prepare_inband_cleaning(
            self, add_cleaning_network_mock, build_options_mock,
            power_mock, prepare_ramdisk_mock, is_fast_track_mock,
            manage_boot=True, fast_track=False):
        build_options_mock.return_value = {'a': 'b'}
        is_fast_track_mock.return_value = fast_track
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            self.assertEqual(
                states.CLEANWAIT,
                utils.prepare_inband_cleaning(task, manage_boot=manage_boot))
            add_cleaning_network_mock.assert_called_once_with(task)
            if not fast_track:
                power_mock.assert_called_once_with(task, states.REBOOT)
            else:
                self.assertFalse(power_mock.called)
            self.assertEqual(1, task.node.driver_internal_info[
                'agent_erase_devices_iterations'])
            self.assertIs(True, task.node.driver_internal_info[
                'agent_erase_devices_zeroize'])
            if manage_boot:
                prepare_ramdisk_mock.assert_called_once_with(
                    mock.ANY, mock.ANY, {'a': 'b'})
                build_options_mock.assert_called_once_with(task.node)
            else:
                self.assertFalse(prepare_ramdisk_mock.called)
                self.assertFalse(build_options_mock.called)

    def test_prepare_inband_cleaning(self):
        self._test_prepare_inband_cleaning()

    def test_prepare_inband_cleaning_manage_boot_false(self):
        self._test_prepare_inband_cleaning(manage_boot=False)

    def test_prepare_inband_cleaning_fast_track(self):
        self._test_prepare_inband_cleaning(fast_track=True)

    @mock.patch('ironic.conductor.utils.is_fast_track', autospec=True)
    @mock.patch.object(pxe.PXEBoot, 'clean_up_ramdisk', autospec=True)
    @mock.patch('ironic.drivers.modules.network.flat.FlatNetwork.'
                'remove_cleaning_network')
    @mock.patch('ironic.conductor.utils.node_power_action', autospec=True)
    def _test_tear_down_inband_cleaning(
            self, power_mock, remove_cleaning_network_mock,
            clean_up_ramdisk_mock, is_fast_track_mock,
            manage_boot=True, fast_track=False, cleaning_error=False):
        is_fast_track_mock.return_value = fast_track
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            if cleaning_error:
                task.node.fault = faults.CLEAN_FAILURE
            utils.tear_down_inband_cleaning(task, manage_boot=manage_boot)
            if not (fast_track or cleaning_error):
                power_mock.assert_called_once_with(task, states.POWER_OFF)
            else:
                self.assertFalse(power_mock.called)
            remove_cleaning_network_mock.assert_called_once_with(task)
            if manage_boot:
                clean_up_ramdisk_mock.assert_called_once_with(
                    task.driver.boot, task)
            else:
                self.assertFalse(clean_up_ramdisk_mock.called)

    def test_tear_down_inband_cleaning(self):
        self._test_tear_down_inband_cleaning(manage_boot=True)

    def test_tear_down_inband_cleaning_manage_boot_false(self):
        self._test_tear_down_inband_cleaning(manage_boot=False)

    def test_tear_down_inband_cleaning_fast_track(self):
        self._test_tear_down_inband_cleaning(fast_track=True)

    def test_tear_down_inband_cleaning_cleaning_error(self):
        self._test_tear_down_inband_cleaning(cleaning_error=True)

    def test_build_agent_options_conf(self):
        self.config(api_url='https://api-url', group='conductor')
        options = utils.build_agent_options(self.node)
        self.assertEqual('https://api-url', options['ipa-api-url'])

    @mock.patch.object(utils, '_get_ironic_session')
    def test_build_agent_options_keystone(self, session_mock):
        self.config(api_url=None, group='conductor')
        sess = mock.Mock()
        sess.get_endpoint.return_value = 'https://api-url'
        session_mock.return_value = sess
        options = utils.build_agent_options(self.node)
        self.assertEqual('https://api-url', options['ipa-api-url'])

    def test_direct_deploy_should_convert_raw_image_true(self):
        cfg.CONF.set_override('force_raw_images', True)
        cfg.CONF.set_override('stream_raw_images', True, group='agent')
        internal_info = self.node.driver_internal_info
        internal_info['is_whole_disk_image'] = True
        self.node.driver_internal_info = internal_info
        self.assertTrue(
            utils.direct_deploy_should_convert_raw_image(self.node))

    def test_direct_deploy_should_convert_raw_image_no_force_raw(self):
        cfg.CONF.set_override('force_raw_images', False)
        cfg.CONF.set_override('stream_raw_images', True, group='agent')
        internal_info = self.node.driver_internal_info
        internal_info['is_whole_disk_image'] = True
        self.node.driver_internal_info = internal_info
        self.assertFalse(
            utils.direct_deploy_should_convert_raw_image(self.node))

    def test_direct_deploy_should_convert_raw_image_no_stream(self):
        cfg.CONF.set_override('force_raw_images', True)
        cfg.CONF.set_override('stream_raw_images', False, group='agent')
        internal_info = self.node.driver_internal_info
        internal_info['is_whole_disk_image'] = True
        self.node.driver_internal_info = internal_info
        self.assertFalse(
            utils.direct_deploy_should_convert_raw_image(self.node))

    def test_direct_deploy_should_convert_raw_image_partition(self):
        cfg.CONF.set_override('force_raw_images', True)
        cfg.CONF.set_override('stream_raw_images', True, group='agent')
        internal_info = self.node.driver_internal_info
        internal_info['is_whole_disk_image'] = False
        self.node.driver_internal_info = internal_info
        self.assertTrue(
            utils.direct_deploy_should_convert_raw_image(self.node))


class ValidateImagePropertiesTestCase(db_base.DbTestCase):

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test_validate_image_properties_glance_image(self, image_service_mock):
        node = obj_utils.create_test_node(
            self.context, boot_interface='pxe',
            instance_info=INST_INFO_DICT,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        inst_info = utils.get_image_instance_info(node)
        image_service_mock.return_value.show.return_value = {
            'properties': {'kernel_id': '1111', 'ramdisk_id': '2222'},
        }
        utils.validate_image_properties(self.context, inst_info,
                                        ['kernel_id', 'ramdisk_id'])
        image_service_mock.assert_called_once_with(
            node.instance_info['image_source'], context=self.context
        )

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test_validate_image_properties_glance_image_missing_prop(
            self, image_service_mock):
        node = obj_utils.create_test_node(
            self.context, boot_interface='pxe',
            instance_info=INST_INFO_DICT,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        inst_info = utils.get_image_instance_info(node)
        image_service_mock.return_value.show.return_value = {
            'properties': {'kernel_id': '1111'},
        }
        self.assertRaises(exception.MissingParameterValue,
                          utils.validate_image_properties,
                          self.context, inst_info,
                          ['kernel_id', 'ramdisk_id'])
        image_service_mock.assert_called_once_with(
            node.instance_info['image_source'], context=self.context
        )

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test_validate_image_properties_glance_image_not_authorized(
            self, image_service_mock):
        inst_info = {'image_source': 'uuid'}
        show_mock = image_service_mock.return_value.show
        show_mock.side_effect = exception.ImageNotAuthorized(image_id='uuid')
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_image_properties, self.context,
                          inst_info, [])

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test_validate_image_properties_glance_image_not_found(
            self, image_service_mock):
        inst_info = {'image_source': 'uuid'}
        show_mock = image_service_mock.return_value.show
        show_mock.side_effect = exception.ImageNotFound(image_id='uuid')
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_image_properties, self.context,
                          inst_info, [])

    def test_validate_image_properties_invalid_image_href(self):
        inst_info = {'image_source': 'emule://uuid'}
        self.assertRaises(exception.InvalidParameterValue,
                          utils.validate_image_properties, self.context,
                          inst_info, [])
    @mock.patch.object(image_service.HttpImageService, 'show', autospec=True)
    def test_validate_image_properties_nonglance_image(
            self, image_service_show_mock):
        instance_info = {
            'image_source': 'http://ubuntu',
            'kernel': 'kernel_uuid',
            'ramdisk': 'file://initrd',
            'root_gb': 100,
        }
        image_service_show_mock.return_value = {'size': 1, 'properties': {}}
        node = obj_utils.create_test_node(
            self.context, boot_interface='pxe',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        inst_info = utils.get_image_instance_info(node)
        utils.validate_image_properties(self.context, inst_info,
                                        ['kernel', 'ramdisk'])
        image_service_show_mock.assert_called_once_with(
            mock.ANY, instance_info['image_source'])

    @mock.patch.object(image_service.HttpImageService, 'show', autospec=True)
    def test_validate_image_properties_nonglance_image_validation_fail(
            self, img_service_show_mock):
        instance_info = {
            'image_source': 'http://ubuntu',
            'kernel': 'kernel_uuid',
            'ramdisk': 'file://initrd',
            'root_gb': 100,
        }
        img_service_show_mock.side_effect = exception.ImageRefValidationFailed(
            image_href='http://ubuntu', reason='HTTPError')
        node = obj_utils.create_test_node(
            self.context, boot_interface='pxe',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        inst_info = utils.get_image_instance_info(node)
        expected_error = ('Validation of image href http://ubuntu '
                          'failed, reason: HTTPError')
        error = self.assertRaises(exception.InvalidParameterValue,
                                  utils.validate_image_properties,
                                  self.context, inst_info,
                                  ['kernel', 'ramdisk'])
        self.assertEqual(expected_error, str(error))


class ValidateParametersTestCase(db_base.DbTestCase):

    def _test__get_img_instance_info(
            self, instance_info=INST_INFO_DICT,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT):
        # make sure we get back the expected things
        node = obj_utils.create_test_node(
            self.context, boot_interface='pxe',
            instance_info=instance_info,
            driver_info=driver_info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        info = utils.get_image_instance_info(node)
        self.assertIsNotNone(info['image_source'])
        return info

    def test__get_img_instance_info_good(self):
        self._test__get_img_instance_info()

    def test__get_img_instance_info_good_non_glance_image(self):
        instance_info = INST_INFO_DICT.copy()
        instance_info['image_source'] = 'http://image'
        instance_info['kernel'] = 'http://kernel'
        instance_info['ramdisk'] = 'http://ramdisk'
        info = self._test__get_img_instance_info(instance_info=instance_info)
        self.assertIsNotNone(info['ramdisk'])
        self.assertIsNotNone(info['kernel'])

    def test__get_img_instance_info_non_glance_image_missing_kernel(self):
        instance_info = INST_INFO_DICT.copy()
        instance_info['image_source'] = 'http://image'
        instance_info['ramdisk'] = 'http://ramdisk'
        self.assertRaises(
            exception.MissingParameterValue,
            self._test__get_img_instance_info,
            instance_info=instance_info)

    def test__get_img_instance_info_non_glance_image_missing_ramdisk(self):
        instance_info = INST_INFO_DICT.copy()
        instance_info['image_source'] = 'http://image'
        instance_info['kernel'] = 'http://kernel'
        self.assertRaises(
            exception.MissingParameterValue,
            self._test__get_img_instance_info,
            instance_info=instance_info)

    def test__get_img_instance_info_missing_image_source(self):
        instance_info = INST_INFO_DICT.copy()
        del instance_info['image_source']
        self.assertRaises(
            exception.MissingParameterValue,
            self._test__get_img_instance_info,
            instance_info=instance_info)

    def test__get_img_instance_info_whole_disk_image(self):
        driver_internal_info = DRV_INTERNAL_INFO_DICT.copy()
        driver_internal_info['is_whole_disk_image'] = True
        self._test__get_img_instance_info(
            driver_internal_info=driver_internal_info)


class InstanceInfoTestCase(db_base.DbTestCase):

    def test_parse_instance_info_good(self):
        # make sure we get back the expected things
        node = obj_utils.create_test_node(
            self.context, boot_interface='pxe',
            instance_info=INST_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT
        )
        info = utils.parse_instance_info(node)
        self.assertIsNotNone(info['image_source'])
        self.assertIsNotNone(info['root_gb'])
        self.assertEqual(0, info['ephemeral_gb'])
        self.assertIsNone(info['configdrive'])

    def test_parse_instance_info_missing_instance_source(self):
        # make sure error is raised when info is missing
        info = dict(INST_INFO_DICT)
        del info['image_source']
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.assertRaises(exception.MissingParameterValue,
                          utils.parse_instance_info,
                          node)

    def test_parse_instance_info_missing_root_gb(self):
        # make sure error is raised when info is missing
        info = dict(INST_INFO_DICT)
        del info['root_gb']
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.assertRaises(exception.MissingParameterValue,
                          utils.parse_instance_info, node)

    def test_parse_instance_info_invalid_root_gb(self):
        info = dict(INST_INFO_DICT)
        info['root_gb'] = 'foobar'
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.assertRaises(exception.InvalidParameterValue,
                          utils.parse_instance_info, node)

    def test_parse_instance_info_valid_ephemeral_gb(self):
        ephemeral_gb = 10
        ephemeral_mb = 1024 * ephemeral_gb
        ephemeral_fmt = 'test-fmt'
        info = dict(INST_INFO_DICT)
        info['ephemeral_gb'] = ephemeral_gb
        info['ephemeral_format'] = ephemeral_fmt
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        data = utils.parse_instance_info(node)
        self.assertEqual(ephemeral_mb, data['ephemeral_mb'])
        self.assertEqual(ephemeral_fmt, data['ephemeral_format'])

    def test_parse_instance_info_unicode_swap_mb(self):
        swap_mb = u'10'
        swap_mb_int = 10
        info = dict(INST_INFO_DICT)
        info['swap_mb'] = swap_mb
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        data = utils.parse_instance_info(node)
        self.assertEqual(swap_mb_int, data['swap_mb'])

    def test_parse_instance_info_invalid_ephemeral_gb(self):
        info = dict(INST_INFO_DICT)
        info['ephemeral_gb'] = 'foobar'
        info['ephemeral_format'] = 'exttest'
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.assertRaises(exception.InvalidParameterValue,
                          utils.parse_instance_info, node)

    def test_parse_instance_info_valid_ephemeral_missing_format(self):
        ephemeral_gb = 10
        ephemeral_fmt = 'test-fmt'
        info = dict(INST_INFO_DICT)
        info['ephemeral_gb'] = ephemeral_gb
        info['ephemeral_format'] = None
        self.config(default_ephemeral_format=ephemeral_fmt, group='pxe')
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        instance_info = utils.parse_instance_info(node)
        self.assertEqual(ephemeral_fmt, instance_info['ephemeral_format'])

    def test_parse_instance_info_valid_preserve_ephemeral_true(self):
        info = dict(INST_INFO_DICT)
        for opt in ['true', 'TRUE', 'True', 't', 'on', 'yes', 'y', '1']:
            info['preserve_ephemeral'] = opt
            node = obj_utils.create_test_node(
                self.context, uuid=uuidutils.generate_uuid(),
                instance_info=info,
                driver_internal_info=DRV_INTERNAL_INFO_DICT,
            )
            data = utils.parse_instance_info(node)
            self.assertTrue(data['preserve_ephemeral'])

    def test_parse_instance_info_valid_preserve_ephemeral_false(self):
        info = dict(INST_INFO_DICT)
        for opt in ['false', 'FALSE', 'False', 'f', 'off', 'no', 'n', '0']:
            info['preserve_ephemeral'] = opt
            node = obj_utils.create_test_node(
                self.context, uuid=uuidutils.generate_uuid(),
                instance_info=info,
                driver_internal_info=DRV_INTERNAL_INFO_DICT,
            )
            data = utils.parse_instance_info(node)
            self.assertFalse(data['preserve_ephemeral'])

    def test_parse_instance_info_invalid_preserve_ephemeral(self):
        info = dict(INST_INFO_DICT)
        info['preserve_ephemeral'] = 'foobar'
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.assertRaises(exception.InvalidParameterValue,
                          utils.parse_instance_info, node)

    def test_parse_instance_info_invalid_ephemeral_disk(self):
        info = dict(INST_INFO_DICT)
        info['ephemeral_gb'] = 10
        info['swap_mb'] = 0
        info['root_gb'] = 20
        info['preserve_ephemeral'] = True
        drv_internal_dict = {'instance': {'ephemeral_gb': 9,
                                          'swap_mb': 0,
                                          'root_gb': 20}}
        drv_internal_dict.update(DRV_INTERNAL_INFO_DICT)
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=drv_internal_dict,
        )
        self.assertRaises(exception.InvalidParameterValue,
                          utils.parse_instance_info, node)

    def test__check_disk_layout_unchanged_fails(self):
        info = dict(INST_INFO_DICT)
        info['ephemeral_gb'] = 10
        info['swap_mb'] = 0
        info['root_gb'] = 20
        info['preserve_ephemeral'] = True
        drv_internal_dict = {'instance': {'ephemeral_gb': 20,
                                          'swap_mb': 0,
                                          'root_gb': 20}}
        drv_internal_dict.update(DRV_INTERNAL_INFO_DICT)
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=drv_internal_dict,
        )
        self.assertRaises(exception.InvalidParameterValue,
                          utils._check_disk_layout_unchanged, node, info)

    def test__check_disk_layout_unchanged(self):
        info = dict(INST_INFO_DICT)
        info['ephemeral_gb'] = 10
        info['swap_mb'] = 0
        info['root_gb'] = 20
        info['preserve_ephemeral'] = True
        drv_internal_dict = {'instance': {'ephemeral_gb': 10,
                                          'swap_mb': 0,
                                          'root_gb': 20}}
        drv_internal_dict.update(DRV_INTERNAL_INFO_DICT)
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=drv_internal_dict,
        )
        self.assertIsNone(utils._check_disk_layout_unchanged(node, info))

    def test_parse_instance_info_configdrive(self):
        info = dict(INST_INFO_DICT)
        info['configdrive'] = 'http://1.2.3.4/cd'
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        instance_info = utils.parse_instance_info(node)
        self.assertEqual('http://1.2.3.4/cd', instance_info['configdrive'])

    def test_parse_instance_info_nonglance_image(self):
        info = INST_INFO_DICT.copy()
        info['image_source'] = 'file:///image.qcow2'
        info['kernel'] = 'file:///image.vmlinuz'
        info['ramdisk'] = 'file:///image.initrd'
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        utils.parse_instance_info(node)

    def test_parse_instance_info_nonglance_image_no_kernel(self):
        info = INST_INFO_DICT.copy()
        info['image_source'] = 'file:///image.qcow2'
        info['ramdisk'] = 'file:///image.initrd'
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )
        self.assertRaises(exception.MissingParameterValue,
                          utils.parse_instance_info, node)

    def test_parse_instance_info_whole_disk_image(self):
        driver_internal_info = dict(DRV_INTERNAL_INFO_DICT)
        driver_internal_info['is_whole_disk_image'] = True
        node = obj_utils.create_test_node(
            self.context, instance_info=INST_INFO_DICT,
            driver_internal_info=driver_internal_info,
        )
        instance_info = utils.parse_instance_info(node)
        self.assertIsNotNone(instance_info['image_source'])
        self.assertNotIn('root_mb', instance_info)
        self.assertNotIn('ephemeral_mb', instance_info)
        self.assertNotIn('swap_mb', instance_info)
        self.assertIsNone(instance_info['configdrive'])

    def test_parse_instance_info_whole_disk_image_missing_root(self):
        driver_internal_info = dict(DRV_INTERNAL_INFO_DICT)
        driver_internal_info['is_whole_disk_image'] = True
        info = dict(INST_INFO_DICT)
        del info['root_gb']
        node = obj_utils.create_test_node(
            self.context, instance_info=info,
            driver_internal_info=driver_internal_info
        )
        instance_info = utils.parse_instance_info(node)
        self.assertIsNotNone(instance_info['image_source'])
        self.assertNotIn('root_mb', instance_info)
        self.assertNotIn('ephemeral_mb', instance_info)
        self.assertNotIn('swap_mb', instance_info)


class TestBuildInstanceInfoForDeploy(db_base.DbTestCase):

    def setUp(self):
        super(TestBuildInstanceInfoForDeploy, self).setUp()
        self.node = obj_utils.create_test_node(self.context,
                                               boot_interface='pxe',
                                               deploy_interface='direct')

    @mock.patch.object(image_service.HttpImageService, 'validate_href',
                       autospec=True)
    @mock.patch.object(image_service, 'GlanceImageService', autospec=True)
    def test_build_instance_info_for_deploy_glance_image(self, glance_mock,
                                                         validate_mock):
        i_info = self.node.instance_info
        i_info['image_source'] = '733d1c44-a2ea-414b-aca7-69decf20d810'
        driver_internal_info = self.node.driver_internal_info
        driver_internal_info['is_whole_disk_image'] = True
        self.node.driver_internal_info = driver_internal_info
        self.node.instance_info = i_info
        self.node.save()
        image_info = {'checksum': 'aa', 'disk_format': 'qcow2',
                      'os_hash_algo': 'sha512',
                      'os_hash_value': 'fake-sha512',
                      'container_format': 'bare', 'properties': {}}
        glance_mock.return_value.show = mock.MagicMock(spec_set=[],
                                                       return_value=image_info)
        glance_mock.return_value.swift_temp_url.return_value = (
            'http://temp-url')
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            utils.build_instance_info_for_deploy(task)
            glance_mock.assert_called_once_with(context=task.context)
            glance_mock.return_value.show.assert_called_once_with(
                self.node.instance_info['image_source'])
            glance_mock.return_value.swift_temp_url.assert_called_once_with(
                image_info)
            validate_mock.assert_called_once_with(mock.ANY, 'http://temp-url',
                                                  secret=True)

    @mock.patch.object(image_service.HttpImageService, 'validate_href',
                       autospec=True)
    @mock.patch.object(utils, 'parse_instance_info', autospec=True)
    @mock.patch.object(image_service, 'GlanceImageService', autospec=True)
    def test_build_instance_info_for_deploy_glance_partition_image(
            self, glance_mock, parse_instance_info_mock, validate_mock):
        i_info = {}
        i_info['image_source'] = '733d1c44-a2ea-414b-aca7-69decf20d810'
        i_info['kernel'] = '13ce5a56-1de3-4916-b8b2-be778645d003'
        i_info['ramdisk'] = 'a5a370a8-1b39-433f-be63-2c7d708e4b4e'
        i_info['root_gb'] = 5
        i_info['swap_mb'] = 4
        i_info['ephemeral_gb'] = 0
        i_info['ephemeral_format'] = None
        i_info['configdrive'] = 'configdrive'
        driver_internal_info = self.node.driver_internal_info
        driver_internal_info['is_whole_disk_image'] = False
        self.node.driver_internal_info = driver_internal_info
        self.node.instance_info = i_info
        self.node.save()
        image_info = {'checksum': 'aa', 'disk_format': 'qcow2',
                      'os_hash_algo': 'sha512',
                      'os_hash_value': 'fake-sha512',
                      'container_format': 'bare',
                      'properties': {'kernel_id': 'kernel',
                                     'ramdisk_id': 'ramdisk'}}
        glance_mock.return_value.show = mock.MagicMock(spec_set=[],
                                                       return_value=image_info)
        glance_obj_mock = glance_mock.return_value
        glance_obj_mock.swift_temp_url.return_value = 'http://temp-url'
        parse_instance_info_mock.return_value = {'swap_mb': 4}
        image_source = '733d1c44-a2ea-414b-aca7-69decf20d810'
        expected_i_info = {'root_gb': 5,
                           'swap_mb': 4,
                           'ephemeral_gb': 0,
                           'ephemeral_format': None,
                           'configdrive': 'configdrive',
                           'image_source': image_source,
                           'image_url': 'http://temp-url',
                           'kernel': 'kernel',
                           'ramdisk': 'ramdisk',
                           'image_type': 'partition',
                           'image_tags': [],
                           'image_properties': {'kernel_id': 'kernel',
                                                'ramdisk_id': 'ramdisk'},
                           'image_checksum': 'aa',
                           'image_os_hash_algo': 'sha512',
                           'image_os_hash_value': 'fake-sha512',
                           'image_container_format': 'bare',
                           'image_disk_format': 'qcow2'}
        with task_manager.acquire(
                self.context, self.node.uuid, shared=False) as task:
            info = utils.build_instance_info_for_deploy(task)
            glance_mock.assert_called_once_with(context=task.context)
            glance_mock.return_value.show.assert_called_once_with(
                self.node.instance_info['image_source'])
            glance_mock.return_value.swift_temp_url.assert_called_once_with(
                image_info)
            validate_mock.assert_called_once_with(
                mock.ANY, 'http://temp-url', secret=True)
            image_type = task.node.instance_info['image_type']
            self.assertEqual('partition', image_type)
            self.assertEqual('kernel', info['kernel'])
self.assertEqual('ramdisk', info['ramdisk']) self.assertEqual(expected_i_info, info) parse_instance_info_mock.assert_called_once_with(task.node) @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) def test_build_instance_info_for_deploy_nonglance_image( self, validate_href_mock): i_info = self.node.instance_info driver_internal_info = self.node.driver_internal_info i_info['image_source'] = 'http://image-ref' i_info['image_checksum'] = 'aa' i_info['root_gb'] = 10 i_info['image_checksum'] = 'aa' driver_internal_info['is_whole_disk_image'] = True self.node.instance_info = i_info self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: info = utils.build_instance_info_for_deploy(task) self.assertEqual(self.node.instance_info['image_source'], info['image_url']) validate_href_mock.assert_called_once_with( mock.ANY, 'http://image-ref', False) @mock.patch.object(utils, 'parse_instance_info', autospec=True) @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) def test_build_instance_info_for_deploy_nonglance_partition_image( self, validate_href_mock, parse_instance_info_mock): i_info = {} driver_internal_info = self.node.driver_internal_info i_info['image_source'] = 'http://image-ref' i_info['kernel'] = 'http://kernel-ref' i_info['ramdisk'] = 'http://ramdisk-ref' i_info['image_checksum'] = 'aa' i_info['root_gb'] = 10 i_info['configdrive'] = 'configdrive' driver_internal_info['is_whole_disk_image'] = False self.node.instance_info = i_info self.node.driver_internal_info = driver_internal_info self.node.save() validate_href_mock.side_effect = ['http://image-ref', 'http://kernel-ref', 'http://ramdisk-ref'] parse_instance_info_mock.return_value = {'swap_mb': 5} expected_i_info = {'image_source': 'http://image-ref', 'image_url': 'http://image-ref', 'image_type': 'partition', 'kernel': 'http://kernel-ref', 'ramdisk': 
'http://ramdisk-ref', 'image_checksum': 'aa', 'root_gb': 10, 'swap_mb': 5, 'configdrive': 'configdrive'} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: info = utils.build_instance_info_for_deploy(task) self.assertEqual(self.node.instance_info['image_source'], info['image_url']) validate_href_mock.assert_called_once_with( mock.ANY, 'http://image-ref', False) self.assertEqual('partition', info['image_type']) self.assertEqual(expected_i_info, info) parse_instance_info_mock.assert_called_once_with(task.node) @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) def test_build_instance_info_for_deploy_nonsupported_image( self, validate_href_mock): validate_href_mock.side_effect = exception.ImageRefValidationFailed( image_href='file://img.qcow2', reason='fail') i_info = self.node.instance_info i_info['image_source'] = 'file://img.qcow2' i_info['image_checksum'] = 'aa' self.node.instance_info = i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.ImageRefValidationFailed, utils.build_instance_info_for_deploy, task) class TestBuildInstanceInfoForHttpProvisioning(db_base.DbTestCase): def setUp(self): super(TestBuildInstanceInfoForHttpProvisioning, self).setUp() self.node = obj_utils.create_test_node(self.context, boot_interface='pxe', deploy_interface='direct') i_info = self.node.instance_info i_info['image_source'] = '733d1c44-a2ea-414b-aca7-69decf20d810' i_info['root_gb'] = 100 driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = True self.node.driver_internal_info = driver_internal_info self.node.instance_info = i_info self.node.save() self.checksum_mock = self.useFixture(fixtures.MockPatchObject( fileutils, 'compute_file_checksum')).mock self.checksum_mock.return_value = 'fake-checksum' self.cache_image_mock = self.useFixture(fixtures.MockPatchObject( utils, 'cache_instance_image', 
autospec=True)).mock self.cache_image_mock.return_value = ( '733d1c44-a2ea-414b-aca7-69decf20d810', '/var/lib/ironic/images/{}/disk'.format(self.node.uuid)) self.ensure_tree_mock = self.useFixture(fixtures.MockPatchObject( utils.fileutils, 'ensure_tree', autospec=True)).mock self.create_link_mock = self.useFixture(fixtures.MockPatchObject( common_utils, 'create_link_without_raise', autospec=True)).mock cfg.CONF.set_override('http_url', 'http://172.172.24.10:8080', group='deploy') cfg.CONF.set_override('image_download_source', 'http', group='agent') self.expected_url = '/'.join([cfg.CONF.deploy.http_url, cfg.CONF.deploy.http_image_subdir, self.node.uuid]) self.image_info = {'checksum': 'aa', 'disk_format': 'qcow2', 'os_hash_algo': 'sha512', 'os_hash_value': 'fake-sha512', 'container_format': 'bare', 'properties': {}} @mock.patch.object(image_service.HttpImageService, 'validate_href', autospec=True) @mock.patch.object(image_service, 'GlanceImageService', autospec=True) def _test_build_instance_info(self, glance_mock, validate_mock, image_info={}, expect_raw=False): glance_mock.return_value.show = mock.MagicMock(spec_set=[], return_value=image_info) with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: instance_info = utils.build_instance_info_for_deploy(task) glance_mock.assert_called_once_with(context=task.context) glance_mock.return_value.show.assert_called_once_with( self.node.instance_info['image_source']) self.cache_image_mock.assert_called_once_with(task.context, task.node, force_raw=expect_raw) symlink_dir = utils._get_http_image_symlink_dir_path() symlink_file = utils._get_http_image_symlink_file_path( self.node.uuid) image_path = utils._get_image_file_path(self.node.uuid) self.ensure_tree_mock.assert_called_once_with(symlink_dir) self.create_link_mock.assert_called_once_with(image_path, symlink_file) validate_mock.assert_called_once_with(mock.ANY, self.expected_url, secret=True) return image_path, instance_info def 
test_build_instance_info_no_force_raw(self): cfg.CONF.set_override('force_raw_images', False) _, instance_info = self._test_build_instance_info( image_info=self.image_info, expect_raw=False) self.assertEqual(instance_info['image_checksum'], 'aa') self.assertEqual(instance_info['image_disk_format'], 'qcow2') self.assertEqual(instance_info['image_os_hash_algo'], 'sha512') self.assertEqual(instance_info['image_os_hash_value'], 'fake-sha512') self.checksum_mock.assert_not_called() def test_build_instance_info_force_raw(self): cfg.CONF.set_override('force_raw_images', True) image_path, instance_info = self._test_build_instance_info( image_info=self.image_info, expect_raw=True) self.assertIsNone(instance_info['image_checksum']) self.assertEqual(instance_info['image_disk_format'], 'raw') calls = [mock.call(image_path, algorithm='sha512')] self.checksum_mock.assert_has_calls(calls) def test_build_instance_info_force_raw_drops_md5(self): cfg.CONF.set_override('force_raw_images', True) self.image_info['os_hash_algo'] = 'md5' image_path, instance_info = self._test_build_instance_info( image_info=self.image_info, expect_raw=True) self.assertIsNone(instance_info['image_checksum']) self.assertEqual(instance_info['image_disk_format'], 'raw') calls = [mock.call(image_path, algorithm='sha256')] self.checksum_mock.assert_has_calls(calls) class TestStorageInterfaceUtils(db_base.DbTestCase): def setUp(self): super(TestStorageInterfaceUtils, self).setUp() self.node = obj_utils.create_test_node(self.context, driver='fake-hardware') self.config(enabled_storage_interfaces=['noop', 'fake', 'cinder']) def test_check_interface_capability(self): class fake_driver(object): capabilities = ['foo', 'bar'] self.assertTrue(utils.check_interface_capability(fake_driver, 'foo')) self.assertFalse(utils.check_interface_capability(fake_driver, 'baz')) def test_get_remote_boot_volume(self): obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=1, 
volume_id='4321') obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=uuidutils.generate_uuid()) self.node.storage_interface = 'cinder' self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: volume = utils.get_remote_boot_volume(task) self.assertEqual('1234', volume['volume_id']) def test_get_remote_boot_volume_none(self): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertIsNone(utils.get_remote_boot_volume(task)) obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=1, volume_id='4321') with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertIsNone(utils.get_remote_boot_volume(task)) @mock.patch.object(fake.FakeBoot, 'capabilities', ['iscsi_volume_boot'], create=True) @mock.patch.object(fake.FakeDeploy, 'capabilities', ['iscsi_volume_deploy'], create=True) @mock.patch.object(cinder.CinderStorage, 'should_write_image', autospec=True) def test_populate_storage_driver_internal_info_iscsi(self, mock_should_write): mock_should_write.return_value = True vol_uuid = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_uuid) # NOTE(TheJulia): Since the default for the storage_interface # is a noop interface, we need to define another driver that # can be loaded by driver_manager in order to create the task # to test this method. 
self.node.storage_interface = "cinder" self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: driver_utils.add_node_capability(task, 'iscsi_boot', 'True') utils.populate_storage_driver_internal_info(task) self.assertEqual( vol_uuid, task.node.driver_internal_info.get('boot_from_volume', None)) self.assertEqual( vol_uuid, task.node.driver_internal_info.get('boot_from_volume_deploy', None)) @mock.patch.object(fake.FakeBoot, 'capabilities', ['fibre_channel_volume_boot'], create=True) @mock.patch.object(fake.FakeDeploy, 'capabilities', ['fibre_channel_volume_deploy'], create=True) @mock.patch.object(cinder.CinderStorage, 'should_write_image', autospec=True) def test_populate_storage_driver_internal_info_fc(self, mock_should_write): mock_should_write.return_value = True self.node.storage_interface = "cinder" self.node.save() vol_uuid = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='fibre_channel', boot_index=0, volume_id='1234', uuid=vol_uuid) with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: driver_utils.add_node_capability(task, 'fibre_channel_boot', 'True') utils.populate_storage_driver_internal_info(task) self.assertEqual( vol_uuid, task.node.driver_internal_info.get('boot_from_volume', None)) self.assertEqual( vol_uuid, task.node.driver_internal_info.get('boot_from_volume_deploy', None)) @mock.patch.object(fake.FakeBoot, 'capabilities', ['fibre_channel_volume_boot'], create=True) @mock.patch.object(fake.FakeDeploy, 'capabilities', ['fibre_channel_volume_deploy'], create=True) def test_populate_storage_driver_internal_info_error(self): obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234') with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertRaises(exception.StorageError, utils.populate_storage_driver_internal_info, task) 
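The populate-storage tests above encode a matching rule: the volume target's type (`iscsi` or `fibre_channel`) must be advertised as a capability by both the boot and deploy interfaces, otherwise a `StorageError` is expected. A minimal sketch of that matching logic — function and mapping names here are illustrative assumptions, not ironic's actual implementation:

```python
# Illustrative sketch of volume-type/capability matching; the real check
# lives in ironic's deploy utilities and differs in detail.
REQUIRED_CAPS = {
    'iscsi': ('iscsi_volume_boot', 'iscsi_volume_deploy'),
    'fibre_channel': ('fibre_channel_volume_boot',
                      'fibre_channel_volume_deploy'),
}


def check_volume_support(volume_type, boot_caps, deploy_caps):
    """Return True if both interfaces advertise support for volume_type."""
    try:
        boot_needed, deploy_needed = REQUIRED_CAPS[volume_type]
    except KeyError:
        # Unknown volume type: nothing can claim support for it.
        return False
    return boot_needed in boot_caps and deploy_needed in deploy_caps
```

Under this sketch, the error case above corresponds to an `iscsi` target paired with interfaces that only advertise the `fibre_channel_*` capabilities.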
def test_tear_down_storage_configuration(self): vol_uuid = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_uuid) d_i_info = self.node.driver_internal_info d_i_info['boot_from_volume'] = vol_uuid d_i_info['boot_from_volume_deploy'] = vol_uuid self.node.driver_internal_info = d_i_info self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: node = task.node self.assertEqual(1, len(task.volume_targets)) self.assertEqual( vol_uuid, node.driver_internal_info.get('boot_from_volume')) self.assertEqual( vol_uuid, node.driver_internal_info.get('boot_from_volume_deploy')) utils.tear_down_storage_configuration(task) node.refresh() self.assertIsNone( node.driver_internal_info.get('boot_from_volume')) self.assertIsNone( node.driver_internal_info.get('boot_from_volume_deploy')) with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertEqual(0, len(task.volume_targets)) def test_is_iscsi_boot(self): vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id) self.node.driver_internal_info = {'boot_from_volume': vol_id} self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertTrue(utils.is_iscsi_boot(task)) def test_is_iscsi_boot_exception(self): self.node.driver_internal_info = { 'boot_from_volume': uuidutils.generate_uuid()} with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertFalse(utils.is_iscsi_boot(task)) def test_is_iscsi_boot_false(self): with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertFalse(utils.is_iscsi_boot(task)) def test_is_iscsi_boot_false_fc_target(self): vol_id = uuidutils.generate_uuid() obj_utils.create_test_volume_target( self.context, 
node_id=self.node.id, volume_type='fibre_channel', boot_index=0, volume_id='3214', uuid=vol_id) self.node.driver_internal_info.update({'boot_from_volume': vol_id}) self.node.save() with task_manager.acquire( self.context, self.node.uuid, shared=False) as task: self.assertFalse(utils.is_iscsi_boot(task)) class InstanceImageCacheTestCase(db_base.DbTestCase): @mock.patch.object(fileutils, 'ensure_tree') def test_with_master_path(self, mock_ensure_tree): self.config(instance_master_path='/fake/path', group='pxe') self.config(image_cache_size=500, group='pxe') self.config(image_cache_ttl=30, group='pxe') cache = utils.InstanceImageCache() mock_ensure_tree.assert_called_once_with('/fake/path') self.assertEqual(500 * 1024 * 1024, cache._cache_size) self.assertEqual(30 * 60, cache._cache_ttl) @mock.patch.object(fileutils, 'ensure_tree') def test_without_master_path(self, mock_ensure_tree): self.config(instance_master_path='', group='pxe') self.config(image_cache_size=500, group='pxe') self.config(image_cache_ttl=30, group='pxe') cache = utils.InstanceImageCache() mock_ensure_tree.assert_not_called() self.assertEqual(500 * 1024 * 1024, cache._cache_size) self.assertEqual(30 * 60, cache._cache_ttl) class AsyncStepTestCase(db_base.DbTestCase): def setUp(self): super(AsyncStepTestCase, self).setUp() self.node = obj_utils.create_test_node(self.context, driver="fake-hardware") def _test_get_async_step_return_state(self): result = utils.get_async_step_return_state(self.node) if self.node.clean_step: self.assertEqual(states.CLEANWAIT, result) else: self.assertEqual(states.DEPLOYWAIT, result) def test_get_async_step_return_state_cleaning(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_get_async_step_return_state() def test_get_async_step_return_state_deploying(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.save() self._test_get_async_step_return_state() def 
test_set_async_step_flags_cleaning_set_all(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.driver_internal_info = {} expected = {'cleaning_reboot': True, 'cleaning_polling': True, 'skip_current_clean_step': True} self.node.save() utils.set_async_step_flags(self.node, reboot=True, skip_current_step=True, polling=True) self.assertEqual(expected, self.node.driver_internal_info) def test_set_async_step_flags_cleaning_set_one(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.driver_internal_info = {} self.node.save() utils.set_async_step_flags(self.node, reboot=True) self.assertEqual({'cleaning_reboot': True}, self.node.driver_internal_info) def test_set_async_step_flags_deploying_set_all(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.driver_internal_info = { 'agent_secret_token': 'test', 'agent_secret_token_pregenerated': True} expected = {'deployment_reboot': True, 'deployment_polling': True, 'skip_current_deploy_step': True, 'agent_secret_token': 'test', 'agent_secret_token_pregenerated': True} self.node.save() utils.set_async_step_flags(self.node, reboot=True, skip_current_step=True, polling=True) self.assertEqual(expected, self.node.driver_internal_info) def test_set_async_step_flags_deploying_set_one(self): self.node.deploy_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.driver_internal_info = {} self.node.save() utils.set_async_step_flags(self.node, reboot=True) self.assertEqual({'deployment_reboot': True}, self.node.driver_internal_info) def test_set_async_step_flags_clears_non_pregenerated_token(self): self.node.clean_step = {'step': 'create_configuration', 'interface': 'raid'} self.node.driver_internal_info = {'agent_secret_token': 'test'} expected = {'cleaning_reboot': True, 'cleaning_polling': True, 'skip_current_clean_step': True} self.node.save() utils.set_async_step_flags(self.node, reboot=True, 
skip_current_step=True,
                               polling=True)
        self.assertEqual(expected, self.node.driver_internal_info)

ironic-15.0.0/ironic/tests/unit/drivers/modules/intel_ipmi/base.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Test base class for the IntelIPMI driver."""

from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


class IntelIPMITestCase(db_base.DbTestCase):
    def setUp(self):
        super(IntelIPMITestCase, self).setUp()
        self.driver_info = db_utils.get_test_ipmi_info()
        self.config(enabled_hardware_types=['intel-ipmi'],
                    enabled_management_interfaces=['intel-ipmitool'],
                    enabled_power_interfaces=['ipmitool'])
        self.node = obj_utils.create_test_node(
            self.context, driver='intel-ipmi',
            driver_info=self.driver_info)

ironic-15.0.0/ironic/tests/unit/drivers/modules/intel_ipmi/test_management.py

# Copyright 2015, Cisco Systems.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers.modules import ipmitool from ironic.tests.unit.drivers.modules.intel_ipmi import base class IntelIPMIManagementTestCase(base.IntelIPMITestCase): def test_configure_intel_speedselect_empty(self): with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises( exception.InvalidParameterValue, task.driver.management.configure_intel_speedselect, task) @mock.patch.object(ipmitool, "send_raw", spec_set=True, autospec=True) def test_configure_intel_speedselect(self, send_raw_mock): send_raw_mock.return_value = [None, None] config = {"intel_speedselect_config": "0x02", "socket_count": 1} with task_manager.acquire(self.context, self.node.uuid) as task: ret = task.driver.management.configure_intel_speedselect(task, **config) self.assertIsNone(ret) send_raw_mock.assert_called_once_with(task, '0x2c 0x41 0x04 0x00 0x00 0x02') @mock.patch.object(ipmitool, "send_raw", spec_set=True, autospec=True) def test_configure_intel_speedselect_more_socket(self, send_raw_mock): send_raw_mock.return_value = [None, None] config = {"intel_speedselect_config": "0x02", "socket_count": 4} with task_manager.acquire(self.context, self.node.uuid) as task: ret = task.driver.management.configure_intel_speedselect(task, **config) self.assertIsNone(ret) self.assertEqual(send_raw_mock.call_count, 4) calls = [ mock.call(task, '0x2c 0x41 0x04 0x00 0x00 0x02'), mock.call(task, '0x2c 0x41 0x04 0x00 0x01 0x02'), mock.call(task, '0x2c 0x41 0x04 0x00 0x02 0x02'), mock.call(task, '0x2c 
0x41 0x04 0x00 0x03 0x02')
        ]
        send_raw_mock.assert_has_calls(calls)

    @mock.patch.object(ipmitool, "send_raw", spec_set=True, autospec=True)
    def test_configure_intel_speedselect_error(self, send_raw_mock):
        send_raw_mock.side_effect = exception.IPMIFailure('err')
        config = {"intel_speedselect_config": "0x02", "socket_count": 1}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaisesRegex(
                exception.IPMIFailure,
                "Failed to set Intel SST-PP configuration",
                task.driver.management.configure_intel_speedselect,
                task, **config)

    def test_configure_intel_speedselect_invalid_input(self):
        config = {"intel_speedselect_config": "0", "socket_count": 1}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(
                exception.InvalidParameterValue,
                task.driver.management.configure_intel_speedselect,
                task, **config)
        for value in (-1, None):
            config = {"intel_speedselect_config": "0x00",
                      "socket_count": value}
            with task_manager.acquire(self.context, self.node.uuid) as task:
                self.assertRaises(
                    exception.InvalidParameterValue,
                    task.driver.management.configure_intel_speedselect,
                    task, **config)

ironic-15.0.0/ironic/tests/unit/drivers/modules/intel_ipmi/test_intel_ipmi.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
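The speedselect tests earlier expect one raw IPMI request per CPU socket, with the socket index in the fifth byte and the configuration level in the sixth (`0x2c 0x41 0x04 0x00 <socket> <config>`). A small sketch reconstructing that command layout from the expected mock calls — the helper name is an assumption, not ironic's actual code:

```python
# Illustrative reconstruction of the raw-byte strings the speedselect
# tests assert on; ironic's management interface may assemble them
# differently.
def build_sst_pp_commands(config, socket_count):
    """Build one raw IPMI command string per socket.

    Bytes: 0x2c 0x41 (Intel OEM set), 0x04 0x00 (SST-PP selector),
    then the zero-based socket index and the requested config level.
    """
    return ['0x2c 0x41 0x04 0x00 0x%02x %s' % (socket, config)
            for socket in range(socket_count)]
```

For `config='0x02'` and `socket_count=4` this yields the four strings asserted in `test_configure_intel_speedselect_more_socket` above.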
from ironic.conductor import task_manager from ironic.drivers.modules import agent from ironic.drivers.modules.intel_ipmi import management as intel_management from ironic.drivers.modules import ipmitool from ironic.drivers.modules import iscsi_deploy from ironic.drivers.modules import noop from ironic.drivers.modules import pxe from ironic.drivers.modules.storage import cinder from ironic.drivers.modules.storage import noop as noop_storage from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class IntelIPMIHardwareTestCase(db_base.DbTestCase): def setUp(self): super(IntelIPMIHardwareTestCase, self).setUp() self.config(enabled_hardware_types=['intel-ipmi'], enabled_power_interfaces=['ipmitool'], enabled_management_interfaces=['intel-ipmitool', 'noop'], enabled_raid_interfaces=['no-raid', 'agent'], enabled_console_interfaces=['no-console'], enabled_vendor_interfaces=['ipmitool', 'no-vendor']) def _validate_interfaces(self, task, **kwargs): self.assertIsInstance( task.driver.management, kwargs.get('management', intel_management.IntelIPMIManagement)) self.assertIsInstance( task.driver.power, kwargs.get('power', ipmitool.IPMIPower)) self.assertIsInstance( task.driver.boot, kwargs.get('boot', pxe.PXEBoot)) self.assertIsInstance( task.driver.deploy, kwargs.get('deploy', iscsi_deploy.ISCSIDeploy)) self.assertIsInstance( task.driver.console, kwargs.get('console', noop.NoConsole)) self.assertIsInstance( task.driver.raid, kwargs.get('raid', noop.NoRAID)) self.assertIsInstance( task.driver.vendor, kwargs.get('vendor', ipmitool.VendorPassthru)) self.assertIsInstance( task.driver.storage, kwargs.get('storage', noop_storage.NoopStorage)) self.assertIsInstance( task.driver.rescue, kwargs.get('rescue', noop.NoRescue)) def test_default_interfaces(self): node = obj_utils.create_test_node(self.context, driver='intel-ipmi') with task_manager.acquire(self.context, node.id) as task: self._validate_interfaces(task) def 
test_override_with_shellinabox(self):
        self.config(enabled_console_interfaces=['ipmitool-shellinabox',
                                                'ipmitool-socat'])
        node = obj_utils.create_test_node(
            self.context, driver='intel-ipmi',
            deploy_interface='direct',
            raid_interface='agent',
            console_interface='ipmitool-shellinabox',
            vendor_interface='no-vendor')
        with task_manager.acquire(self.context, node.id) as task:
            self._validate_interfaces(
                task,
                deploy=agent.AgentDeploy,
                console=ipmitool.IPMIShellinaboxConsole,
                raid=agent.AgentRAID,
                vendor=noop.NoVendor)

    def test_override_with_cinder_storage(self):
        self.config(enabled_storage_interfaces=['noop', 'cinder'])
        node = obj_utils.create_test_node(
            self.context, driver='intel-ipmi',
            storage_interface='cinder')
        with task_manager.acquire(self.context, node.id) as task:
            self._validate_interfaces(task, storage=cinder.CinderStorage)

    def test_override_with_agent_rescue(self):
        self.config(enabled_rescue_interfaces=['no-rescue', 'agent'])
        node = obj_utils.create_test_node(
            self.context, driver='intel-ipmi',
            rescue_interface='agent')
        with task_manager.acquire(self.context, node.id) as task:
            self._validate_interfaces(task, rescue=agent.AgentRescue)

ironic-15.0.0/ironic/tests/unit/drivers/modules/intel_ipmi/__init__.py

ironic-15.0.0/ironic/tests/unit/drivers/modules/network/test_noop.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironic.conductor import task_manager
from ironic.drivers.modules.network import noop
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils


class NoopInterfaceTestCase(db_base.DbTestCase):

    def setUp(self):
        super(NoopInterfaceTestCase, self).setUp()
        self.interface = noop.NoopNetwork()
        self.node = utils.create_test_node(self.context,
                                           network_interface='noop')
        self.port = utils.create_test_port(
            self.context, node_id=self.node.id, address='52:54:00:cf:2d:32')

    def test_get_properties(self):
        result = self.interface.get_properties()
        self.assertEqual({}, result)

    def test_validate(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.validate(task)

    def test_port_changed(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.port_changed(task, self.port)

    def test_portgroup_changed(self):
        portgroup = utils.create_test_portgroup(self.context)
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.portgroup_changed(task, portgroup)

    def test_vif_attach(self):
        vif = {'id': 'vif-id'}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_attach(task, vif)

    def test_vif_detach(self):
        vif_id = 'vif-id'
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_detach(task, vif_id)

    def test_vif_list(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            result = self.interface.vif_list(task)
        self.assertEqual([], result)

    def test_get_current_vif(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            result = self.interface.get_current_vif(task, self.port)
        self.assertIsNone(result)

    def test_add_provisioning_network(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.add_provisioning_network(task)

    def test_remove_provisioning_network(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.remove_provisioning_network(task)

    def test_configure_tenant_networks(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.configure_tenant_networks(task)

    def test_unconfigure_tenant_networks(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.unconfigure_tenant_networks(task)

    def test_add_cleaning_network(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.add_cleaning_network(task)

    def test_remove_cleaning_network(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.remove_cleaning_network(task)

    def test_add_inspection_network(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.add_inspection_network(task)

    def test_remove_inspection_network(self):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.remove_inspection_network(task)

ironic-15.0.0/ironic/tests/unit/drivers/modules/network/test_neutron.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy import mock from neutronclient.common import exceptions as neutron_exceptions from oslo_config import cfg from oslo_utils import uuidutils from ironic.common import exception from ironic.common import neutron as neutron_common from ironic.common import states from ironic.conductor import task_manager from ironic.drivers import base as drivers_base from ironic.drivers.modules.network import neutron from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils CONF = cfg.CONF CLIENT_ID1 = '20:00:55:04:01:fe:80:00:00:00:00:00:00:00:02:c9:02:00:23:13:92' CLIENT_ID2 = '20:00:55:04:01:fe:80:00:00:00:00:00:00:00:02:c9:02:00:23:13:93' VIFMIXINPATH = 'ironic.drivers.modules.network.common.NeutronVIFPortIDMixin' class NeutronInterfaceTestCase(db_base.DbTestCase): def setUp(self): super(NeutronInterfaceTestCase, self).setUp() self.config(enabled_hardware_types=['fake-hardware']) for iface in drivers_base.ALL_INTERFACES: name = 'fake' if iface == 'network': name = 'neutron' config_kwarg = {'enabled_%s_interfaces' % iface: [name], 'default_%s_interface' % iface: name} self.config(**config_kwarg) self.interface = neutron.NeutronNetwork() self.node = utils.create_test_node(self.context, driver='fake-hardware', network_interface='neutron') self.port = utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:32', extra={'vif_port_id': uuidutils.generate_uuid()}) self.neutron_port = {'id': '132f871f-eaec-4fed-9475-0d54465e0f00', 'mac_address': '52:54:00:cf:2d:32'} @mock.patch('%s.vif_list' % VIFMIXINPATH) def test_vif_list(self, mock_vif_list): with task_manager.acquire(self.context, self.node.id) as task: self.interface.vif_list(task) mock_vif_list.assert_called_once_with(task) @mock.patch('%s.vif_attach' % VIFMIXINPATH) def test_vif_attach(self, mock_vif_attach): vif = mock.MagicMock() with task_manager.acquire(self.context, self.node.id) as task: self.interface.vif_attach(task, vif) 
mock_vif_attach.assert_called_once_with(task, vif) @mock.patch('%s.vif_detach' % VIFMIXINPATH) def test_vif_detach(self, mock_vif_detach): vif_id = "vif" with task_manager.acquire(self.context, self.node.id) as task: self.interface.vif_detach(task, vif_id) mock_vif_detach.assert_called_once_with(task, vif_id) @mock.patch('%s.port_changed' % VIFMIXINPATH) def test_vif_port_changed(self, mock_p_changed): port = mock.MagicMock() with task_manager.acquire(self.context, self.node.id) as task: self.interface.port_changed(task, port) mock_p_changed.assert_called_once_with(task, port) def test_init_incorrect_provisioning_net(self): self.config(provisioning_network=None, group='neutron') self.assertRaises(exception.DriverLoadError, neutron.NeutronNetwork) self.config(provisioning_network=uuidutils.generate_uuid(), group='neutron') self.config(cleaning_network=None, group='neutron') self.assertRaises(exception.DriverLoadError, neutron.NeutronNetwork) @mock.patch.object(neutron_common, 'validate_network', autospec=True) def test_validate(self, validate_mock): with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate(task) self.assertEqual([mock.call(CONF.neutron.cleaning_network, 'cleaning network', context=task.context), mock.call(CONF.neutron.provisioning_network, 'provisioning network', context=task.context)], validate_mock.call_args_list) @mock.patch.object(neutron_common, 'validate_network', autospec=True) def test_validate_boot_option_netboot(self, validate_mock): driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = True self.node.driver_internal_info = driver_internal_info boot_option = {'capabilities': '{"boot_option": "netboot"}'} self.node.instance_info = boot_option self.node.provision_state = states.DEPLOYING self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex( exception.InvalidParameterValue, 'cannot perform "local" boot for whole disk 
image', self.interface.validate, task) self.assertEqual([mock.call(CONF.neutron.cleaning_network, 'cleaning network', context=task.context), mock.call(CONF.neutron.provisioning_network, 'provisioning network', context=task.context)], validate_mock.call_args_list) @mock.patch.object(neutron_common, 'validate_network', autospec=True) def test_validate_boot_option_netboot_no_exc(self, validate_mock): CONF.set_override('default_boot_option', 'netboot', 'deploy') driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = True self.node.driver_internal_info = driver_internal_info self.node.provision_state = states.AVAILABLE self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate(task) self.assertEqual([mock.call(CONF.neutron.cleaning_network, 'cleaning network', context=task.context), mock.call(CONF.neutron.provisioning_network, 'provisioning network', context=task.context)], validate_mock.call_args_list) @mock.patch.object(neutron_common, 'validate_network', autospec=True) def test_validate_boot_option_local(self, validate_mock): driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = True self.node.driver_internal_info = driver_internal_info boot_option = {'capabilities': '{"boot_option": "local"}'} self.node.instance_info = boot_option self.node.provision_state = states.DEPLOYING self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate(task) self.assertEqual([mock.call(CONF.neutron.cleaning_network, 'cleaning network', context=task.context), mock.call(CONF.neutron.provisioning_network, 'provisioning network', context=task.context)], validate_mock.call_args_list) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def 
test_add_provisioning_network(self, add_ports_mock, rollback_mock, validate_mock): self.port.internal_info = {'provisioning_vif_port_id': 'vif-port-id'} self.port.save() add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} with task_manager.acquire(self.context, self.node.id) as task: self.interface.add_provisioning_network(task) rollback_mock.assert_called_once_with( task, CONF.neutron.provisioning_network) add_ports_mock.assert_called_once_with( task, CONF.neutron.provisioning_network, security_groups=[]) validate_mock.assert_called_once_with( CONF.neutron.provisioning_network, 'provisioning network', context=task.context) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['provisioning_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_provisioning_network_from_node(self, add_ports_mock, rollback_mock, validate_mock): self.port.internal_info = {'provisioning_vif_port_id': 'vif-port-id'} self.port.save() add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} # Make sure that changing the network UUID works for provisioning_network_uuid in [ '3aea0de6-4b92-44da-9aa0-52d134c83fdf', '438be438-6aae-4fb1-bbcb-613ad7a38286']: validate_mock.reset_mock() driver_info = self.node.driver_info driver_info['provisioning_network'] = provisioning_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.add_provisioning_network(task) rollback_mock.assert_called_with( task, provisioning_network_uuid) add_ports_mock.assert_called_with( task, provisioning_network_uuid, security_groups=[]) validate_mock.assert_called_once_with( provisioning_network_uuid, 'provisioning network', context=task.context) self.port.refresh() 
self.assertEqual(self.neutron_port['id'], self.port.internal_info['provisioning_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_provisioning_network_with_sg(self, add_ports_mock, rollback_mock): sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) self.config(provisioning_network_security_groups=sg_ids, group='neutron') add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} with task_manager.acquire(self.context, self.node.id) as task: self.interface.add_provisioning_network(task) rollback_mock.assert_called_once_with( task, CONF.neutron.provisioning_network) add_ports_mock.assert_called_once_with( task, CONF.neutron.provisioning_network, security_groups=( CONF.neutron.provisioning_network_security_groups)) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['provisioning_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'remove_ports_from_network') def test_remove_provisioning_network(self, remove_ports_mock, validate_mock): self.port.internal_info = {'provisioning_vif_port_id': 'vif-port-id'} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.remove_provisioning_network(task) remove_ports_mock.assert_called_once_with( task, CONF.neutron.provisioning_network) validate_mock.assert_called_once_with( CONF.neutron.provisioning_network, 'provisioning network', context=task.context) self.port.refresh() self.assertNotIn('provisioning_vif_port_id', self.port.internal_info) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'remove_ports_from_network') def test_remove_provisioning_network_from_node(self, 
remove_ports_mock, validate_mock): self.port.internal_info = {'provisioning_vif_port_id': 'vif-port-id'} self.port.save() provisioning_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83f9c' driver_info = self.node.driver_info driver_info['provisioning_network'] = provisioning_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.remove_provisioning_network(task) remove_ports_mock.assert_called_once_with( task, provisioning_network_uuid) validate_mock.assert_called_once_with( provisioning_network_uuid, 'provisioning network', context=task.context) self.port.refresh() self.assertNotIn('provisioning_vif_port_id', self.port.internal_info) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_cleaning_network(self, add_ports_mock, rollback_mock, validate_mock): add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_cleaning_network(task) rollback_mock.assert_called_once_with( task, CONF.neutron.cleaning_network) self.assertEqual(res, add_ports_mock.return_value) validate_mock.assert_called_once_with( CONF.neutron.cleaning_network, 'cleaning network', context=task.context) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['cleaning_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_cleaning_network_from_node(self, add_ports_mock, rollback_mock, validate_mock): add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} # Make sure that changing the network UUID works for 
cleaning_network_uuid in ['3aea0de6-4b92-44da-9aa0-52d134c83fdf', '438be438-6aae-4fb1-bbcb-613ad7a38286']: validate_mock.reset_mock() driver_info = self.node.driver_info driver_info['cleaning_network'] = cleaning_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_cleaning_network(task) rollback_mock.assert_called_with(task, cleaning_network_uuid) self.assertEqual(res, add_ports_mock.return_value) validate_mock.assert_called_once_with( cleaning_network_uuid, 'cleaning network', context=task.context) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['cleaning_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_cleaning_network_with_sg(self, add_ports_mock, rollback_mock): add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) self.config(cleaning_network_security_groups=sg_ids, group='neutron') with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_cleaning_network(task) add_ports_mock.assert_called_once_with( task, CONF.neutron.cleaning_network, security_groups=CONF.neutron.cleaning_network_security_groups) rollback_mock.assert_called_once_with( task, CONF.neutron.cleaning_network) self.assertEqual(res, add_ports_mock.return_value) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['cleaning_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'remove_ports_from_network') def test_remove_cleaning_network(self, remove_ports_mock, validate_mock): self.port.internal_info = {'cleaning_vif_port_id': 'vif-port-id'} 
self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.remove_cleaning_network(task) remove_ports_mock.assert_called_once_with( task, CONF.neutron.cleaning_network) validate_mock.assert_called_once_with( CONF.neutron.cleaning_network, 'cleaning network', context=task.context) self.port.refresh() self.assertNotIn('cleaning_vif_port_id', self.port.internal_info) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'remove_ports_from_network') def test_remove_cleaning_network_from_node(self, remove_ports_mock, validate_mock): self.port.internal_info = {'cleaning_vif_port_id': 'vif-port-id'} self.port.save() cleaning_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83fdf' driver_info = self.node.driver_info driver_info['cleaning_network'] = cleaning_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.remove_cleaning_network(task) remove_ports_mock.assert_called_once_with( task, cleaning_network_uuid) validate_mock.assert_called_once_with( cleaning_network_uuid, 'cleaning network', context=task.context) self.port.refresh() self.assertNotIn('cleaning_vif_port_id', self.port.internal_info) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) def test_validate_rescue(self, validate_mock): rescuing_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83fdf' driver_info = self.node.driver_info driver_info['rescuing_network'] = rescuing_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate_rescue(task) validate_mock.assert_called_once_with( rescuing_network_uuid, 'rescuing network', context=task.context), def test_validate_rescue_exc(self): self.config(rescuing_network="", group='neutron') with task_manager.acquire(self.context, 
self.node.id) as task: self.assertRaisesRegex(exception.MissingParameterValue, 'rescuing network is not set', self.interface.validate_rescue, task) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_rescuing_network(self, add_ports_mock, rollback_mock, validate_mock): other_port = utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:33', uuid=uuidutils.generate_uuid(), extra={'vif_port_id': uuidutils.generate_uuid()}) neutron_other_port = {'id': uuidutils.generate_uuid(), 'mac_address': '52:54:00:cf:2d:33'} add_ports_mock.return_value = { other_port.uuid: neutron_other_port['id']} with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_rescuing_network(task) add_ports_mock.assert_called_once_with( task, CONF.neutron.rescuing_network, security_groups=[]) rollback_mock.assert_called_once_with( task, CONF.neutron.rescuing_network) self.assertEqual(add_ports_mock.return_value, res) validate_mock.assert_called_once_with( CONF.neutron.rescuing_network, 'rescuing network', context=task.context) other_port.refresh() self.assertEqual(neutron_other_port['id'], other_port.internal_info['rescuing_vif_port_id']) self.assertNotIn('rescuing_vif_port_id', self.port.internal_info) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_rescuing_network_from_node(self, add_ports_mock, rollback_mock, validate_mock): other_port = utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:33', uuid=uuidutils.generate_uuid(), extra={'vif_port_id': uuidutils.generate_uuid()}) neutron_other_port = {'id': uuidutils.generate_uuid(), 'mac_address': '52:54:00:cf:2d:33'} 
add_ports_mock.return_value = { other_port.uuid: neutron_other_port['id']} rescuing_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83fdf' driver_info = self.node.driver_info driver_info['rescuing_network'] = rescuing_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_rescuing_network(task) add_ports_mock.assert_called_once_with( task, rescuing_network_uuid, security_groups=[]) rollback_mock.assert_called_once_with( task, rescuing_network_uuid) self.assertEqual(add_ports_mock.return_value, res) validate_mock.assert_called_once_with( rescuing_network_uuid, 'rescuing network', context=task.context) other_port.refresh() self.assertEqual(neutron_other_port['id'], other_port.internal_info['rescuing_vif_port_id']) self.assertNotIn('rescuing_vif_port_id', self.port.internal_info) @mock.patch.object(neutron_common, 'validate_network', lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_rescuing_network_with_sg(self, add_ports_mock, rollback_mock): add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) self.config(rescuing_network_security_groups=sg_ids, group='neutron') with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_rescuing_network(task) add_ports_mock.assert_called_once_with( task, CONF.neutron.rescuing_network, security_groups=CONF.neutron.rescuing_network_security_groups) rollback_mock.assert_called_once_with( task, CONF.neutron.rescuing_network) self.assertEqual(add_ports_mock.return_value, res) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['rescuing_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 
'remove_ports_from_network') def test_remove_rescuing_network(self, remove_ports_mock, validate_mock): other_port = utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:33', uuid=uuidutils.generate_uuid(), extra={'vif_port_id': uuidutils.generate_uuid()}) other_port.internal_info = {'rescuing_vif_port_id': 'vif-port-id'} other_port.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.remove_rescuing_network(task) remove_ports_mock.assert_called_once_with( task, CONF.neutron.rescuing_network) validate_mock.assert_called_once_with( CONF.neutron.rescuing_network, 'rescuing network', context=task.context) other_port.refresh() self.assertNotIn('rescuing_vif_port_id', self.port.internal_info) self.assertNotIn('rescuing_vif_port_id', other_port.internal_info) @mock.patch.object(neutron_common, 'unbind_neutron_port') def test_unconfigure_tenant_networks(self, mock_unbind_port): with task_manager.acquire(self.context, self.node.id) as task: self.interface.unconfigure_tenant_networks(task) mock_unbind_port.assert_called_once_with( self.port.extra['vif_port_id'], context=task.context) @mock.patch.object(neutron_common, 'get_client') @mock.patch.object(neutron_common, 'wait_for_host_agent') @mock.patch.object(neutron_common, 'unbind_neutron_port') def test_unconfigure_tenant_networks_smartnic( self, mock_unbind_port, wait_agent_mock, client_mock): nclient = mock.MagicMock() client_mock.return_value = nclient local_link_connection = self.port.local_link_connection local_link_connection['hostname'] = 'hostname' self.port.local_link_connection = local_link_connection self.port.is_smartnic = True self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.unconfigure_tenant_networks(task) mock_unbind_port.assert_called_once_with( self.port.extra['vif_port_id'], context=task.context) wait_agent_mock.assert_called_once_with(nclient, 'hostname') def 
test_configure_tenant_networks_no_ports_for_node(self): n = utils.create_test_node(self.context, network_interface='neutron', uuid=uuidutils.generate_uuid()) with task_manager.acquire(self.context, n.id) as task: self.assertRaisesRegex( exception.NetworkError, 'No ports are associated', self.interface.configure_tenant_networks, task) @mock.patch.object(neutron_common, 'get_client') @mock.patch.object(neutron, 'LOG') def test_configure_tenant_networks_no_vif_id(self, log_mock, client_mock): self.port.extra = {} self.port.save() upd_mock = mock.Mock() client_mock.return_value.update_port = upd_mock with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex(exception.NetworkError, 'No neutron ports or portgroups are ' 'associated with node', self.interface.configure_tenant_networks, task) client_mock.assert_called_once_with(context=task.context) upd_mock.assert_not_called() self.assertIn('No neutron ports or portgroups are associated with', log_mock.error.call_args[0][0]) @mock.patch.object(neutron_common, 'wait_for_host_agent', autospec=True) @mock.patch.object(neutron_common, 'update_neutron_port') @mock.patch.object(neutron_common, 'get_client') @mock.patch.object(neutron, 'LOG') def test_configure_tenant_networks_multiple_ports_one_vif_id( self, log_mock, client_mock, update_mock, wait_agent_mock): expected_body = { 'port': { 'binding:vnic_type': 'baremetal', 'binding:host_id': self.node.uuid, 'binding:profile': {'local_link_information': [self.port.local_link_connection]}, 'mac_address': '52:54:00:cf:2d:32' } } with task_manager.acquire(self.context, self.node.id) as task: self.interface.configure_tenant_networks(task) client_mock.assert_called_once_with(context=task.context) update_mock.assert_called_once_with(self.context, self.port.extra['vif_port_id'], expected_body) @mock.patch.object(neutron_common, 'wait_for_host_agent', autospec=True) @mock.patch.object(neutron_common, 'update_neutron_port') @mock.patch.object(neutron_common, 
'get_client') def test_configure_tenant_networks_update_fail(self, client_mock, update_mock, wait_agent_mock): update_mock.side_effect = neutron_exceptions.ConnectionFailed( reason='meow') with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex( exception.NetworkError, 'Could not add', self.interface.configure_tenant_networks, task) client_mock.assert_called_once_with(context=task.context) @mock.patch.object(neutron_common, 'wait_for_host_agent', autospec=True) @mock.patch.object(neutron_common, 'update_neutron_port') @mock.patch.object(neutron_common, 'get_client') def _test_configure_tenant_networks(self, client_mock, update_mock, wait_agent_mock, is_client_id=False, vif_int_info=False): if vif_int_info: kwargs = {'internal_info': { 'tenant_vif_port_id': uuidutils.generate_uuid()}} self.port.internal_info = { 'tenant_vif_port_id': self.port.extra['vif_port_id']} self.port.extra = {} else: kwargs = {'extra': {'vif_port_id': uuidutils.generate_uuid()}} second_port = utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:33', uuid=uuidutils.generate_uuid(), local_link_connection={'switch_id': '0a:1b:2c:3d:4e:ff', 'port_id': 'Ethernet1/1', 'switch_info': 'switch2'}, **kwargs ) if is_client_id: client_ids = (CLIENT_ID1, CLIENT_ID2) ports = (self.port, second_port) for port, client_id in zip(ports, client_ids): extra = port.extra extra['client-id'] = client_id port.extra = extra port.save() expected_body = { 'port': { 'binding:vnic_type': 'baremetal', 'binding:host_id': self.node.uuid, } } port1_body = copy.deepcopy(expected_body) port1_body['port']['binding:profile'] = { 'local_link_information': [self.port.local_link_connection] } port1_body['port']['mac_address'] = '52:54:00:cf:2d:32' port2_body = copy.deepcopy(expected_body) port2_body['port']['binding:profile'] = { 'local_link_information': [second_port.local_link_connection] } port2_body['port']['mac_address'] = '52:54:00:cf:2d:33' if is_client_id: 
port1_body['port']['extra_dhcp_opts'] = ( [{'opt_name': '61', 'opt_value': client_ids[0]}]) port2_body['port']['extra_dhcp_opts'] = ( [{'opt_name': '61', 'opt_value': client_ids[1]}]) with task_manager.acquire(self.context, self.node.id) as task: self.interface.configure_tenant_networks(task) client_mock.assert_called_once_with(context=task.context) if vif_int_info: portid1 = self.port.internal_info['tenant_vif_port_id'] portid2 = second_port.internal_info['tenant_vif_port_id'] else: portid1 = self.port.extra['vif_port_id'] portid2 = second_port.extra['vif_port_id'] update_mock.assert_has_calls( [mock.call(self.context, portid1, port1_body), mock.call(self.context, portid2, port2_body)], any_order=True ) def test_configure_tenant_networks_vif_extra(self): self.node.instance_uuid = uuidutils.generate_uuid() self.node.save() self._test_configure_tenant_networks() def test_configure_tenant_networks_vif_int_info(self): self.node.instance_uuid = uuidutils.generate_uuid() self.node.save() self._test_configure_tenant_networks(vif_int_info=True) def test_configure_tenant_networks_no_instance_uuid(self): self._test_configure_tenant_networks() def test_configure_tenant_networks_with_client_id(self): self.node.instance_uuid = uuidutils.generate_uuid() self.node.save() self._test_configure_tenant_networks(is_client_id=True) @mock.patch.object(neutron_common, 'wait_for_host_agent', autospec=True) @mock.patch.object(neutron_common, 'update_neutron_port', autospec=True) @mock.patch.object(neutron_common, 'get_client', autospec=True) @mock.patch.object(neutron_common, 'get_local_group_information', autospec=True) def test_configure_tenant_networks_with_portgroups( self, glgi_mock, client_mock, update_mock, wait_agent_mock): pg = utils.create_test_portgroup( self.context, node_id=self.node.id, address='ff:54:00:cf:2d:32', extra={'vif_port_id': uuidutils.generate_uuid()}) port1 = utils.create_test_port( self.context, node_id=self.node.id, address='ff:54:00:cf:2d:33', 
uuid=uuidutils.generate_uuid(), portgroup_id=pg.id, local_link_connection={'switch_id': '0a:1b:2c:3d:4e:ff', 'port_id': 'Ethernet1/1', 'switch_info': 'switch2'} ) port2 = utils.create_test_port( self.context, node_id=self.node.id, address='ff:54:00:cf:2d:34', uuid=uuidutils.generate_uuid(), portgroup_id=pg.id, local_link_connection={'switch_id': '0a:1b:2c:3d:4e:ff', 'port_id': 'Ethernet1/2', 'switch_info': 'switch2'} ) local_group_info = {'a': 'b'} glgi_mock.return_value = local_group_info expected_body = { 'port': { 'binding:vnic_type': 'baremetal', 'binding:host_id': self.node.uuid, } } call1_body = copy.deepcopy(expected_body) call1_body['port']['binding:profile'] = { 'local_link_information': [self.port.local_link_connection], } call1_body['port']['mac_address'] = '52:54:00:cf:2d:32' call2_body = copy.deepcopy(expected_body) call2_body['port']['binding:profile'] = { 'local_link_information': [port1.local_link_connection, port2.local_link_connection], 'local_group_information': local_group_info } call2_body['port']['mac_address'] = 'ff:54:00:cf:2d:32' with task_manager.acquire(self.context, self.node.id) as task: # Override task.portgroups here, to have ability to check # that mocked get_local_group_information was called with # this portgroup object. 
task.portgroups = [pg] self.interface.configure_tenant_networks(task) client_mock.assert_called_once_with(context=task.context) glgi_mock.assert_called_once_with(task, pg) update_mock.assert_has_calls( [mock.call(self.context, self.port.extra['vif_port_id'], call1_body), mock.call(self.context, pg.extra['vif_port_id'], call2_body)] ) def test_need_power_on_true(self): self.port.is_smartnic = True self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertTrue(self.interface.need_power_on(task)) def test_need_power_on_false(self): with task_manager.acquire(self.context, self.node.id) as task: self.assertFalse(self.interface.need_power_on(task)) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_inspection_network(self, add_ports_mock, rollback_mock, validate_mock): add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_inspection_network(task) rollback_mock.assert_called_once_with( task, CONF.neutron.inspection_network) self.assertEqual(res, add_ports_mock.return_value) validate_mock.assert_called_once_with( CONF.neutron.inspection_network, 'inspection network', context=task.context) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['inspection_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_inspection_network_from_node(self, add_ports_mock, rollback_mock, validate_mock): add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} # Make sure that changing the network UUID works for inspection_network_uuid in [ 
'3aea0de6-4b92-44da-9aa0-52d134c83fdf', '438be438-6aae-4fb1-bbcb-613ad7a38286']: validate_mock.reset_mock() driver_info = self.node.driver_info driver_info['inspection_network'] = inspection_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_inspection_network(task) rollback_mock.assert_called_with(task, inspection_network_uuid) self.assertEqual(res, add_ports_mock.return_value) validate_mock.assert_called_once_with( inspection_network_uuid, 'inspection network', context=task.context) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['inspection_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', lambda n, t, context=None: n) @mock.patch.object(neutron_common, 'rollback_ports') @mock.patch.object(neutron_common, 'add_ports_to_network') def test_add_inspection_network_with_sg(self, add_ports_mock, rollback_mock): add_ports_mock.return_value = {self.port.uuid: self.neutron_port['id']} sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) self.config(inspection_network_security_groups=sg_ids, group='neutron') sg = CONF.neutron.inspection_network_security_groups with task_manager.acquire(self.context, self.node.id) as task: res = self.interface.add_inspection_network(task) add_ports_mock.assert_called_once_with( task, CONF.neutron.inspection_network, security_groups=sg) rollback_mock.assert_called_once_with( task, CONF.neutron.inspection_network) self.assertEqual(res, add_ports_mock.return_value) self.port.refresh() self.assertEqual(self.neutron_port['id'], self.port.internal_info['inspection_vif_port_id']) @mock.patch.object(neutron_common, 'validate_network', side_effect=lambda n, t, context=None: n) def test_validate_inspection(self, validate_mock): inspection_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83fdf' driver_info = self.node.driver_info driver_info['inspection_network'] = 
inspection_network_uuid self.node.driver_info = driver_info self.node.save() with task_manager.acquire(self.context, self.node.id) as task: self.interface.validate_inspection(task) validate_mock.assert_called_once_with( inspection_network_uuid, 'inspection network', context=task.context) def test_validate_inspection_exc(self): self.config(inspection_network="", group='neutron') with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.UnsupportedDriverExtension, self.interface.validate_inspection, task) ironic-15.0.0/ironic/tests/unit/drivers/modules/network/test_common.py # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
import mock from oslo_config import cfg from oslo_utils import uuidutils from ironic.common import exception from ironic.common import neutron as neutron_common from ironic.common import states from ironic.conductor import task_manager from ironic.drivers.modules.network import common from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils CONF = cfg.CONF class TestCommonFunctions(db_base.DbTestCase): def setUp(self): super(TestCommonFunctions, self).setUp() self.node = obj_utils.create_test_node(self.context, network_interface='neutron') self.port = obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:32') self.vif_id = "fake_vif_id" self.client = mock.MagicMock() def _objects_setup(self, set_physnets): pg1 = obj_utils.create_test_portgroup( self.context, node_id=self.node.id) pg1_ports = [] # This portgroup contains 2 ports, both of them without VIF. The ports # are assigned to physnet physnet1. physical_network = 'physnet1' if set_physnets else None for i in range(2): pg1_ports.append(obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:0%d' % i, physical_network=physical_network, uuid=uuidutils.generate_uuid(), portgroup_id=pg1.id)) pg2 = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, address='00:54:00:cf:2d:04', name='foo2', uuid=uuidutils.generate_uuid()) pg2_ports = [] # This portgroup contains 3 ports, one of them with 'some-vif' # attached, so the two free ones should be considered standalone. # The ports are assigned physnet physnet2. 
physical_network = 'physnet2' if set_physnets else None for i in range(2, 4): pg2_ports.append(obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:0%d' % i, physical_network=physical_network, uuid=uuidutils.generate_uuid(), portgroup_id=pg2.id)) pg2_ports.append(obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:04', physical_network=physical_network, extra={'vif_port_id': 'some-vif'}, uuid=uuidutils.generate_uuid(), portgroup_id=pg2.id)) # This portgroup has 'some-vif-2' attached to it and contains one port, # so neither portgroup nor port can be considered free. The ports are # assigned physnet3. physical_network = 'physnet3' if set_physnets else None pg3 = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, address='00:54:00:cf:2d:05', name='foo3', uuid=uuidutils.generate_uuid(), internal_info={common.TENANT_VIF_KEY: 'some-vif-2'}) pg3_ports = [obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:05', uuid=uuidutils.generate_uuid(), physical_network=physical_network, portgroup_id=pg3.id)] return pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports def test__get_free_portgroups_and_ports_no_port_physnets(self): self.node.network_interface = 'flat' self.node.save() pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports = self._objects_setup( set_physnets=False) with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'anyphysnet'})) self.assertItemsEqual( [pg1.uuid, self.port.uuid] + [p.uuid for p in pg2_ports[:2]], [p.uuid for p in free_port_like_objs]) def test__get_free_portgroups_and_ports_no_physnets(self): self.node.network_interface = 'flat' self.node.save() pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports = self._objects_setup( set_physnets=True) with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( 
common._get_free_portgroups_and_ports(task, self.vif_id, set())) self.assertItemsEqual( [pg1.uuid, self.port.uuid] + [p.uuid for p in pg2_ports[:2]], [p.uuid for p in free_port_like_objs]) def test__get_free_portgroups_and_ports_no_matching_physnet(self): self.node.network_interface = 'flat' self.node.save() pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports = self._objects_setup( set_physnets=True) with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'notaphysnet'})) self.assertItemsEqual( [self.port.uuid], [p.uuid for p in free_port_like_objs]) def test__get_free_portgroups_and_ports_physnet1(self): self.node.network_interface = 'flat' self.node.save() pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports = self._objects_setup( set_physnets=True) with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'physnet1'})) self.assertItemsEqual( [pg1.uuid, self.port.uuid], [p.uuid for p in free_port_like_objs]) def test__get_free_portgroups_and_ports_physnet2(self): self.node.network_interface = 'flat' self.node.save() pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports = self._objects_setup( set_physnets=True) with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'physnet2'})) self.assertItemsEqual( [self.port.uuid] + [p.uuid for p in pg2_ports[:2]], [p.uuid for p in free_port_like_objs]) def test__get_free_portgroups_and_ports_physnet3(self): self.node.network_interface = 'flat' self.node.save() pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports = self._objects_setup( set_physnets=True) with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'physnet3'})) self.assertItemsEqual( [self.port.uuid], [p.uuid for p in free_port_like_objs]) 
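# The physnet-filtering semantics exercised by the tests above can be
# sketched in isolation. This is a simplified, hypothetical model (toy
# names, not ironic's real port objects or _get_free_portgroups_and_ports
# itself): a port is "free" for a VIF when it has no VIF attached and its
# physical_network is either unset or one of the physnets the VIF reaches.

```python
import collections

# Toy stand-in for an ironic Port: uuid, attached VIF (or None), physnet.
FakePort = collections.namedtuple('FakePort',
                                  ['uuid', 'vif', 'physical_network'])


def free_ports(ports, physnets):
    """Return ports with no VIF whose physnet is None or in ``physnets``."""
    return [p for p in ports
            if p.vif is None
            and (p.physical_network is None
                 or p.physical_network in physnets)]
```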
def test__get_free_portgroups_and_ports_all_physnets(self): self.node.network_interface = 'flat' self.node.save() pg1, pg1_ports, pg2, pg2_ports, pg3, pg3_ports = self._objects_setup( set_physnets=True) physnets = {'physnet1', 'physnet2', 'physnet3'} with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, physnets)) self.assertItemsEqual( [pg1.uuid, self.port.uuid] + [p.uuid for p in pg2_ports[:2]], [p.uuid for p in free_port_like_objs]) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True) def test__get_free_portgroups_and_ports_neutron_missed(self, vpi_mock): vpi_mock.return_value = False with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'anyphysnet'})) self.assertItemsEqual([], free_port_like_objs) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True) def test__get_free_portgroups_and_ports_neutron(self, vpi_mock): vpi_mock.return_value = True with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'anyphysnet'})) self.assertItemsEqual( [self.port.uuid], [p.uuid for p in free_port_like_objs]) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True) def test__get_free_portgroups_and_ports_flat(self, vpi_mock): self.node.network_interface = 'flat' self.node.save() vpi_mock.return_value = True with task_manager.acquire(self.context, self.node.id) as task: free_port_like_objs = ( common._get_free_portgroups_and_ports(task, self.vif_id, {'anyphysnet'})) self.assertItemsEqual( [self.port.uuid], [p.uuid for p in free_port_like_objs]) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_ports(self, vpi_mock): with task_manager.acquire(self.context, self.node.id) as task: res = 
common.get_free_port_like_object(task, self.vif_id, {'anyphysnet'}) self.assertEqual(self.port.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_ports_pxe_enabled_first(self, vpi_mock): self.port.pxe_enabled = False self.port.save() other_port = obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:33', uuid=uuidutils.generate_uuid()) with task_manager.acquire(self.context, self.node.id) as task: res = common.get_free_port_like_object(task, self.vif_id, {'anyphysnet'}) self.assertEqual(other_port.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_ports_physnet_match_first(self, vpi_mock): self.port.pxe_enabled = False self.port.physical_network = 'physnet1' self.port.save() obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:33', uuid=uuidutils.generate_uuid()) with task_manager.acquire(self.context, self.node.id) as task: res = common.get_free_port_like_object(task, self.vif_id, {'physnet1'}) self.assertEqual(self.port.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_ports_physnet_match_first2(self, vpi_mock): self.port.pxe_enabled = False self.port.physical_network = 'physnet1' self.port.save() pg = obj_utils.create_test_portgroup( self.context, node_id=self.node.id) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid(), portgroup_id=pg.id) with task_manager.acquire(self.context, self.node.id) as task: res = common.get_free_port_like_object(task, self.vif_id, {'physnet1'}) self.assertEqual(self.port.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def 
test_get_free_port_like_object_portgroup_first(self, vpi_mock): pg = obj_utils.create_test_portgroup( self.context, node_id=self.node.id) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid(), portgroup_id=pg.id) with task_manager.acquire(self.context, self.node.id) as task: res = common.get_free_port_like_object(task, self.vif_id, {'anyphysnet'}) self.assertEqual(pg.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_portgroup_physnet_match_first(self, vpi_mock): pg1 = obj_utils.create_test_portgroup( self.context, node_id=self.node.id) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid(), portgroup_id=pg1.id) pg2 = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), name='pg2', address='52:54:00:cf:2d:01') obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:02', uuid=uuidutils.generate_uuid(), portgroup_id=pg2.id, physical_network='physnet1') with task_manager.acquire(self.context, self.node.id) as task: res = common.get_free_port_like_object(task, self.vif_id, {'physnet1'}) self.assertEqual(pg2.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_ignores_empty_portgroup(self, vpi_mock): obj_utils.create_test_portgroup(self.context, node_id=self.node.id) with task_manager.acquire(self.context, self.node.id) as task: res = common.get_free_port_like_object(task, self.vif_id, {'anyphysnet'}) self.assertEqual(self.port.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_ignores_standalone_portgroup( self, vpi_mock): self.port.destroy() pg = obj_utils.create_test_portgroup( 
self.context, node_id=self.node.id) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid(), portgroup_id=pg.id, extra={'vif_port_id': 'some-vif'}) free_port = obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:02', uuid=uuidutils.generate_uuid(), portgroup_id=pg.id) with task_manager.acquire(self.context, self.node.id) as task: res = common.get_free_port_like_object(task, self.vif_id, {'anyphysnet'}) self.assertEqual(free_port.uuid, res.uuid) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_vif_attached_to_portgroup( self, vpi_mock): pg = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, internal_info={common.TENANT_VIF_KEY: self.vif_id}) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid(), portgroup_id=pg.id) with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex( exception.VifAlreadyAttached, r"already attached to Ironic Portgroup", common.get_free_port_like_object, task, self.vif_id, {'anyphysnet'}) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_vif_attached_to_portgroup_extra( self, vpi_mock): pg = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, extra={'vif_port_id': self.vif_id}) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid(), portgroup_id=pg.id) with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex( exception.VifAlreadyAttached, r"already attached to Ironic Portgroup", common.get_free_port_like_object, task, self.vif_id, {'anyphysnet'}) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def 
test_get_free_port_like_object_vif_attached_to_port(self, vpi_mock): self.port.internal_info = {common.TENANT_VIF_KEY: self.vif_id} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex( exception.VifAlreadyAttached, r"already attached to Ironic Port\b", common.get_free_port_like_object, task, self.vif_id, {'anyphysnet'}) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_vif_attached_to_port_extra( self, vpi_mock): self.port.extra = {'vif_port_id': self.vif_id} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex( exception.VifAlreadyAttached, r"already attached to Ironic Port\b", common.get_free_port_like_object, task, self.vif_id, {'anyphysnet'}) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_nothing_free(self, vpi_mock): self.port.extra = {'vif_port_id': 'another-vif'} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.NoFreePhysicalPorts, common.get_free_port_like_object, task, self.vif_id, {'anyphysnet'}) @mock.patch.object(neutron_common, 'validate_port_info', autospec=True, return_value=True) def test_get_free_port_like_object_no_matching_physnets(self, vpi_mock): self.port.physical_network = 'physnet1' self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertRaises(exception.NoFreePhysicalPorts, common.get_free_port_like_object, task, self.vif_id, {'physnet2'}) @mock.patch.object(neutron_common, 'update_neutron_port', autospec=True) @mock.patch.object(neutron_common, 'get_client', autospec=True) def test_plug_port_to_tenant_network_client(self, mock_gc, mock_update): self.port.internal_info = {common.TENANT_VIF_KEY: self.vif_id} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: 
common.plug_port_to_tenant_network(task, self.port, client=mock.MagicMock()) self.assertFalse(mock_gc.called) self.assertTrue(mock_update.called) @mock.patch.object(neutron_common, 'update_neutron_port', autospec=True) @mock.patch.object(neutron_common, 'get_client', autospec=True) def test_plug_port_to_tenant_network_no_client(self, mock_gc, mock_update): self.port.internal_info = {common.TENANT_VIF_KEY: self.vif_id} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: common.plug_port_to_tenant_network(task, self.port) self.assertTrue(mock_gc.called) self.assertTrue(mock_update.called) @mock.patch.object(neutron_common, 'get_client', autospec=True) def test_plug_port_to_tenant_network_no_tenant_vif(self, mock_gc): nclient = mock.MagicMock() mock_gc.return_value = nclient self.port.extra = {} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex( exception.VifNotAttached, "not associated with port %s" % self.port.uuid, common.plug_port_to_tenant_network, task, self.port) @mock.patch.object(neutron_common, 'wait_for_host_agent', autospec=True) @mock.patch.object(neutron_common, 'wait_for_port_status', autospec=True) @mock.patch.object(neutron_common, 'update_neutron_port', autospec=True) @mock.patch.object(neutron_common, 'get_client', autospec=True) def test_plug_port_to_tenant_network_smartnic_port( self, mock_gc, mock_update, wait_port_mock, wait_agent_mock): nclient = mock.MagicMock() mock_gc.return_value = nclient local_link_connection = self.port.local_link_connection local_link_connection['hostname'] = 'hostname' self.port.local_link_connection = local_link_connection self.port.internal_info = {common.TENANT_VIF_KEY: self.vif_id} self.port.is_smartnic = True self.port.save() with task_manager.acquire(self.context, self.node.id) as task: common.plug_port_to_tenant_network(task, self.port) wait_agent_mock.assert_called_once_with( nclient, 'hostname') 
wait_port_mock.assert_called_once_with( nclient, self.vif_id, 'ACTIVE') self.assertTrue(mock_update.called) class TestVifPortIDMixin(db_base.DbTestCase): def setUp(self): super(TestVifPortIDMixin, self).setUp() self.interface = common.VIFPortIDMixin() self.node = obj_utils.create_test_node(self.context, network_interface='neutron') self.port = obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:32', extra={'vif_port_id': uuidutils.generate_uuid(), 'client-id': 'fake1'}) def test__save_vif_to_port_like_obj_port(self): self.port.extra = {} self.port.save() vif_id = "fake_vif_id" self.interface._save_vif_to_port_like_obj(self.port, vif_id) self.port.refresh() self.assertIn(common.TENANT_VIF_KEY, self.port.internal_info) self.assertEqual(vif_id, self.port.internal_info[common.TENANT_VIF_KEY]) self.assertEqual({}, self.port.extra) def test__save_vif_to_port_like_obj_portgroup(self): vif_id = "fake_vif_id" pg = obj_utils.create_test_portgroup( self.context, node_id=self.node.id) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', portgroup_id=pg.id, uuid=uuidutils.generate_uuid() ) self.interface._save_vif_to_port_like_obj(pg, vif_id) pg.refresh() self.assertIn(common.TENANT_VIF_KEY, pg.internal_info) self.assertEqual(vif_id, pg.internal_info[common.TENANT_VIF_KEY]) self.assertEqual({}, pg.extra) def test__clear_vif_from_port_like_obj_in_extra_port(self): self.interface._clear_vif_from_port_like_obj(self.port) self.port.refresh() self.assertNotIn('vif_port_id', self.port.extra) self.assertNotIn(common.TENANT_VIF_KEY, self.port.internal_info) def test__clear_vif_from_port_like_obj_in_internal_info_port(self): self.port.internal_info = { common.TENANT_VIF_KEY: self.port.extra['vif_port_id']} self.port.extra = {} self.port.save() self.interface._clear_vif_from_port_like_obj(self.port) self.port.refresh() self.assertNotIn('vif_port_id', self.port.extra) self.assertNotIn(common.TENANT_VIF_KEY, 
self.port.internal_info) def test__clear_vif_from_port_like_obj_in_extra_portgroup(self): vif_id = uuidutils.generate_uuid() pg = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, extra={'vif_port_id': vif_id}) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', portgroup_id=pg.id, uuid=uuidutils.generate_uuid() ) self.interface._clear_vif_from_port_like_obj(pg) pg.refresh() self.assertNotIn('vif_port_id', pg.extra) self.assertNotIn(common.TENANT_VIF_KEY, pg.internal_info) def test__clear_vif_from_port_like_obj_in_internal_info_portgroup(self): vif_id = uuidutils.generate_uuid() pg = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, internal_info={common.TENANT_VIF_KEY: vif_id}) obj_utils.create_test_port( self.context, node_id=self.node.id, address='52:54:00:cf:2d:01', portgroup_id=pg.id, uuid=uuidutils.generate_uuid() ) self.interface._clear_vif_from_port_like_obj(pg) pg.refresh() self.assertNotIn('vif_port_id', pg.extra) self.assertNotIn(common.TENANT_VIF_KEY, pg.internal_info) def test__get_port_like_obj_by_vif_id_in_extra(self): vif_id = self.port.extra["vif_port_id"] with task_manager.acquire(self.context, self.node.id) as task: result = self.interface._get_port_like_obj_by_vif_id(task, vif_id) self.assertEqual(self.port.id, result.id) def test__get_port_like_obj_by_vif_id_in_internal_info(self): vif_id = self.port.extra["vif_port_id"] self.port.internal_info = {common.TENANT_VIF_KEY: vif_id} self.port.extra = {} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: result = self.interface._get_port_like_obj_by_vif_id(task, vif_id) self.assertEqual(self.port.id, result.id) def test__get_port_like_obj_by_vif_id_not_attached(self): vif_id = self.port.extra["vif_port_id"] self.port.extra = {} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: self.assertRaisesRegex(exception.VifNotAttached, "it is not attached to it.", 
self.interface._get_port_like_obj_by_vif_id, task, vif_id) def test__get_vif_id_by_port_like_obj_in_extra(self): vif_id = self.port.extra["vif_port_id"] result = self.interface._get_vif_id_by_port_like_obj(self.port) self.assertEqual(vif_id, result) def test__get_vif_id_by_port_like_obj_in_internal_info(self): vif_id = self.port.extra["vif_port_id"] self.port.internal_info = {common.TENANT_VIF_KEY: vif_id} self.port.extra = {} self.port.save() result = self.interface._get_vif_id_by_port_like_obj(self.port) self.assertEqual(vif_id, result) def test__get_vif_id_by_port_like_obj_not_attached(self): self.port.extra = {} self.port.save() result = self.interface._get_vif_id_by_port_like_obj(self.port) self.assertIsNone(result) def test_vif_list_extra(self): vif_id = uuidutils.generate_uuid() self.port.extra = {'vif_port_id': vif_id} self.port.save() pg_vif_id = uuidutils.generate_uuid() portgroup = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, address='52:54:00:00:00:00', internal_info={common.TENANT_VIF_KEY: pg_vif_id}) obj_utils.create_test_port( self.context, node_id=self.node.id, portgroup_id=portgroup.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid()) with task_manager.acquire(self.context, self.node.id) as task: vifs = self.interface.vif_list(task) self.assertItemsEqual([{'id': pg_vif_id}, {'id': vif_id}], vifs) def test_vif_list_internal(self): vif_id = uuidutils.generate_uuid() self.port.internal_info = {common.TENANT_VIF_KEY: vif_id} self.port.save() pg_vif_id = uuidutils.generate_uuid() portgroup = obj_utils.create_test_portgroup( self.context, node_id=self.node.id, address='52:54:00:00:00:00', internal_info={common.TENANT_VIF_KEY: pg_vif_id}) obj_utils.create_test_port( self.context, node_id=self.node.id, portgroup_id=portgroup.id, address='52:54:00:cf:2d:01', uuid=uuidutils.generate_uuid()) with task_manager.acquire(self.context, self.node.id) as task: vifs = self.interface.vif_list(task) self.assertItemsEqual([{'id': 
pg_vif_id}, {'id': vif_id}], vifs) def test_vif_list_extra_and_internal_priority(self): vif_id = uuidutils.generate_uuid() vif_id2 = uuidutils.generate_uuid() self.port.extra = {'vif_port_id': vif_id2} self.port.internal_info = {common.TENANT_VIF_KEY: vif_id} self.port.save() with task_manager.acquire(self.context, self.node.id) as task: vifs = self.interface.vif_list(task) self.assertEqual([{'id': vif_id}], vifs) def test_get_current_vif_extra_vif_port_id(self): extra = {'vif_port_id': 'foo'} self.port.extra = extra self.port.save() with task_manager.acquire(self.context, self.node.id) as task: vif = self.interface.get_current_vif(task, self.port) self.assertEqual('foo', vif) def test_get_current_vif_internal_info_cleaning(self): internal_info = {'cleaning_vif_port_id': 'foo', 'tenant_vif_port_id': 'bar'} self.port.internal_info = internal_info self.port.save() with task_manager.acquire(self.context, self.node.id) as task: vif = self.interface.get_current_vif(task, self.port) self.assertEqual('foo', vif) def test_get_current_vif_internal_info_provisioning(self): internal_info = {'provisioning_vif_port_id': 'foo', 'tenant_vif_port_id': 'bar'} self.port.internal_info = internal_info self.port.save() with task_manager.acquire(self.context, self.node.id) as task: vif = self.interface.get_current_vif(task, self.port) self.assertEqual('foo', vif) def test_get_current_vif_internal_info_tenant_vif(self): internal_info = {'tenant_vif_port_id': 'bar'} self.port.internal_info = internal_info self.port.save() with task_manager.acquire(self.context, self.node.id) as task: vif = self.interface.get_current_vif(task, self.port) self.assertEqual('bar', vif) def test_get_current_vif_internal_info_rescuing(self): internal_info = {'rescuing_vif_port_id': 'foo', 'tenant_vif_port_id': 'bar'} self.port.internal_info = internal_info self.port.save() with task_manager.acquire(self.context, self.node.id) as task: vif = self.interface.get_current_vif(task, self.port) self.assertEqual('foo', 
vif)

    def test_get_current_vif_none(self):
        internal_info = extra = {}
        self.port.internal_info = internal_info
        self.port.extra = extra
        self.port.save()
        with task_manager.acquire(self.context, self.node.id) as task:
            vif = self.interface.get_current_vif(task, self.port)
            self.assertIsNone(vif)


class TestNeutronVifPortIDMixin(db_base.DbTestCase):

    def setUp(self):
        super(TestNeutronVifPortIDMixin, self).setUp()
        self.interface = common.NeutronVIFPortIDMixin()
        self.node = obj_utils.create_test_node(self.context,
                                               network_interface='neutron')
        self.port = obj_utils.create_test_port(
            self.context, node_id=self.node.id,
            address='52:54:00:cf:2d:32',
            extra={'vif_port_id': uuidutils.generate_uuid(),
                   'client-id': 'fake1'})
        self.neutron_port = {'id': '132f871f-eaec-4fed-9475-0d54465e0f00',
                             'mac_address': '52:54:00:cf:2d:32'}

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client', autospec=True)
    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach(self, mock_gpbpi, mock_upa, mock_client, mock_gfp,
                        mock_save):
        vif = {'id': "fake_vif_id"}
        mock_gfp.return_value = self.port
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_attach(task, vif)
        mock_client.assert_called_once_with(context=task.context)
        mock_upa.assert_called_once_with(
            "fake_vif_id", self.port.address, context=task.context)
        self.assertFalse(mock_gpbpi.called)
        mock_gfp.assert_called_once_with(task, 'fake_vif_id', set())
        mock_save.assert_called_once_with(self.port, "fake_vif_id")

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client', autospec=True)
    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_failure(self, mock_gpbpi, mock_upa, mock_client,
                                mock_gfp, mock_save):
        vif = {'id': "fake_vif_id"}
        mock_gfp.side_effect = exception.NoFreePhysicalPorts(vif='fake-vif')
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.NoFreePhysicalPorts,
                              self.interface.vif_attach, task, vif)
        mock_gfp.assert_called_once_with(task, 'fake_vif_id', set())
        self.assertFalse(mock_save.called)

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client', autospec=True)
    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_with_physnet(self, mock_gpbpi, mock_upa, mock_client,
                                     mock_gfp, mock_save):
        self.port.physical_network = 'physnet1'
        self.port.save()
        vif = {'id': "fake_vif_id"}
        mock_gpbpi.return_value = {'physnet1'}
        mock_gfp.return_value = self.port
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_attach(task, vif)
        mock_client.assert_called_once_with(context=task.context)
        mock_upa.assert_called_once_with(
            "fake_vif_id", self.port.address, context=task.context)
        mock_gpbpi.assert_called_once_with(mock_client.return_value,
                                           'fake_vif_id')
        mock_gfp.assert_called_once_with(task, 'fake_vif_id', {'physnet1'})
        mock_save.assert_called_once_with(self.port, "fake_vif_id")

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'plug_port_to_tenant_network', autospec=True)
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client', autospec=True)
    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_active_node(self, mock_gpbpi, mock_upa, mock_client,
                                    mock_gfp, mock_plug, mock_save):
        self.node.provision_state = states.ACTIVE
        self.node.save()
        vif = {'id': "fake_vif_id"}
        mock_gfp.return_value = self.port
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_attach(task, vif)
        mock_client.assert_called_once_with(context=task.context)
        mock_upa.assert_called_once_with(
            "fake_vif_id", self.port.address, context=task.context)
        self.assertFalse(mock_gpbpi.called)
        mock_gfp.assert_called_once_with(task, 'fake_vif_id', set())
        mock_save.assert_called_once_with(self.port, "fake_vif_id")
        mock_plug.assert_called_once_with(task, self.port, mock.ANY)

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'plug_port_to_tenant_network', autospec=True)
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client', autospec=True)
    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_active_node_failure(self, mock_gpbpi, mock_upa,
                                            mock_client, mock_gfp, mock_plug,
                                            mock_save):
        self.node.provision_state = states.ACTIVE
        self.node.save()
        vif = {'id': "fake_vif_id"}
        mock_gfp.return_value = self.port
        mock_plug.side_effect = exception.NetworkError
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.NetworkError,
                              self.interface.vif_attach, task, vif)
        mock_client.assert_called_once_with(context=task.context)
        mock_upa.assert_called_once_with(
            "fake_vif_id", self.port.address, context=task.context)
        self.assertFalse(mock_gpbpi.called)
        mock_gfp.assert_called_once_with(task, 'fake_vif_id', set())
        mock_save.assert_called_once_with(self.port, "fake_vif_id")
        mock_plug.assert_called_once_with(task, self.port, mock.ANY)
    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client', autospec=True)
    @mock.patch.object(neutron_common, 'update_port_address')
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_portgroup_no_address(self, mock_gpbpi, mock_upa,
                                             mock_client, mock_gfp,
                                             mock_save):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id, address=None)
        mock_gfp.return_value = pg
        vif = {'id': "fake_vif_id"}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_attach(task, vif)
        mock_client.assert_called_once_with(context=task.context)
        self.assertFalse(mock_gpbpi.called)
        mock_gfp.assert_called_once_with(task, 'fake_vif_id', set())
        self.assertFalse(mock_client.return_value.show_port.called)
        self.assertFalse(mock_upa.called)
        mock_save.assert_called_once_with(pg, "fake_vif_id")

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(neutron_common, 'get_client', autospec=True)
    @mock.patch.object(neutron_common, 'update_port_address')
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_update_port_exception(self, mock_gpbpi, mock_upa,
                                              mock_client, mock_save):
        self.port.extra = {}
        self.port.physical_network = 'physnet1'
        self.port.save()
        vif = {'id': "fake_vif_id"}
        mock_gpbpi.return_value = {'physnet1'}
        mock_upa.side_effect = (
            exception.FailedToUpdateMacOnPort(port_id='fake'))
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaisesRegex(
                exception.NetworkError, "can not update Neutron port",
                self.interface.vif_attach, task, vif)
        mock_client.assert_called_once_with(context=task.context)
        mock_gpbpi.assert_called_once_with(mock_client.return_value,
                                           'fake_vif_id')
        self.assertFalse(mock_save.called)

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client')
    @mock.patch.object(neutron_common, 'update_port_address')
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_portgroup_physnet_inconsistent(self, mock_gpbpi,
                                                       mock_upa, mock_client,
                                                       mock_gfp, mock_save):
        self.port.physical_network = 'physnet1'
        self.port.save()
        vif = {'id': "fake_vif_id"}
        mock_gpbpi.return_value = {'anyphysnet'}
        mock_gfp.side_effect = exception.PortgroupPhysnetInconsistent(
            portgroup='fake-portgroup-id', physical_networks='physnet1')
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(
                exception.PortgroupPhysnetInconsistent,
                self.interface.vif_attach, task, vif)
        mock_client.assert_called_once_with(context=task.context)
        mock_gpbpi.assert_called_once_with(mock_client.return_value,
                                           'fake_vif_id')
        self.assertFalse(mock_upa.called)
        self.assertFalse(mock_save.called)

    @mock.patch.object(common.VIFPortIDMixin, '_save_vif_to_port_like_obj')
    @mock.patch.object(common, 'get_free_port_like_object', autospec=True)
    @mock.patch.object(neutron_common, 'get_client')
    @mock.patch.object(neutron_common, 'update_port_address')
    @mock.patch.object(neutron_common, 'get_physnets_by_port_uuid',
                       autospec=True)
    def test_vif_attach_multiple_segment_mappings(self, mock_gpbpi, mock_upa,
                                                  mock_client, mock_gfp,
                                                  mock_save):
        self.port.physical_network = 'physnet1'
        self.port.save()
        obj_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33', physical_network='physnet2')
        vif = {'id': "fake_vif_id"}
        mock_gpbpi.return_value = {'physnet1', 'physnet2'}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(
                exception.VifInvalidForAttach,
                self.interface.vif_attach, task, vif)
        mock_client.assert_called_once_with(context=task.context)
        mock_gpbpi.assert_called_once_with(mock_client.return_value,
                                           'fake_vif_id')
        self.assertFalse(mock_gfp.called)
        self.assertFalse(mock_upa.called)
        self.assertFalse(mock_save.called)

    @mock.patch.object(common.VIFPortIDMixin, '_clear_vif_from_port_like_obj')
    @mock.patch.object(neutron_common, 'unbind_neutron_port', autospec=True)
    @mock.patch.object(common.VIFPortIDMixin, '_get_port_like_obj_by_vif_id')
    def test_vif_detach(self, mock_get, mock_unp, mock_clear):
        mock_get.return_value = self.port
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_detach(task, 'fake_vif_id')
        mock_get.assert_called_once_with(task, 'fake_vif_id')
        self.assertFalse(mock_unp.called)
        mock_clear.assert_called_once_with(self.port)

    @mock.patch.object(common.VIFPortIDMixin, '_clear_vif_from_port_like_obj')
    @mock.patch.object(neutron_common, 'unbind_neutron_port', autospec=True)
    @mock.patch.object(common.VIFPortIDMixin, '_get_port_like_obj_by_vif_id')
    def test_vif_detach_portgroup(self, mock_get, mock_unp, mock_clear):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id)
        mock_get.return_value = pg
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_detach(task, 'fake_vif_id')
        mock_get.assert_called_once_with(task, 'fake_vif_id')
        self.assertFalse(mock_unp.called)
        mock_clear.assert_called_once_with(pg)

    @mock.patch.object(common.VIFPortIDMixin, '_clear_vif_from_port_like_obj')
    @mock.patch.object(neutron_common, 'unbind_neutron_port', autospec=True)
    @mock.patch.object(common.VIFPortIDMixin, '_get_port_like_obj_by_vif_id')
    def test_vif_detach_not_attached(self, mock_get, mock_unp, mock_clear):
        mock_get.side_effect = exception.VifNotAttached(vif='fake-vif',
                                                        node='fake-node')
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaisesRegex(
                exception.VifNotAttached, "it is not attached to it.",
                self.interface.vif_detach, task, 'fake_vif_id')
        mock_get.assert_called_once_with(task, 'fake_vif_id')
        self.assertFalse(mock_unp.called)
        self.assertFalse(mock_clear.called)

    @mock.patch.object(common.VIFPortIDMixin, '_clear_vif_from_port_like_obj')
    @mock.patch.object(neutron_common, 'unbind_neutron_port', autospec=True)
    @mock.patch.object(common.VIFPortIDMixin, '_get_port_like_obj_by_vif_id')
    def test_vif_detach_active_node(self, mock_get, mock_unp, mock_clear):
        self.node.provision_state = states.ACTIVE
        self.node.save()
        mock_get.return_value = self.port
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_detach(task, 'fake_vif_id')
        mock_unp.assert_called_once_with('fake_vif_id', context=task.context)
        mock_get.assert_called_once_with(task, 'fake_vif_id')
        mock_clear.assert_called_once_with(self.port)

    @mock.patch.object(common.VIFPortIDMixin, '_clear_vif_from_port_like_obj')
    @mock.patch.object(neutron_common, 'unbind_neutron_port', autospec=True)
    @mock.patch.object(common.VIFPortIDMixin, '_get_port_like_obj_by_vif_id')
    def test_vif_detach_deleting_node(self, mock_get, mock_unp, mock_clear):
        self.node.provision_state = states.DELETING
        self.node.save()
        mock_get.return_value = self.port
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_detach(task, 'fake_vif_id')
        mock_unp.assert_called_once_with('fake_vif_id', context=task.context)
        mock_get.assert_called_once_with(task, 'fake_vif_id')
        mock_clear.assert_called_once_with(self.port)

    @mock.patch.object(common.VIFPortIDMixin, '_clear_vif_from_port_like_obj')
    @mock.patch.object(neutron_common, 'unbind_neutron_port', autospec=True)
    @mock.patch.object(common.VIFPortIDMixin, '_get_port_like_obj_by_vif_id')
    def test_vif_detach_active_node_failure(self, mock_get, mock_unp,
                                            mock_clear):
        self.node.provision_state = states.ACTIVE
        self.node.save()
        mock_get.return_value = self.port
        mock_unp.side_effect = exception.NetworkError
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.NetworkError,
                              self.interface.vif_detach, task, 'fake_vif_id')
        mock_unp.assert_called_once_with('fake_vif_id', context=task.context)
        mock_get.assert_called_once_with(task, 'fake_vif_id')
        mock_clear.assert_called_once_with(self.port)

    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_port_changed_address(self, mac_update_mock):
        new_address = '11:22:33:44:55:bb'
        self.port.address = new_address
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.port_changed(task, self.port)
            mac_update_mock.assert_called_once_with(
                self.port.extra['vif_port_id'], new_address,
                context=task.context)

    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_port_changed_address_VIF_MAC_update_fail(self, mac_update_mock):
        new_address = '11:22:33:44:55:bb'
        self.port.address = new_address
        mac_update_mock.side_effect = (
            exception.FailedToUpdateMacOnPort(port_id=self.port.uuid))
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.FailedToUpdateMacOnPort,
                              self.interface.port_changed, task, self.port)
            mac_update_mock.assert_called_once_with(
                self.port.extra['vif_port_id'], new_address,
                context=task.context)

    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_port_changed_address_no_vif_id(self, mac_update_mock):
        self.port.extra = {}
        self.port.save()
        self.port.address = '11:22:33:44:55:bb'
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.port_changed(task, self.port)
            self.assertFalse(mac_update_mock.called)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    def test_port_changed_client_id(self, dhcp_update_mock):
        expected_extra = {'vif_port_id': 'fake-id', 'client-id': 'fake2'}
        expected_dhcp_opts = [{'opt_name': '61', 'opt_value': 'fake2'}]
        self.port.extra = expected_extra
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.port_changed(task, self.port)
            dhcp_update_mock.assert_called_once_with(
                'fake-id', expected_dhcp_opts,
                context=task.context)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    def test_port_changed_extra_add_new_key(self, dhcp_update_mock):
        self.port.extra = {'vif_port_id': 'fake-id'}
        self.port.save()
        expected_extra = self.port.extra
        expected_extra['foo'] = 'bar'
        self.port.extra = expected_extra
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.port_changed(task, self.port)
            self.assertFalse(dhcp_update_mock.called)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    def test_port_changed_client_id_fail(self, dhcp_update_mock):
        self.port.extra = {'vif_port_id': 'fake-id', 'client-id': 'fake2'}
        dhcp_update_mock.side_effect = (
            exception.FailedToUpdateDHCPOptOnPort(port_id=self.port.uuid))
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.FailedToUpdateDHCPOptOnPort,
                              self.interface.port_changed, task, self.port)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    def test_port_changed_client_id_no_vif_id(self, dhcp_update_mock):
        self.port.extra = {'client-id': 'fake1'}
        self.port.save()
        self.port.extra = {'client-id': 'fake2'}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.port_changed(task, self.port)
            self.assertFalse(dhcp_update_mock.called)

    @mock.patch('ironic.dhcp.neutron.NeutronDHCPApi.update_port_dhcp_opts')
    def test_port_changed_message_format_failure(self, dhcp_update_mock):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id,
            standalone_ports_supported=False)
        port = obj_utils.create_test_port(self.context, node_id=self.node.id,
                                          uuid=uuidutils.generate_uuid(),
                                          address="aa:bb:cc:dd:ee:01",
                                          extra={'vif_port_id': 'blah'},
                                          pxe_enabled=False)
        port.portgroup_id = pg.id
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaisesRegex(exception.Conflict,
                                   "VIF blah is attached to the port",
                                   self.interface.port_changed, task, port)

    def _test_port_changed(self, has_vif=False, in_portgroup=False,
                           pxe_enabled=True, standalone_ports=True,
                           expect_errors=False):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id,
            standalone_ports_supported=standalone_ports)
        extra_vif = {'vif_port_id': uuidutils.generate_uuid()}
        if has_vif:
            extra = extra_vif
            opposite_extra = {}
        else:
            extra = {}
            opposite_extra = extra_vif
        opposite_pxe_enabled = not pxe_enabled

        pg_id = None
        if in_portgroup:
            pg_id = pg.id
        ports = []

        # Update only portgroup id on existed port with different
        # combinations of pxe_enabled/vif_port_id
        p1 = obj_utils.create_test_port(self.context, node_id=self.node.id,
                                        uuid=uuidutils.generate_uuid(),
                                        address="aa:bb:cc:dd:ee:01",
                                        extra=extra,
                                        pxe_enabled=pxe_enabled)
        p1.portgroup_id = pg_id
        ports.append(p1)

        # Update portgroup_id/pxe_enabled/vif_port_id in one request
        p2 = obj_utils.create_test_port(self.context, node_id=self.node.id,
                                        uuid=uuidutils.generate_uuid(),
                                        address="aa:bb:cc:dd:ee:02",
                                        extra=opposite_extra,
                                        pxe_enabled=opposite_pxe_enabled)
        p2.extra = extra
        p2.pxe_enabled = pxe_enabled
        p2.portgroup_id = pg_id
        ports.append(p2)

        # Update portgroup_id and pxe_enabled
        p3 = obj_utils.create_test_port(self.context, node_id=self.node.id,
                                        uuid=uuidutils.generate_uuid(),
                                        address="aa:bb:cc:dd:ee:03",
                                        extra=extra,
                                        pxe_enabled=opposite_pxe_enabled)
        p3.pxe_enabled = pxe_enabled
        p3.portgroup_id = pg_id
        ports.append(p3)

        # Update portgroup_id and vif_port_id
        p4 = obj_utils.create_test_port(self.context, node_id=self.node.id,
                                        uuid=uuidutils.generate_uuid(),
                                        address="aa:bb:cc:dd:ee:04",
                                        pxe_enabled=pxe_enabled,
                                        extra=opposite_extra)
        p4.extra = extra
        p4.portgroup_id = pg_id
        ports.append(p4)

        for port in ports:
            with task_manager.acquire(self.context, self.node.id) as task:
                if not expect_errors:
                    self.interface.port_changed(task, port)
                else:
                    self.assertRaises(exception.Conflict,
                                      self.interface.port_changed,
                                      task, port)

    def test_port_changed_novif_pxe_noportgroup(self):
        self._test_port_changed(has_vif=False, in_portgroup=False,
                                pxe_enabled=True,
                                expect_errors=False)

    def test_port_changed_novif_nopxe_noportgroup(self):
        self._test_port_changed(has_vif=False, in_portgroup=False,
                                pxe_enabled=False,
                                expect_errors=False)

    def test_port_changed_vif_pxe_noportgroup(self):
        self._test_port_changed(has_vif=True, in_portgroup=False,
                                pxe_enabled=True,
                                expect_errors=False)

    def test_port_changed_vif_nopxe_noportgroup(self):
        self._test_port_changed(has_vif=True, in_portgroup=False,
                                pxe_enabled=False,
                                expect_errors=False)

    def test_port_changed_novif_pxe_portgroup_standalone_ports(self):
        self._test_port_changed(has_vif=False, in_portgroup=True,
                                pxe_enabled=True,
                                standalone_ports=True,
                                expect_errors=False)

    def test_port_changed_novif_pxe_portgroup_nostandalone_ports(self):
        self._test_port_changed(has_vif=False, in_portgroup=True,
                                pxe_enabled=True,
                                standalone_ports=False,
                                expect_errors=True)

    def test_port_changed_novif_nopxe_portgroup_standalone_ports(self):
        self._test_port_changed(has_vif=False, in_portgroup=True,
                                pxe_enabled=False,
                                standalone_ports=True,
                                expect_errors=False)

    def test_port_changed_novif_nopxe_portgroup_nostandalone_ports(self):
        self._test_port_changed(has_vif=False, in_portgroup=True,
                                pxe_enabled=False,
                                standalone_ports=False,
                                expect_errors=False)

    def test_port_changed_vif_pxe_portgroup_standalone_ports(self):
        self._test_port_changed(has_vif=True, in_portgroup=True,
                                pxe_enabled=True,
                                standalone_ports=True,
                                expect_errors=False)

    def test_port_changed_vif_pxe_portgroup_nostandalone_ports(self):
        self._test_port_changed(has_vif=True, in_portgroup=True,
                                pxe_enabled=True,
                                standalone_ports=False,
                                expect_errors=True)

    def test_port_changed_vif_nopxe_portgroup_standalone_ports(self):
        self._test_port_changed(has_vif=True, in_portgroup=True,
                                pxe_enabled=True,
                                standalone_ports=True,
                                expect_errors=False)

    def test_port_changed_vif_nopxe_portgroup_nostandalone_ports(self):
        self._test_port_changed(has_vif=True, in_portgroup=True,
                                pxe_enabled=False,
                                standalone_ports=False,
                                expect_errors=True)
    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_update_portgroup_address(self, mac_update_mock):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id,
            extra={'vif_port_id': 'fake-id'})
        new_address = '11:22:33:44:55:bb'
        pg.address = new_address
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.portgroup_changed(task, pg)
            mac_update_mock.assert_called_once_with(
                'fake-id', new_address, context=task.context)

    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_update_portgroup_remove_address(self, mac_update_mock):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id,
            extra={'vif_port_id': 'fake-id'})
        pg.address = None
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.portgroup_changed(task, pg)
            self.assertFalse(mac_update_mock.called)

    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_update_portgroup_address_fail(self, mac_update_mock):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id,
            extra={'vif_port_id': 'fake-id'})
        new_address = '11:22:33:44:55:bb'
        pg.address = new_address
        mac_update_mock.side_effect = (
            exception.FailedToUpdateMacOnPort('boom'))
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.FailedToUpdateMacOnPort,
                              self.interface.portgroup_changed, task, pg)
            mac_update_mock.assert_called_once_with(
                'fake-id', new_address, context=task.context)

    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_update_portgroup_address_no_vif(self, mac_update_mock):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id)
        new_address = '11:22:33:44:55:bb'
        pg.address = new_address
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.portgroup_changed(task, pg)
        self.assertEqual(new_address, pg.address)
        self.assertFalse(mac_update_mock.called)
    @mock.patch.object(neutron_common, 'update_port_address', autospec=True)
    def test_update_portgroup_nostandalone_ports_pxe_ports_exc(
            self, mac_update_mock):
        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id)
        extra = {'vif_port_id': 'foo'}
        obj_utils.create_test_port(
            self.context, node_id=self.node.id, extra=extra,
            pxe_enabled=True, portgroup_id=pg.id,
            address="aa:bb:cc:dd:ee:01",
            uuid=uuidutils.generate_uuid())
        pg.standalone_ports_supported = False
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaisesRegex(exception.Conflict,
                                   "VIF foo is attached to this port",
                                   self.interface.portgroup_changed,
                                   task, pg)

    def _test_update_portgroup(self, has_vif=False, with_ports=False,
                               pxe_enabled=True, standalone_ports=True,
                               expect_errors=False):
        # NOTE(vsaienko) make sure that old values are opposite to new,
        # to guarantee that object.what_changes() returns true.
        old_standalone_ports_supported = not standalone_ports

        pg = obj_utils.create_test_portgroup(
            self.context, node_id=self.node.id,
            standalone_ports_supported=old_standalone_ports_supported)

        if with_ports:
            extra = {}
            if has_vif:
                extra = {'vif_port_id': uuidutils.generate_uuid()}

            obj_utils.create_test_port(
                self.context, node_id=self.node.id, extra=extra,
                pxe_enabled=pxe_enabled, portgroup_id=pg.id,
                address="aa:bb:cc:dd:ee:01",
                uuid=uuidutils.generate_uuid())

        pg.standalone_ports_supported = standalone_ports

        with task_manager.acquire(self.context, self.node.id) as task:
            if not expect_errors:
                self.interface.portgroup_changed(task, pg)
            else:
                self.assertRaises(exception.Conflict,
                                  self.interface.portgroup_changed,
                                  task, pg)

    def test_update_portgroup_standalone_ports_noports(self):
        self._test_update_portgroup(with_ports=False, standalone_ports=True,
                                    expect_errors=False)

    def test_update_portgroup_standalone_ports_novif_pxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=True,
                                    has_vif=False, pxe_enabled=True,
                                    expect_errors=False)

    def test_update_portgroup_nostandalone_ports_novif_pxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=False,
                                    has_vif=False, pxe_enabled=True,
                                    expect_errors=True)

    def test_update_portgroup_nostandalone_ports_novif_nopxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=False,
                                    has_vif=False, pxe_enabled=False,
                                    expect_errors=False)

    def test_update_portgroup_standalone_ports_novif_nopxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=True,
                                    has_vif=False, pxe_enabled=False,
                                    expect_errors=False)

    def test_update_portgroup_standalone_ports_vif_pxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=True,
                                    has_vif=True, pxe_enabled=True,
                                    expect_errors=False)

    def test_update_portgroup_nostandalone_ports_vif_pxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=False,
                                    has_vif=True, pxe_enabled=True,
                                    expect_errors=True)

    def test_update_portgroup_standalone_ports_vif_nopxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=True,
                                    has_vif=True, pxe_enabled=False,
                                    expect_errors=False)

    def test_update_portgroup_nostandalone_ports_vif_nopxe_ports(self):
        self._test_update_portgroup(with_ports=True, standalone_ports=False,
                                    has_vif=True, pxe_enabled=False,
                                    expect_errors=True)

ironic-15.0.0/ironic/tests/unit/drivers/modules/network/__init__.py

ironic-15.0.0/ironic/tests/unit/drivers/modules/network/test_flat.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from neutronclient.common import exceptions as neutron_exceptions
from oslo_config import cfg
from oslo_utils import uuidutils

from ironic.common import exception
from ironic.common import neutron
from ironic.conductor import task_manager
from ironic.drivers.modules.network import flat as flat_interface
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils

CONF = cfg.CONF

VIFMIXINPATH = 'ironic.drivers.modules.network.common.NeutronVIFPortIDMixin'


class TestFlatInterface(db_base.DbTestCase):

    def setUp(self):
        super(TestFlatInterface, self).setUp()
        self.interface = flat_interface.FlatNetwork()
        self.node = utils.create_test_node(self.context)
        self.port = utils.create_test_port(
            self.context, node_id=self.node.id,
            internal_info={
                'cleaning_vif_port_id': uuidutils.generate_uuid()})

    @mock.patch('%s.vif_list' % VIFMIXINPATH)
    def test_vif_list(self, mock_vif_list):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_list(task)
            mock_vif_list.assert_called_once_with(task)

    @mock.patch('%s.vif_attach' % VIFMIXINPATH)
    def test_vif_attach(self, mock_vif_attach):
        vif = mock.MagicMock()
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_attach(task, vif)
            mock_vif_attach.assert_called_once_with(task, vif)

    @mock.patch('%s.vif_detach' % VIFMIXINPATH)
    def test_vif_detach(self, mock_vif_detach):
        vif_id = "vif"
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.vif_detach(task, vif_id)
            mock_vif_detach.assert_called_once_with(task, vif_id)
    @mock.patch('%s.port_changed' % VIFMIXINPATH)
    def test_vif_port_changed(self, mock_p_changed):
        port = mock.MagicMock()
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.port_changed(task, port)
            mock_p_changed.assert_called_once_with(task, port)

    @mock.patch.object(flat_interface, 'LOG')
    def test_init_no_cleaning_network(self, mock_log):
        self.config(cleaning_network=None, group='neutron')
        flat_interface.FlatNetwork()
        self.assertTrue(mock_log.warning.called)

    @mock.patch.object(neutron, 'validate_network', autospec=True)
    def test_validate(self, validate_mock):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.validate(task)
        validate_mock.assert_called_once_with(
            CONF.neutron.cleaning_network,
            'cleaning network', context=task.context)

    @mock.patch.object(neutron, 'validate_network', autospec=True)
    def test_validate_from_node(self, validate_mock):
        cleaning_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83fdf'
        driver_info = self.node.driver_info
        driver_info['cleaning_network'] = cleaning_network_uuid
        self.node.driver_info = driver_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.validate(task)
        validate_mock.assert_called_once_with(
            cleaning_network_uuid,
            'cleaning network', context=task.context)

    @mock.patch.object(neutron, 'validate_network',
                       side_effect=lambda n, t, context=None: n)
    @mock.patch.object(neutron, 'add_ports_to_network')
    @mock.patch.object(neutron, 'rollback_ports')
    def test_add_cleaning_network(self, rollback_mock, add_mock,
                                  validate_mock):
        add_mock.return_value = {self.port.uuid: 'vif-port-id'}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.add_cleaning_network(task)
            rollback_mock.assert_called_once_with(
                task, CONF.neutron.cleaning_network)
            add_mock.assert_called_once_with(
                task, CONF.neutron.cleaning_network)
            validate_mock.assert_called_once_with(
                CONF.neutron.cleaning_network,
                'cleaning network', context=task.context)
        self.port.refresh()
        self.assertEqual('vif-port-id',
                         self.port.internal_info['cleaning_vif_port_id'])

    @mock.patch.object(neutron, 'validate_network',
                       side_effect=lambda n, t, context=None: n)
    @mock.patch.object(neutron, 'add_ports_to_network')
    @mock.patch.object(neutron, 'rollback_ports')
    def test_add_cleaning_network_from_node(self, rollback_mock, add_mock,
                                            validate_mock):
        add_mock.return_value = {self.port.uuid: 'vif-port-id'}
        # Make sure that changing the network UUID works
        for cleaning_network_uuid in ['3aea0de6-4b92-44da-9aa0-52d134c83fdf',
                                      '438be438-6aae-4fb1-bbcb-613ad7a38286']:
            driver_info = self.node.driver_info
            driver_info['cleaning_network'] = cleaning_network_uuid
            self.node.driver_info = driver_info
            self.node.save()
            with task_manager.acquire(self.context, self.node.id) as task:
                self.interface.add_cleaning_network(task)
                rollback_mock.assert_called_with(
                    task, cleaning_network_uuid)
                add_mock.assert_called_with(task, cleaning_network_uuid)
                validate_mock.assert_called_with(
                    cleaning_network_uuid,
                    'cleaning network', context=task.context)
        self.port.refresh()
        self.assertEqual('vif-port-id',
                         self.port.internal_info['cleaning_vif_port_id'])

    @mock.patch.object(neutron, 'validate_network',
                       side_effect=lambda n, t, context=None: n)
    @mock.patch.object(neutron, 'remove_ports_from_network')
    def test_remove_cleaning_network(self, remove_mock, validate_mock):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.remove_cleaning_network(task)
            remove_mock.assert_called_once_with(
                task, CONF.neutron.cleaning_network)
            validate_mock.assert_called_once_with(
                CONF.neutron.cleaning_network,
                'cleaning network', context=task.context)
        self.port.refresh()
        self.assertNotIn('cleaning_vif_port_id', self.port.internal_info)

    @mock.patch.object(neutron, 'validate_network',
                       side_effect=lambda n, t, context=None: n)
    @mock.patch.object(neutron, 'remove_ports_from_network')
    def test_remove_cleaning_network_from_node(self, remove_mock,
                                               validate_mock):
        cleaning_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83fdf'
        driver_info = self.node.driver_info
        driver_info['cleaning_network'] = cleaning_network_uuid
        self.node.driver_info = driver_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.remove_cleaning_network(task)
            remove_mock.assert_called_once_with(task, cleaning_network_uuid)
            validate_mock.assert_called_once_with(
                cleaning_network_uuid,
                'cleaning network', context=task.context)
        self.port.refresh()
        self.assertNotIn('cleaning_vif_port_id', self.port.internal_info)

    @mock.patch.object(neutron, 'update_neutron_port')
    def test__bind_flat_ports_set_binding_host_id(self, update_mock):
        extra = {'vif_port_id': 'foo'}
        utils.create_test_port(self.context, node_id=self.node.id,
                               address='52:54:00:cf:2d:33', extra=extra,
                               uuid=uuidutils.generate_uuid())
        exp_body = {'port': {'binding:host_id': self.node.uuid,
                             'binding:vnic_type': neutron.VNIC_BAREMETAL,
                             'mac_address': '52:54:00:cf:2d:33'}}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface._bind_flat_ports(task)
        update_mock.assert_called_once_with(self.context, 'foo', exp_body)

    @mock.patch.object(neutron, 'update_neutron_port')
    def test__bind_flat_ports_set_binding_host_id_portgroup(self,
                                                            update_mock):
        internal_info = {'tenant_vif_port_id': 'foo'}
        utils.create_test_portgroup(
            self.context, node_id=self.node.id, internal_info=internal_info,
            uuid=uuidutils.generate_uuid())
        utils.create_test_port(
            self.context, node_id=self.node.id, address='52:54:00:cf:2d:33',
            extra={'vif_port_id': 'bar'}, uuid=uuidutils.generate_uuid())
        exp_body1 = {'port': {'binding:host_id': self.node.uuid,
                              'binding:vnic_type': neutron.VNIC_BAREMETAL,
                              'mac_address': '52:54:00:cf:2d:33'}}
        exp_body2 = {'port': {'binding:host_id': self.node.uuid,
                              'binding:vnic_type': neutron.VNIC_BAREMETAL,
                              'mac_address': '52:54:00:cf:2d:31'}}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface._bind_flat_ports(task)
        update_mock.assert_has_calls([
            mock.call(self.context, 'bar', exp_body1),
            mock.call(self.context, 'foo', exp_body2)])

    @mock.patch.object(neutron, 'unbind_neutron_port')
    def test__unbind_flat_ports(self, unbind_neutron_port_mock):
        extra = {'vif_port_id': 'foo'}
        utils.create_test_port(self.context, node_id=self.node.id,
                               address='52:54:00:cf:2d:33', extra=extra,
                               uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface._unbind_flat_ports(task)
        unbind_neutron_port_mock.assert_called_once_with(
            'foo', context=self.context)

    @mock.patch.object(neutron, 'unbind_neutron_port')
    def test__unbind_flat_ports_portgroup(self, unbind_neutron_port_mock):
        internal_info = {'tenant_vif_port_id': 'foo'}
        utils.create_test_portgroup(self.context, node_id=self.node.id,
                                    internal_info=internal_info,
                                    uuid=uuidutils.generate_uuid())
        extra = {'vif_port_id': 'bar'}
        utils.create_test_port(self.context, node_id=self.node.id,
                               address='52:54:00:cf:2d:33', extra=extra,
                               uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface._unbind_flat_ports(task)
        unbind_neutron_port_mock.has_calls(
            [mock.call('foo', context=self.context),
             mock.call('bar', context=self.context)])

    @mock.patch.object(neutron, 'update_neutron_port')
    def test__bind_flat_ports_set_binding_host_id_raise(self, update_mock):
        update_mock.side_effect = (neutron_exceptions.ConnectionFailed())
        extra = {'vif_port_id': 'foo'}
        utils.create_test_port(self.context, node_id=self.node.id,
                               address='52:54:00:cf:2d:33', extra=extra,
                               uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.NetworkError,
                              self.interface._bind_flat_ports, task)

    @mock.patch.object(flat_interface.FlatNetwork, '_bind_flat_ports')
    def test_add_rescuing_network(self, bind_mock):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.add_rescuing_network(task)
            bind_mock.assert_called_once_with(task)

    @mock.patch.object(flat_interface.FlatNetwork, '_unbind_flat_ports')
    def test_remove_rescuing_network(self, unbind_mock):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.remove_rescuing_network(task)
            unbind_mock.assert_called_once_with(task)

    @mock.patch.object(flat_interface.FlatNetwork, '_bind_flat_ports')
    def test_add_provisioning_network(self, bind_mock):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.add_provisioning_network(task)
            bind_mock.assert_called_once_with(task)

    @mock.patch.object(flat_interface.FlatNetwork, '_unbind_flat_ports')
    def test_remove_provisioning_network(self, unbind_mock):
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.remove_provisioning_network(task)
            unbind_mock.assert_called_once_with(task)

    @mock.patch.object(neutron, 'validate_network',
                       side_effect=lambda n, t, context=None: n)
    @mock.patch.object(neutron, 'add_ports_to_network')
    @mock.patch.object(neutron, 'rollback_ports')
    def test_add_inspection_network(self, rollback_mock, add_mock,
                                    validate_mock):
        add_mock.return_value = {self.port.uuid: 'vif-port-id'}
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.add_inspection_network(task)
            rollback_mock.assert_called_once_with(
                task, CONF.neutron.inspection_network)
            add_mock.assert_called_once_with(
                task, CONF.neutron.inspection_network)
            validate_mock.assert_called_once_with(
                CONF.neutron.inspection_network, 'inspection network',
                context=task.context)
        self.port.refresh()
        self.assertEqual('vif-port-id',
                         self.port.internal_info['inspection_vif_port_id'])

    @mock.patch.object(neutron, 'validate_network',
                       side_effect=lambda n, t, context=None: n)
    @mock.patch.object(neutron, 'add_ports_to_network')
    @mock.patch.object(neutron, 'rollback_ports')
    def test_add_inspection_network_from_node(self, rollback_mock, add_mock,
                                              validate_mock):
        add_mock.return_value = {self.port.uuid: 'vif-port-id'}
        # Make sure that changing the network UUID works
        for inspection_network_uuid in [
                '3aea0de6-4b92-44da-9aa0-52d134c83fdf',
                '438be438-6aae-4fb1-bbcb-613ad7a38286']:
            driver_info = self.node.driver_info
            driver_info['inspection_network'] = inspection_network_uuid
            self.node.driver_info = driver_info
            self.node.save()
            with task_manager.acquire(self.context, self.node.id) as task:
                self.interface.add_inspection_network(task)
                rollback_mock.assert_called_with(
                    task, inspection_network_uuid)
                add_mock.assert_called_with(task, inspection_network_uuid)
                validate_mock.assert_called_with(
                    inspection_network_uuid, 'inspection network',
                    context=task.context)
            self.port.refresh()
            self.assertEqual(
                'vif-port-id',
                self.port.internal_info['inspection_vif_port_id'])

    @mock.patch.object(neutron, 'validate_network',
                       side_effect=lambda n, t, context=None: n)
    def test_validate_inspection(self, validate_mock):
        inspection_network_uuid = '3aea0de6-4b92-44da-9aa0-52d134c83fdf'
        driver_info = self.node.driver_info
        driver_info['inspection_network'] = inspection_network_uuid
        self.node.driver_info = driver_info
        self.node.save()
        with task_manager.acquire(self.context, self.node.id) as task:
            self.interface.validate_inspection(task)
            validate_mock.assert_called_once_with(
                inspection_network_uuid, 'inspection network',
                context=task.context)

    def test_validate_inspection_exc(self):
        self.config(inspection_network="", group='neutron')
        with task_manager.acquire(self.context, self.node.id) as task:
            self.assertRaises(exception.UnsupportedDriverExtension,
                              self.interface.validate_inspection, task)

ironic-15.0.0/ironic/tests/unit/drivers/modules/__init__.py
ironic-15.0.0/ironic/tests/unit/common/test_utils.py
# Copyright 2011 Justin Santa Barbara
# Copyright
2012 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import errno import os import os.path import shutil import tempfile import jinja2 import mock from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import netutils from ironic.common import exception from ironic.common import utils from ironic.tests import base CONF = cfg.CONF class BareMetalUtilsTestCase(base.TestCase): def test_create_link(self): with mock.patch.object(os, "symlink", autospec=True) as symlink_mock: symlink_mock.return_value = None utils.create_link_without_raise("/fake/source", "/fake/link") symlink_mock.assert_called_once_with("/fake/source", "/fake/link") def test_create_link_EEXIST(self): with mock.patch.object(os, "symlink", autospec=True) as symlink_mock: symlink_mock.side_effect = OSError(errno.EEXIST) utils.create_link_without_raise("/fake/source", "/fake/link") symlink_mock.assert_called_once_with("/fake/source", "/fake/link") class ExecuteTestCase(base.TestCase): @mock.patch.object(processutils, 'execute', autospec=True) @mock.patch.object(os.environ, 'copy', return_value={}, autospec=True) def test_execute_use_standard_locale_no_env_variables(self, env_mock, execute_mock): utils.execute('foo', use_standard_locale=True) execute_mock.assert_called_once_with('foo', env_variables={'LC_ALL': 'C'}) @mock.patch.object(processutils, 'execute', autospec=True) def test_execute_use_standard_locale_with_env_variables(self, 
execute_mock): utils.execute('foo', use_standard_locale=True, env_variables={'foo': 'bar'}) execute_mock.assert_called_once_with('foo', env_variables={'LC_ALL': 'C', 'foo': 'bar'}) @mock.patch.object(processutils, 'execute', autospec=True) def test_execute_not_use_standard_locale(self, execute_mock): utils.execute('foo', use_standard_locale=False, env_variables={'foo': 'bar'}) execute_mock.assert_called_once_with('foo', env_variables={'foo': 'bar'}) def test_execute_get_root_helper(self): with mock.patch.object( processutils, 'execute', autospec=True) as execute_mock: helper = utils._get_root_helper() utils.execute('foo', run_as_root=True) execute_mock.assert_called_once_with('foo', run_as_root=True, root_helper=helper) def test_execute_without_root_helper(self): with mock.patch.object( processutils, 'execute', autospec=True) as execute_mock: utils.execute('foo', run_as_root=False) execute_mock.assert_called_once_with('foo', run_as_root=False) class GenericUtilsTestCase(base.TestCase): @mock.patch.object(utils, 'hashlib', autospec=True) def test__get_hash_object(self, hashlib_mock): algorithms_available = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') hashlib_mock.algorithms_guaranteed = algorithms_available hashlib_mock.algorithms = algorithms_available # | WHEN | utils._get_hash_object('md5') utils._get_hash_object('sha1') utils._get_hash_object('sha224') utils._get_hash_object('sha256') utils._get_hash_object('sha384') utils._get_hash_object('sha512') # | THEN | calls = [mock.call.md5(), mock.call.sha1(), mock.call.sha224(), mock.call.sha256(), mock.call.sha384(), mock.call.sha512()] hashlib_mock.assert_has_calls(calls) def test__get_hash_object_throws_for_invalid_or_unsupported_hash_name( self): # | WHEN | & | THEN | self.assertRaises(exception.InvalidParameterValue, utils._get_hash_object, 'hickory-dickory-dock') def test_file_has_content_equal(self): data = b'Mary had a little lamb, its fleece as white as snow' ref = data with 
mock.patch('oslo_utils.fileutils.open', mock.mock_open(read_data=data)) as mopen: self.assertTrue(utils.file_has_content('foo', ref)) mopen.assert_called_once_with('foo', 'rb') def test_file_has_content_equal_not_binary(self): data = ('Mary had a little lamb, its fleece as white as ' 'sno\u0449').encode('utf-8') ref = data with mock.patch('oslo_utils.fileutils.open', mock.mock_open(read_data=data)) as mopen: self.assertTrue(utils.file_has_content('foo', ref)) mopen.assert_called_once_with('foo', 'rb') def test_file_has_content_differ(self): data = b'Mary had a little lamb, its fleece as white as snow' ref = data + b'!' with mock.patch('oslo_utils.fileutils.open', mock.mock_open(read_data=data)) as mopen: self.assertFalse(utils.file_has_content('foo', ref)) mopen.assert_called_once_with('foo', 'rb') def test_is_valid_datapath_id(self): self.assertTrue(utils.is_valid_datapath_id("525400cf2d319fdf")) self.assertTrue(utils.is_valid_datapath_id("525400CF2D319FDF")) self.assertFalse(utils.is_valid_datapath_id("52")) self.assertFalse(utils.is_valid_datapath_id("52:54:00:cf:2d:31")) self.assertFalse(utils.is_valid_datapath_id("notadatapathid00")) self.assertFalse(utils.is_valid_datapath_id("5525400CF2D319FDF")) def test_is_hostname_safe(self): self.assertTrue(utils.is_hostname_safe('spam')) self.assertFalse(utils.is_hostname_safe('spAm')) self.assertFalse(utils.is_hostname_safe('SPAM')) self.assertFalse(utils.is_hostname_safe('-spam')) self.assertFalse(utils.is_hostname_safe('spam-')) self.assertTrue(utils.is_hostname_safe('spam-eggs')) self.assertFalse(utils.is_hostname_safe('spam_eggs')) self.assertFalse(utils.is_hostname_safe('spam eggs')) self.assertTrue(utils.is_hostname_safe('spam.eggs')) self.assertTrue(utils.is_hostname_safe('9spam')) self.assertTrue(utils.is_hostname_safe('spam7')) self.assertTrue(utils.is_hostname_safe('br34kf4st')) self.assertFalse(utils.is_hostname_safe('$pam')) self.assertFalse(utils.is_hostname_safe('egg$')) 
self.assertFalse(utils.is_hostname_safe('spam#eggs')) self.assertFalse(utils.is_hostname_safe(' eggs')) self.assertFalse(utils.is_hostname_safe('spam ')) self.assertTrue(utils.is_hostname_safe('s')) self.assertTrue(utils.is_hostname_safe('s' * 63)) self.assertFalse(utils.is_hostname_safe('s' * 64)) self.assertFalse(utils.is_hostname_safe('')) self.assertFalse(utils.is_hostname_safe(None)) # Need to ensure a binary response for success or fail self.assertIsNotNone(utils.is_hostname_safe('spam')) self.assertIsNotNone(utils.is_hostname_safe('-spam')) self.assertTrue(utils.is_hostname_safe('www.rackspace.com')) self.assertTrue(utils.is_hostname_safe('www.rackspace.com.')) self.assertTrue(utils.is_hostname_safe('http._sctp.www.example.com')) self.assertTrue(utils.is_hostname_safe('mail.pets_r_us.net')) self.assertTrue(utils.is_hostname_safe('mail-server-15.my_host.org')) self.assertFalse(utils.is_hostname_safe('www.nothere.com_')) self.assertFalse(utils.is_hostname_safe('www.nothere_.com')) self.assertFalse(utils.is_hostname_safe('www..nothere.com')) long_str = 'a' * 63 + '.' + 'b' * 63 + '.' + 'c' * 63 + '.' + 'd' * 63 self.assertTrue(utils.is_hostname_safe(long_str)) self.assertFalse(utils.is_hostname_safe(long_str + '.')) self.assertFalse(utils.is_hostname_safe('a' * 255)) def test_is_valid_logical_name(self): valid = ( 'spam', 'spAm', 'SPAM', 'spam-eggs', 'spam.eggs', 'spam_eggs', 'spam~eggs', '9spam', 'spam7', '~spam', '.spam', '.~-_', '~', 'br34kf4st', 's', 's' * 63, 's' * 255) invalid = ( ' ', 'spam eggs', '$pam', 'egg$', 'spam#eggs', ' eggs', 'spam ', '', None, 'spam%20') for hostname in valid: result = utils.is_valid_logical_name(hostname) # Need to ensure a binary response for success. assertTrue # is too generous, and would pass this test if, for # instance, a regex Match object were returned. 
            self.assertIs(result, True,
                          "%s is unexpectedly invalid" % hostname)
        for hostname in invalid:
            result = utils.is_valid_logical_name(hostname)
            # Need to ensure a binary response for success. assertFalse is
            # too generous and would pass this test if None were returned.
            self.assertIs(result, False,
                          "%s is unexpectedly valid" % hostname)

    def test_validate_and_normalize_mac(self):
        mac = 'AA:BB:CC:DD:EE:FF'
        with mock.patch.object(netutils, 'is_valid_mac',
                               autospec=True) as m_mock:
            m_mock.return_value = True
            self.assertEqual(mac.lower(),
                             utils.validate_and_normalize_mac(mac))

    def test_validate_and_normalize_datapath_id(self):
        datapath_id = 'AA:BB:CC:DD:EE:FF'
        with mock.patch.object(utils, 'is_valid_datapath_id',
                               autospec=True) as m_mock:
            m_mock.return_value = True
            self.assertEqual(datapath_id.lower(),
                             utils.validate_and_normalize_datapath_id(
                                 datapath_id))

    def test_validate_and_normalize_mac_invalid_format(self):
        with mock.patch.object(netutils, 'is_valid_mac',
                               autospec=True) as m_mock:
            m_mock.return_value = False
            self.assertRaises(exception.InvalidMAC,
                              utils.validate_and_normalize_mac,
                              'invalid-mac')

    def test_safe_rstrip(self):
        value = '/test/'
        rstripped_value = '/test'
        not_rstripped = '/'
        self.assertEqual(rstripped_value, utils.safe_rstrip(value, '/'))
        self.assertEqual(not_rstripped, utils.safe_rstrip(not_rstripped, '/'))

    def test_safe_rstrip_not_raises_exceptions(self):
        # Supplying an integer should normally raise an exception because it
        # does not have the rstrip() method.
        value = 10
        # In the case of raising an exception safe_rstrip() should return the
        # original value.
self.assertEqual(value, utils.safe_rstrip(value)) @mock.patch.object(os.path, 'getmtime', return_value=1439465889.4964755, autospec=True) def test_unix_file_modification_datetime(self, mtime_mock): expected = datetime.datetime(2015, 8, 13, 11, 38, 9, 496475) self.assertEqual(expected, utils.unix_file_modification_datetime('foo')) mtime_mock.assert_called_once_with('foo') def test_is_valid_no_proxy(self): # Valid values for 'no_proxy' valid_no_proxy = [ ('a' * 63 + '.' + '0' * 63 + '.c.' + 'd' * 61 + '.' + 'e' * 61), ('A' * 63 + '.' + '0' * 63 + '.C.' + 'D' * 61 + '.' + 'E' * 61), ('.' + 'a' * 62 + '.' + '0' * 62 + '.c.' + 'd' * 61 + '.' + 'e' * 61), ',,example.com:3128,', '192.168.1.1', # IP should be valid ] # Test each one individually, so if failure easier to determine which # one failed. for no_proxy in valid_no_proxy: self.assertTrue( utils.is_valid_no_proxy(no_proxy), msg="'no_proxy' value should be valid: {}".format(no_proxy)) # Test valid when joined together self.assertTrue(utils.is_valid_no_proxy(','.join(valid_no_proxy))) # Test valid when joined together with whitespace self.assertTrue(utils.is_valid_no_proxy(' , '.join(valid_no_proxy))) # empty string should also be valid self.assertTrue(utils.is_valid_no_proxy('')) # Invalid values for 'no_proxy' invalid_no_proxy = [ ('A' * 64 + '.' + '0' * 63 + '.C.' + 'D' * 61 + '.' + 'E' * 61), # too long (> 253) ('a' * 100), 'a..com', ('.' + 'a' * 63 + '.' + '0' * 62 + '.c.' + 'd' * 61 + '.' + 'e' * 61), # too long (> 251 after deleting .) ('*.' + 'a' * 60 + '.' + '0' * 60 + '.c.' + 'd' * 61 + '.' + 'e' * 61), # starts with *. 
'c.-a.com', 'c.a-.com', ] for no_proxy in invalid_no_proxy: self.assertFalse( utils.is_valid_no_proxy(no_proxy), msg="'no_proxy' value should be invalid: {}".format(no_proxy)) @mock.patch.object(utils, 'LOG', autospec=True) def test_warn_about_deprecated_extra_vif_port_id(self, mock_log): # Set variable to default value utils.warn_deprecated_extra_vif_port_id = False utils.warn_about_deprecated_extra_vif_port_id() utils.warn_about_deprecated_extra_vif_port_id() self.assertEqual(1, mock_log.warning.call_count) self.assertIn("extra['vif_port_id'] is deprecated and will not", mock_log.warning.call_args[0][0]) class TempFilesTestCase(base.TestCase): def test_tempdir(self): dirname = None with utils.tempdir() as tempdir: self.assertTrue(os.path.isdir(tempdir)) dirname = tempdir self.assertFalse(os.path.exists(dirname)) @mock.patch.object(shutil, 'rmtree', autospec=True) @mock.patch.object(tempfile, 'mkdtemp', autospec=True) def test_tempdir_mocked(self, mkdtemp_mock, rmtree_mock): self.config(tempdir='abc') mkdtemp_mock.return_value = 'temp-dir' kwargs = {'dir': 'b'} with utils.tempdir(**kwargs) as tempdir: self.assertEqual('temp-dir', tempdir) tempdir_created = tempdir mkdtemp_mock.assert_called_once_with(**kwargs) rmtree_mock.assert_called_once_with(tempdir_created) @mock.patch.object(utils, 'LOG', autospec=True) @mock.patch.object(shutil, 'rmtree', autospec=True) @mock.patch.object(tempfile, 'mkdtemp', autospec=True) def test_tempdir_mocked_error_on_rmtree(self, mkdtemp_mock, rmtree_mock, log_mock): self.config(tempdir='abc') mkdtemp_mock.return_value = 'temp-dir' rmtree_mock.side_effect = OSError with utils.tempdir() as tempdir: self.assertEqual('temp-dir', tempdir) tempdir_created = tempdir rmtree_mock.assert_called_once_with(tempdir_created) self.assertTrue(log_mock.error.called) @mock.patch.object(os.path, 'exists', autospec=True) @mock.patch.object(utils, '_check_dir_writable', autospec=True) @mock.patch.object(utils, '_check_dir_free_space', autospec=True) def 
test_check_dir_with_pass_in(self, mock_free_space, mock_dir_writable, mock_exists): mock_exists.return_value = True # test passing in a directory and size utils.check_dir(directory_to_check='/fake/path', required_space=5) mock_exists.assert_called_once_with('/fake/path') mock_dir_writable.assert_called_once_with('/fake/path') mock_free_space.assert_called_once_with('/fake/path', 5) @mock.patch.object(utils, '_check_dir_writable', autospec=True) @mock.patch.object(utils, '_check_dir_free_space', autospec=True) def test_check_dir_no_dir(self, mock_free_space, mock_dir_writable): self.config(tempdir='/fake/path') # NOTE(dtantsur): self.config uses os.path.exists, so we cannot mock # on the method level. with mock.patch.object(os.path, 'exists', autospec=True) as mock_exists: mock_exists.return_value = False self.assertRaises(exception.PathNotFound, utils.check_dir) mock_exists.assert_called_once_with(CONF.tempdir) self.assertFalse(mock_free_space.called) self.assertFalse(mock_dir_writable.called) @mock.patch.object(utils, '_check_dir_writable', autospec=True) @mock.patch.object(utils, '_check_dir_free_space', autospec=True) def test_check_dir_ok(self, mock_free_space, mock_dir_writable): self.config(tempdir='/fake/path') # NOTE(dtantsur): self.config uses os.path.exists, so we cannot mock # on the method level. 
with mock.patch.object(os.path, 'exists', autospec=True) as mock_exists: mock_exists.return_value = True utils.check_dir() mock_exists.assert_called_once_with(CONF.tempdir) mock_dir_writable.assert_called_once_with(CONF.tempdir) mock_free_space.assert_called_once_with(CONF.tempdir, 1) @mock.patch.object(os, 'access', autospec=True) def test__check_dir_writable_ok(self, mock_access): mock_access.return_value = True self.assertIsNone(utils._check_dir_writable("/fake/path")) mock_access.assert_called_once_with("/fake/path", os.W_OK) @mock.patch.object(os, 'access', autospec=True) def test__check_dir_writable_not_writable(self, mock_access): mock_access.return_value = False self.assertRaises(exception.DirectoryNotWritable, utils._check_dir_writable, "/fake/path") mock_access.assert_called_once_with("/fake/path", os.W_OK) @mock.patch.object(os, 'statvfs', autospec=True) def test__check_dir_free_space_ok(self, mock_stat): statvfs_mock_return = mock.MagicMock() statvfs_mock_return.f_bsize = 5 statvfs_mock_return.f_frsize = 0 statvfs_mock_return.f_blocks = 0 statvfs_mock_return.f_bfree = 0 statvfs_mock_return.f_bavail = 1024 * 1024 statvfs_mock_return.f_files = 0 statvfs_mock_return.f_ffree = 0 statvfs_mock_return.f_favail = 0 statvfs_mock_return.f_flag = 0 statvfs_mock_return.f_namemax = 0 mock_stat.return_value = statvfs_mock_return utils._check_dir_free_space("/fake/path") mock_stat.assert_called_once_with("/fake/path") @mock.patch.object(os, 'statvfs', autospec=True) def test_check_dir_free_space_raises(self, mock_stat): statvfs_mock_return = mock.MagicMock() statvfs_mock_return.f_bsize = 1 statvfs_mock_return.f_frsize = 0 statvfs_mock_return.f_blocks = 0 statvfs_mock_return.f_bfree = 0 statvfs_mock_return.f_bavail = 1024 statvfs_mock_return.f_files = 0 statvfs_mock_return.f_ffree = 0 statvfs_mock_return.f_favail = 0 statvfs_mock_return.f_flag = 0 statvfs_mock_return.f_namemax = 0 mock_stat.return_value = statvfs_mock_return 
self.assertRaises(exception.InsufficientDiskSpace, utils._check_dir_free_space, "/fake/path") mock_stat.assert_called_once_with("/fake/path") class GetUpdatedCapabilitiesTestCase(base.TestCase): def test_get_updated_capabilities(self): capabilities = {'ilo_firmware_version': 'xyz'} cap_string = 'ilo_firmware_version:xyz' cap_returned = utils.get_updated_capabilities(None, capabilities) self.assertEqual(cap_string, cap_returned) self.assertIsInstance(cap_returned, str) def test_get_updated_capabilities_multiple_keys(self): capabilities = {'ilo_firmware_version': 'xyz', 'foo': 'bar', 'somekey': 'value'} cap_string = 'ilo_firmware_version:xyz,foo:bar,somekey:value' cap_returned = utils.get_updated_capabilities(None, capabilities) set1 = set(cap_string.split(',')) set2 = set(cap_returned.split(',')) self.assertEqual(set1, set2) self.assertIsInstance(cap_returned, str) def test_get_updated_capabilities_invalid_capabilities(self): capabilities = 'ilo_firmware_version' self.assertRaises(ValueError, utils.get_updated_capabilities, capabilities, {}) def test_get_updated_capabilities_capabilities_not_dict(self): capabilities = ['ilo_firmware_version:xyz', 'foo:bar'] self.assertRaises(ValueError, utils.get_updated_capabilities, None, capabilities) def test_get_updated_capabilities_add_to_existing_capabilities(self): new_capabilities = {'BootMode': 'uefi'} expected_capabilities = 'BootMode:uefi,foo:bar' cap_returned = utils.get_updated_capabilities('foo:bar', new_capabilities) set1 = set(expected_capabilities.split(',')) set2 = set(cap_returned.split(',')) self.assertEqual(set1, set2) self.assertIsInstance(cap_returned, str) def test_get_updated_capabilities_replace_to_existing_capabilities(self): new_capabilities = {'BootMode': 'bios'} expected_capabilities = 'BootMode:bios' cap_returned = utils.get_updated_capabilities('BootMode:uefi', new_capabilities) set1 = set(expected_capabilities.split(',')) set2 = set(cap_returned.split(',')) self.assertEqual(set1, set2) 
self.assertIsInstance(cap_returned, str) def test_validate_network_port(self): port = utils.validate_network_port('0', 'message') self.assertEqual(0, port) port = utils.validate_network_port('65535') self.assertEqual(65535, port) def test_validate_network_port_fail(self): self.assertRaisesRegex(exception.InvalidParameterValue, 'Port "65536" is not a valid port.', utils.validate_network_port, '65536') self.assertRaisesRegex(exception.InvalidParameterValue, 'fake_port "-1" is not a valid port.', utils.validate_network_port, '-1', 'fake_port') self.assertRaisesRegex(exception.InvalidParameterValue, 'Port "invalid" is not a valid port.', utils.validate_network_port, 'invalid') class JinjaTemplatingTestCase(base.TestCase): def setUp(self): super(JinjaTemplatingTestCase, self).setUp() self.template = '{{ foo }} {{ bar }}' self.params = {'foo': 'spam', 'bar': 'ham'} self.expected = 'spam ham' def test_render_string(self): self.assertEqual(self.expected, utils.render_template(self.template, self.params, is_file=False)) def test_render_with_quotes(self): """test jinja2 autoescaping for everything is disabled """ self.expected = '"spam" ham' self.params = {'foo': '"spam"', 'bar': 'ham'} self.assertEqual(self.expected, utils.render_template(self.template, self.params, is_file=False)) @mock.patch('ironic.common.utils.jinja2.FileSystemLoader', autospec=True) def test_render_file(self, jinja_fsl_mock): path = '/path/to/template.j2' jinja_fsl_mock.return_value = jinja2.DictLoader( {'template.j2': self.template}) self.assertEqual(self.expected, utils.render_template(path, self.params)) jinja_fsl_mock.assert_called_once_with('/path/to') class ValidateConductorGroupTestCase(base.TestCase): def test_validate_conductor_group_success(self): self.assertIsNone(utils.validate_conductor_group('foo')) self.assertIsNone(utils.validate_conductor_group('group1')) self.assertIsNone(utils.validate_conductor_group('group1.with.dot')) 
        self.assertIsNone(utils.validate_conductor_group('group1_with_under'))
        self.assertIsNone(utils.validate_conductor_group('group1-with-dash'))

    def test_validate_conductor_group_fail(self):
        self.assertRaises(exception.InvalidConductorGroup,
                          utils.validate_conductor_group, 'foo:bar')
        self.assertRaises(exception.InvalidConductorGroup,
                          utils.validate_conductor_group, 'foo*bar')
        self.assertRaises(exception.InvalidConductorGroup,
                          utils.validate_conductor_group, 'foo$bar')
        self.assertRaises(exception.InvalidConductorGroup,
                          utils.validate_conductor_group, object())
        self.assertRaises(exception.InvalidConductorGroup,
                          utils.validate_conductor_group, None)

ironic-15.0.0/ironic/tests/unit/common/test_release_mappings.py
# Copyright 2016 Intel Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_utils import versionutils

from ironic.api.controllers.v1 import versions as api_versions
from ironic.common import release_mappings
from ironic.conductor import rpcapi
from ironic.db.sqlalchemy import models
from ironic.objects import base as obj_base
from ironic.tests import base


def _check_versions_compatibility(conf_version, actual_version):
    """Checks the configured version against the actual version.

    Returns True if the configured version is <= the actual version;
    otherwise returns False.
:param conf_version: configured version, a string with dots :param actual_version: actual version, a string with dots :returns: True if the configured version is <= the actual version; False otherwise. """ conf_cap = versionutils.convert_version_to_tuple(conf_version) actual_cap = versionutils.convert_version_to_tuple(actual_version) return conf_cap <= actual_cap class ReleaseMappingsTestCase(base.TestCase): """Tests the dict release_mappings.RELEASE_MAPPING. Tests whether the dict release_mappings.RELEASE_MAPPING is correct, valid and consistent. """ def test_structure(self): for value in release_mappings.RELEASE_MAPPING.values(): self.assertIsInstance(value, dict) self.assertEqual({'api', 'rpc', 'objects'}, set(value)) self.assertIsInstance(value['api'], str) (major, minor) = value['api'].split('.') self.assertEqual(1, int(major)) self.assertLessEqual(int(minor), api_versions.MINOR_MAX_VERSION) self.assertIsInstance(value['rpc'], str) self.assertIsInstance(value['objects'], dict) for obj_value in value['objects'].values(): self.assertIsInstance(obj_value, list) for ver in obj_value: self.assertIsInstance(ver, str) tuple_ver = versionutils.convert_version_to_tuple(ver) self.assertEqual(2, len(tuple_ver)) def test_object_names_are_registered(self): registered_objects = set(obj_base.IronicObjectRegistry.obj_classes()) for mapping in release_mappings.RELEASE_MAPPING.values(): objects = set(mapping['objects']) self.assertTrue(objects.issubset(registered_objects)) def test_current_rpc_version(self): self.assertEqual(rpcapi.ConductorAPI.RPC_API_VERSION, release_mappings.RELEASE_MAPPING['master']['rpc']) def test_current_object_versions(self): registered_objects = obj_base.IronicObjectRegistry.obj_classes() obj_versions = release_mappings.get_object_versions( releases=['master']) for obj, vers in obj_versions.items(): # vers is a set of versions, not ordered self.assertIn(registered_objects[obj][0].VERSION, vers) def test_contains_all_db_objects(self): 
self.assertIn('master', release_mappings.RELEASE_MAPPING) model_names = set((s.__name__ for s in models.Base.__subclasses__())) exceptions = set(['NodeTag', 'ConductorHardwareInterfaces', 'NodeTrait', 'BIOSSetting', 'DeployTemplateStep']) # NOTE(xek): As a rule, all models which can be changed between # releases or are sent through RPC should have their counterpart # versioned objects. model_names -= exceptions # NodeTrait maps to two objects model_names |= set(['Trait', 'TraitList']) object_names = set( release_mappings.RELEASE_MAPPING['master']['objects']) self.assertEqual(model_names, object_names) def test_rpc_and_objects_versions_supported(self): registered_objects = obj_base.IronicObjectRegistry.obj_classes() for versions in release_mappings.RELEASE_MAPPING.values(): self.assertTrue(_check_versions_compatibility( versions['rpc'], rpcapi.ConductorAPI.RPC_API_VERSION)) for obj_name, obj_vers in versions['objects'].items(): for ver in obj_vers: self.assertTrue(_check_versions_compatibility( ver, registered_objects[obj_name][0].VERSION)) class GetObjectVersionsTestCase(base.TestCase): TEST_MAPPING = { '7.0': { 'api': '1.30', 'rpc': '1.40', 'objects': { 'Node': ['1.21'], 'Conductor': ['1.2'], 'Port': ['1.6'], 'Portgroup': ['1.3'], } }, '8.0': { 'api': '1.30', 'rpc': '1.40', 'objects': { 'Node': ['1.22'], 'Conductor': ['1.2'], 'Chassis': ['1.3'], 'Port': ['1.6'], 'Portgroup': ['1.5', '1.4'], } }, 'master': { 'api': '1.34', 'rpc': '1.40', 'objects': { 'Node': ['1.23'], 'Conductor': ['1.2'], 'Chassis': ['1.3'], 'Port': ['1.7'], 'Portgroup': ['1.5'], } }, } TEST_MAPPING['ocata'] = TEST_MAPPING['7.0'] def test_get_object_versions(self): with mock.patch.dict(release_mappings.RELEASE_MAPPING, self.TEST_MAPPING, clear=True): actual_versions = release_mappings.get_object_versions() expected_versions = { 'Node': set(['1.21', '1.22', '1.23']), 'Conductor': set(['1.2']), 'Chassis': set(['1.3']), 'Port': set(['1.6', '1.7']), 'Portgroup': set(['1.3', '1.4', '1.5']), } 
            self.assertEqual(expected_versions, actual_versions)

    def test_get_object_versions_releases(self):
        with mock.patch.dict(release_mappings.RELEASE_MAPPING,
                             self.TEST_MAPPING, clear=True):
            actual_versions = release_mappings.get_object_versions(
                releases=['ocata'])
            expected_versions = {
                'Node': set(['1.21']),
                'Conductor': set(['1.2']),
                'Port': set(['1.6']),
                'Portgroup': set(['1.3']),
            }
            self.assertEqual(expected_versions, actual_versions)

    def test_get_object_versions_objects(self):
        with mock.patch.dict(release_mappings.RELEASE_MAPPING,
                             self.TEST_MAPPING, clear=True):
            actual_versions = release_mappings.get_object_versions(
                objects=['Portgroup', 'Chassis'])
            expected_versions = {
                'Portgroup': set(['1.3', '1.4', '1.5']),
                'Chassis': set(['1.3']),
            }
            self.assertEqual(expected_versions, actual_versions)

    def test_get_object_versions_releases_objects(self):
        with mock.patch.dict(release_mappings.RELEASE_MAPPING,
                             self.TEST_MAPPING, clear=True):
            actual_versions = release_mappings.get_object_versions(
                releases=['7.0'], objects=['Portgroup', 'Chassis'])
            expected_versions = {
                'Portgroup': set(['1.3']),
            }
            self.assertEqual(expected_versions, actual_versions)

ironic-15.0.0/ironic/tests/unit/common/test_pxe_utils.py
#
# Copyright 2014 Rackspace, Inc
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os import tempfile from ironic_lib import utils as ironic_utils import mock from oslo_config import cfg from oslo_utils import fileutils from oslo_utils import uuidutils from ironic.common import exception from ironic.common.glance_service import image_service from ironic.common import pxe_utils from ironic.common import states from ironic.common import utils from ironic.conductor import task_manager from ironic.drivers.modules import deploy_utils from ironic.drivers.modules import ipxe from ironic.drivers.modules import pxe from ironic.tests.unit.db import base as db_base from ironic.tests.unit.db import utils as db_utils from ironic.tests.unit.objects import utils as object_utils CONF = cfg.CONF INST_INFO_DICT = db_utils.get_test_pxe_instance_info() DRV_INFO_DICT = db_utils.get_test_pxe_driver_info() DRV_INTERNAL_INFO_DICT = db_utils.get_test_pxe_driver_internal_info() # Prevent /httpboot validation on creating the node @mock.patch('ironic.drivers.modules.pxe.PXEBoot.__init__', lambda self: None) class TestPXEUtils(db_base.DbTestCase): def setUp(self): super(TestPXEUtils, self).setUp() self.pxe_options = { 'deployment_aki_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-' u'c02d7f33c123/deploy_kernel', 'aki_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/' u'kernel', 'ari_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7f33c123/' u'ramdisk', 'pxe_append_params': 'test_param', 'deployment_ari_path': u'/tftpboot/1be26c0b-03f2-4d2e-ae87-c02d7' u'f33c123/deploy_ramdisk', 'ipa-api-url': 'http://192.168.122.184:6385', 'ipxe_timeout': 0, 'ramdisk_opts': 'ramdisk_param', } self.ipxe_options = self.pxe_options.copy() self.ipxe_options.update({ 'deployment_aki_path': 'http://1.2.3.4:1234/deploy_kernel', 'deployment_ari_path': 'http://1.2.3.4:1234/deploy_ramdisk', 'aki_path': 'http://1.2.3.4:1234/kernel', 'ari_path': 'http://1.2.3.4:1234/ramdisk', 'initrd_filename': 'deploy_ramdisk', }) self.ipxe_options_timeout = self.ipxe_options.copy() 
        self.ipxe_options_timeout.update({
            'ipxe_timeout': 120
        })

        self.ipxe_options_boot_from_volume_no_extra_volume = \
            self.ipxe_options.copy()
        self.ipxe_options_boot_from_volume_no_extra_volume.update({
            'boot_from_volume': True,
            'iscsi_boot_url': 'iscsi:fake_host::3260:0:fake_iqn',
            'iscsi_initiator_iqn': 'fake_iqn',
            'iscsi_volumes': [],
            'username': 'fake_username',
            'password': 'fake_password',
        })

        self.ipxe_options_boot_from_volume_extra_volume = \
            self.ipxe_options.copy()
        self.ipxe_options_boot_from_volume_extra_volume.update({
            'boot_from_volume': True,
            'iscsi_boot_url': 'iscsi:fake_host::3260:0:fake_iqn',
            'iscsi_initiator_iqn': 'fake_iqn',
            'iscsi_volumes': [{'url': 'iscsi:fake_host::3260:1:fake_iqn',
                               'username': 'fake_username_1',
                               'password': 'fake_password_1',
                               }],
            'username': 'fake_username',
            'password': 'fake_password',
        })

        self.ipxe_options_boot_from_volume_no_extra_volume.pop(
            'initrd_filename', None)
        self.ipxe_options_boot_from_volume_extra_volume.pop(
            'initrd_filename', None)

        self.node = object_utils.create_test_node(self.context)

    def test_default_pxe_config(self):
        rendered_template = utils.render_template(
            CONF.pxe.pxe_config_template,
            {'pxe_options': self.pxe_options,
             'ROOT': '{{ ROOT }}',
             'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'})

        with open('ironic/tests/unit/drivers/pxe_config.template') as f:
            expected_template = f.read().rstrip()

        self.assertEqual(str(expected_template), rendered_template)

    def test_default_ipxe_boot_script(self):
        rendered_template = utils.render_template(
            CONF.pxe.ipxe_boot_script,
            {'ipxe_for_mac_uri': 'pxelinux.cfg/'})

        with open('ironic/tests/unit/drivers/boot.ipxe') as f:
            expected_template = f.read().rstrip()

        self.assertEqual(str(expected_template), rendered_template)

    def test_default_ipxe_config(self):
        # NOTE(lucasagomes): iPXE is just an extension of the PXE driver,
        # it doesn't have its own configuration option for template.
        # More info:
        # https://docs.openstack.org/ironic/latest/install/
        self.config(
            pxe_config_template='ironic/drivers/modules/ipxe_config.template',
            group='pxe'
        )
        self.config(http_url='http://1.2.3.4:1234', group='deploy')
        rendered_template = utils.render_template(
            CONF.pxe.pxe_config_template,
            {'pxe_options': self.ipxe_options,
             'ROOT': '{{ ROOT }}',
             'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'})

        templ_file = 'ironic/tests/unit/drivers/ipxe_config.template'
        with open(templ_file) as f:
            expected_template = f.read().rstrip()

        self.assertEqual(str(expected_template), rendered_template)

    def test_default_ipxe_timeout_config(self):
        # NOTE(lucasagomes): iPXE is just an extension of the PXE driver,
        # it doesn't have its own configuration option for template.
        # More info:
        # https://docs.openstack.org/ironic/latest/install/
        self.config(
            pxe_config_template='ironic/drivers/modules/ipxe_config.template',
            group='pxe'
        )
        self.config(http_url='http://1.2.3.4:1234', group='deploy')
        rendered_template = utils.render_template(
            CONF.pxe.pxe_config_template,
            {'pxe_options': self.ipxe_options_timeout,
             'ROOT': '{{ ROOT }}',
             'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'})

        templ_file = 'ironic/tests/unit/drivers/ipxe_config_timeout.template'
        with open(templ_file) as f:
            expected_template = f.read().rstrip()

        self.assertEqual(str(expected_template), rendered_template)

    def test_default_ipxe_boot_from_volume_config(self):
        self.config(
            pxe_config_template='ironic/drivers/modules/ipxe_config.template',
            group='pxe'
        )
        self.config(http_url='http://1.2.3.4:1234', group='deploy')
        rendered_template = utils.render_template(
            CONF.pxe.pxe_config_template,
            {'pxe_options': self.ipxe_options_boot_from_volume_extra_volume,
             'ROOT': '{{ ROOT }}',
             'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'})

        templ_file = 'ironic/tests/unit/drivers/' \
                     'ipxe_config_boot_from_volume_extra_volume.template'
        with open(templ_file) as f:
            expected_template = f.read().rstrip()
        self.assertEqual(str(expected_template), rendered_template)

    def test_default_ipxe_boot_from_volume_config_no_extra_volumes(self):
        self.config(
            pxe_config_template='ironic/drivers/modules/ipxe_config.template',
            group='pxe'
        )
        self.config(http_url='http://1.2.3.4:1234', group='deploy')

        pxe_options = self.ipxe_options_boot_from_volume_no_extra_volume
        pxe_options['iscsi_volumes'] = []

        rendered_template = utils.render_template(
            CONF.pxe.pxe_config_template,
            {'pxe_options': pxe_options,
             'ROOT': '{{ ROOT }}',
             'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'})

        templ_file = 'ironic/tests/unit/drivers/' \
                     'ipxe_config_boot_from_volume_no_extra_volumes.template'
        with open(templ_file) as f:
            expected_template = f.read().rstrip()
        self.assertEqual(str(expected_template), rendered_template)

    def test_default_grub_config(self):
        pxe_opts = self.pxe_options
        pxe_opts['boot_mode'] = 'uefi'
        pxe_opts['tftp_server'] = '192.0.2.1'
        rendered_template = utils.render_template(
            CONF.pxe.uefi_pxe_config_template,
            {'pxe_options': pxe_opts,
             'ROOT': '(( ROOT ))',
             'DISK_IDENTIFIER': '(( DISK_IDENTIFIER ))'})

        templ_file = 'ironic/tests/unit/drivers/pxe_grub_config.template'
        with open(templ_file) as f:
            expected_template = f.read().rstrip()

        self.assertEqual(str(expected_template), rendered_template)

    @mock.patch('ironic.common.utils.create_link_without_raise',
                autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    def test__write_mac_pxe_configs(self, unlink_mock, create_link_mock):
        port_1 = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            address='11:22:33:44:55:66', uuid=uuidutils.generate_uuid())
        port_2 = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            address='11:22:33:44:55:67', uuid=uuidutils.generate_uuid())
        create_link_calls = [
            mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/pxelinux.cfg/01-11-22-33-44-55-66'),
            mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/11:22:33:44:55:66.conf'),
            mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/pxelinux.cfg/01-11-22-33-44-55-67'),
            mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/11:22:33:44:55:67.conf')
        ]
        unlink_calls = [
            mock.call('/tftpboot/pxelinux.cfg/01-11-22-33-44-55-66'),
            mock.call('/tftpboot/11:22:33:44:55:66.conf'),
            mock.call('/tftpboot/pxelinux.cfg/01-11-22-33-44-55-67'),
            mock.call('/tftpboot/11:22:33:44:55:67.conf')
        ]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.ports = [port_1, port_2]
            pxe_utils._link_mac_pxe_configs(task)

        unlink_mock.assert_has_calls(unlink_calls)
        create_link_mock.assert_has_calls(create_link_calls)

    @mock.patch('ironic.common.utils.create_link_without_raise',
                autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    def test__write_infiniband_mac_pxe_configs(
            self, unlink_mock, create_link_mock):
        client_id1 = (
            '20:00:55:04:01:fe:80:00:00:00:00:00:00:00:02:c9:02:00:23:13:92')
        port_1 = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            address='11:22:33:44:55:66', uuid=uuidutils.generate_uuid(),
            extra={'client-id': client_id1})
        client_id2 = (
            '20:00:55:04:01:fe:80:00:00:00:00:00:00:00:02:c9:02:00:23:45:12')
        port_2 = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            address='11:22:33:44:55:67', uuid=uuidutils.generate_uuid(),
            extra={'client-id': client_id2})
        create_link_calls = [
            mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/pxelinux.cfg/20-11-22-33-44-55-66'),
            mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/11:22:33:44:55:66.conf'),
            mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/pxelinux.cfg/20-11-22-33-44-55-67'),
            mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/tftpboot/11:22:33:44:55:67.conf')
        ]
        unlink_calls = [
            mock.call('/tftpboot/pxelinux.cfg/20-11-22-33-44-55-66'),
            mock.call('/tftpboot/11:22:33:44:55:66.conf'),
            mock.call('/tftpboot/pxelinux.cfg/20-11-22-33-44-55-67'),
            mock.call('/tftpboot/11:22:33:44:55:67.conf')
        ]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.ports = [port_1, port_2]
            pxe_utils._link_mac_pxe_configs(task)

        unlink_mock.assert_has_calls(unlink_calls)
        create_link_mock.assert_has_calls(create_link_calls)

    @mock.patch('ironic.common.utils.create_link_without_raise',
                autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    def test__write_mac_ipxe_configs(self, unlink_mock, create_link_mock):
        port_1 = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            address='11:22:33:44:55:66', uuid=uuidutils.generate_uuid())
        port_2 = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            address='11:22:33:44:55:67', uuid=uuidutils.generate_uuid())
        create_link_calls = [
            mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/httpboot/pxelinux.cfg/11-22-33-44-55-66'),
            mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/httpboot/11:22:33:44:55:66.conf'),
            mock.call(u'../1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/httpboot/pxelinux.cfg/11-22-33-44-55-67'),
            mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      '/httpboot/11:22:33:44:55:67.conf')
        ]
        unlink_calls = [
            mock.call('/httpboot/pxelinux.cfg/11-22-33-44-55-66'),
            mock.call('/httpboot/11:22:33:44:55:66.conf'),
            mock.call('/httpboot/pxelinux.cfg/11-22-33-44-55-67'),
            mock.call('/httpboot/11:22:33:44:55:67.conf'),
        ]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.ports = [port_1, port_2]
            pxe_utils._link_mac_pxe_configs(task, ipxe_enabled=True)

        unlink_mock.assert_has_calls(unlink_calls)
        create_link_mock.assert_has_calls(create_link_calls)

    @mock.patch('ironic.common.utils.create_link_without_raise',
                autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.provider',
                autospec=True)
    def test__link_ip_address_pxe_configs(self, provider_mock, unlink_mock,
                                          create_link_mock):
        ip_address = '10.10.0.1'
        address = "aa:aa:aa:aa:aa:aa"
        object_utils.create_test_port(self.context, node_id=self.node.id,
                                      address=address)
        provider_mock.get_ip_addresses.return_value = [ip_address]
        create_link_calls = [
            mock.call(u'1be26c0b-03f2-4d2e-ae87-c02d7f33c123/config',
                      u'/tftpboot/10.10.0.1.conf'),
        ]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            pxe_utils._link_ip_address_pxe_configs(task, False)

        unlink_mock.assert_called_once_with('/tftpboot/10.10.0.1.conf')
        create_link_mock.assert_has_calls(create_link_calls)

    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True)
    def test_create_pxe_config(self, ensure_tree_mock, render_mock,
                               write_mock, chmod_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            pxe_utils.create_pxe_config(task, self.pxe_options,
                                        CONF.pxe.pxe_config_template)
            render_mock.assert_called_with(
                CONF.pxe.pxe_config_template,
                {'pxe_options': self.pxe_options,
                 'ROOT': '{{ ROOT }}',
                 'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'}
            )
        node_dir = os.path.join(CONF.pxe.tftp_root, self.node.uuid)
        pxe_dir = os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')
        ensure_calls = [
            mock.call(node_dir),
            mock.call(pxe_dir),
        ]
        ensure_tree_mock.assert_has_calls(ensure_calls)
        chmod_mock.assert_not_called()
        pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid)
        write_mock.assert_called_with(pxe_cfg_file_path,
                                      render_mock.return_value)

    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True)
    def test_create_pxe_config_set_dir_permission(self, ensure_tree_mock,
                                                  render_mock,
                                                  write_mock, chmod_mock):
        self.config(dir_permission=0o755, group='pxe')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            pxe_utils.create_pxe_config(task, self.pxe_options,
                                        CONF.pxe.pxe_config_template)
            render_mock.assert_called_with(
                CONF.pxe.pxe_config_template,
                {'pxe_options': self.pxe_options,
                 'ROOT': '{{ ROOT }}',
                 'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'}
            )
        node_dir = os.path.join(CONF.pxe.tftp_root, self.node.uuid)
        pxe_dir = os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')
        ensure_calls = [
            mock.call(node_dir),
            mock.call(pxe_dir),
        ]
        ensure_tree_mock.assert_has_calls(ensure_calls)
        chmod_calls = [mock.call(node_dir, 0o755),
                       mock.call(pxe_dir, 0o755)]
        chmod_mock.assert_has_calls(chmod_calls)
        pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid)
        write_mock.assert_called_with(pxe_cfg_file_path,
                                      render_mock.return_value)

    @mock.patch.object(os.path, 'isdir', autospec=True)
    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True)
    def test_create_pxe_config_existing_dirs(self, ensure_tree_mock,
                                             render_mock,
                                             write_mock, chmod_mock,
                                             isdir_mock):
        self.config(dir_permission=0o755, group='pxe')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            isdir_mock.return_value = True
            pxe_utils.create_pxe_config(task, self.pxe_options,
                                        CONF.pxe.pxe_config_template)
            render_mock.assert_called_with(
                CONF.pxe.pxe_config_template,
                {'pxe_options': self.pxe_options,
                 'ROOT': '{{ ROOT }}',
                 'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'}
            )
        ensure_tree_mock.assert_has_calls([])
        chmod_mock.assert_not_called()
        isdir_mock.assert_has_calls([])
        pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid)
        write_mock.assert_called_with(pxe_cfg_file_path,
                                      render_mock.return_value)

    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch('ironic.common.pxe_utils._link_ip_address_pxe_configs',
                autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True)
    def test_create_pxe_config_uefi_grub(self, ensure_tree_mock, render_mock,
                                         write_mock, link_ip_configs_mock,
                                         chmod_mock):
        grub_tmplte = "ironic/drivers/modules/pxe_grub_config.template"
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.properties['capabilities'] = 'boot_mode:uefi'
            pxe_utils.create_pxe_config(task, self.pxe_options,
                                        grub_tmplte)

            ensure_calls = [
                mock.call(os.path.join(CONF.pxe.tftp_root, self.node.uuid)),
                mock.call(os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')),
            ]
            ensure_tree_mock.assert_has_calls(ensure_calls)
            chmod_mock.assert_not_called()
            render_mock.assert_called_with(
                grub_tmplte,
                {'pxe_options': self.pxe_options,
                 'ROOT': '(( ROOT ))',
                 'DISK_IDENTIFIER': '(( DISK_IDENTIFIER ))'})
            link_ip_configs_mock.assert_called_once_with(task, False)

        pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid)
        write_mock.assert_called_with(pxe_cfg_file_path,
                                      render_mock.return_value)

    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch('ironic.common.pxe_utils._link_mac_pxe_configs',
                autospec=True)
    @mock.patch('ironic.common.pxe_utils._link_ip_address_pxe_configs',
                autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True)
    def test_create_pxe_config_uefi_mac_address(
            self, ensure_tree_mock, render_mock,
            write_mock, link_ip_configs_mock,
            link_mac_pxe_configs_mock, chmod_mock):
        # TODO(TheJulia): We should... like... fix the template to
        # enable mac address usage.....
        grub_tmplte = "ironic/drivers/modules/pxe_grub_config.template"
        self.config(dhcp_provider='none', group='dhcp')
        link_ip_configs_mock.side_effect = \
            exception.FailedToGetIPAddressOnPort(port_id='blah')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.properties['capabilities'] = 'boot_mode:uefi'
            pxe_utils.create_pxe_config(task, self.pxe_options,
                                        grub_tmplte)

            ensure_calls = [
                mock.call(os.path.join(CONF.pxe.tftp_root, self.node.uuid)),
                mock.call(os.path.join(CONF.pxe.tftp_root, 'pxelinux.cfg')),
            ]
            ensure_tree_mock.assert_has_calls(ensure_calls)
            chmod_mock.assert_not_called()
            render_mock.assert_called_with(
                grub_tmplte,
                {'pxe_options': self.pxe_options,
                 'ROOT': '(( ROOT ))',
                 'DISK_IDENTIFIER': '(( DISK_IDENTIFIER ))'})
            link_mac_pxe_configs_mock.assert_called_once_with(
                task, ipxe_enabled=False)
            link_ip_configs_mock.assert_called_once_with(task, False)

        pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(self.node.uuid)
        write_mock.assert_called_with(pxe_cfg_file_path,
                                      render_mock.return_value)

    @mock.patch.object(os, 'chmod', autospec=True)
    @mock.patch('ironic.common.pxe_utils._link_mac_pxe_configs',
                autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    @mock.patch('oslo_utils.fileutils.ensure_tree', autospec=True)
    def test_create_pxe_config_uefi_ipxe(self, ensure_tree_mock, render_mock,
                                         write_mock, link_mac_pxe_mock,
                                         chmod_mock):
        ipxe_template = "ironic/drivers/modules/ipxe_config.template"
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.properties['capabilities'] = 'boot_mode:uefi'
            pxe_utils.create_pxe_config(task, self.ipxe_options,
                                        ipxe_template, ipxe_enabled=True)

            ensure_calls = [
                mock.call(os.path.join(CONF.deploy.http_root,
                                       self.node.uuid)),
                mock.call(os.path.join(CONF.deploy.http_root,
                                       'pxelinux.cfg')),
            ]
            ensure_tree_mock.assert_has_calls(ensure_calls)
            chmod_mock.assert_not_called()
            render_mock.assert_called_with(
                ipxe_template,
                {'pxe_options': self.ipxe_options,
                 'ROOT': '{{ ROOT }}',
                 'DISK_IDENTIFIER': '{{ DISK_IDENTIFIER }}'})
            link_mac_pxe_mock.assert_called_once_with(task, ipxe_enabled=True)

        pxe_cfg_file_path = pxe_utils.get_pxe_config_file_path(
            self.node.uuid, ipxe_enabled=True)
        write_mock.assert_called_with(pxe_cfg_file_path,
                                      render_mock.return_value)

    @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    def test_clean_up_pxe_config(self, unlink_mock, rmtree_mock):
        address = "aa:aa:aa:aa:aa:aa"
        object_utils.create_test_port(self.context, node_id=self.node.id,
                                      address=address)

        with task_manager.acquire(self.context, self.node.uuid) as task:
            pxe_utils.clean_up_pxe_config(task)

        ensure_calls = [
            mock.call("/tftpboot/pxelinux.cfg/01-%s"
                      % address.replace(':', '-')),
            mock.call("/tftpboot/%s.conf" % address)
        ]
        unlink_mock.assert_has_calls(ensure_calls)
        rmtree_mock.assert_called_once_with(
            os.path.join(CONF.pxe.tftp_root, self.node.uuid))

    @mock.patch.object(os.path, 'isfile', lambda path: False)
    @mock.patch('ironic.common.utils.file_has_content', autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    def test_create_ipxe_boot_script(self, render_mock, write_mock,
                                     file_has_content_mock):
        render_mock.return_value = 'foo'
        pxe_utils.create_ipxe_boot_script()
        self.assertFalse(file_has_content_mock.called)
        write_mock.assert_called_once_with(
            os.path.join(CONF.deploy.http_root,
                         os.path.basename(CONF.pxe.ipxe_boot_script)),
            'foo')
        render_mock.assert_called_once_with(
            CONF.pxe.ipxe_boot_script,
            {'ipxe_for_mac_uri': 'pxelinux.cfg/'})

    @mock.patch.object(os.path, 'isfile', lambda path: True)
    @mock.patch('ironic.common.utils.file_has_content', autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    def test_create_ipxe_boot_script_copy_file_different(
            self, render_mock, write_mock, file_has_content_mock):
        file_has_content_mock.return_value = False
        render_mock.return_value = 'foo'
        pxe_utils.create_ipxe_boot_script()
        file_has_content_mock.assert_called_once_with(
            os.path.join(CONF.deploy.http_root,
                         os.path.basename(CONF.pxe.ipxe_boot_script)),
            'foo')
        write_mock.assert_called_once_with(
            os.path.join(CONF.deploy.http_root,
                         os.path.basename(CONF.pxe.ipxe_boot_script)),
            'foo')
        render_mock.assert_called_once_with(
            CONF.pxe.ipxe_boot_script,
            {'ipxe_for_mac_uri': 'pxelinux.cfg/'})

    @mock.patch.object(os.path, 'isfile', lambda path: True)
    @mock.patch('ironic.common.utils.file_has_content', autospec=True)
    @mock.patch('ironic.common.utils.write_to_file', autospec=True)
    @mock.patch('ironic.common.utils.render_template', autospec=True)
    def test_create_ipxe_boot_script_already_exists(self, render_mock,
                                                    write_mock,
                                                    file_has_content_mock):
        file_has_content_mock.return_value = True
        pxe_utils.create_ipxe_boot_script()
        self.assertFalse(write_mock.called)

    def test__get_pxe_mac_path(self):
        mac = '00:11:22:33:44:55:66'
        self.assertEqual('/tftpboot/pxelinux.cfg/01-00-11-22-33-44-55-66',
                         pxe_utils._get_pxe_mac_path(mac))

    def test__get_pxe_mac_path_ipxe(self):
        self.config(http_root='/httpboot', group='deploy')
        mac = '00:11:22:33:AA:BB:CC'
        self.assertEqual('/httpboot/pxelinux.cfg/00-11-22-33-aa-bb-cc',
                         pxe_utils._get_pxe_mac_path(mac, ipxe_enabled=True))

    def test__get_pxe_ip_address_path(self):
        ipaddress = '10.10.0.1'
        self.assertEqual('/tftpboot/10.10.0.1.conf',
                         pxe_utils._get_pxe_ip_address_path(ipaddress))

    def test_get_root_dir(self):
        expected_dir = '/tftproot'
        self.config(tftp_root=expected_dir, group='pxe')
        self.assertEqual(expected_dir, pxe_utils.get_root_dir())

    def test_get_pxe_config_file_path(self):
        self.assertEqual(os.path.join(CONF.pxe.tftp_root,
                                      self.node.uuid,
                                      'config'),
                         pxe_utils.get_pxe_config_file_path(self.node.uuid))

    def _dhcp_options_for_instance(self, ip_version=4):
        self.config(ip_version=ip_version, group='pxe')
        if ip_version == 4:
            self.config(tftp_server='192.0.2.1', group='pxe')
        elif ip_version == 6:
            self.config(tftp_server='ff80::1', group='pxe')
        self.config(pxe_bootfile_name='fake-bootfile', group='pxe')
        self.config(tftp_root='/tftp-path/', group='pxe')

        if ip_version == 6:
            # NOTE(TheJulia): DHCPv6 RFCs seem to indicate that the prior
            # options are not imported, although they may be supported
            # by vendors. The apparent proper option is to return a
            # URL in the field https://tools.ietf.org/html/rfc5970#section-3
            expected_info = [{'opt_name': '59',
                              'opt_value': 'tftp://[ff80::1]/fake-bootfile',
                              'ip_version': ip_version}]
        elif ip_version == 4:
            expected_info = [{'opt_name': '67',
                              'opt_value': 'fake-bootfile',
                              'ip_version': ip_version},
                             {'opt_name': '210',
                              'opt_value': '/tftp-path/',
                              'ip_version': ip_version},
                             {'opt_name': '66',
                              'opt_value': '192.0.2.1',
                              'ip_version': ip_version},
                             {'opt_name': '150',
                              'opt_value': '192.0.2.1',
                              'ip_version': ip_version},
                             {'opt_name': 'server-ip-address',
                              'opt_value': '192.0.2.1',
                              'ip_version': ip_version}
                             ]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertEqual(expected_info,
                             pxe_utils.dhcp_options_for_instance(task))

    def test_dhcp_options_for_instance(self):
        self._dhcp_options_for_instance(ip_version=4)

    def test_dhcp_options_for_instance_ipv6(self):
        self.config(tftp_server='ff80::1', group='pxe')
        self._dhcp_options_for_instance(ip_version=6)

    def _test_get_kernel_ramdisk_info(self, expected_dir, mode='deploy',
                                      ipxe_enabled=False):
        node_uuid = 'fake-node'

        driver_info = {
            '%s_kernel' % mode: 'glance://%s-kernel' % mode,
            '%s_ramdisk' % mode: 'glance://%s-ramdisk' % mode,
        }

        expected = {}
        for k, v in driver_info.items():
            expected[k] = (v, expected_dir + '/fake-node/%s' % k)
        kr_info = pxe_utils.get_kernel_ramdisk_info(node_uuid,
                                                    driver_info,
                                                    mode=mode,
                                                    ipxe_enabled=ipxe_enabled)
        self.assertEqual(expected, kr_info)

    def test_get_kernel_ramdisk_info(self):
        expected_dir = '/tftp'
        self.config(tftp_root=expected_dir, group='pxe')
        self._test_get_kernel_ramdisk_info(expected_dir)

    def test_get_kernel_ramdisk_info_ipxe(self):
        expected_dir = '/http'
        self.config(http_root=expected_dir, group='deploy')
        self._test_get_kernel_ramdisk_info(expected_dir, ipxe_enabled=True)

    def test_get_kernel_ramdisk_info_bad_driver_info(self):
        self.config(tftp_root='/tftp', group='pxe')
        node_uuid = 'fake-node'
        driver_info = {}
        self.assertRaises(KeyError,
                          pxe_utils.get_kernel_ramdisk_info,
                          node_uuid, driver_info)

    def test_get_rescue_kr_info(self):
        expected_dir = '/tftp'
        self.config(tftp_root=expected_dir, group='pxe')
        self._test_get_kernel_ramdisk_info(expected_dir, mode='rescue')

    def test_get_rescue_kr_info_ipxe(self):
        expected_dir = '/http'
        self.config(http_root=expected_dir, group='deploy')
        self._test_get_kernel_ramdisk_info(expected_dir, mode='rescue',
                                           ipxe_enabled=True)

    @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.provider',
                autospec=True)
    def test_clean_up_pxe_config_uefi(self, provider_mock, unlink_mock,
                                      rmtree_mock):
        ip_address = '10.10.0.1'
        address = "aa:aa:aa:aa:aa:aa"
        properties = {'capabilities': 'boot_mode:uefi'}
        object_utils.create_test_port(self.context, node_id=self.node.id,
                                      address=address)

        provider_mock.get_ip_addresses.return_value = [ip_address]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.properties = properties
            pxe_utils.clean_up_pxe_config(task)

            unlink_calls = [
                mock.call('/tftpboot/10.10.0.1.conf'),
                mock.call('/tftpboot/pxelinux.cfg/01-aa-aa-aa-aa-aa-aa'),
                mock.call('/tftpboot/' + address + '.conf')
            ]
            unlink_mock.assert_has_calls(unlink_calls)
            rmtree_mock.assert_called_once_with(
                os.path.join(CONF.pxe.tftp_root, self.node.uuid))

    @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.provider',
                autospec=True)
    def test_clean_up_pxe_config_uefi_mac_address(
            self, provider_mock, unlink_mock, rmtree_mock):
        ip_address = '10.10.0.1'
        address = "aa:aa:aa:aa:aa:aa"
        properties = {'capabilities': 'boot_mode:uefi'}
        object_utils.create_test_port(self.context, node_id=self.node.id,
                                      address=address)

        provider_mock.get_ip_addresses.return_value = [ip_address]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.properties = properties
            pxe_utils.clean_up_pxe_config(task)

            unlink_calls = [
                mock.call('/tftpboot/10.10.0.1.conf'),
                mock.call('/tftpboot/pxelinux.cfg/01-%s'
                          % address.replace(':', '-')),
                mock.call('/tftpboot/' + address + '.conf')
            ]

            unlink_mock.assert_has_calls(unlink_calls)
            rmtree_mock.assert_called_once_with(
                os.path.join(CONF.pxe.tftp_root, self.node.uuid))

    @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True)
    @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True)
    @mock.patch('ironic.common.dhcp_factory.DHCPFactory.provider',
                autospec=True)
    def test_clean_up_pxe_config_uefi_instance_info(self,
                                                    provider_mock,
                                                    unlink_mock,
                                                    rmtree_mock):
        ip_address = '10.10.0.1'
        address = "aa:aa:aa:aa:aa:aa"
        object_utils.create_test_port(self.context, node_id=self.node.id,
                                      address=address)

        provider_mock.get_ip_addresses.return_value = [ip_address]

        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.instance_info['deploy_boot_mode'] = 'uefi'
            pxe_utils.clean_up_pxe_config(task)

            unlink_calls = [
                mock.call('/tftpboot/10.10.0.1.conf'),
                mock.call('/tftpboot/pxelinux.cfg/01-aa-aa-aa-aa-aa-aa'),
                mock.call('/tftpboot/' + address + ".conf")
            ]
            unlink_mock.assert_has_calls(unlink_calls)
            rmtree_mock.assert_called_once_with(
                os.path.join(CONF.pxe.tftp_root, self.node.uuid))

    def test_get_tftp_path_prefix_with_trailing_slash(self):
        self.config(tftp_root='/tftpboot-path/', group='pxe')
        path_prefix = pxe_utils.get_tftp_path_prefix()
        self.assertEqual(path_prefix,
                         '/tftpboot-path/')

    def test_get_tftp_path_prefix_without_trailing_slash(self):
        self.config(tftp_root='/tftpboot-path', group='pxe')
        path_prefix = pxe_utils.get_tftp_path_prefix()
        self.assertEqual(path_prefix, '/tftpboot-path/')

    def test_get_path_relative_to_tftp_root_with_trailing_slash(self):
        self.config(tftp_root='/tftpboot-path/', group='pxe')
        test_file_path = '/tftpboot-path/pxelinux.cfg/test'
        relpath = pxe_utils.get_path_relative_to_tftp_root(test_file_path)
        self.assertEqual(relpath, 'pxelinux.cfg/test')

    def test_get_path_relative_to_tftp_root_without_trailing_slash(self):
        self.config(tftp_root='/tftpboot-path', group='pxe')
        test_file_path = '/tftpboot-path/pxelinux.cfg/test'
        relpath = pxe_utils.get_path_relative_to_tftp_root(test_file_path)
        self.assertEqual(relpath, 'pxelinux.cfg/test')


@mock.patch.object(ipxe.iPXEBoot, '__init__', lambda self: None)
@mock.patch.object(pxe.PXEBoot, '__init__', lambda self: None)
class PXEInterfacesTestCase(db_base.DbTestCase):

    def setUp(self):
        super(PXEInterfacesTestCase, self).setUp()
        n = {
            'driver': 'fake-hardware',
            'boot_interface': 'pxe',
            'instance_info': INST_INFO_DICT,
            'driver_info': DRV_INFO_DICT,
            'driver_internal_info': DRV_INTERNAL_INFO_DICT,
        }
        self.config_temp_dir('http_root', group='deploy')
        self.node = object_utils.create_test_node(self.context, **n)

    def _test_parse_driver_info_missing_kernel(self, mode='deploy'):
        del self.node.driver_info['%s_kernel' % mode]
        if mode == 'rescue':
            self.node.provision_state = states.RESCUING
        self.assertRaises(exception.MissingParameterValue,
                          pxe_utils.parse_driver_info,
                          self.node, mode=mode)

    def test_parse_driver_info_missing_deploy_kernel(self):
        self._test_parse_driver_info_missing_kernel()

    def test_parse_driver_info_missing_rescue_kernel(self):
        self._test_parse_driver_info_missing_kernel(mode='rescue')

    def _test_parse_driver_info_missing_ramdisk(self, mode='deploy'):
        del self.node.driver_info['%s_ramdisk' % mode]
        if mode == 'rescue':
            self.node.provision_state = states.RESCUING
        self.assertRaises(exception.MissingParameterValue,
                          pxe_utils.parse_driver_info,
                          self.node, mode=mode)

    def test_parse_driver_info_missing_deploy_ramdisk(self):
        self._test_parse_driver_info_missing_ramdisk()

    def test_parse_driver_info_missing_rescue_ramdisk(self):
        self._test_parse_driver_info_missing_ramdisk(mode='rescue')

    def _test_parse_driver_info(self, mode='deploy'):
        exp_info = {'%s_ramdisk' % mode: 'glance://%s_ramdisk_uuid' % mode,
                    '%s_kernel' % mode: 'glance://%s_kernel_uuid' % mode}
        image_info = pxe_utils.parse_driver_info(self.node, mode=mode)
        self.assertEqual(exp_info, image_info)

    def test_parse_driver_info_deploy(self):
        self._test_parse_driver_info()

    def test_parse_driver_info_rescue(self):
        self._test_parse_driver_info(mode='rescue')

    def _test_parse_driver_info_from_conf(self, mode='deploy'):
        del self.node.driver_info['%s_kernel' % mode]
        del self.node.driver_info['%s_ramdisk' % mode]
        exp_info = {'%s_ramdisk' % mode: 'glance://%s_ramdisk_uuid' % mode,
                    '%s_kernel' % mode: 'glance://%s_kernel_uuid' % mode}
        self.config(group='conductor', **exp_info)
        image_info = pxe_utils.parse_driver_info(self.node, mode=mode)
        self.assertEqual(exp_info, image_info)

    def test_parse_driver_info_from_conf_deploy(self):
        self._test_parse_driver_info_from_conf()

    def test_parse_driver_info_from_conf_rescue(self):
        self._test_parse_driver_info_from_conf(mode='rescue')

    def test_parse_driver_info_mixed_source_deploy(self):
        self.config(deploy_kernel='file:///image',
                    deploy_ramdisk='file:///image',
                    group='conductor')
        self._test_parse_driver_info_missing_ramdisk()

    def test_parse_driver_info_mixed_source_rescue(self):
        self.config(rescue_kernel='file:///image',
                    rescue_ramdisk='file:///image',
                    group='conductor')
        self._test_parse_driver_info_missing_ramdisk(mode='rescue')

    def test__get_deploy_image_info(self):
        expected_info = {'deploy_ramdisk':
                         (DRV_INFO_DICT['deploy_ramdisk'],
                          os.path.join(CONF.pxe.tftp_root,
                                       self.node.uuid,
                                       'deploy_ramdisk')),
                         'deploy_kernel':
                         (DRV_INFO_DICT['deploy_kernel'],
                          os.path.join(CONF.pxe.tftp_root,
                                       self.node.uuid,
                                       'deploy_kernel'))}
        image_info = pxe_utils.get_image_info(self.node)
        self.assertEqual(expected_info, image_info)

    def test__get_deploy_image_info_ipxe(self):
        expected_info = {'deploy_ramdisk':
                         (DRV_INFO_DICT['deploy_ramdisk'],
                          os.path.join(CONF.deploy.http_root,
                                       self.node.uuid,
                                       'deploy_ramdisk')),
                         'deploy_kernel':
                         (DRV_INFO_DICT['deploy_kernel'],
                          os.path.join(CONF.deploy.http_root,
                                       self.node.uuid,
                                       'deploy_kernel'))}
        image_info = pxe_utils.get_image_info(self.node, ipxe_enabled=True)
        self.assertEqual(expected_info, image_info)

    def test__get_deploy_image_info_missing_deploy_kernel(self):
        del self.node.driver_info['deploy_kernel']
        self.assertRaises(exception.MissingParameterValue,
                          pxe_utils.get_image_info, self.node)

    def test__get_deploy_image_info_deploy_ramdisk(self):
        del self.node.driver_info['deploy_ramdisk']
        self.assertRaises(exception.MissingParameterValue,
                          pxe_utils.get_image_info, self.node)

    @mock.patch.object(image_service.GlanceImageService, 'show',
                       autospec=True)
    def _test_get_instance_image_info(self, show_mock):
        properties = {'properties': {u'kernel_id': u'instance_kernel_uuid',
                                     u'ramdisk_id': u'instance_ramdisk_uuid'}}

        expected_info = {'ramdisk':
                         ('instance_ramdisk_uuid',
                          os.path.join(CONF.pxe.tftp_root,
                                       self.node.uuid,
                                       'ramdisk')),
                         'kernel':
                         ('instance_kernel_uuid',
                          os.path.join(CONF.pxe.tftp_root,
                                       self.node.uuid,
                                       'kernel'))}
        show_mock.return_value = properties
        self.context.auth_token = 'fake'
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            image_info = pxe_utils.get_instance_image_info(task)
            show_mock.assert_called_once_with(mock.ANY, 'glance://image_uuid')
            self.assertEqual(expected_info, image_info)

            # test with saved info
            show_mock.reset_mock()
            image_info = pxe_utils.get_instance_image_info(task)
            self.assertEqual(expected_info, image_info)
            self.assertFalse(show_mock.called)
            self.assertEqual('instance_kernel_uuid',
task.node.instance_info['kernel']) self.assertEqual('instance_ramdisk_uuid', task.node.instance_info['ramdisk']) def test_get_instance_image_info(self): # Tests when 'is_whole_disk_image' exists in driver_internal_info # NOTE(TheJulia): The method being tested is primarily geared for # only netboot operation as the information should only need to be # looked up again during network booting. self.config(group="deploy", default_boot_option="netboot") self._test_get_instance_image_info() def test_get_instance_image_info_without_is_whole_disk_image(self): # NOTE(TheJulia): The method being tested is primarily geared for # only netboot operation as the information should only need to be # looked up again during network booting. self.config(group="deploy", default_boot_option="netboot") # Tests when 'is_whole_disk_image' doesn't exists in # driver_internal_info del self.node.driver_internal_info['is_whole_disk_image'] self.node.save() self._test_get_instance_image_info() @mock.patch('ironic.drivers.modules.deploy_utils.get_boot_option', return_value='local') def test_get_instance_image_info_localboot(self, boot_opt_mock): self.node.driver_internal_info['is_whole_disk_image'] = False self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: image_info = pxe_utils.get_instance_image_info(task) self.assertEqual({}, image_info) boot_opt_mock.assert_called_once_with(task.node) @mock.patch.object(image_service.GlanceImageService, 'show', autospec=True) def test_get_instance_image_info_whole_disk_image(self, show_mock): properties = {'properties': None} show_mock.return_value = properties with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: task.node.driver_internal_info['is_whole_disk_image'] = True image_info = pxe_utils.get_instance_image_info(task) self.assertEqual({}, image_info) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test__cache_tftp_images_master_path(self, mock_fetch_image): 
temp_dir = tempfile.mkdtemp() self.config(tftp_root=temp_dir, group='pxe') self.config(tftp_master_path=os.path.join(temp_dir, 'tftp_master_path'), group='pxe') image_path = os.path.join(temp_dir, self.node.uuid, 'deploy_kernel') image_info = {'deploy_kernel': ('deploy_kernel', image_path)} fileutils.ensure_tree(CONF.pxe.tftp_master_path) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: pxe_utils.cache_ramdisk_kernel(task, image_info) mock_fetch_image.assert_called_once_with(self.context, mock.ANY, [('deploy_kernel', image_path)], True) @mock.patch.object(pxe_utils, 'TFTPImageCache', lambda: None) @mock.patch.object(fileutils, 'ensure_tree', autospec=True) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test_cache_ramdisk_kernel(self, mock_fetch_image, mock_ensure_tree): fake_pxe_info = {'foo': 'bar'} expected_path = os.path.join(CONF.pxe.tftp_root, self.node.uuid) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: pxe_utils.cache_ramdisk_kernel(task, fake_pxe_info) mock_ensure_tree.assert_called_with(expected_path) mock_fetch_image.assert_called_once_with( self.context, mock.ANY, list(fake_pxe_info.values()), True) @mock.patch.object(pxe_utils, 'TFTPImageCache', lambda: None) @mock.patch.object(fileutils, 'ensure_tree', autospec=True) @mock.patch.object(deploy_utils, 'fetch_images', autospec=True) def test_cache_ramdisk_kernel_ipxe(self, mock_fetch_image, mock_ensure_tree): fake_pxe_info = {'foo': 'bar'} expected_path = os.path.join(CONF.deploy.http_root, self.node.uuid) with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: pxe_utils.cache_ramdisk_kernel(task, fake_pxe_info, ipxe_enabled=True) mock_ensure_tree.assert_called_with(expected_path) mock_fetch_image.assert_called_once_with(self.context, mock.ANY, list(fake_pxe_info.values()), True) @mock.patch.object(pxe_utils.LOG, 'error', autospec=True) def test_validate_boot_parameters_for_trusted_boot_one(self, 
mock_log): properties = {'capabilities': 'boot_mode:uefi'} instance_info = {"boot_option": "netboot"} self.node.properties = properties self.node.instance_info['capabilities'] = instance_info self.node.driver_internal_info['is_whole_disk_image'] = False self.assertRaises(exception.InvalidParameterValue, pxe_utils.validate_boot_parameters_for_trusted_boot, self.node) self.assertTrue(mock_log.called) @mock.patch.object(pxe_utils.LOG, 'error', autospec=True) def test_validate_boot_parameters_for_trusted_boot_two(self, mock_log): properties = {'capabilities': 'boot_mode:bios'} instance_info = {"boot_option": "local"} self.node.properties = properties self.node.instance_info['capabilities'] = instance_info self.node.driver_internal_info['is_whole_disk_image'] = False self.assertRaises(exception.InvalidParameterValue, pxe_utils.validate_boot_parameters_for_trusted_boot, self.node) self.assertTrue(mock_log.called) @mock.patch.object(pxe_utils.LOG, 'error', autospec=True) def test_validate_boot_parameters_for_trusted_boot_three(self, mock_log): properties = {'capabilities': 'boot_mode:bios'} instance_info = {"boot_option": "netboot"} self.node.properties = properties self.node.instance_info['capabilities'] = instance_info self.node.driver_internal_info['is_whole_disk_image'] = True self.assertRaises(exception.InvalidParameterValue, pxe_utils.validate_boot_parameters_for_trusted_boot, self.node) self.assertTrue(mock_log.called) @mock.patch.object(pxe_utils.LOG, 'error', autospec=True) def test_validate_boot_parameters_for_trusted_boot_pass(self, mock_log): properties = {'capabilities': 'boot_mode:bios'} instance_info = {"boot_option": "netboot"} self.node.properties = properties self.node.instance_info['capabilities'] = instance_info self.node.driver_internal_info['is_whole_disk_image'] = False pxe_utils.validate_boot_parameters_for_trusted_boot(self.node) self.assertFalse(mock_log.called) @mock.patch.object(pxe.PXEBoot, '__init__', lambda self: None) class 
PXEBuildConfigOptionsTestCase(db_base.DbTestCase): def setUp(self): super(PXEBuildConfigOptionsTestCase, self).setUp() n = { 'driver': 'fake-hardware', 'boot_interface': 'pxe', 'instance_info': INST_INFO_DICT, 'driver_info': DRV_INFO_DICT, 'driver_internal_info': DRV_INTERNAL_INFO_DICT, } self.config_temp_dir('http_root', group='deploy') self.node = object_utils.create_test_node(self.context, **n) @mock.patch('ironic.common.utils.render_template', autospec=True) def _test_build_pxe_config_options_pxe(self, render_mock, whle_dsk_img=False, debug=False, mode='deploy', ramdisk_params=None): self.config(debug=debug) self.config(pxe_append_params='test_param', group='pxe') # NOTE: right '/' should be removed from url string self.config(api_url='http://192.168.122.184:6385', group='conductor') driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = whle_dsk_img self.node.driver_internal_info = driver_internal_info self.node.save() tftp_server = CONF.pxe.tftp_server kernel_label = '%s_kernel' % mode ramdisk_label = '%s_ramdisk' % mode pxe_kernel = os.path.join(self.node.uuid, kernel_label) pxe_ramdisk = os.path.join(self.node.uuid, ramdisk_label) kernel = os.path.join(self.node.uuid, 'kernel') ramdisk = os.path.join(self.node.uuid, 'ramdisk') root_dir = CONF.pxe.tftp_root image_info = { kernel_label: (kernel_label, os.path.join(root_dir, self.node.uuid, kernel_label)), ramdisk_label: (ramdisk_label, os.path.join(root_dir, self.node.uuid, ramdisk_label)) } if whle_dsk_img or deploy_utils.get_boot_option(self.node) == 'local': ramdisk = 'no_ramdisk' kernel = 'no_kernel' else: image_info.update({ 'kernel': ('kernel_id', os.path.join(root_dir, self.node.uuid, 'kernel')), 'ramdisk': ('ramdisk_id', os.path.join(root_dir, self.node.uuid, 'ramdisk')) }) expected_pxe_params = 'test_param' if debug: expected_pxe_params += ' ipa-debug=1' if ramdisk_params: expected_pxe_params += ' ' + ' '.join( '%s=%s' % tpl for tpl in 
ramdisk_params.items()) expected_options = { 'deployment_ari_path': pxe_ramdisk, 'pxe_append_params': expected_pxe_params, 'deployment_aki_path': pxe_kernel, 'tftp_server': tftp_server, 'ipxe_timeout': 0, 'ari_path': ramdisk, 'aki_path': kernel, } if mode == 'rescue': self.node.provision_state = states.RESCUING self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe_utils.build_pxe_config_options( task, image_info, ramdisk_params=ramdisk_params) self.assertEqual(expected_options, options) def test_build_pxe_config_options_pxe(self): self._test_build_pxe_config_options_pxe(whle_dsk_img=True) def test_build_pxe_config_options_pxe_ipa_debug(self): self._test_build_pxe_config_options_pxe(debug=True) def test_build_pxe_config_options_pxe_rescue(self): del self.node.driver_internal_info['is_whole_disk_image'] self._test_build_pxe_config_options_pxe(mode='rescue') def test_build_pxe_config_options_ipa_debug_rescue(self): del self.node.driver_internal_info['is_whole_disk_image'] self._test_build_pxe_config_options_pxe(debug=True, mode='rescue') def test_build_pxe_config_options_pxe_local_boot(self): del self.node.driver_internal_info['is_whole_disk_image'] i_info = self.node.instance_info i_info.update({'capabilities': {'boot_option': 'local'}}) self.node.instance_info = i_info self.node.save() self._test_build_pxe_config_options_pxe(whle_dsk_img=False) def test_build_pxe_config_options_pxe_without_is_whole_disk_image(self): del self.node.driver_internal_info['is_whole_disk_image'] self.node.save() self._test_build_pxe_config_options_pxe(whle_dsk_img=False) def test_build_pxe_config_options_ramdisk_params(self): self._test_build_pxe_config_options_pxe(whle_dsk_img=True, ramdisk_params={'foo': 'bar'}) def test_build_pxe_config_options_pxe_no_kernel_no_ramdisk(self): del self.node.driver_internal_info['is_whole_disk_image'] self.node.save() pxe_params = 'my-pxe-append-params ipa-debug=0' self.config(group='pxe', 
tftp_server='my-tftp-server') self.config(group='pxe', pxe_append_params=pxe_params) self.config(group='pxe', tftp_root='/tftp-path/') image_info = { 'deploy_kernel': ('deploy_kernel', os.path.join(CONF.pxe.tftp_root, 'path-to-deploy_kernel')), 'deploy_ramdisk': ('deploy_ramdisk', os.path.join(CONF.pxe.tftp_root, 'path-to-deploy_ramdisk'))} with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe_utils.build_pxe_config_options(task, image_info) expected_options = { 'aki_path': 'no_kernel', 'ari_path': 'no_ramdisk', 'deployment_aki_path': 'path-to-deploy_kernel', 'deployment_ari_path': 'path-to-deploy_ramdisk', 'pxe_append_params': pxe_params, 'tftp_server': 'my-tftp-server', 'ipxe_timeout': 0} self.assertEqual(expected_options, options) @mock.patch.object(ipxe.iPXEBoot, '__init__', lambda self: None) class iPXEBuildConfigOptionsTestCase(db_base.DbTestCase): def setUp(self): super(iPXEBuildConfigOptionsTestCase, self).setUp() n = { 'driver': 'fake-hardware', 'boot_interface': 'ipxe', 'instance_info': INST_INFO_DICT, 'driver_info': DRV_INFO_DICT, 'driver_internal_info': DRV_INTERNAL_INFO_DICT, } self.config(enabled_boot_interfaces=['ipxe']) self.config_temp_dir('http_root', group='deploy') self.node = object_utils.create_test_node(self.context, **n) def _dhcp_options_for_instance_ipxe(self, task, boot_file, ip_version=4): self.config(ipxe_boot_script='/test/boot.ipxe', group='pxe') self.config(tftp_root='/tftp-path/', group='pxe') if ip_version == 4: self.config(tftp_server='192.0.2.1', group='pxe') self.config(http_url='http://192.0.3.2:1234', group='deploy') self.config(ipxe_boot_script='/test/boot.ipxe', group='pxe') elif ip_version == 6: self.config(tftp_server='ff80::1', group='pxe') self.config(http_url='http://[ff80::1]:1234', group='deploy') self.config(dhcp_provider='isc', group='dhcp') if ip_version == 6: # NOTE(TheJulia): DHCPv6 RFCs seem to indicate that the prior # options are not imported, although they may be 
supported # by vendors. The apparent proper option is to return a # URL in the field https://tools.ietf.org/html/rfc5970#section-3 expected_boot_script_url = 'http://[ff80::1]:1234/boot.ipxe' expected_info = [{'opt_name': '!175,59', 'opt_value': 'tftp://[ff80::1]/fake-bootfile', 'ip_version': ip_version}, {'opt_name': '59', 'opt_value': expected_boot_script_url, 'ip_version': ip_version}] elif ip_version == 4: expected_boot_script_url = 'http://192.0.3.2:1234/boot.ipxe' expected_info = [{'opt_name': '!175,67', 'opt_value': boot_file, 'ip_version': ip_version}, {'opt_name': '66', 'opt_value': '192.0.2.1', 'ip_version': ip_version}, {'opt_name': '150', 'opt_value': '192.0.2.1', 'ip_version': ip_version}, {'opt_name': '67', 'opt_value': expected_boot_script_url, 'ip_version': ip_version}, {'opt_name': 'server-ip-address', 'opt_value': '192.0.2.1', 'ip_version': ip_version}] self.assertItemsEqual(expected_info, pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True)) self.config(dhcp_provider='neutron', group='dhcp') if ip_version == 6: # Boot URL variable set from prior test of isc parameters. 
expected_info = [{'opt_name': 'tag:!ipxe6,59', 'opt_value': 'tftp://[ff80::1]/fake-bootfile', 'ip_version': ip_version}, {'opt_name': 'tag:ipxe6,59', 'opt_value': expected_boot_script_url, 'ip_version': ip_version}] elif ip_version == 4: expected_info = [{'opt_name': 'tag:!ipxe,67', 'opt_value': boot_file, 'ip_version': ip_version}, {'opt_name': '66', 'opt_value': '192.0.2.1', 'ip_version': ip_version}, {'opt_name': '150', 'opt_value': '192.0.2.1', 'ip_version': ip_version}, {'opt_name': 'tag:ipxe,67', 'opt_value': expected_boot_script_url, 'ip_version': ip_version}, {'opt_name': 'server-ip-address', 'opt_value': '192.0.2.1', 'ip_version': ip_version}] self.assertItemsEqual(expected_info, pxe_utils.dhcp_options_for_instance( task, ipxe_enabled=True)) def test_dhcp_options_for_instance_ipxe_bios(self): self.config(ip_version=4, group='pxe') boot_file = 'fake-bootfile-bios' self.config(pxe_bootfile_name=boot_file, group='pxe') with task_manager.acquire(self.context, self.node.uuid) as task: self._dhcp_options_for_instance_ipxe(task, boot_file) def test_dhcp_options_for_instance_ipxe_uefi(self): self.config(ip_version=4, group='pxe') boot_file = 'fake-bootfile-uefi' self.config(uefi_pxe_bootfile_name=boot_file, group='pxe') with task_manager.acquire(self.context, self.node.uuid) as task: task.node.properties['capabilities'] = 'boot_mode:uefi' self._dhcp_options_for_instance_ipxe(task, boot_file) def test_dhcp_options_for_ipxe_ipv6(self): self.config(ip_version=6, group='pxe') boot_file = 'fake-bootfile' self.config(pxe_bootfile_name=boot_file, group='pxe') with task_manager.acquire(self.context, self.node.uuid) as task: self._dhcp_options_for_instance_ipxe(task, boot_file, ip_version=6) @mock.patch('ironic.common.image_service.GlanceImageService', autospec=True) @mock.patch('ironic.common.utils.render_template', autospec=True) def _test_build_pxe_config_options_ipxe(self, render_mock, glance_mock, whle_dsk_img=False, ipxe_timeout=0, ipxe_use_swift=False, debug=False, 
boot_from_volume=False, mode='deploy'): self.config(debug=debug) self.config(pxe_append_params='test_param', group='pxe') # NOTE: right '/' should be removed from url string self.config(api_url='http://192.168.122.184:6385', group='conductor') self.config(ipxe_timeout=ipxe_timeout, group='pxe') root_dir = CONF.deploy.http_root driver_internal_info = self.node.driver_internal_info driver_internal_info['is_whole_disk_image'] = whle_dsk_img self.node.driver_internal_info = driver_internal_info self.node.save() tftp_server = CONF.pxe.tftp_server http_url = 'http://192.1.2.3:1234' self.config(http_url=http_url, group='deploy') kernel_label = '%s_kernel' % mode ramdisk_label = '%s_ramdisk' % mode if ipxe_use_swift: self.config(ipxe_use_swift=True, group='pxe') glance = mock.Mock() glance_mock.return_value = glance glance.swift_temp_url.side_effect = [ pxe_kernel, pxe_ramdisk] = [ 'swift_kernel', 'swift_ramdisk'] image_info = { kernel_label: (uuidutils.generate_uuid(), os.path.join(root_dir, self.node.uuid, kernel_label)), ramdisk_label: (uuidutils.generate_uuid(), os.path.join(root_dir, self.node.uuid, ramdisk_label)) } else: pxe_kernel = os.path.join(http_url, self.node.uuid, kernel_label) pxe_ramdisk = os.path.join(http_url, self.node.uuid, ramdisk_label) image_info = { kernel_label: (kernel_label, os.path.join(root_dir, self.node.uuid, kernel_label)), ramdisk_label: (ramdisk_label, os.path.join(root_dir, self.node.uuid, ramdisk_label)) } kernel = os.path.join(http_url, self.node.uuid, 'kernel') ramdisk = os.path.join(http_url, self.node.uuid, 'ramdisk') if whle_dsk_img or deploy_utils.get_boot_option(self.node) == 'local': ramdisk = 'no_ramdisk' kernel = 'no_kernel' else: image_info.update({ 'kernel': ('kernel_id', os.path.join(root_dir, self.node.uuid, 'kernel')), 'ramdisk': ('ramdisk_id', os.path.join(root_dir, self.node.uuid, 'ramdisk')) }) ipxe_timeout_in_ms = ipxe_timeout * 1000 expected_pxe_params = 'test_param' if debug: expected_pxe_params += ' ipa-debug=1' 
expected_options = { 'deployment_ari_path': pxe_ramdisk, 'pxe_append_params': expected_pxe_params, 'deployment_aki_path': pxe_kernel, 'tftp_server': tftp_server, 'ipxe_timeout': ipxe_timeout_in_ms, 'ari_path': ramdisk, 'aki_path': kernel, 'initrd_filename': ramdisk_label, } if mode == 'rescue': self.node.provision_state = states.RESCUING self.node.save() if boot_from_volume: expected_options.update({ 'boot_from_volume': True, 'iscsi_boot_url': 'iscsi:fake_host::3260:0:fake_iqn', 'iscsi_initiator_iqn': 'fake_iqn_initiator', 'iscsi_volumes': [{'url': 'iscsi:fake_host::3260:1:fake_iqn', 'username': 'fake_username_1', 'password': 'fake_password_1' }], 'username': 'fake_username', 'password': 'fake_password' }) expected_options.pop('deployment_aki_path') expected_options.pop('deployment_ari_path') expected_options.pop('initrd_filename') with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe_utils.build_pxe_config_options(task, image_info, ipxe_enabled=True) self.assertEqual(expected_options, options) def test_build_pxe_config_options_ipxe(self): self._test_build_pxe_config_options_ipxe(whle_dsk_img=True) def test_build_pxe_config_options_ipxe_ipa_debug(self): self._test_build_pxe_config_options_ipxe(debug=True) def test_build_pxe_config_options_ipxe_local_boot(self): del self.node.driver_internal_info['is_whole_disk_image'] i_info = self.node.instance_info i_info.update({'capabilities': {'boot_option': 'local'}}) self.node.instance_info = i_info self.node.save() self._test_build_pxe_config_options_ipxe(whle_dsk_img=False) def test_build_pxe_config_options_ipxe_swift_wdi(self): self._test_build_pxe_config_options_ipxe(whle_dsk_img=True, ipxe_use_swift=True) def test_build_pxe_config_options_ipxe_swift_partition(self): self._test_build_pxe_config_options_ipxe(whle_dsk_img=False, ipxe_use_swift=True) def test_build_pxe_config_options_ipxe_and_ipxe_timeout(self): self._test_build_pxe_config_options_ipxe(whle_dsk_img=True, 
ipxe_timeout=120) def test_build_pxe_config_options_ipxe_and_iscsi_boot(self): vol_id = uuidutils.generate_uuid() vol_id2 = uuidutils.generate_uuid() object_utils.create_test_volume_connector( self.context, uuid=uuidutils.generate_uuid(), type='iqn', node_id=self.node.id, connector_id='fake_iqn_initiator') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': 0, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=1, volume_id='1235', uuid=vol_id2, properties={'target_lun': 1, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username_1', 'auth_password': 'fake_password_1'}) self.node.driver_internal_info.update({'boot_from_volume': vol_id}) self._test_build_pxe_config_options_ipxe(boot_from_volume=True) def test_build_pxe_config_options_ipxe_and_iscsi_boot_from_lists(self): vol_id = uuidutils.generate_uuid() vol_id2 = uuidutils.generate_uuid() object_utils.create_test_volume_connector( self.context, uuid=uuidutils.generate_uuid(), type='iqn', node_id=self.node.id, connector_id='fake_iqn_initiator') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_luns': [0, 2], 'target_portals': ['fake_host:3260', 'faker_host:3261'], 'target_iqns': ['fake_iqn', 'faker_iqn'], 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=1, volume_id='1235', uuid=vol_id2, properties={'target_lun': [1, 3], 'target_portal': ['fake_host:3260', 'faker_host:3261'], 'target_iqn': ['fake_iqn', 'faker_iqn'], 'auth_username': 
'fake_username_1', 'auth_password': 'fake_password_1'}) self.node.driver_internal_info.update({'boot_from_volume': vol_id}) self._test_build_pxe_config_options_ipxe(boot_from_volume=True) def test_get_volume_pxe_options(self): vol_id = uuidutils.generate_uuid() vol_id2 = uuidutils.generate_uuid() object_utils.create_test_volume_connector( self.context, uuid=uuidutils.generate_uuid(), type='iqn', node_id=self.node.id, connector_id='fake_iqn_initiator') object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': [0, 1, 3], 'target_portal': 'fake_host:3260', 'target_iqns': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=1, volume_id='1235', uuid=vol_id2, properties={'target_lun': 1, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username_1', 'auth_password': 'fake_password_1'}) self.node.driver_internal_info.update({'boot_from_volume': vol_id}) driver_internal_info = self.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id self.node.driver_internal_info = driver_internal_info self.node.save() expected = {'boot_from_volume': True, 'username': 'fake_username', 'password': 'fake_password', 'iscsi_boot_url': 'iscsi:fake_host::3260:0:fake_iqn', 'iscsi_initiator_iqn': 'fake_iqn_initiator', 'iscsi_volumes': [{ 'url': 'iscsi:fake_host::3260:1:fake_iqn', 'username': 'fake_username_1', 'password': 'fake_password_1' }] } with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe_utils.get_volume_pxe_options(task) self.assertEqual(expected, options) def test_get_volume_pxe_options_unsupported_volume_type(self): vol_id = uuidutils.generate_uuid() object_utils.create_test_volume_target( self.context, node_id=self.node.id, 
volume_type='fake_type', boot_index=0, volume_id='1234', uuid=vol_id, properties={'foo': 'bar'}) driver_internal_info = self.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe_utils.get_volume_pxe_options(task) self.assertEqual({}, options) def test_get_volume_pxe_options_unsupported_additional_volume_type(self): vol_id = uuidutils.generate_uuid() vol_id2 = uuidutils.generate_uuid() object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='iscsi', boot_index=0, volume_id='1234', uuid=vol_id, properties={'target_lun': 0, 'target_portal': 'fake_host:3260', 'target_iqn': 'fake_iqn', 'auth_username': 'fake_username', 'auth_password': 'fake_password'}) object_utils.create_test_volume_target( self.context, node_id=self.node.id, volume_type='fake_type', boot_index=1, volume_id='1234', uuid=vol_id2, properties={'foo': 'bar'}) driver_internal_info = self.node.driver_internal_info driver_internal_info['boot_from_volume'] = vol_id self.node.driver_internal_info = driver_internal_info self.node.save() with task_manager.acquire(self.context, self.node.uuid, shared=True) as task: options = pxe_utils.get_volume_pxe_options(task) self.assertEqual([], options['iscsi_volumes']) def test_build_pxe_config_options_ipxe_rescue(self): self._test_build_pxe_config_options_ipxe(mode='rescue') def test_build_pxe_config_options_ipxe_rescue_swift(self): self._test_build_pxe_config_options_ipxe(mode='rescue', ipxe_use_swift=True) def test_build_pxe_config_options_ipxe_rescue_timeout(self): self._test_build_pxe_config_options_ipxe(mode='rescue', ipxe_timeout=120) @mock.patch('ironic.common.utils.rmtree_without_raise', autospec=True) @mock.patch('ironic_lib.utils.unlink_without_raise', autospec=True) def test_clean_up_ipxe_config_uefi(self, unlink_mock, rmtree_mock): 
        self.config(http_root='/httpboot', group='deploy')
        address = "aa:aa:aa:aa:aa:aa"
        properties = {'capabilities': 'boot_mode:uefi'}
        object_utils.create_test_port(self.context, node_id=self.node.id,
                                      address=address)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            task.node.properties = properties
            pxe_utils.clean_up_pxe_config(task, ipxe_enabled=True)
            ensure_calls = [
                mock.call("/httpboot/pxelinux.cfg/%s"
                          % address.replace(':', '-')),
                mock.call("/httpboot/%s.conf" % address)
            ]
            unlink_mock.assert_has_calls(ensure_calls)
            rmtree_mock.assert_called_once_with(
                os.path.join(CONF.deploy.http_root, self.node.uuid))


@mock.patch.object(ironic_utils, 'unlink_without_raise', autospec=True)
@mock.patch.object(pxe_utils, 'clean_up_pxe_config', autospec=True)
@mock.patch.object(pxe_utils, 'TFTPImageCache', autospec=True)
class CleanUpPxeEnvTestCase(db_base.DbTestCase):
    def setUp(self):
        super(CleanUpPxeEnvTestCase, self).setUp()
        instance_info = INST_INFO_DICT
        instance_info['deploy_key'] = 'fake-56789'
        self.node = object_utils.create_test_node(
            self.context, boot_interface='pxe',
            instance_info=instance_info,
            driver_info=DRV_INFO_DICT,
            driver_internal_info=DRV_INTERNAL_INFO_DICT,
        )

    def test__clean_up_pxe_env(self, mock_cache, mock_pxe_clean, mock_unlink):
        image_info = {'label': ['', 'deploy_kernel']}
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            pxe_utils.clean_up_pxe_env(task, image_info)
            mock_pxe_clean.assert_called_once_with(task, ipxe_enabled=False)
            mock_unlink.assert_any_call('deploy_kernel')
            mock_cache.return_value.clean_up.assert_called_once_with()


class TFTPImageCacheTestCase(db_base.DbTestCase):
    @mock.patch.object(fileutils, 'ensure_tree')
    def test_with_master_path(self, mock_ensure_tree):
        self.config(tftp_master_path='/fake/path', group='pxe')
        self.config(image_cache_size=500, group='pxe')
        self.config(image_cache_ttl=30, group='pxe')
        cache = pxe_utils.TFTPImageCache()
        mock_ensure_tree.assert_called_once_with('/fake/path')
        self.assertEqual(500 * 1024 * 1024, cache._cache_size)
        self.assertEqual(30 * 60, cache._cache_ttl)

    @mock.patch.object(fileutils, 'ensure_tree')
    def test_without_master_path(self, mock_ensure_tree):
        self.config(tftp_master_path='', group='pxe')
        self.config(image_cache_size=500, group='pxe')
        self.config(image_cache_ttl=30, group='pxe')
        cache = pxe_utils.TFTPImageCache()
        mock_ensure_tree.assert_not_called()
        self.assertEqual(500 * 1024 * 1024, cache._cache_size)
        self.assertEqual(30 * 60, cache._cache_ttl)

ironic-15.0.0/ironic/tests/unit/common/test_policy.py

# -*- encoding: utf-8 -*-
#
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys

import mock
from oslo_config import cfg
from oslo_policy import policy as oslo_policy

from ironic.common import exception
from ironic.common import policy
from ironic.tests import base


class PolicyInCodeTestCase(base.TestCase):
    """Tests whether the configuration of the policy engine is correct."""

    def test_admin_api(self):
        creds = ({'roles': ['admin']},
                 {'roles': ['administrator']},
                 {'roles': ['admin', 'administrator']})
        for c in creds:
            self.assertTrue(policy.check('admin_api', c, c))

    def test_public_api(self):
        creds = {'is_public_api': 'True'}
        self.assertTrue(policy.check('public_api', creds, creds))

    def test_show_password(self):
        creds = {'roles': [u'admin'], 'project_name': 'admin',
                 'project_domain_id': 'default'}
        self.assertFalse(policy.check('show_password', creds, creds))

    def test_is_member(self):
        creds = [{'project_name': 'demo', 'project_domain_id': 'default'},
                 {'project_name': 'baremetal', 'project_domain_id': 'default'},
                 {'project_name': 'demo', 'project_domain_id': None},
                 {'project_name': 'baremetal', 'project_domain_id': None}]
        for c in creds:
            self.assertTrue(policy.check('is_member', c, c))
        c = {'project_name': 'demo1', 'project_domain_id': 'default2'}
        self.assertFalse(policy.check('is_member', c, c))

    def test_is_node_owner(self):
        c1 = {'project_id': '1234',
              'project_name': 'demo',
              'project_domain_id': 'default'}
        c2 = {'project_id': '5678',
              'project_name': 'demo',
              'project_domain_id': 'default'}
        target = dict.copy(c1)
        target['node.owner'] = '1234'
        self.assertTrue(policy.check('is_node_owner', target, c1))
        self.assertFalse(policy.check('is_node_owner', target, c2))

    def test_is_node_lessee(self):
        c1 = {'project_id': '1234',
              'project_name': 'demo',
              'project_domain_id': 'default'}
        c2 = {'project_id': '5678',
              'project_name': 'demo',
              'project_domain_id': 'default'}
        target = dict.copy(c1)
        target['node.lessee'] = '1234'
        self.assertTrue(policy.check('is_node_lessee', target, c1))
        self.assertFalse(policy.check('is_node_lessee', target, c2))

    def test_is_allocation_owner(self):
        c1 = {'project_id': '1234',
              'project_name': 'demo',
              'project_domain_id': 'default'}
        c2 = {'project_id': '5678',
              'project_name': 'demo',
              'project_domain_id': 'default'}
        target = dict.copy(c1)
        target['allocation.owner'] = '1234'
        self.assertTrue(policy.check('is_allocation_owner', target, c1))
        self.assertFalse(policy.check('is_allocation_owner', target, c2))

    def test_node_get(self):
        creds = {'roles': ['baremetal_observer'], 'project_name': 'demo',
                 'project_domain_id': 'default'}
        self.assertTrue(policy.check('baremetal:node:get', creds, creds))

    def test_node_create(self):
        creds = {'roles': ['baremetal_admin'], 'project_name': 'demo',
                 'project_domain_id': 'default'}
        self.assertTrue(policy.check('baremetal:node:create', creds, creds))


class PolicyInCodeTestCaseNegative(base.TestCase):
    """Tests whether the configuration of the policy engine is correct."""

    def test_admin_api(self):
        creds = {'roles': ['Member']}
        self.assertFalse(policy.check('admin_api', creds, creds))

    def test_public_api(self):
        creds = ({'is_public_api': 'False'}, {})
        for c in creds:
            self.assertFalse(policy.check('public_api', c, c))

    def test_show_password(self):
        creds = {'roles': [u'admin'], 'tenant': 'demo'}
        self.assertFalse(policy.check('show_password', creds, creds))

    def test_node_get(self):
        creds = {'roles': ['generic_user'], 'tenant': 'demo'}
        self.assertFalse(policy.check('baremetal:node:get', creds, creds))

    def test_node_create(self):
        creds = {'roles': ['baremetal_observer'], 'tenant': 'demo'}
        self.assertFalse(policy.check('baremetal:node:create', creds, creds))


class PolicyTestCase(base.TestCase):
    """Tests whether ironic.common.policy behaves as expected."""

    def setUp(self):
        super(PolicyTestCase, self).setUp()
        rule = oslo_policy.RuleDefault('has_foo_role', "role:foo")
        enforcer = policy.get_enforcer()
        enforcer.register_default(rule)

    def test_authorize_passes(self):
        creds = {'roles': ['foo']}
        policy.authorize('has_foo_role', creds, creds)

    def test_authorize_access_forbidden(self):
        creds = {'roles': ['bar']}
        self.assertRaises(
            exception.HTTPForbidden,
            policy.authorize, 'has_foo_role', creds, creds)

    def test_authorize_policy_not_registered(self):
        creds = {'roles': ['foo']}
        self.assertRaises(
            oslo_policy.PolicyNotRegistered,
            policy.authorize, 'has_bar_role', creds, creds)

    @mock.patch.object(cfg, 'CONF', autospec=True)
    @mock.patch.object(policy, 'get_enforcer', autospec=True)
    def test_get_oslo_policy_enforcer_no_args(self, mock_gpe, mock_cfg):
        mock_gpe.return_value = mock.Mock()
        args = []
        with mock.patch.object(sys, 'argv', args):
            policy.get_oslo_policy_enforcer()
        mock_cfg.assert_called_once_with([], project='ironic')
        self.assertEqual(1, mock_gpe.call_count)

    @mock.patch.object(cfg, 'CONF', autospec=True)
    @mock.patch.object(policy, 'get_enforcer', autospec=True)
    def test_get_oslo_policy_enforcer_namespace(self, mock_gpe, mock_cfg):
        mock_gpe.return_value = mock.Mock()
        args = ['opg', '--namespace', 'ironic']
        with mock.patch.object(sys, 'argv', args):
            policy.get_oslo_policy_enforcer()
        mock_cfg.assert_called_once_with([], project='ironic')
        self.assertEqual(1, mock_gpe.call_count)

    @mock.patch.object(cfg, 'CONF', autospec=True)
    @mock.patch.object(policy, 'get_enforcer', autospec=True)
    def test_get_oslo_policy_enforcer_config_file(self, mock_gpe, mock_cfg):
        mock_gpe.return_value = mock.Mock()
        args = ['opg', '--namespace', 'ironic', '--config-file', 'my.cfg']
        with mock.patch.object(sys, 'argv', args):
            policy.get_oslo_policy_enforcer()
        mock_cfg.assert_called_once_with(['--config-file', 'my.cfg'],
                                         project='ironic')
        self.assertEqual(1, mock_gpe.call_count)

ironic-15.0.0/ironic/tests/unit/common/test_network.py

# Copyright 2014 Rackspace Inc.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_utils import uuidutils

from ironic.common import exception
from ironic.common import network
from ironic.common import neutron as neutron_common
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules.network import common as driver_common
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as object_utils


class TestNetwork(db_base.DbTestCase):

    def setUp(self):
        super(TestNetwork, self).setUp()
        self.node = object_utils.create_test_node(self.context)

    def test_get_node_vif_ids_no_ports_no_portgroups(self):
        expected = {'portgroups': {}, 'ports': {}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual(expected, result)

    def _test_get_node_vif_ids_one_port(self, key):
        if key == "extra":
            kwargs1 = {key: {'vif_port_id': 'test-vif-A'}}
        else:
            kwargs1 = {key: {'tenant_vif_port_id': 'test-vif-A'}}
        port1 = db_utils.create_test_port(node_id=self.node.id,
                                          address='aa:bb:cc:dd:ee:ff',
                                          uuid=uuidutils.generate_uuid(),
                                          **kwargs1)
        expected = {'portgroups': {}, 'ports': {port1.uuid: 'test-vif-A'}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual(expected, result)

    def test_get_node_vif_ids_one_port_extra(self):
        self._test_get_node_vif_ids_one_port("extra")

    def test_get_node_vif_ids_one_port_int_info(self):
        self._test_get_node_vif_ids_one_port("internal_info")

    def _test_get_node_vif_ids_one_portgroup(self, key):
        if key == "extra":
            kwargs1 = {key: {'vif_port_id': 'test-vif-A'}}
        else:
            kwargs1 = {key: {'tenant_vif_port_id': 'test-vif-A'}}
        pg1 = db_utils.create_test_portgroup(
            node_id=self.node.id, **kwargs1)
        expected = {'portgroups': {pg1.uuid: 'test-vif-A'}, 'ports': {}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual(expected, result)

    def test_get_node_vif_ids_one_portgroup_extra(self):
        self._test_get_node_vif_ids_one_portgroup("extra")

    def test_get_node_vif_ids_one_portgroup_int_info(self):
        self._test_get_node_vif_ids_one_portgroup("internal_info")

    def _test_get_node_vif_ids_two_ports(self, key):
        if key == "extra":
            kwargs1 = {key: {'vif_port_id': 'test-vif-A'}}
            kwargs2 = {key: {'vif_port_id': 'test-vif-B'}}
        else:
            kwargs1 = {key: {'tenant_vif_port_id': 'test-vif-A'}}
            kwargs2 = {key: {'tenant_vif_port_id': 'test-vif-B'}}
        port1 = db_utils.create_test_port(node_id=self.node.id,
                                          address='aa:bb:cc:dd:ee:ff',
                                          uuid=uuidutils.generate_uuid(),
                                          **kwargs1)
        port2 = db_utils.create_test_port(node_id=self.node.id,
                                          address='dd:ee:ff:aa:bb:cc',
                                          uuid=uuidutils.generate_uuid(),
                                          **kwargs2)
        expected = {'portgroups': {},
                    'ports': {port1.uuid: 'test-vif-A',
                              port2.uuid: 'test-vif-B'}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual(expected, result)

    def test_get_node_vif_ids_two_ports_extra(self):
        self._test_get_node_vif_ids_two_ports('extra')

    def test_get_node_vif_ids_two_ports_int_info(self):
        self._test_get_node_vif_ids_two_ports('internal_info')

    def _test_get_node_vif_ids_two_portgroups(self, key):
        if key == "extra":
            kwargs1 = {key: {'vif_port_id': 'test-vif-A'}}
            kwargs2 = {key: {'vif_port_id': 'test-vif-B'}}
        else:
            kwargs1 = {key: {'tenant_vif_port_id': 'test-vif-A'}}
            kwargs2 = {key: {'tenant_vif_port_id': 'test-vif-B'}}
        pg1 = db_utils.create_test_portgroup(
            node_id=self.node.id, **kwargs1)
        pg2 = db_utils.create_test_portgroup(
            uuid=uuidutils.generate_uuid(),
            address='dd:ee:ff:aa:bb:cc',
            node_id=self.node.id,
            name='barname',
            **kwargs2)
        expected = {'portgroups': {pg1.uuid: 'test-vif-A',
                                   pg2.uuid: 'test-vif-B'},
                    'ports': {}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual(expected, result)

    def test_get_node_vif_ids_two_portgroups_extra(self):
        self._test_get_node_vif_ids_two_portgroups('extra')

    def test_get_node_vif_ids_two_portgroups_int_info(self):
        self._test_get_node_vif_ids_two_portgroups('internal_info')

    def _test_get_node_vif_ids_multitenancy(self, int_info_key):
        port = db_utils.create_test_port(
            node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            internal_info={int_info_key: 'test-vif-A'})
        portgroup = db_utils.create_test_portgroup(
            node_id=self.node.id, address='dd:ee:ff:aa:bb:cc',
            internal_info={int_info_key: 'test-vif-B'})
        expected = {'ports': {port.uuid: 'test-vif-A'},
                    'portgroups': {portgroup.uuid: 'test-vif-B'}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual(expected, result)

    def test_get_node_vif_ids_during_cleaning(self):
        self._test_get_node_vif_ids_multitenancy('cleaning_vif_port_id')

    def test_get_node_vif_ids_during_provisioning(self):
        self._test_get_node_vif_ids_multitenancy('provisioning_vif_port_id')

    def test_get_node_vif_ids_during_rescuing(self):
        self._test_get_node_vif_ids_multitenancy('rescuing_vif_port_id')

    def test_remove_vifs_from_node(self):
        db_utils.create_test_port(
            node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            internal_info={driver_common.TENANT_VIF_KEY: 'test-vif-A'})
        db_utils.create_test_portgroup(
            node_id=self.node.id, address='dd:ee:ff:aa:bb:cc',
            internal_info={driver_common.TENANT_VIF_KEY: 'test-vif-B'})
        with task_manager.acquire(self.context, self.node.uuid) as task:
            network.remove_vifs_from_node(task)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual({}, result['ports'])
        self.assertEqual({}, result['portgroups'])


class TestRemoveVifsTestCase(db_base.DbTestCase):

    def setUp(self):
        super(TestRemoveVifsTestCase, self).setUp()
        self.node = object_utils.create_test_node(
            self.context,
            network_interface='flat',
            provision_state=states.DELETING)

    @mock.patch.object(neutron_common, 'unbind_neutron_port')
    def test_remove_vifs_from_node_failure(self, mock_unbind):
        db_utils.create_test_port(
            node_id=self.node.id, address='aa:bb:cc:dd:ee:ff',
            internal_info={driver_common.TENANT_VIF_KEY: 'test-vif-A'})
        db_utils.create_test_portgroup(
            node_id=self.node.id, address='dd:ee:ff:aa:bb:cc',
            internal_info={driver_common.TENANT_VIF_KEY: 'test-vif-B'})
        mock_unbind.side_effect = [exception.NetworkError, None]
        with task_manager.acquire(self.context, self.node.uuid) as task:
            network.remove_vifs_from_node(task)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_node_vif_ids(task)
        self.assertEqual({}, result['ports'])
        self.assertEqual({}, result['portgroups'])
        self.assertEqual(2, mock_unbind.call_count)


class GetPortgroupByIdTestCase(db_base.DbTestCase):

    def test_portgroup_by_id(self):
        node = object_utils.create_test_node(self.context)
        portgroup = object_utils.create_test_portgroup(self.context,
                                                       node_id=node.id)
        object_utils.create_test_portgroup(self.context,
                                           node_id=node.id,
                                           uuid=uuidutils.generate_uuid(),
                                           address='00:11:22:33:44:55',
                                           name='pg2')
        with task_manager.acquire(self.context, node.uuid) as task:
            res = network.get_portgroup_by_id(task, portgroup.id)
        self.assertEqual(portgroup.id, res.id)

    def test_portgroup_by_id_no_such_portgroup(self):
        node = object_utils.create_test_node(self.context)
        object_utils.create_test_portgroup(self.context, node_id=node.id)
        with task_manager.acquire(self.context, node.uuid) as task:
            portgroup_id = 'invalid-portgroup-id'
            res = network.get_portgroup_by_id(task, portgroup_id)
        self.assertIsNone(res)


class GetPortsByPortgroupIdTestCase(db_base.DbTestCase):

    def test_ports_by_portgroup_id(self):
        node = object_utils.create_test_node(self.context)
        portgroup = object_utils.create_test_portgroup(self.context,
                                                       node_id=node.id)
        port = object_utils.create_test_port(self.context,
                                             node_id=node.id,
                                             portgroup_id=portgroup.id)
        object_utils.create_test_port(self.context,
                                      node_id=node.id,
                                      uuid=uuidutils.generate_uuid(),
                                      address='00:11:22:33:44:55')
        with task_manager.acquire(self.context, node.uuid) as task:
            res = network.get_ports_by_portgroup_id(task, portgroup.id)
        self.assertEqual([port.id], [p.id for p in res])

    def test_ports_by_portgroup_id_empty(self):
        node = object_utils.create_test_node(self.context)
        portgroup = object_utils.create_test_portgroup(self.context,
                                                       node_id=node.id)
        with task_manager.acquire(self.context, node.uuid) as task:
            res = network.get_ports_by_portgroup_id(task, portgroup.id)
        self.assertEqual([], res)


class GetPhysnetsForNodeTestCase(db_base.DbTestCase):

    def test_get_physnets_for_node_no_ports(self):
        node = object_utils.create_test_node(self.context)
        with task_manager.acquire(self.context, node.uuid) as task:
            res = network.get_physnets_for_node(task)
        self.assertEqual(set(), res)

    def test_get_physnets_for_node_excludes_None(self):
        node = object_utils.create_test_node(self.context)
        object_utils.create_test_port(self.context, node_id=node.id)
        with task_manager.acquire(self.context, node.uuid) as task:
            res = network.get_physnets_for_node(task)
        self.assertEqual(set(), res)

    def test_get_physnets_for_node_multiple_ports(self):
        node = object_utils.create_test_node(self.context)
        object_utils.create_test_port(self.context, node_id=node.id,
                                      physical_network='physnet1')
        object_utils.create_test_port(self.context, node_id=node.id,
                                      uuid=uuidutils.generate_uuid(),
                                      address='00:11:22:33:44:55',
                                      physical_network='physnet2')
        with task_manager.acquire(self.context, node.uuid) as task:
            res = network.get_physnets_for_node(task)
        self.assertEqual({'physnet1', 'physnet2'}, res)
class GetPhysnetsByPortgroupID(db_base.DbTestCase):

    def setUp(self):
        super(GetPhysnetsByPortgroupID, self).setUp()
        self.node = object_utils.create_test_node(self.context)
        self.portgroup = object_utils.create_test_portgroup(
            self.context, node_id=self.node.id)

    def _test(self, expected_result, exclude_port=None):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            result = network.get_physnets_by_portgroup_id(task,
                                                          self.portgroup.id,
                                                          exclude_port)
        self.assertEqual(expected_result, result)

    def test_empty(self):
        self._test(set())

    def test_one_port(self):
        object_utils.create_test_port(self.context,
                                      node_id=self.node.id,
                                      portgroup_id=self.portgroup.id,
                                      physical_network='physnet1')
        self._test({'physnet1'})

    def test_two_ports(self):
        object_utils.create_test_port(self.context,
                                      node_id=self.node.id,
                                      portgroup_id=self.portgroup.id,
                                      physical_network='physnet1')
        object_utils.create_test_port(self.context,
                                      node_id=self.node.id,
                                      uuid=uuidutils.generate_uuid(),
                                      address='00:11:22:33:44:55',
                                      portgroup_id=self.portgroup.id,
                                      physical_network='physnet1')
        self._test({'physnet1'})

    def test_exclude_port(self):
        object_utils.create_test_port(self.context,
                                      node_id=self.node.id,
                                      portgroup_id=self.portgroup.id,
                                      physical_network='physnet1')
        port2 = object_utils.create_test_port(self.context,
                                              node_id=self.node.id,
                                              uuid=uuidutils.generate_uuid(),
                                              address='00:11:22:33:44:55',
                                              portgroup_id=self.portgroup.id,
                                              physical_network='physnet2')
        self._test({'physnet1'}, port2)

    def test_exclude_port_no_id(self):
        # During port creation there may be no 'id' field.
        object_utils.create_test_port(self.context,
                                      node_id=self.node.id,
                                      portgroup_id=self.portgroup.id,
                                      physical_network='physnet1')
        port2 = object_utils.get_test_port(self.context,
                                           node_id=self.node.id,
                                           uuid=uuidutils.generate_uuid(),
                                           address='00:11:22:33:44:55',
                                           portgroup_id=self.portgroup.id,
                                           physical_network='physnet2')
        self._test({'physnet1'}, port2)

    def test_two_ports_inconsistent(self):
        object_utils.create_test_port(self.context,
                                      node_id=self.node.id,
                                      portgroup_id=self.portgroup.id,
                                      physical_network='physnet1')
        object_utils.create_test_port(self.context,
                                      node_id=self.node.id,
                                      uuid=uuidutils.generate_uuid(),
                                      address='00:11:22:33:44:55',
                                      portgroup_id=self.portgroup.id,
                                      physical_network='physnet2')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertRaises(exception.PortgroupPhysnetInconsistent,
                              network.get_physnets_by_portgroup_id,
                              task, self.portgroup.id)

ironic-15.0.0/ironic/tests/unit/common/test_neutron.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import time

from keystoneauth1 import loading as kaloading
import mock
from neutronclient.common import exceptions as neutron_client_exc
from neutronclient.v2_0 import client
from oslo_utils import uuidutils

from ironic.common import context
from ironic.common import exception
from ironic.common import neutron
from ironic.conductor import task_manager
from ironic.tests import base
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as object_utils


@mock.patch('ironic.common.keystone.get_service_auth', autospec=True,
            return_value=mock.sentinel.sauth)
@mock.patch('ironic.common.keystone.get_auth', autospec=True,
            return_value=mock.sentinel.auth)
@mock.patch('ironic.common.keystone.get_adapter', autospec=True)
@mock.patch('ironic.common.keystone.get_session', autospec=True,
            return_value=mock.sentinel.session)
@mock.patch.object(client.Client, "__init__", return_value=None,
                   autospec=True)
class TestNeutronClient(base.TestCase):

    def setUp(self):
        super(TestNeutronClient, self).setUp()
        # NOTE(pas-ha) register keystoneauth dynamic options manually
        plugin = kaloading.get_plugin_loader('password')
        opts = kaloading.get_auth_plugin_conf_options(plugin)
        self.cfg_fixture.register_opts(opts, group='neutron')
        self.config(retries=2, group='neutron')
        self.config(username='test-admin-user',
                    project_name='test-admin-tenant',
                    password='test-admin-password',
                    auth_url='test-auth-uri',
                    auth_type='password',
                    interface='internal',
                    service_type='network',
                    timeout=10,
                    group='neutron')
        # force-reset the global session object
        neutron._NEUTRON_SESSION = None
        self.context = context.RequestContext(global_request_id='global')

    def _call_and_assert_client(self, client_mock, url,
                                auth=mock.sentinel.auth):
        neutron.get_client(context=self.context)
        client_mock.assert_called_once_with(mock.ANY,  # this is 'self'
                                            session=mock.sentinel.session,
                                            auth=auth, retries=2,
                                            endpoint_override=url,
                                            global_request_id='global',
                                            timeout=45)
    @mock.patch('ironic.common.context.RequestContext', autospec=True)
    def test_get_neutron_client_with_token(self, mock_ctxt, mock_client_init,
                                           mock_session, mock_adapter,
                                           mock_auth, mock_sauth):
        mock_ctxt.return_value = ctxt = mock.Mock()
        ctxt.auth_token = 'test-token-123'
        mock_adapter.return_value = adapter = mock.Mock()
        adapter.get_endpoint.return_value = 'neutron_url'
        neutron.get_client(token='test-token-123')
        mock_ctxt.assert_called_once_with(auth_token='test-token-123')
        mock_client_init.assert_called_once_with(
            mock.ANY,  # this is 'self'
            session=mock.sentinel.session,
            auth=mock.sentinel.sauth,
            retries=2,
            endpoint_override='neutron_url',
            global_request_id=ctxt.global_id,
            timeout=45)
        # testing handling of default url_timeout
        mock_session.assert_called_once_with('neutron', timeout=10)
        mock_adapter.assert_called_once_with('neutron',
                                             session=mock.sentinel.session,
                                             auth=mock.sentinel.auth)
        mock_sauth.assert_called_once_with(mock_ctxt.return_value,
                                           'neutron_url', mock.sentinel.auth)

    def test_get_neutron_client_with_context(self, mock_client_init,
                                             mock_session, mock_adapter,
                                             mock_auth, mock_sauth):
        self.context = context.RequestContext(global_request_id='global',
                                              auth_token='test-token-123')
        mock_adapter.return_value = adapter = mock.Mock()
        adapter.get_endpoint.return_value = 'neutron_url'
        self._call_and_assert_client(mock_client_init, 'neutron_url',
                                     auth=mock.sentinel.sauth)
        # testing handling of default url_timeout
        mock_session.assert_called_once_with('neutron', timeout=10)
        mock_adapter.assert_called_once_with('neutron',
                                             session=mock.sentinel.session,
                                             auth=mock.sentinel.auth)
        mock_sauth.assert_called_once_with(self.context, 'neutron_url',
                                           mock.sentinel.auth)

    def test_get_neutron_client_without_token(self, mock_client_init,
                                              mock_session, mock_adapter,
                                              mock_auth, mock_sauth):
        mock_adapter.return_value = adapter = mock.Mock()
        adapter.get_endpoint.return_value = 'neutron_url'
        self._call_and_assert_client(mock_client_init, 'neutron_url')
        mock_session.assert_called_once_with('neutron', timeout=10)
        mock_adapter.assert_called_once_with('neutron',
                                             session=mock.sentinel.session,
                                             auth=mock.sentinel.auth)
        self.assertEqual(0, mock_sauth.call_count)

    def test_get_neutron_client_noauth(self, mock_client_init, mock_session,
                                       mock_adapter, mock_auth, mock_sauth):
        self.config(endpoint_override='neutron_url',
                    auth_type='none',
                    timeout=10,
                    group='neutron')
        mock_adapter.return_value = adapter = mock.Mock()
        adapter.get_endpoint.return_value = 'neutron_url'
        self._call_and_assert_client(mock_client_init, 'neutron_url')
        self.assertEqual('none', neutron.CONF.neutron.auth_type)
        mock_session.assert_called_once_with('neutron', timeout=10)
        mock_adapter.assert_called_once_with('neutron',
                                             session=mock.sentinel.session,
                                             auth=mock.sentinel.auth)
        mock_auth.assert_called_once_with('neutron')
        self.assertEqual(0, mock_sauth.call_count)


class TestNeutronConfClient(base.TestCase):

    def setUp(self):
        super(TestNeutronConfClient, self).setUp()
        # NOTE(pas-ha) register keystoneauth dynamic options manually
        plugin = kaloading.get_plugin_loader('password')
        opts = kaloading.get_auth_plugin_conf_options(plugin)
        self.cfg_fixture.register_opts(opts, group='neutron')
        self.config(retries=2, group='neutron')
        self.config(username='test-admin-user',
                    project_name='test-admin-tenant',
                    password='test-admin-password',
                    auth_url='test-auth-uri',
                    auth_type='password',
                    interface='internal',
                    service_type='network',
                    timeout=10,
                    group='neutron')
        # force-reset the global session object
        neutron._NEUTRON_SESSION = None
        self.context = context.RequestContext(global_request_id='global')

    @mock.patch('keystoneauth1.loading.load_auth_from_conf_options',
                autospec=True, return_value=mock.sentinel.auth)
    @mock.patch('keystoneauth1.loading.load_session_from_conf_options',
                autospec=True, return_value=mock.sentinel.session)
    @mock.patch('ironic.common.keystone.get_endpoint', autospec=True,
                return_value='neutron_url')
    @mock.patch.object(client.Client, "__init__", return_value=None,
                       autospec=True)
    def test_get_neutron_conf_client(self, mock_client, mock_get_endpoint,
                                     mock_session, mock_auth):
        neutron._get_conf_client(self.context)
        mock_client.assert_called_once_with(mock.ANY,  # this is 'self'
                                            session=mock.sentinel.session,
                                            auth=mock.sentinel.auth,
                                            retries=2,
                                            endpoint_override='neutron_url',
                                            global_request_id='global',
                                            timeout=45)


class TestUpdateNeutronPort(base.TestCase):

    def setUp(self):
        super(TestUpdateNeutronPort, self).setUp()
        self.uuid = uuidutils.generate_uuid()
        self.context = context.RequestContext()
        self.update_body = {'port': {}}

    @mock.patch.object(neutron, 'get_client', autospec=True)
    @mock.patch.object(neutron, '_get_conf_client', autospec=True)
    def test_update_neutron_port(self, conf_client_mock, client_mock):
        client_mock.return_value.show_port.return_value = {'port': {}}
        conf_client_mock.return_value.update_port.return_value = {'port': {}}

        neutron.update_neutron_port(self.context, self.uuid,
                                    self.update_body)

        client_mock.assert_called_once_with(context=self.context)
        client_mock.return_value.show_port.assert_called_once_with(self.uuid)
        conf_client_mock.assert_called_once_with(self.context)
        conf_client_mock.return_value.update_port.assert_called_once_with(
            self.uuid, self.update_body)

    @mock.patch.object(neutron, 'get_client', autospec=True)
    @mock.patch.object(neutron, '_get_conf_client', autospec=True)
    def test_update_neutron_port_with_client(self, conf_client_mock,
                                             client_mock):
        client_mock.return_value.show_port.return_value = {'port': {}}
        conf_client_mock.return_value.update_port.return_value = {'port': {}}
        client = mock.Mock()
        client.update_port.return_value = {'port': {}}

        neutron.update_neutron_port(self.context, self.uuid,
                                    self.update_body, client)

        self.assertFalse(client_mock.called)
        self.assertFalse(conf_client_mock.called)
        client.update_port.assert_called_once_with(self.uuid,
                                                   self.update_body)

    @mock.patch.object(neutron, 'get_client', autospec=True)
    @mock.patch.object(neutron, '_get_conf_client', autospec=True)
    def test_update_neutron_port_with_exception(self, conf_client_mock,
                                                client_mock):
        client_mock.return_value.show_port.side_effect = \
            neutron_client_exc.NeutronClientException
        conf_client_mock.return_value.update_port.return_value = {'port': {}}

        self.assertRaises(
            neutron_client_exc.NeutronClientException,
            neutron.update_neutron_port,
            self.context, self.uuid, self.update_body)

        client_mock.assert_called_once_with(context=self.context)
        client_mock.return_value.show_port.assert_called_once_with(self.uuid)
        self.assertFalse(conf_client_mock.called)


class TestNeutronNetworkActions(db_base.DbTestCase):

    _CLIENT_ID = (
        '20:00:55:04:01:fe:80:00:00:00:00:00:00:00:02:c9:02:00:23:13:92')

    def setUp(self):
        super(TestNeutronNetworkActions, self).setUp()
        self.node = object_utils.create_test_node(self.context)
        self.ports = [object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c782',
            address='52:54:00:cf:2d:32',
            extra={'vif_port_id': uuidutils.generate_uuid()}
        )]
        # Very simple neutron port representation
        self.neutron_port = {'id': '132f871f-eaec-4fed-9475-0d54465e0f00',
                             'mac_address': '52:54:00:cf:2d:32',
                             'fixed_ips': []}
        self.network_uuid = uuidutils.generate_uuid()
        self.client_mock = mock.Mock()
        self.client_mock.list_agents.return_value = {
            'agents': [{'alive': True}]}
        patcher = mock.patch('ironic.common.neutron.get_client',
                             return_value=self.client_mock, autospec=True)
        patcher.start()
        self.addCleanup(patcher.stop)

    @mock.patch.object(neutron, 'update_neutron_port', autospec=True)
    def _test_add_ports_to_network(self, update_mock, is_client_id,
                                   security_groups=None,
                                   add_all_ports=False):
        # Ports will be created only if pxe_enabled is True
        self.node.network_interface = 'neutron'
        self.node.save()
        port2 = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='54:00:00:cf:2d:22',
            pxe_enabled=False
        )
        if add_all_ports:
            self.config(add_all_ports=True, group="neutron")
        port = self.ports[0]
        if is_client_id:
            extra = port.extra
            extra['client-id'] = self._CLIENT_ID
            port.extra = extra
            port.save()
        expected_create_body = {
            'port': {
                'network_id': self.network_uuid,
                'admin_state_up': True,
                'binding:vnic_type': 'baremetal',
                'device_id': self.node.uuid,
            }
        }
        expected_update_body = {
            'port': {
                'device_owner': 'baremetal:none',
                'binding:host_id': self.node.uuid,
                'mac_address': port.address,
                'binding:profile': {
                    'local_link_information': [port.local_link_connection]
                }
            }
        }
        if security_groups:
            expected_create_body['port']['security_groups'] = security_groups
        if is_client_id:
            expected_create_body['port']['extra_dhcp_opts'] = (
                [{'opt_name': '61', 'opt_value': self._CLIENT_ID}])
        if add_all_ports:
            expected_create_body2 = copy.deepcopy(expected_create_body)
            expected_update_body2 = copy.deepcopy(expected_update_body)
            expected_update_body2['port']['mac_address'] = port2.address
            expected_create_body2['fixed_ips'] = []
            neutron_port2 = {'id': '132f871f-eaec-4fed-9475-0d54465e0f01',
                             'mac_address': port2.address,
                             'fixed_ips': []}
            self.client_mock.create_port.side_effect = [
                {'port': self.neutron_port},
                {'port': neutron_port2}
            ]
            expected = {port.uuid: self.neutron_port['id'],
                        port2.uuid: neutron_port2['id']}
        else:
            self.client_mock.create_port.return_value = {
                'port': self.neutron_port}
            expected = {port.uuid: self.neutron_port['id']}

        with task_manager.acquire(self.context, self.node.uuid) as task:
            ports = neutron.add_ports_to_network(
                task, self.network_uuid, security_groups=security_groups)
            self.assertEqual(expected, ports)
            if add_all_ports:
                create_calls = [mock.call(expected_create_body),
                                mock.call(expected_create_body2)]
                update_calls = [
                    mock.call(self.context, self.neutron_port['id'],
                              expected_update_body),
                    mock.call(self.context, neutron_port2['id'],
                              expected_update_body2)]
                self.client_mock.create_port.assert_has_calls(create_calls)
                update_mock.assert_has_calls(update_calls)
            else:
                self.client_mock.create_port.assert_called_once_with(
                    expected_create_body)
update_mock.assert_called_once_with( self.context, self.neutron_port['id'], expected_update_body) def test_add_ports_to_network(self): self._test_add_ports_to_network(is_client_id=False, security_groups=None) def test_add_ports_to_network_all_ports(self): self._test_add_ports_to_network(is_client_id=False, security_groups=None, add_all_ports=True) @mock.patch.object(neutron, '_verify_security_groups', autospec=True) def test_add_ports_to_network_with_sg(self, verify_mock): sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) self._test_add_ports_to_network(is_client_id=False, security_groups=sg_ids) @mock.patch.object(neutron, 'update_neutron_port', autospec=True) def test__add_ip_addresses_for_ipv6_stateful(self, mock_update): subnet_id = uuidutils.generate_uuid() self.client_mock.show_subnet.return_value = { 'subnet': { 'id': subnet_id, 'ip_version': 6, 'ipv6_address_mode': 'dhcpv6-stateful' } } self.neutron_port['fixed_ips'] = [{'subnet_id': subnet_id, 'ip_address': '2001:db8::1'}] expected_body = { 'port': { 'fixed_ips': [ {'subnet_id': subnet_id, 'ip_address': '2001:db8::1'}, {'subnet_id': subnet_id}, {'subnet_id': subnet_id}, {'subnet_id': subnet_id} ] } } neutron._add_ip_addresses_for_ipv6_stateful( self.context, {'port': self.neutron_port}, self.client_mock ) mock_update.assert_called_once_with( self.context, self.neutron_port['id'], expected_body) def test_verify_sec_groups(self): sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) expected_vals = {'security_groups': []} for sg in sg_ids: expected_vals['security_groups'].append({'id': sg}) client = mock.MagicMock() client.list_security_groups.return_value = expected_vals self.assertIsNone( neutron._verify_security_groups(sg_ids, client)) client.list_security_groups.assert_called_once_with( fields='id', id=sg_ids) def test_verify_sec_groups_less_than_configured(self): sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) expected_vals = 
{'security_groups': [{'id': sg_ids[0]}]} client = mock.MagicMock() client.list_security_groups.return_value = expected_vals self.assertIsNone( neutron._verify_security_groups(sg_ids[:1], client)) client.list_security_groups.assert_called_once_with( fields='id', id=sg_ids[:1]) def test_verify_sec_groups_more_than_configured(self): sg_ids = [] for i in range(1): sg_ids.append(uuidutils.generate_uuid()) client = mock.MagicMock() expected_vals = {'security_groups': []} client.list_security_groups.return_value = expected_vals self.assertRaises( exception.NetworkError, neutron._verify_security_groups, sg_ids, client) client.list_security_groups.assert_called_once_with( fields='id', id=sg_ids) def test_verify_sec_groups_no_sg_from_neutron(self): sg_ids = [] for i in range(1): sg_ids.append(uuidutils.generate_uuid()) client = mock.MagicMock() client.list_security_groups.return_value = {} self.assertRaises( exception.NetworkError, neutron._verify_security_groups, sg_ids, client) client.list_security_groups.assert_called_once_with( fields='id', id=sg_ids) def test_verify_sec_groups_exception_by_neutronclient(self): sg_ids = [] for i in range(2): sg_ids.append(uuidutils.generate_uuid()) client = mock.MagicMock() client.list_security_groups.side_effect = \ neutron_client_exc.NeutronClientException self.assertRaisesRegex( exception.NetworkError, "Could not retrieve security groups", neutron._verify_security_groups, sg_ids, client) client.list_security_groups.assert_called_once_with( fields='id', id=sg_ids) def test_add_ports_with_client_id_to_network(self): self._test_add_ports_to_network(is_client_id=True) @mock.patch.object(neutron, 'update_neutron_port', autospec=True) @mock.patch.object(neutron, 'validate_port_info', autospec=True) def test_add_ports_to_network_instance_uuid(self, vpi_mock, update_mock): self.node.instance_uuid = uuidutils.generate_uuid() self.node.network_interface = 'neutron' self.node.save() port = self.ports[0] expected_create_body = { 'port': { 
'network_id': self.network_uuid, 'admin_state_up': True, 'binding:vnic_type': 'baremetal', 'device_id': self.node.instance_uuid, } } expected_update_body = { 'port': { 'device_owner': 'baremetal:none', 'binding:host_id': self.node.uuid, 'mac_address': port.address, 'binding:profile': { 'local_link_information': [port.local_link_connection] } } } vpi_mock.return_value = True # Ensure we can create ports self.client_mock.create_port.return_value = {'port': self.neutron_port} expected = {port.uuid: self.neutron_port['id']} with task_manager.acquire(self.context, self.node.uuid) as task: ports = neutron.add_ports_to_network(task, self.network_uuid) self.assertEqual(expected, ports) self.client_mock.create_port.assert_called_once_with( expected_create_body) update_mock.assert_called_once_with(self.context, self.neutron_port['id'], expected_update_body) self.assertTrue(vpi_mock.called) @mock.patch.object(neutron, 'rollback_ports', autospec=True) def test_add_network_all_ports_fail(self, rollback_mock): # Check that if creating a port fails, the ports are cleaned up self.client_mock.create_port.side_effect = \ neutron_client_exc.ConnectionFailed with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises( exception.NetworkError, neutron.add_ports_to_network, task, self.network_uuid) rollback_mock.assert_called_once_with(task, self.network_uuid) @mock.patch.object(neutron, 'update_neutron_port', autospec=True) @mock.patch.object(neutron, 'LOG', autospec=True) def test_add_network_create_some_ports_fail(self, log_mock, update_mock): object_utils.create_test_port( self.context, node_id=self.node.id, uuid=uuidutils.generate_uuid(), address='52:54:55:cf:2d:32', extra={'vif_port_id': uuidutils.generate_uuid()} ) self.client_mock.create_port.side_effect = [ {'port': self.neutron_port}, neutron_client_exc.ConnectionFailed] with task_manager.acquire(self.context, self.node.uuid) as task: neutron.add_ports_to_network(task, self.network_uuid) 
            self.assertIn("Could not create neutron port for node's",
                          log_mock.warning.call_args_list[0][0][0])
            self.assertIn("Some errors were encountered when updating",
                          log_mock.warning.call_args_list[1][0][0])

    def test_add_network_no_port(self):
        # No port registered
        node = object_utils.create_test_node(self.context,
                                             uuid=uuidutils.generate_uuid())
        with task_manager.acquire(self.context, node.uuid) as task:
            self.assertEqual([], task.ports)
            self.assertRaisesRegex(exception.NetworkError, 'No available',
                                   neutron.add_ports_to_network,
                                   task, self.network_uuid)

    def test_add_network_no_pxe_enabled_ports(self):
        # Have port but no PXE enabled
        port = self.ports[0]
        port.pxe_enabled = False
        port.save()
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.assertFalse(task.ports[0].pxe_enabled)
            self.assertRaisesRegex(exception.NetworkError, 'No available',
                                   neutron.add_ports_to_network,
                                   task, self.network_uuid)

    @mock.patch.object(neutron, 'remove_neutron_ports', autospec=True)
    def test_remove_ports_from_network(self, remove_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            neutron.remove_ports_from_network(task, self.network_uuid)
            remove_mock.assert_called_once_with(
                task,
                {'network_id': self.network_uuid,
                 'mac_address': [self.ports[0].address]}
            )

    @mock.patch.object(neutron, 'remove_neutron_ports', autospec=True)
    def test_remove_ports_from_network_not_all_pxe_enabled(self, remove_mock):
        object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:55:cf:2d:32',
            pxe_enabled=False
        )
        with task_manager.acquire(self.context, self.node.uuid) as task:
            neutron.remove_ports_from_network(task, self.network_uuid)
            remove_mock.assert_called_once_with(
                task,
                {'network_id': self.network_uuid,
                 'mac_address': [self.ports[0].address]}
            )

    @mock.patch.object(neutron, 'remove_neutron_ports', autospec=True)
    def test_remove_ports_from_network_not_all_pxe_enabled_all_ports(
            self, remove_mock):
        self.config(add_all_ports=True, group="neutron")
        object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:55:cf:2d:32',
            pxe_enabled=False
        )
        with task_manager.acquire(self.context, self.node.uuid) as task:
            neutron.remove_ports_from_network(task, self.network_uuid)
            calls = [
                mock.call(task, {'network_id': self.network_uuid,
                                 'mac_address': [task.ports[0].address,
                                                 task.ports[1].address]}),
            ]
            remove_mock.assert_has_calls(calls)

    def test_remove_neutron_ports(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.client_mock.list_ports.return_value = {
                'ports': [self.neutron_port]}
            neutron.remove_neutron_ports(task, {'param': 'value'})
        self.client_mock.list_ports.assert_called_once_with(
            **{'param': 'value'})
        self.client_mock.delete_port.assert_called_once_with(
            self.neutron_port['id'])

    def test_remove_neutron_ports_list_fail(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.client_mock.list_ports.side_effect = \
                neutron_client_exc.ConnectionFailed
            self.assertRaisesRegex(
                exception.NetworkError, 'Could not get given network VIF',
                neutron.remove_neutron_ports, task, {'param': 'value'})
        self.client_mock.list_ports.assert_called_once_with(
            **{'param': 'value'})

    def test_remove_neutron_ports_delete_fail(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.client_mock.delete_port.side_effect = \
                neutron_client_exc.ConnectionFailed
            self.client_mock.list_ports.return_value = {
                'ports': [self.neutron_port]}
            self.assertRaisesRegex(
                exception.NetworkError, 'Could not remove VIF',
                neutron.remove_neutron_ports, task, {'param': 'value'})
        self.client_mock.list_ports.assert_called_once_with(
            **{'param': 'value'})
        self.client_mock.delete_port.assert_called_once_with(
            self.neutron_port['id'])

    def test_remove_neutron_ports_delete_race(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            self.client_mock.delete_port.side_effect = \
                neutron_client_exc.PortNotFoundClient
            self.client_mock.list_ports.return_value = {
                'ports': [self.neutron_port]}
            neutron.remove_neutron_ports(task, {'param': 'value'})
        self.client_mock.list_ports.assert_called_once_with(
            **{'param': 'value'})
        self.client_mock.delete_port.assert_called_once_with(
            self.neutron_port['id'])

    def test_get_node_portmap(self):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            portmap = neutron.get_node_portmap(task)
        self.assertEqual(
            {self.ports[0].uuid: self.ports[0].local_link_connection},
            portmap
        )

    def test_get_local_group_information(self):
        pg = object_utils.create_test_portgroup(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:55:cf:2d:32',
            mode='802.3ad',
            properties={'bond_opt1': 'foo', 'opt2': 'bar'},
            name='test-pg'
        )
        expected = {
            'id': pg.uuid,
            'name': pg.name,
            'bond_mode': pg.mode,
            'bond_properties': {'bond_opt1': 'foo', 'bond_opt2': 'bar'},
        }
        with task_manager.acquire(self.context, self.node.uuid) as task:
            res = neutron.get_local_group_information(task, pg)
        self.assertEqual(expected, res)

    @mock.patch.object(neutron, 'remove_ports_from_network', autospec=True)
    def test_rollback_ports(self, remove_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            neutron.rollback_ports(task, self.network_uuid)
        remove_mock.assert_called_once_with(task, self.network_uuid)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    @mock.patch.object(neutron, 'remove_ports_from_network', autospec=True)
    def test_rollback_ports_exception(self, remove_mock, log_mock):
        remove_mock.side_effect = exception.NetworkError('boom')
        with task_manager.acquire(self.context, self.node.uuid) as task:
            neutron.rollback_ports(task, self.network_uuid)
        self.assertTrue(log_mock.exception.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_neutron_interface(self, log_mock):
        self.node.network_interface = 'neutron'
        self.node.save()
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33')
        res = neutron.validate_port_info(self.node, port)
        self.assertTrue(res)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_neutron_interface_missed_info(self, log_mock):
        self.node.network_interface = 'neutron'
        self.node.save()
        llc = {}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc)
        res = neutron.validate_port_info(self.node, port)
        self.assertFalse(res)
        self.assertTrue(log_mock.warning.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_flat_interface(self, log_mock):
        self.node.network_interface = 'flat'
        self.node.save()
        llc = {}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc)
        res = neutron.validate_port_info(self.node, port)
        self.assertTrue(res)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_flat_interface_with_client_id(self, log_mock):
        self.node.network_interface = 'flat'
        self.node.save()
        llc = {}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc,
            extra={'client-id': self._CLIENT_ID})
        res = neutron.validate_port_info(self.node, port)
        self.assertTrue(res)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_neutron_interface_with_client_id(
            self, log_mock):
        self.node.network_interface = 'neutron'
        self.node.save()
        llc = {}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc,
            extra={'client-id': self._CLIENT_ID})
        res = neutron.validate_port_info(self.node, port)
        self.assertTrue(res)
        self.assertFalse(log_mock.warning.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_neutron_with_smartnic_and_link_info(
            self, log_mock):
        self.node.network_interface = 'neutron'
        self.node.save()
        llc = {'hostname': 'host1', 'port_id': 'rep0-0'}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc,
            is_smartnic=True)
        res = neutron.validate_port_info(self.node, port)
        self.assertTrue(res)
        self.assertFalse(log_mock.error.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_neutron_with_no_smartnic_and_link_info(
            self, log_mock):
        self.node.network_interface = 'neutron'
        self.node.save()
        llc = {'hostname': 'host1', 'port_id': 'rep0-0'}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc,
            is_smartnic=False)
        res = neutron.validate_port_info(self.node, port)
        self.assertFalse(res)
        self.assertTrue(log_mock.error.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_neutron_with_smartnic_and_no_link_info(
            self, log_mock):
        self.node.network_interface = 'neutron'
        self.node.save()
        llc = {'switch_id': 'switch', 'port_id': 'rep0-0'}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc,
            is_smartnic=True)
        res = neutron.validate_port_info(self.node, port)
        self.assertFalse(res)
        self.assertTrue(log_mock.error.called)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_validate_port_info_neutron_with_network_type_unmanaged(
            self, log_mock):
        self.node.network_interface = 'neutron'
        self.node.save()
        llc = {'network_type': 'unmanaged'}
        port = object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:33',
            local_link_connection=llc)
        res = neutron.validate_port_info(self.node, port)
        self.assertTrue(res)
        self.assertFalse(log_mock.warning.called)

    def test_validate_agent_up(self):
        self.client_mock.list_agents.return_value = {
            'agents': [{'alive': True}]}
        self.assertTrue(neutron._validate_agent(self.client_mock))

    def test_validate_agent_down(self):
        self.client_mock.list_agents.return_value = {
            'agents': [{'alive': False}]}
        self.assertFalse(neutron._validate_agent(self.client_mock))

    def test_is_smartnic_port_true(self):
        port = self.ports[0]
        port.is_smartnic = True
        self.assertTrue(neutron.is_smartnic_port(port))

    def test_is_smartnic_port_false(self):
        port = self.ports[0]
        self.assertFalse(neutron.is_smartnic_port(port))

    @mock.patch.object(neutron, '_validate_agent')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_host_agent_up_target_state_up(
            self, sleep_mock, validate_agent_mock):
        validate_agent_mock.return_value = True
        self.assertTrue(neutron.wait_for_host_agent(
            self.client_mock, 'hostname'))
        sleep_mock.assert_not_called()

    @mock.patch.object(neutron, '_validate_agent')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_host_agent_down_target_state_up(
            self, sleep_mock, validate_agent_mock):
        validate_agent_mock.return_value = False
        self.assertRaises(exception.NetworkError,
                          neutron.wait_for_host_agent,
                          self.client_mock, 'hostname')

    @mock.patch.object(neutron, '_validate_agent')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_host_agent_up_target_state_down(
            self, sleep_mock, validate_agent_mock):
        validate_agent_mock.return_value = True
        self.assertRaises(exception.NetworkError,
                          neutron.wait_for_host_agent,
                          self.client_mock, 'hostname', target_state='down')

    @mock.patch.object(neutron, '_validate_agent')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_host_agent_down_target_state_down(
            self, sleep_mock, validate_agent_mock):
        validate_agent_mock.return_value = False
        self.assertTrue(
            neutron.wait_for_host_agent(self.client_mock, 'hostname',
                                        target_state='down'))
        sleep_mock.assert_not_called()

    @mock.patch.object(neutron, '_get_port_by_uuid')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_port_status_up(self, sleep_mock, get_port_mock):
        get_port_mock.return_value = {'status': 'ACTIVE'}
        neutron.wait_for_port_status(self.client_mock, 'port_id', 'ACTIVE')
        sleep_mock.assert_not_called()

    @mock.patch.object(neutron, '_get_port_by_uuid')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_port_status_down(self, sleep_mock, get_port_mock):
        get_port_mock.side_effect = [{'status': 'DOWN'}, {'status': 'ACTIVE'}]
        neutron.wait_for_port_status(self.client_mock, 'port_id', 'ACTIVE')
        sleep_mock.assert_called_once()

    @mock.patch.object(neutron, '_get_port_by_uuid')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_port_status_active_max_retry(self, sleep_mock,
                                                   get_port_mock):
        get_port_mock.return_value = {'status': 'DOWN'}
        self.assertRaises(exception.NetworkError,
                          neutron.wait_for_port_status,
                          self.client_mock, 'port_id', 'ACTIVE')

    @mock.patch.object(neutron, '_get_port_by_uuid')
    @mock.patch.object(time, 'sleep')
    def test_wait_for_port_status_down_max_retry(self, sleep_mock,
                                                 get_port_mock):
        get_port_mock.return_value = {'status': 'ACTIVE'}
        self.assertRaises(exception.NetworkError,
                          neutron.wait_for_port_status,
                          self.client_mock, 'port_id', 'DOWN')

    @mock.patch.object(neutron, 'update_neutron_port', autospec=True)
    @mock.patch.object(neutron, 'wait_for_host_agent', autospec=True)
    @mock.patch.object(neutron, 'wait_for_port_status', autospec=True)
    def test_add_smartnic_port_to_network(
            self, wait_port_mock, wait_agent_mock, update_mock):
        # Ports will be created only if pxe_enabled is True
        self.node.network_interface = 'neutron'
        self.node.save()
        object_utils.create_test_port(
            self.context, node_id=self.node.id,
            uuid=uuidutils.generate_uuid(),
            address='52:54:00:cf:2d:22',
            pxe_enabled=False
        )
        port = self.ports[0]
        local_link_connection = port.local_link_connection
        local_link_connection['hostname'] = 'hostname'
        port.local_link_connection = local_link_connection
        port.is_smartnic = True
        port.save()

        expected_create_body = {
            'port': {
                'network_id': self.network_uuid,
                'admin_state_up': True,
                'binding:vnic_type': 'smart-nic',
                'device_id': self.node.uuid,
            }
        }
        expected_update_body = {
            'port': {
                'device_owner': 'baremetal:none',
                'binding:host_id': port.local_link_connection['hostname'],
                'mac_address': port.address,
                'binding:profile': {
                    'local_link_information': [port.local_link_connection]
                }
            }
        }

        # Ensure we can create ports
        self.client_mock.create_port.return_value = {
            'port': self.neutron_port}
        expected = {port.uuid: self.neutron_port['id']}
        with task_manager.acquire(self.context, self.node.uuid) as task:
            ports = neutron.add_ports_to_network(task, self.network_uuid)
            self.assertEqual(expected, ports)
            self.client_mock.create_port.assert_called_once_with(
                expected_create_body)
            update_mock.assert_called_once_with(
                self.context, self.neutron_port['id'], expected_update_body)
            wait_agent_mock.assert_called_once_with(
                self.client_mock, 'hostname')
            wait_port_mock.assert_called_once_with(
                self.client_mock, self.neutron_port['id'], 'ACTIVE')

    @mock.patch.object(neutron, 'is_smartnic_port', autospec=True)
    @mock.patch.object(neutron, 'wait_for_host_agent', autospec=True)
    def test_remove_neutron_smartnic_ports(
            self, wait_agent_mock, is_smartnic_mock):
        with task_manager.acquire(self.context, self.node.uuid) as task:
            is_smartnic_mock.return_value = True
            self.neutron_port['binding:host_id'] = 'hostname'
            self.client_mock.list_ports.return_value = {
                'ports': [self.neutron_port]}
            neutron.remove_neutron_ports(task, {'param': 'value'})
        self.client_mock.list_ports.assert_called_once_with(
            **{'param': 'value'})
        self.client_mock.delete_port.assert_called_once_with(
            self.neutron_port['id'])
        is_smartnic_mock.assert_called_once_with(self.neutron_port)
        wait_agent_mock.assert_called_once_with(self.client_mock, 'hostname')


@mock.patch.object(neutron, 'get_client', autospec=True)
class TestValidateNetwork(base.TestCase):
    def setUp(self):
        super(TestValidateNetwork, self).setUp()
        self.uuid = uuidutils.generate_uuid()
        self.context = context.RequestContext()

    def test_by_uuid(self, client_mock):
        net_mock = client_mock.return_value.list_networks
        net_mock.return_value = {
            'networks': [
                {'id': self.uuid},
            ]
        }
        self.assertEqual(self.uuid, neutron.validate_network(
            self.uuid, context=self.context))
        net_mock.assert_called_once_with(fields=['id'], id=self.uuid)

    def test_by_name(self, client_mock):
        net_mock = client_mock.return_value.list_networks
        net_mock.return_value = {
            'networks': [
                {'id': self.uuid},
            ]
        }
        self.assertEqual(self.uuid, neutron.validate_network(
            'name', context=self.context))
        net_mock.assert_called_once_with(fields=['id'], name='name')

    def test_not_found(self, client_mock):
        net_mock = client_mock.return_value.list_networks
        net_mock.return_value = {
            'networks': []
        }
        self.assertRaisesRegex(exception.InvalidParameterValue,
                               'was not found',
                               neutron.validate_network,
                               self.uuid, context=self.context)
        net_mock.assert_called_once_with(fields=['id'], id=self.uuid)

    def test_failure(self, client_mock):
        net_mock = client_mock.return_value.list_networks
        net_mock.side_effect = neutron_client_exc.NeutronClientException('foo')
        self.assertRaisesRegex(exception.NetworkError, 'foo',
                               neutron.validate_network, 'name',
                               context=self.context)
        net_mock.assert_called_once_with(fields=['id'], name='name')

    def test_duplicate(self, client_mock):
        net_mock = client_mock.return_value.list_networks
        net_mock.return_value = {
            'networks': [{'id': self.uuid},
                         {'id': 'uuid2'}]
        }
        self.assertRaisesRegex(exception.InvalidParameterValue,
                               'More than one network',
                               neutron.validate_network, 'name',
                               context=self.context)
        net_mock.assert_called_once_with(fields=['id'], name='name')


@mock.patch.object(neutron, 'get_client', autospec=True)
class TestUpdatePortAddress(base.TestCase):
    def setUp(self):
        super(TestUpdatePortAddress, self).setUp()
        self.context = context.RequestContext()

    @mock.patch.object(neutron, 'update_neutron_port', autospec=True)
    def test_update_port_address(self, mock_unp, mock_client):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        expected = {'port': {'mac_address': address}}
        mock_client.return_value.show_port.return_value = {}
        neutron.update_port_address(port_id, address, context=self.context)
        mock_unp.assert_called_once_with(self.context, port_id, expected)

    @mock.patch.object(neutron, 'update_neutron_port', autospec=True)
    @mock.patch.object(neutron, 'unbind_neutron_port', autospec=True)
    def test_update_port_address_with_binding(self, mock_unp, mock_update,
                                              mock_client):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        mock_client.return_value.show_port.return_value = {
            'port': {'binding:host_id': 'host',
                     'binding:profile': 'foo'}}
        calls = [mock.call(self.context, port_id,
                           {'port': {'mac_address': address}}),
                 mock.call(self.context, port_id,
                           {'port': {'binding:host_id': 'host',
                                     'binding:profile': 'foo'}})]
        neutron.update_port_address(port_id, address, context=self.context)
        mock_unp.assert_called_once_with(
            port_id, context=self.context)
        mock_update.assert_has_calls(calls)

    @mock.patch.object(neutron, 'update_neutron_port', autospec=True)
    @mock.patch.object(neutron, 'unbind_neutron_port', autospec=True)
    def test_update_port_address_without_binding(self, mock_unp, mock_update,
                                                 mock_client):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        expected = {'port': {'mac_address': address}}
        mock_client.return_value.show_port.return_value = {
            'port': {'binding:profile': 'foo'}}
        neutron.update_port_address(port_id, address, context=self.context)
        self.assertFalse(mock_unp.called)
        mock_update.assert_any_call(self.context, port_id, expected)

    def test_update_port_address_show_failed(self, mock_client):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        mock_client.return_value.show_port.side_effect = (
            neutron_client_exc.NeutronClientException())
        self.assertRaises(exception.FailedToUpdateMacOnPort,
                          neutron.update_port_address,
                          port_id, address, context=self.context)
        self.assertFalse(mock_client.return_value.update_port.called)

    @mock.patch.object(neutron, 'unbind_neutron_port', autospec=True)
    def test_update_port_address_unbind_port_failed(self, mock_unp,
                                                    mock_client):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        mock_client.return_value.show_port.return_value = {
            'port': {'binding:profile': 'foo',
                     'binding:host_id': 'host'}}
        mock_unp.side_effect = (exception.NetworkError('boom'))
        self.assertRaises(exception.FailedToUpdateMacOnPort,
                          neutron.update_port_address,
                          port_id, address, context=self.context)
        mock_unp.assert_called_once_with(
            port_id, context=self.context)
        self.assertFalse(mock_client.return_value.update_port.called)

    @mock.patch.object(neutron, 'update_neutron_port', autospec=True)
    @mock.patch.object(neutron, 'unbind_neutron_port', autospec=True)
    def test_update_port_address_with_exception(self, mock_unp,
                                                mock_update,
                                                mock_client):
        address = 'fe:54:00:77:07:d9'
        port_id = 'fake-port-id'
        mock_client.return_value.show_port.return_value = {}
        mock_update.side_effect = (
            neutron_client_exc.NeutronClientException())
        self.assertRaises(exception.FailedToUpdateMacOnPort,
                          neutron.update_port_address,
                          port_id, address, context=self.context)


@mock.patch.object(neutron, 'update_neutron_port', autospec=True)
class TestUnbindPort(base.TestCase):
    def setUp(self):
        super(TestUnbindPort, self).setUp()
        self.context = context.RequestContext()

    def test_unbind_neutron_port_client_passed(self, mock_unp):
        port_id = 'fake-port-id'
        body_unbind = {
            'port': {
                'binding:host_id': '',
                'binding:profile': {}
            }
        }
        body_reset_mac = {
            'port': {
                'mac_address': None
            }
        }
        client = mock.MagicMock()
        update_calls = [
            mock.call(self.context, port_id, body_unbind, client),
            mock.call(self.context, port_id, body_reset_mac, client)
        ]
        neutron.unbind_neutron_port(port_id, client, context=self.context)
        self.assertEqual(2, mock_unp.call_count)
        mock_unp.assert_has_calls(update_calls)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_unbind_neutron_port_failure(self, mock_log, mock_unp):
        mock_unp.side_effect = (neutron_client_exc.NeutronClientException())
        body = {
            'port': {
                'binding:host_id': '',
                'binding:profile': {}
            }
        }
        port_id = 'fake-port-id'
        self.assertRaises(exception.NetworkError,
                          neutron.unbind_neutron_port, port_id,
                          context=self.context)
        mock_unp.assert_called_once_with(self.context, port_id, body, None)
        mock_log.exception.assert_called_once()

    def test_unbind_neutron_port(self, mock_unp):
        port_id = 'fake-port-id'
        body_unbind = {
            'port': {
                'binding:host_id': '',
                'binding:profile': {}
            }
        }
        body_reset_mac = {
            'port': {
                'mac_address': None
            }
        }
        update_calls = [
            mock.call(self.context, port_id, body_unbind, None),
            mock.call(self.context, port_id, body_reset_mac, None)
        ]
        neutron.unbind_neutron_port(port_id, context=self.context)
        mock_unp.assert_has_calls(update_calls)

    @mock.patch.object(neutron, 'LOG', autospec=True)
    def test_unbind_neutron_port_not_found(self, mock_log, mock_unp):
        port_id = 'fake-port-id'
        mock_unp.side_effect = (
            neutron_client_exc.PortNotFoundClient())
        body = {
            'port': {
                'binding:host_id': '',
                'binding:profile': {}
            }
        }
        neutron.unbind_neutron_port(port_id, context=self.context)
        mock_unp.assert_called_once_with(self.context, port_id, body, None)
        mock_log.info.assert_called_once_with('Port %s was not found while '
                                              'unbinding.', port_id)


class TestGetNetworkByUUIDOrName(base.TestCase):
    def setUp(self):
        super(TestGetNetworkByUUIDOrName, self).setUp()
        self.client = mock.MagicMock()

    def test__get_network_by_uuid_or_name_uuid(self):
        network_uuid = '9acb0256-2c1b-420a-b9d7-62bee90b6ed7'
        networks = {
            'networks': [{
                'field1': 'value1',
                'field2': 'value2',
            }],
        }
        fields = ['field1', 'field2']
        self.client.list_networks.return_value = networks
        result = neutron._get_network_by_uuid_or_name(
            self.client, network_uuid, fields=fields)
        self.client.list_networks.assert_called_once_with(
            id=network_uuid, fields=fields)
        self.assertEqual(networks['networks'][0], result)

    def test__get_network_by_uuid_or_name_name(self):
        network_name = 'test-net'
        networks = {
            'networks': [{
                'field1': 'value1',
                'field2': 'value2',
            }],
        }
        fields = ['field1', 'field2']
        self.client.list_networks.return_value = networks
        result = neutron._get_network_by_uuid_or_name(
            self.client, network_name, fields=fields)
        self.client.list_networks.assert_called_once_with(
            name=network_name, fields=fields)
        self.assertEqual(networks['networks'][0], result)

    def test__get_network_by_uuid_or_name_failure(self):
        network_uuid = '9acb0256-2c1b-420a-b9d7-62bee90b6ed7'
        self.client.list_networks.side_effect = (
            neutron_client_exc.NeutronClientException())
        self.assertRaises(exception.NetworkError,
                          neutron._get_network_by_uuid_or_name,
                          self.client, network_uuid)
        self.client.list_networks.assert_called_once_with(id=network_uuid)

    def test__get_network_by_uuid_or_name_missing(self):
        network_uuid = '9acb0256-2c1b-420a-b9d7-62bee90b6ed7'
        networks = {
            'networks': [],
        }
        self.client.list_networks.return_value = networks
        self.assertRaises(exception.InvalidParameterValue,
                          neutron._get_network_by_uuid_or_name,
                          self.client, network_uuid)
        self.client.list_networks.assert_called_once_with(id=network_uuid)

    def test__get_network_by_uuid_or_name_duplicate(self):
        network_name = 'test-net'
        networks = {
            'networks': [
                {'id': '9acb0256-2c1b-420a-b9d7-62bee90b6ed7'},
                {'id': '9014b6a7-8291-4676-80b0-ab00988ce3c7'},
            ],
        }
        self.client.list_networks.return_value = networks
        self.assertRaises(exception.InvalidParameterValue,
                          neutron._get_network_by_uuid_or_name,
                          self.client, network_name)
        self.client.list_networks.assert_called_once_with(name=network_name)


@mock.patch.object(neutron, '_get_network_by_uuid_or_name', autospec=True)
@mock.patch.object(neutron, '_get_port_by_uuid', autospec=True)
class TestGetPhysnetsByPortUUID(base.TestCase):
    PORT_FIELDS = ['network_id']
    NETWORK_FIELDS = ['provider:physical_network', 'segments']

    def setUp(self):
        super(TestGetPhysnetsByPortUUID, self).setUp()
        self.client = mock.MagicMock()

    def test_get_physnets_by_port_uuid_single_segment(self, mock_gp, mock_gn):
        port_uuid = 'fake-port-uuid'
        network_uuid = 'fake-network-uuid'
        physnet = 'fake-physnet'
        mock_gp.return_value = {
            'network_id': network_uuid,
        }
        mock_gn.return_value = {
            'provider:physical_network': physnet,
        }
        result = neutron.get_physnets_by_port_uuid(self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        mock_gn.assert_called_once_with(self.client, network_uuid,
                                        fields=self.NETWORK_FIELDS)
        self.assertEqual({physnet}, result)

    def test_get_physnets_by_port_uuid_single_segment_no_physnet(
            self, mock_gp, mock_gn):
        port_uuid = 'fake-port-uuid'
        network_uuid = 'fake-network-uuid'
        mock_gp.return_value = {
            'network_id': network_uuid,
        }
        mock_gn.return_value = {
            'provider:physical_network': None,
        }
        result = neutron.get_physnets_by_port_uuid(self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        mock_gn.assert_called_once_with(self.client, network_uuid,
                                        fields=self.NETWORK_FIELDS)
        self.assertEqual(set(), result)

    def test_get_physnets_by_port_uuid_multiple_segments(self, mock_gp,
                                                         mock_gn):
        port_uuid = 'fake-port-uuid'
        network_uuid = 'fake-network-uuid'
        physnet1 = 'fake-physnet-1'
        physnet2 = 'fake-physnet-2'
        mock_gp.return_value = {
            'network_id': network_uuid,
        }
        mock_gn.return_value = {
            'segments': [
                {
                    'provider:physical_network': physnet1,
                },
                {
                    'provider:physical_network': physnet2,
                },
            ],
        }
        result = neutron.get_physnets_by_port_uuid(self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        mock_gn.assert_called_once_with(self.client, network_uuid,
                                        fields=self.NETWORK_FIELDS)
        self.assertEqual({physnet1, physnet2}, result)

    def test_get_physnets_by_port_uuid_multiple_segments_no_physnet(
            self, mock_gp, mock_gn):
        port_uuid = 'fake-port-uuid'
        network_uuid = 'fake-network-uuid'
        mock_gp.return_value = {
            'network_id': network_uuid,
        }
        mock_gn.return_value = {
            'segments': [
                {
                    'provider:physical_network': None,
                },
                {
                    'provider:physical_network': None,
                },
            ],
        }
        result = neutron.get_physnets_by_port_uuid(self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        mock_gn.assert_called_once_with(self.client, network_uuid,
                                        fields=self.NETWORK_FIELDS)
        self.assertEqual(set(), result)

    def test_get_physnets_by_port_uuid_port_missing(self, mock_gp, mock_gn):
        port_uuid = 'fake-port-uuid'
        mock_gp.side_effect = exception.InvalidParameterValue('error')
        self.assertRaises(exception.InvalidParameterValue,
                          neutron.get_physnets_by_port_uuid,
                          self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        self.assertFalse(mock_gn.called)

    def test_get_physnets_by_port_uuid_port_failure(self, mock_gp, mock_gn):
        port_uuid = 'fake-port-uuid'
        mock_gp.side_effect = exception.NetworkError
        self.assertRaises(exception.NetworkError,
                          neutron.get_physnets_by_port_uuid,
                          self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        self.assertFalse(mock_gn.called)

    def test_get_physnets_by_port_uuid_network_missing(
            self, mock_gp, mock_gn):
        port_uuid = 'fake-port-uuid'
        network_uuid = 'fake-network-uuid'
        mock_gp.return_value = {
            'network_id': network_uuid,
        }
        mock_gn.side_effect = exception.InvalidParameterValue('error')
        self.assertRaises(exception.InvalidParameterValue,
                          neutron.get_physnets_by_port_uuid,
                          self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        mock_gn.assert_called_once_with(self.client, network_uuid,
                                        fields=self.NETWORK_FIELDS)

    def test_get_physnets_by_port_uuid_network_failure(
            self, mock_gp, mock_gn):
        port_uuid = 'fake-port-uuid'
        network_uuid = 'fake-network-uuid'
        mock_gp.return_value = {
            'network_id': network_uuid,
        }
        mock_gn.side_effect = exception.NetworkError
        self.assertRaises(exception.NetworkError,
                          neutron.get_physnets_by_port_uuid,
                          self.client, port_uuid)
        mock_gp.assert_called_once_with(self.client, port_uuid,
                                        fields=self.PORT_FIELDS)
        mock_gn.assert_called_once_with(self.client, network_uuid,
                                        fields=self.NETWORK_FIELDS)
ironic-15.0.0/ironic/tests/unit/common/test_raid.py0000664000175000017500000003352313652514273022344 0ustar zuulzuul00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json

from ironic.common import exception
from ironic.common import raid
from ironic.drivers import base as drivers_base
from ironic.tests import base
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.objects import utils as obj_utils
from ironic.tests.unit import raid_constants


class ValidateRaidConfigurationTestCase(base.TestCase):

    def setUp(self):
        with open(drivers_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj:
            self.schema = json.load(raid_schema_fobj)
        super(ValidateRaidConfigurationTestCase, self).setUp()

    def test_validate_configuration_okay(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_OKAY)
        raid.validate_configuration(
            raid_config, raid_config_schema=self.schema)

    def test_validate_configuration_okay_software(self):
        raid_config = json.loads(raid_constants.RAID_SW_CONFIG_OKAY)
        raid.validate_configuration(
            raid_config, raid_config_schema=self.schema)

    def test_validate_configuration_no_logical_disk(self):
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          {},
                          raid_config_schema=self.schema)

    def test_validate_configuration_zero_logical_disks(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_NO_LOGICAL_DISKS)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_no_raid_level(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_NO_RAID_LEVEL)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_invalid_raid_level(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_RAID_LEVEL)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_no_size_gb(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_NO_SIZE_GB)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_zero_size_gb(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_ZERO_SIZE_GB)
        raid.validate_configuration(raid_config,
                                    raid_config_schema=self.schema)

    def test_validate_configuration_max_size_gb(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_MAX_SIZE_GB)
        raid.validate_configuration(raid_config,
                                    raid_config_schema=self.schema)

    def test_validate_configuration_invalid_size_gb(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_SIZE_GB)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_invalid_is_root_volume(self):
        raid_config_str = raid_constants.RAID_CONFIG_INVALID_IS_ROOT_VOL
        raid_config = json.loads(raid_config_str)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_invalid_multiple_is_root_volume(self):
        raid_config_str = raid_constants.RAID_CONFIG_MULTIPLE_IS_ROOT_VOL
        raid_config = json.loads(raid_config_str)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_invalid_share_physical_disks(self):
        raid_config_str = raid_constants.RAID_CONFIG_INVALID_SHARE_PHY_DISKS
        raid_config = json.loads(raid_config_str)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_invalid_disk_type(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_DISK_TYPE)
        self.assertRaises(exception.InvalidParameterValue,
                          raid.validate_configuration,
                          raid_config,
                          raid_config_schema=self.schema)

    def test_validate_configuration_invalid_int_type(self):
        raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_INT_TYPE)
self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_number_of_phy_disks(self): raid_config_str = raid_constants.RAID_CONFIG_INVALID_NUM_PHY_DISKS raid_config = json.loads(raid_config_str) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_invalid_physical_disks(self): raid_config = json.loads(raid_constants.RAID_CONFIG_INVALID_PHY_DISKS) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_too_few_physical_disks(self): raid_config = json.loads(raid_constants.RAID_CONFIG_TOO_FEW_PHY_DISKS) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_additional_property(self): raid_config = json.loads(raid_constants.RAID_CONFIG_ADDITIONAL_PROP) self.assertRaises(exception.InvalidParameterValue, raid.validate_configuration, raid_config, raid_config_schema=self.schema) def test_validate_configuration_with_jbod_volume(self): raid_config = json.loads(raid_constants.RAID_CONFIG_JBOD_VOLUME) raid.validate_configuration(raid_config, raid_config_schema=self.schema) def test_validate_configuration_custom_schema(self): raid_config = json.loads(raid_constants.CUSTOM_SCHEMA_RAID_CONFIG) schema = json.loads(raid_constants.CUSTOM_RAID_SCHEMA) raid.validate_configuration(raid_config, raid_config_schema=schema) class RaidPublicMethodsTestCase(db_base.DbTestCase): def setUp(self): super(RaidPublicMethodsTestCase, self).setUp() self.target_raid_config = { "logical_disks": [ {'size_gb': 200, 'raid_level': 0, 'is_root_volume': True}, {'size_gb': 200, 'raid_level': 5} ]} n = { 'boot_interface': 'pxe', 'deploy_interface': 'direct', 'raid_interface': 'agent', 
'target_raid_config': self.target_raid_config, } self.node = obj_utils.create_test_node(self.context, **n) def test_get_logical_disk_properties(self): with open(drivers_base.RAID_CONFIG_SCHEMA, 'r') as raid_schema_fobj: schema = json.load(raid_schema_fobj) logical_disk_properties = raid.get_logical_disk_properties(schema) self.assertIn('raid_level', logical_disk_properties) self.assertIn('size_gb', logical_disk_properties) self.assertIn('volume_name', logical_disk_properties) self.assertIn('is_root_volume', logical_disk_properties) self.assertIn('share_physical_disks', logical_disk_properties) self.assertIn('disk_type', logical_disk_properties) self.assertIn('interface_type', logical_disk_properties) self.assertIn('number_of_physical_disks', logical_disk_properties) self.assertIn('controller', logical_disk_properties) self.assertIn('physical_disks', logical_disk_properties) def test_get_logical_disk_properties_custom_schema(self): raid_schema = json.loads(raid_constants.CUSTOM_RAID_SCHEMA) logical_disk_properties = raid.get_logical_disk_properties( raid_config_schema=raid_schema) self.assertIn('raid_level', logical_disk_properties) self.assertIn('size_gb', logical_disk_properties) self.assertIn('foo', logical_disk_properties) def _test_update_raid_info(self, current_config, capabilities=None): node = self.node if capabilities: properties = node.properties properties['capabilities'] = capabilities del properties['local_gb'] node.properties = properties target_raid_config = json.loads(raid_constants.RAID_CONFIG_OKAY) node.target_raid_config = target_raid_config node.save() raid.update_raid_info(node, current_config) properties = node.properties current = node.raid_config target = node.target_raid_config self.assertIsNotNone(current['last_updated']) self.assertIsInstance(current['logical_disks'][0], dict) if current_config['logical_disks'][0].get('is_root_volume'): self.assertEqual({'wwn': '600508B100'}, properties['root_device']) self.assertEqual(100, 
properties['local_gb']) self.assertIn('raid_level:1', properties['capabilities']) if capabilities: self.assertIn(capabilities, properties['capabilities']) else: self.assertNotIn('local_gb', properties) self.assertNotIn('root_device', properties) if capabilities: self.assertNotIn('raid_level:1', properties['capabilities']) # Verify node.target_raid_config is preserved. self.assertEqual(target_raid_config, target) def test_update_raid_info_okay(self): current_config = json.loads(raid_constants.CURRENT_RAID_CONFIG) self._test_update_raid_info(current_config, capabilities='boot_mode:bios') def test_update_raid_info_okay_no_root_volumes(self): current_config = json.loads(raid_constants.CURRENT_RAID_CONFIG) del current_config['logical_disks'][0]['is_root_volume'] del current_config['logical_disks'][0]['root_device_hint'] self._test_update_raid_info(current_config, capabilities='boot_mode:bios') def test_update_raid_info_okay_current_capabilities_empty(self): current_config = json.loads(raid_constants.CURRENT_RAID_CONFIG) self._test_update_raid_info(current_config, capabilities=None) def test_update_raid_info_multiple_root_volumes(self): current_config = json.loads(raid_constants.RAID_CONFIG_MULTIPLE_ROOT) self.assertRaises(exception.InvalidParameterValue, self._test_update_raid_info, current_config) def test_filter_target_raid_config(self): result = raid.filter_target_raid_config(self.node) self.assertEqual(self.node.target_raid_config, result) def test_filter_target_raid_config_skip_root(self): result = raid.filter_target_raid_config( self.node, create_root_volume=False) exp_target_raid_config = { "logical_disks": [{'size_gb': 200, 'raid_level': 5}]} self.assertEqual(exp_target_raid_config, result) def test_filter_target_raid_config_skip_nonroot(self): result = raid.filter_target_raid_config( self.node, create_nonroot_volumes=False) exp_target_raid_config = { "logical_disks": [{'size_gb': 200, 'raid_level': 0, 'is_root_volume': True}]} 
self.assertEqual(exp_target_raid_config, result) def test_filter_target_raid_config_no_target_raid_config_after_skipping( self): self.assertRaises(exception.MissingParameterValue, raid.filter_target_raid_config, self.node, create_root_volume=False, create_nonroot_volumes=False) def test_filter_target_raid_config_empty_target_raid_config(self): self.node.target_raid_config = {} self.node.save() self.assertRaises(exception.MissingParameterValue, raid.filter_target_raid_config, self.node) ironic-15.0.0/ironic/tests/unit/common/test_json_rpc.py0000664000175000017500000005405513652514273023245 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
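The validation cases above all funnel a `logical_disks` structure through a JSON-schema check. As a minimal, stdlib-only sketch of the kind of rules those tests exercise (the `validate_raid_config` helper and its simplified rule set are illustrative, not Ironic's actual schema):

```python
def validate_raid_config(config):
    """Validate a simplified RAID config dict (illustrative rules only)."""
    # Hypothetical subset of accepted RAID levels, compared as strings.
    allowed_levels = {'0', '1', '2', '5', '6', '1+0', '5+0', '6+0', 'JBOD'}
    disks = config.get('logical_disks')
    if not isinstance(disks, list) or not disks:
        raise ValueError('logical_disks must be a non-empty list')
    root_count = 0
    for disk in disks:
        if 'raid_level' not in disk:
            raise ValueError('raid_level is required for every logical disk')
        if str(disk['raid_level']) not in allowed_levels:
            raise ValueError('invalid raid_level: %s' % disk['raid_level'])
        size = disk.get('size_gb')
        # size_gb must be an integer, or the special string "MAX".
        if size != 'MAX' and not isinstance(size, int):
            raise ValueError('size_gb must be an integer or "MAX"')
        if disk.get('is_root_volume'):
            root_count += 1
    if root_count > 1:
        raise ValueError('only one logical disk may be the root volume')
    return True
```

A config such as `{'logical_disks': [{'size_gb': 100, 'raid_level': '1'}]}` passes, while an empty disk list or two root volumes raises, mirroring the happy-path and error-path tests above.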
import fixtures
import mock
import oslo_messaging
import webob

from ironic.common import context as ir_ctx
from ironic.common import exception
from ironic.common.json_rpc import client
from ironic.common.json_rpc import server
from ironic import objects
from ironic.objects import base as objects_base
from ironic.tests import base as test_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils


class FakeManager(object):

    def success(self, context, x, y=0):
        assert isinstance(context, ir_ctx.RequestContext)
        assert context.user_name == 'admin'
        return x - y

    def with_node(self, context, node):
        assert isinstance(context, ir_ctx.RequestContext)
        assert isinstance(node, objects.Node)
        node.extra['answer'] = 42
        return node

    def no_result(self, context):
        assert isinstance(context, ir_ctx.RequestContext)
        return None

    def no_context(self):
        return 42

    def fail(self, context, message):
        assert isinstance(context, ir_ctx.RequestContext)
        raise exception.IronicException(message)

    @oslo_messaging.expected_exceptions(exception.Invalid)
    def expected(self, context, message):
        assert isinstance(context, ir_ctx.RequestContext)
        raise exception.Invalid(message)

    def crash(self, context):
        raise RuntimeError('boom')

    def init_host(self, context):
        assert False, "This should not be exposed"

    def _private(self, context):
        assert False, "This should not be exposed"

    # This should not be exposed either
    value = 42


class TestService(test_base.TestCase):

    def setUp(self):
        super(TestService, self).setUp()
        self.config(auth_strategy='noauth', group='json_rpc')
        self.server_mock = self.useFixture(fixtures.MockPatch(
            'oslo_service.wsgi.Server', autospec=True)).mock
        self.serializer = objects_base.IronicObjectSerializer(is_server=True)
        self.service = server.WSGIService(FakeManager(), self.serializer)
        self.app = self.service._application
        self.ctx = {'user_name': 'admin'}

    def _request(self, name=None, params=None, expected_error=None,
                 request_id='abcd', **kwargs):
        body = {
            'jsonrpc': '2.0',
        }
        if request_id is not None:
            body['id'] = request_id
        if name is not None:
            body['method'] = name
        if params is not None:
            body['params'] = params
        if 'json_body' not in kwargs:
            kwargs['json_body'] = body
        kwargs.setdefault('method', 'POST')
        kwargs.setdefault('headers', {'Content-Type': 'application/json'})
        request = webob.Request.blank("/", **kwargs)
        response = request.get_response(self.app)
        self.assertEqual(response.status_code,
                         expected_error or (200 if request_id else 204))
        if request_id is not None:
            if expected_error:
                self.assertEqual(expected_error,
                                 response.json_body['error']['code'])
            else:
                return response.json_body
        else:
            self.assertFalse(response.text)

    def _check(self, body, result=None, error=None, request_id='abcd'):
        self.assertEqual('2.0', body.pop('jsonrpc'))
        self.assertEqual(request_id, body.pop('id'))
        if error is not None:
            self.assertEqual({'error': error}, body)
        else:
            self.assertEqual({'result': result}, body)

    def test_success(self):
        body = self._request('success', {'context': self.ctx, 'x': 42})
        self._check(body, result=42)

    def test_success_no_result(self):
        body = self._request('no_result', {'context': self.ctx})
        self._check(body, result=None)

    def test_notification(self):
        body = self._request('no_result', {'context': self.ctx},
                             request_id=None)
        self.assertIsNone(body)

    def test_no_context(self):
        body = self._request('no_context')
        self._check(body, result=42)

    def test_serialize_objects(self):
        node = obj_utils.get_test_node(self.context)
        node = self.serializer.serialize_entity(self.context, node)
        body = self._request('with_node', {'context': self.ctx, 'node': node})
        self.assertNotIn('error', body)
        self.assertIsInstance(body['result'], dict)
        node = self.serializer.deserialize_entity(self.context,
                                                  body['result'])
        self.assertEqual({'answer': 42}, node.extra)

    def test_non_json_body(self):
        for body in (b'', b'???', b"\xc3\x28"):
            request = webob.Request.blank("/", method='POST', body=body)
            response = request.get_response(self.app)
            self._check(
                response.json_body,
                error={
                    'message': server.ParseError._msg_fmt,
                    'code': -32700,
                },
                request_id=None)

    def test_invalid_requests(self):
        bodies = [
            # Invalid requests with request ID.
            {'method': 'no_result', 'id': 'abcd',
             'params': {'context': self.ctx}},
            {'jsonrpc': '2.0', 'id': 'abcd',
             'params': {'context': self.ctx}},
            # These do not count as notifications, since they're malformed.
            {'method': 'no_result', 'params': {'context': self.ctx}},
            {'jsonrpc': '2.0', 'params': {'context': self.ctx}},
            42,
            # We do not implement batched requests.
            [],
            [{'jsonrpc': '2.0', 'method': 'no_result',
              'params': {'context': self.ctx}}],
        ]
        for body in bodies:
            body = self._request(json_body=body)
            self._check(
                body,
                error={
                    'message': server.InvalidRequest._msg_fmt,
                    'code': -32600,
                },
                request_id=body.get('id'))

    def test_malformed_context(self):
        body = self._request(json_body={
            'jsonrpc': '2.0', 'id': 'abcd', 'method': 'no_result',
            'params': {'context': 42}})
        self._check(
            body,
            error={
                'message': 'Context must be a dictionary, if provided',
                'code': -32602,
            })

    def test_expected_failure(self):
        body = self._request('fail', {'context': self.ctx,
                                      'message': 'some error'})
        self._check(body, error={
            'message': 'some error',
            'code': 500,
            'data': {
                'class': 'ironic_lib.exception.IronicException'
            }
        })

    def test_expected_failure_oslo(self):
        # Check that exceptions wrapped by oslo's expected_exceptions get
        # unwrapped correctly.
        body = self._request('expected', {'context': self.ctx,
                                          'message': 'some error'})
        self._check(body, error={
            'message': 'some error',
            'code': 400,
            'data': {
                'class': 'ironic.common.exception.Invalid'
            }
        })

    @mock.patch.object(server.LOG, 'exception', autospec=True)
    def test_unexpected_failure(self, mock_log):
        body = self._request('crash', {'context': self.ctx})
        self._check(body, error={
            'message': 'boom',
            'code': 500,
        })
        self.assertTrue(mock_log.called)

    def test_method_not_found(self):
        body = self._request('banana', {'context': self.ctx})
        self._check(body, error={
            'message': 'Method banana was not found',
            'code': -32601,
        })

    def test_no_blacklisted_methods(self):
        for name in ('__init__', '_private', 'init_host', 'value'):
            body = self._request(name, {'context': self.ctx})
            self._check(body, error={
                'message': 'Method %s was not found' % name,
                'code': -32601,
            })

    def test_missing_argument(self):
        body = self._request('success', {'context': self.ctx})
        # The exact error message depends on the Python version
        self.assertEqual(-32602, body['error']['code'])
        self.assertNotIn('result', body)

    def test_method_not_post(self):
        self._request('success', {'context': self.ctx, 'x': 42},
                      method='GET', expected_error=405)

    def test_authenticated(self):
        self.config(auth_strategy='keystone', group='json_rpc')
        self.service = server.WSGIService(FakeManager(), self.serializer)
        self.app = self.server_mock.call_args[0][2]
        self._request('success', {'context': self.ctx, 'x': 42},
                      expected_error=401)

    def test_authenticated_no_admin_role(self):
        self.config(auth_strategy='keystone', group='json_rpc')
        self._request('success', {'context': self.ctx, 'x': 42},
                      expected_error=403)

    @mock.patch.object(server.LOG, 'debug', autospec=True)
    def test_mask_secrets(self, mock_log):
        node = obj_utils.get_test_node(
            self.context, driver_info=db_utils.get_test_ipmi_info())
        node = self.serializer.serialize_entity(self.context, node)
        body = self._request('with_node', {'context': self.ctx, 'node': node})
        node = self.serializer.deserialize_entity(self.context,
                                                  body['result'])
        logged_params = mock_log.call_args_list[0][0][2]
        logged_node = logged_params['node']['ironic_object.data']
        self.assertEqual('***', logged_node['driver_info']['ipmi_password'])
        logged_resp = mock_log.call_args_list[1][0][2]
        logged_node = logged_resp['ironic_object.data']
        self.assertEqual('***', logged_node['driver_info']['ipmi_password'])
        # The result is not affected, only logging
        self.assertEqual(db_utils.get_test_ipmi_info(), node.driver_info)


@mock.patch.object(client, '_get_session', autospec=True)
class TestClient(test_base.TestCase):

    def setUp(self):
        super(TestClient, self).setUp()
        self.serializer = objects_base.IronicObjectSerializer(is_server=True)
        self.client = client.Client(self.serializer)
        self.ctx_json = self.context.to_dict()

    def test_can_send_version(self, mock_session):
        self.assertTrue(self.client.can_send_version('1.42'))
        self.client = client.Client(self.serializer, version_cap='1.42')
        self.assertTrue(self.client.can_send_version('1.42'))
        self.assertTrue(self.client.can_send_version('1.0'))
        self.assertFalse(self.client.can_send_version('1.99'))
        self.assertFalse(self.client.can_send_version('2.0'))

    def test_call_success(self, mock_session):
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'result': 42
        }
        cctx = self.client.prepare('foo.example.com')
        self.assertEqual('example.com', cctx.host)
        result = cctx.call(self.context, 'do_something', answer=42)
        self.assertEqual(42, result)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json},
                  'id': self.context.request_id})

    def test_call_success_with_version(self, mock_session):
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'result': 42
        }
        cctx = self.client.prepare('foo.example.com', version='1.42')
        self.assertEqual('example.com', cctx.host)
        result = cctx.call(self.context, 'do_something', answer=42)
        self.assertEqual(42, result)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json,
                             'rpc.version': '1.42'},
                  'id': self.context.request_id})

    def test_call_success_with_version_and_cap(self, mock_session):
        self.client = client.Client(self.serializer, version_cap='1.99')
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'result': 42
        }
        cctx = self.client.prepare('foo.example.com', version='1.42')
        self.assertEqual('example.com', cctx.host)
        result = cctx.call(self.context, 'do_something', answer=42)
        self.assertEqual(42, result)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json,
                             'rpc.version': '1.42'},
                  'id': self.context.request_id})

    def test_cast_success(self, mock_session):
        cctx = self.client.prepare('foo.example.com')
        self.assertEqual('example.com', cctx.host)
        result = cctx.cast(self.context, 'do_something', answer=42)
        self.assertIsNone(result)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json}})

    def test_cast_success_with_version(self, mock_session):
        cctx = self.client.prepare('foo.example.com', version='1.42')
        self.assertEqual('example.com', cctx.host)
        result = cctx.cast(self.context, 'do_something', answer=42)
        self.assertIsNone(result)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json,
                             'rpc.version': '1.42'}})

    def test_call_serialization(self, mock_session):
        node = obj_utils.get_test_node(self.context)
        node_json = self.serializer.serialize_entity(self.context, node)
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'result': node_json
        }
        cctx = self.client.prepare('foo.example.com')
        self.assertEqual('example.com', cctx.host)
        result = cctx.call(self.context, 'do_something', node=node)
        self.assertIsInstance(result, objects.Node)
        self.assertEqual(result.uuid, node.uuid)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'node': node_json, 'context': self.ctx_json},
                  'id': self.context.request_id})

    def test_call_failure(self, mock_session):
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'error': {
                'code': 418,
                'message': 'I am a teapot',
                'data': {
                    'class': 'ironic.common.exception.Invalid'
                }
            }
        }
        cctx = self.client.prepare('foo.example.com')
        self.assertEqual('example.com', cctx.host)
        # Make sure that the class is restored correctly for expected errors.
        exc = self.assertRaises(exception.Invalid,
                                cctx.call,
                                self.context, 'do_something', answer=42)
        # Code from the body has priority over one in the class.
        self.assertEqual(418, exc.code)
        self.assertIn('I am a teapot', str(exc))
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json},
                  'id': self.context.request_id})

    def test_call_unexpected_failure(self, mock_session):
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'error': {
                'code': 500,
                'message': 'AttributeError',
            }
        }
        cctx = self.client.prepare('foo.example.com')
        self.assertEqual('example.com', cctx.host)
        exc = self.assertRaises(exception.IronicException,
                                cctx.call,
                                self.context, 'do_something', answer=42)
        self.assertEqual(500, exc.code)
        self.assertIn('Unexpected error', str(exc))
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json},
                  'id': self.context.request_id})

    def test_call_failure_with_foreign_class(self, mock_session):
        # This should not happen, but provide an additional safeguard
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'error': {
                'code': 500,
                'message': 'AttributeError',
                'data': {
                    'class': 'AttributeError'
                }
            }
        }
        cctx = self.client.prepare('foo.example.com')
        self.assertEqual('example.com', cctx.host)
        exc = self.assertRaises(exception.IronicException,
                                cctx.call,
                                self.context, 'do_something', answer=42)
        self.assertEqual(500, exc.code)
        self.assertIn('Unexpected error', str(exc))
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json},
                  'id': self.context.request_id})

    def test_cast_failure(self, mock_session):
        # Cast cannot return normal failures, but make sure we ignore them
        # even if server sends something in violation of the protocol (or
        # because it's a low-level error like HTTP Forbidden).
        response = mock_session.return_value.post.return_value
        response.json.return_value = {
            'jsonrpc': '2.0',
            'error': {
                'code': 418,
                'message': 'I am a teapot',
                'data': {
                    'class': 'ironic.common.exception.IronicException'
                }
            }
        }
        cctx = self.client.prepare('foo.example.com')
        self.assertEqual('example.com', cctx.host)
        result = cctx.cast(self.context, 'do_something', answer=42)
        self.assertIsNone(result)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'answer': 42, 'context': self.ctx_json}})

    def test_call_failure_with_version_and_cap(self, mock_session):
        self.client = client.Client(self.serializer, version_cap='1.42')
        cctx = self.client.prepare('foo.example.com', version='1.99')
        self.assertRaisesRegex(RuntimeError,
                               "requested version 1.99, maximum allowed "
                               "version is 1.42",
                               cctx.call, self.context, 'do_something',
                               answer=42)
        self.assertFalse(mock_session.return_value.post.called)

    @mock.patch.object(client.LOG, 'debug', autospec=True)
    def test_mask_secrets(self, mock_log, mock_session):
        request = {
            'redfish_username': 'admin',
            'redfish_password': 'passw0rd'
        }
        body = """{
    "jsonrpc": "2.0",
    "result": {
        "driver_info": {
            "ipmi_username": "admin",
            "ipmi_password": "passw0rd"
        }
    }
}"""
        response = mock_session.return_value.post.return_value
        response.text = body
        cctx = self.client.prepare('foo.example.com')
        cctx.cast(self.context, 'do_something', node=request)
        mock_session.return_value.post.assert_called_once_with(
            'http://example.com:8089',
            json={'jsonrpc': '2.0',
                  'method': 'do_something',
                  'params': {'node': request, 'context': self.ctx_json}})
        self.assertEqual(2, mock_log.call_count)
        node = mock_log.call_args_list[0][0][2]['params']['node']
        self.assertEqual(node, {'redfish_username': 'admin',
                                'redfish_password': '***'})
        resp_text = mock_log.call_args_list[1][0][2]
        self.assertEqual(body.replace('passw0rd', '***'), resp_text)
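The server tests above pin down the JSON-RPC 2.0 error codes the implementation emits (-32700 parse error, -32600 invalid request, -32601 method not found, -32602 invalid params) and the rule that notifications, requests without an `id`, get no response. A minimal dispatcher illustrating the same envelope handling (the `handle` helper and `registry` dict are illustrative sketches, not Ironic's server code):

```python
import json

# Standard JSON-RPC 2.0 error codes, as exercised by the tests above.
JSONRPC_ERRORS = {
    -32700: 'Parse error',
    -32600: 'Invalid Request',
    -32601: 'Method not found',
}


def handle(registry, raw_body):
    """Dispatch one JSON-RPC 2.0 request; return a response dict or None."""
    def _error(code, request_id):
        return {'jsonrpc': '2.0', 'id': request_id,
                'error': {'code': code, 'message': JSONRPC_ERRORS[code]}}

    try:
        body = json.loads(raw_body)
    except ValueError:
        # Unparseable bodies get a parse error with a null id.
        return _error(-32700, None)
    request_id = body.get('id') if isinstance(body, dict) else None
    if (not isinstance(body, dict) or body.get('jsonrpc') != '2.0'
            or 'method' not in body):
        # Malformed requests do not count as notifications.
        return _error(-32600, request_id)
    method = registry.get(body['method'])
    if method is None:
        if request_id is None:
            return None  # notifications never receive a response
        return _error(-32601, request_id)
    result = method(**body.get('params', {}))
    if request_id is None:
        return None  # notification: result is discarded
    return {'jsonrpc': '2.0', 'id': request_id, 'result': result}
```

With a registry like `{'add': lambda x, y: x + y}`, a request carrying an `id` gets a `result` envelope back, while the same call without an `id` is treated as a notification and returns nothing, matching the 200-vs-204 split the `_request` helper asserts.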
ironic-15.0.0/ironic/tests/unit/common/test_keystone.py

# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from keystoneauth1 import loading as kaloading
import mock
from oslo_config import cfg
from oslo_config import fixture

from ironic.common import context
from ironic.common import exception
from ironic.common import keystone
from ironic.conf import auth as ironic_auth
from ironic.tests import base


class KeystoneTestCase(base.TestCase):

    def setUp(self):
        super(KeystoneTestCase, self).setUp()
        self.test_group = 'test_group'
        self.cfg_fixture.conf.register_group(cfg.OptGroup(self.test_group))
        ironic_auth.register_auth_opts(self.cfg_fixture.conf, self.test_group,
                                       service_type='vikings')
        self.config(auth_type='password', group=self.test_group)
        # NOTE(pas-ha) this is due to auth_plugin options
        # being dynamically registered on first load,
        # but we need to set the config before
        plugin = kaloading.get_plugin_loader('password')
        opts = kaloading.get_auth_plugin_conf_options(plugin)
        self.cfg_fixture.register_opts(opts, group=self.test_group)
        self.config(auth_url='http://127.0.0.1:9898',
                    username='fake_user',
                    password='fake_pass',
                    project_name='fake_tenant',
                    group=self.test_group)

    def _set_config(self):
        self.cfg_fixture = self.useFixture(fixture.Config())
        self.addCleanup(cfg.CONF.reset)

    def test_get_session(self):
        self.config(timeout=10, group=self.test_group)
        session = keystone.get_session(self.test_group, timeout=20)
        self.assertEqual(20, session.timeout)

    def test_get_auth(self):
        auth = keystone.get_auth(self.test_group)
        self.assertEqual('http://127.0.0.1:9898', auth.auth_url)

    def test_get_auth_fail(self):
        # NOTE(pas-ha) 'password' auth_plugin is used,
        # so when we set the required auth_url to None,
        # MissingOption is raised
        self.config(auth_url=None, group=self.test_group)
        self.assertRaises(exception.ConfigInvalid,
                          keystone.get_auth,
                          self.test_group)

    def test_get_adapter_from_config(self):
        self.config(valid_interfaces=['internal', 'public'],
                    group=self.test_group)
        session = keystone.get_session(self.test_group)
        adapter = keystone.get_adapter(self.test_group, session=session,
                                       interface='admin')
        self.assertEqual('admin', adapter.interface)
        self.assertEqual(session, adapter.session)

    @mock.patch('keystoneauth1.service_token.ServiceTokenAuthWrapper')
    @mock.patch('keystoneauth1.token_endpoint.Token')
    def test_get_service_auth(self, token_mock, service_auth_mock):
        ctxt = context.RequestContext(auth_token='spam')
        mock_auth = mock.Mock()
        self.assertEqual(service_auth_mock.return_value,
                         keystone.get_service_auth(ctxt, 'ham', mock_auth))
        token_mock.assert_called_once_with('ham', 'spam')
        service_auth_mock.assert_called_once_with(
            user_auth=token_mock.return_value, service_auth=mock_auth)


ironic-15.0.0/ironic/tests/unit/common/test_image_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import builtins
import datetime
from http import client as http_client
import io
import os
import shutil

import mock
from oslo_utils import uuidutils
import requests
import sendfile

from ironic.common import exception
from ironic.common.glance_service import image_service as glance_v2_service
from ironic.common import image_service
from ironic.tests import base


class HttpImageServiceTestCase(base.TestCase):

    def setUp(self):
        super(HttpImageServiceTestCase, self).setUp()
        self.service = image_service.HttpImageService()
        self.href = 'http://127.0.0.1:12345/fedora.qcow2'

    @mock.patch.object(requests, 'head', autospec=True)
    def test_validate_href(self, head_mock):
        response = head_mock.return_value
        response.status_code = http_client.OK
        self.service.validate_href(self.href)
        head_mock.assert_called_once_with(self.href)
        response.status_code = http_client.NO_CONTENT
        self.assertRaises(exception.ImageRefValidationFailed,
                          self.service.validate_href,
                          self.href)
        response.status_code = http_client.BAD_REQUEST
        self.assertRaises(exception.ImageRefValidationFailed,
                          self.service.validate_href,
                          self.href)

    @mock.patch.object(requests, 'head', autospec=True)
    def test_validate_href_error_code(self, head_mock):
        head_mock.return_value.status_code = http_client.BAD_REQUEST
        self.assertRaises(exception.ImageRefValidationFailed,
                          self.service.validate_href, self.href)
        head_mock.assert_called_once_with(self.href)

    @mock.patch.object(requests, 'head', autospec=True)
    def test_validate_href_error(self, head_mock):
        head_mock.side_effect = requests.ConnectionError()
        self.assertRaises(exception.ImageRefValidationFailed,
                          self.service.validate_href, self.href)
        head_mock.assert_called_once_with(self.href)

    @mock.patch.object(requests, 'head', autospec=True)
    def test_validate_href_error_with_secret_parameter(self, head_mock):
        head_mock.return_value.status_code = 204
        e = self.assertRaises(exception.ImageRefValidationFailed,
                              self.service.validate_href,
                              self.href, True)
        self.assertIn('secreturl', str(e))
        self.assertNotIn(self.href, str(e))
        head_mock.assert_called_once_with(self.href)

    @mock.patch.object(requests, 'head', autospec=True)
    def _test_show(self, head_mock, mtime, mtime_date):
        head_mock.return_value.status_code = http_client.OK
        head_mock.return_value.headers = {
            'Content-Length': 100,
            'Last-Modified': mtime
        }
        result = self.service.show(self.href)
        head_mock.assert_called_once_with(self.href)
        self.assertEqual({'size': 100, 'updated_at': mtime_date,
                          'properties': {}}, result)

    def test_show_rfc_822(self):
        self._test_show(mtime='Tue, 15 Nov 2014 08:12:31 GMT',
                        mtime_date=datetime.datetime(2014, 11, 15, 8, 12, 31))

    def test_show_rfc_850(self):
        self._test_show(mtime='Tuesday, 15-Nov-14 08:12:31 GMT',
                        mtime_date=datetime.datetime(2014, 11, 15, 8, 12, 31))

    def test_show_ansi_c(self):
        self._test_show(mtime='Tue Nov 15 08:12:31 2014',
                        mtime_date=datetime.datetime(2014, 11, 15, 8, 12, 31))

    @mock.patch.object(requests, 'head', autospec=True)
    def test_show_no_content_length(self, head_mock):
        head_mock.return_value.status_code = http_client.OK
        head_mock.return_value.headers = {}
        self.assertRaises(exception.ImageRefValidationFailed,
                          self.service.show, self.href)
        head_mock.assert_called_with(self.href)

    @mock.patch.object(shutil, 'copyfileobj', autospec=True)
    @mock.patch.object(requests, 'get', autospec=True)
    def test_download_success(self, req_get_mock, shutil_mock):
        response_mock = req_get_mock.return_value
        response_mock.status_code = http_client.OK
        response_mock.raw = mock.MagicMock(spec=io.BytesIO)
        file_mock = mock.Mock(spec=io.BytesIO)
        self.service.download(self.href, file_mock)
        shutil_mock.assert_called_once_with(
            response_mock.raw.__enter__(), file_mock,
            image_service.IMAGE_CHUNK_SIZE
        )
        req_get_mock.assert_called_once_with(self.href, stream=True)

    @mock.patch.object(requests, 'get', autospec=True)
    def test_download_fail_connerror(self, req_get_mock):
        req_get_mock.side_effect = requests.ConnectionError()
        file_mock = mock.Mock(spec=io.BytesIO)
        self.assertRaises(exception.ImageDownloadFailed,
                          self.service.download, self.href, file_mock)

    @mock.patch.object(shutil, 'copyfileobj', autospec=True)
    @mock.patch.object(requests, 'get', autospec=True)
    def test_download_fail_ioerror(self, req_get_mock, shutil_mock):
        response_mock = req_get_mock.return_value
        response_mock.status_code = http_client.OK
        response_mock.raw = mock.MagicMock(spec=io.BytesIO)
        file_mock = mock.Mock(spec=io.BytesIO)
        shutil_mock.side_effect = IOError
        self.assertRaises(exception.ImageDownloadFailed,
                          self.service.download, self.href, file_mock)
        req_get_mock.assert_called_once_with(self.href, stream=True)


class FileImageServiceTestCase(base.TestCase):

    def setUp(self):
        super(FileImageServiceTestCase, self).setUp()
        self.service = image_service.FileImageService()
        self.href = 'file:///home/user/image.qcow2'
        self.href_path = '/home/user/image.qcow2'

    @mock.patch.object(os.path, 'isfile', return_value=True, autospec=True)
    def test_validate_href(self, path_exists_mock):
        self.service.validate_href(self.href)
        path_exists_mock.assert_called_once_with(self.href_path)

    @mock.patch.object(os.path, 'isfile', return_value=False, autospec=True)
    def test_validate_href_path_not_found_or_not_file(self, path_exists_mock):
        self.assertRaises(exception.ImageRefValidationFailed,
                          self.service.validate_href, self.href)
        path_exists_mock.assert_called_once_with(self.href_path)

    @mock.patch.object(os.path, 'getmtime', return_value=1431087909.1641912,
                       autospec=True)
    @mock.patch.object(os.path, 'getsize', return_value=42, autospec=True)
    @mock.patch.object(image_service.FileImageService, 'validate_href',
                       autospec=True)
    def test_show(self, _validate_mock, getsize_mock, getmtime_mock):
        _validate_mock.return_value = self.href_path
        result = self.service.show(self.href)
        getsize_mock.assert_called_once_with(self.href_path)
        getmtime_mock.assert_called_once_with(self.href_path)
        _validate_mock.assert_called_once_with(mock.ANY, self.href)
        self.assertEqual({'size': 42,
                          'updated_at': datetime.datetime(2015, 5, 8,
                                                          12, 25, 9, 164191),
                          'properties': {}}, result)

    @mock.patch.object(os, 'link', autospec=True)
    @mock.patch.object(os, 'remove', autospec=True)
    @mock.patch.object(os, 'access', return_value=True, autospec=True)
    @mock.patch.object(os, 'stat', autospec=True)
    @mock.patch.object(image_service.FileImageService, 'validate_href',
                       autospec=True)
    def test_download_hard_link(self, _validate_mock, stat_mock, access_mock,
                                remove_mock, link_mock):
        _validate_mock.return_value = self.href_path
        stat_mock.return_value.st_dev = 'dev1'
        file_mock = mock.Mock(spec=io.BytesIO)
        file_mock.name = 'file'
        self.service.download(self.href, file_mock)
        _validate_mock.assert_called_once_with(mock.ANY, self.href)
        self.assertEqual(2, stat_mock.call_count)
        access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK)
        remove_mock.assert_called_once_with('file')
        link_mock.assert_called_once_with(self.href_path, 'file')

    @mock.patch.object(sendfile, 'sendfile', return_value=42, autospec=True)
    @mock.patch.object(os.path, 'getsize', return_value=42, autospec=True)
    @mock.patch.object(builtins, 'open', autospec=True)
    @mock.patch.object(os, 'access', return_value=False, autospec=True)
    @mock.patch.object(os, 'stat', autospec=True)
    @mock.patch.object(image_service.FileImageService, 'validate_href',
                       autospec=True)
    def test_download_copy(self, _validate_mock, stat_mock, access_mock,
                           open_mock, size_mock, copy_mock):
        _validate_mock.return_value = self.href_path
        stat_mock.return_value.st_dev = 'dev1'
        file_mock = mock.MagicMock(spec=io.BytesIO)
        file_mock.name = 'file'
        input_mock = mock.MagicMock(spec=io.BytesIO)
        open_mock.return_value = input_mock
        self.service.download(self.href, file_mock)
        _validate_mock.assert_called_once_with(mock.ANY, self.href)
        self.assertEqual(2, stat_mock.call_count)
        access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK)
copy_mock.assert_called_once_with(file_mock.fileno(), input_mock.__enter__().fileno(), 0, 42) @mock.patch.object(sendfile, 'sendfile', autospec=True) @mock.patch.object(os.path, 'getsize', return_value=42, autospec=True) @mock.patch.object(builtins, 'open', autospec=True) @mock.patch.object(os, 'access', return_value=False, autospec=True) @mock.patch.object(os, 'stat', autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_download_copy_segmented(self, _validate_mock, stat_mock, access_mock, open_mock, size_mock, copy_mock): # Fake a 3G + 1k image chunk_size = image_service.SENDFILE_CHUNK_SIZE fake_image_size = chunk_size * 3 + 1024 fake_chunk_seq = [chunk_size, chunk_size, chunk_size, 1024] _validate_mock.return_value = self.href_path stat_mock.return_value.st_dev = 'dev1' file_mock = mock.MagicMock(spec=io.BytesIO) file_mock.name = 'file' input_mock = mock.MagicMock(spec=io.BytesIO) open_mock.return_value = input_mock size_mock.return_value = fake_image_size copy_mock.side_effect = fake_chunk_seq self.service.download(self.href, file_mock) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual(2, stat_mock.call_count) access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK) copy_calls = [mock.call(file_mock.fileno(), input_mock.__enter__().fileno(), chunk_size * i, fake_chunk_seq[i]) for i in range(4)] copy_mock.assert_has_calls(copy_calls) size_mock.assert_called_once_with(self.href_path) @mock.patch.object(os, 'remove', side_effect=OSError, autospec=True) @mock.patch.object(os, 'access', return_value=True, autospec=True) @mock.patch.object(os, 'stat', autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_download_hard_link_fail(self, _validate_mock, stat_mock, access_mock, remove_mock): _validate_mock.return_value = self.href_path stat_mock.return_value.st_dev = 'dev1' file_mock = mock.MagicMock(spec=io.BytesIO) 
file_mock.name = 'file' self.assertRaises(exception.ImageDownloadFailed, self.service.download, self.href, file_mock) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual(2, stat_mock.call_count) access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK) @mock.patch.object(sendfile, 'sendfile', side_effect=OSError, autospec=True) @mock.patch.object(os.path, 'getsize', return_value=42, autospec=True) @mock.patch.object(builtins, 'open', autospec=True) @mock.patch.object(os, 'access', return_value=False, autospec=True) @mock.patch.object(os, 'stat', autospec=True) @mock.patch.object(image_service.FileImageService, 'validate_href', autospec=True) def test_download_copy_fail(self, _validate_mock, stat_mock, access_mock, open_mock, size_mock, copy_mock): _validate_mock.return_value = self.href_path stat_mock.return_value.st_dev = 'dev1' file_mock = mock.MagicMock(spec=io.BytesIO) file_mock.name = 'file' input_mock = mock.MagicMock(spec=io.BytesIO) open_mock.return_value = input_mock self.assertRaises(exception.ImageDownloadFailed, self.service.download, self.href, file_mock) _validate_mock.assert_called_once_with(mock.ANY, self.href) self.assertEqual(2, stat_mock.call_count) access_mock.assert_called_once_with(self.href_path, os.R_OK | os.W_OK) size_mock.assert_called_once_with(self.href_path) class ServiceGetterTestCase(base.TestCase): @mock.patch.object(glance_v2_service.GlanceImageService, '__init__', return_value=None, autospec=True) def test_get_glance_image_service(self, glance_service_mock): image_href = uuidutils.generate_uuid() image_service.get_image_service(image_href, context=self.context) glance_service_mock.assert_called_once_with(mock.ANY, None, self.context) @mock.patch.object(glance_v2_service.GlanceImageService, '__init__', return_value=None, autospec=True) def test_get_glance_image_service_url(self, glance_service_mock): image_href = 'glance://%s' % uuidutils.generate_uuid() 
image_service.get_image_service(image_href, context=self.context) glance_service_mock.assert_called_once_with(mock.ANY, None, self.context) @mock.patch.object(image_service.HttpImageService, '__init__', return_value=None, autospec=True) def test_get_http_image_service(self, http_service_mock): image_href = 'http://127.0.0.1/image.qcow2' image_service.get_image_service(image_href) http_service_mock.assert_called_once_with() @mock.patch.object(image_service.HttpImageService, '__init__', return_value=None, autospec=True) def test_get_https_image_service(self, http_service_mock): image_href = 'https://127.0.0.1/image.qcow2' image_service.get_image_service(image_href) http_service_mock.assert_called_once_with() @mock.patch.object(image_service.FileImageService, '__init__', return_value=None, autospec=True) def test_get_file_image_service(self, local_service_mock): image_href = 'file:///home/user/image.qcow2' image_service.get_image_service(image_href) local_service_mock.assert_called_once_with() def test_get_image_service_invalid_image_ref(self): invalid_refs = ( 'usenet://alt.binaries.dvd/image.qcow2', 'no scheme, no uuid') for image_ref in invalid_refs: self.assertRaises(exception.ImageRefValidationFailed, image_service.get_image_service, image_ref) ironic-15.0.0/ironic/tests/unit/common/test_cinder.py0000664000175000017500000011101713652514273022664 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import datetime from http import client as http_client import json from cinderclient import exceptions as cinder_exceptions import cinderclient.v3 as cinderclient import mock from oslo_utils import uuidutils from ironic.common import cinder from ironic.common import context from ironic.common import exception from ironic.common import keystone from ironic.conductor import task_manager from ironic.tests import base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as object_utils @mock.patch.object(keystone, 'get_auth', autospec=True) @mock.patch.object(keystone, 'get_session', autospec=True) class TestCinderSession(base.TestCase): def setUp(self): super(TestCinderSession, self).setUp() self.config(timeout=1, retries=2, group='cinder') cinder._CINDER_SESSION = None def test__get_cinder_session(self, mock_keystone_session, mock_auth): """Check establishing new session when no session exists.""" mock_keystone_session.return_value = 'session1' self.assertEqual('session1', cinder._get_cinder_session()) mock_keystone_session.assert_called_once_with('cinder') """Check if existing session is used.""" mock_keystone_session.reset_mock() mock_keystone_session.return_value = 'session2' self.assertEqual('session1', cinder._get_cinder_session()) self.assertFalse(mock_keystone_session.called) self.assertFalse(mock_auth.called) @mock.patch('ironic.common.keystone.get_adapter', autospec=True) @mock.patch('ironic.common.keystone.get_service_auth', autospec=True, return_value=mock.sentinel.sauth) @mock.patch('ironic.common.keystone.get_auth', autospec=True, return_value=mock.sentinel.auth) @mock.patch('ironic.common.keystone.get_session', autospec=True, return_value=mock.sentinel.session) @mock.patch.object(cinderclient.Client, '__init__', autospec=True, return_value=None) class TestCinderClient(base.TestCase): def setUp(self): 
super(TestCinderClient, self).setUp() self.config(timeout=1, retries=2, group='cinder') cinder._CINDER_SESSION = None self.context = context.RequestContext(global_request_id='global') def _assert_client_call(self, init_mock, url, auth=mock.sentinel.auth): cinder.get_client(self.context) init_mock.assert_called_once_with( mock.ANY, session=mock.sentinel.session, auth=auth, endpoint_override=url, connect_retries=2, global_request_id='global') def test_get_client(self, mock_client_init, mock_session, mock_auth, mock_sauth, mock_adapter): mock_adapter.return_value = mock_adapter_obj = mock.Mock() mock_adapter_obj.get_endpoint.return_value = 'cinder_url' self._assert_client_call(mock_client_init, 'cinder_url') mock_session.assert_called_once_with('cinder') mock_auth.assert_called_once_with('cinder') mock_adapter.assert_called_once_with('cinder', session=mock.sentinel.session, auth=mock.sentinel.auth) self.assertFalse(mock_sauth.called) def test_get_client_deprecated_opts(self, mock_client_init, mock_session, mock_auth, mock_sauth, mock_adapter): self.config(url='http://test-url', group='cinder') mock_adapter.return_value = mock_adapter_obj = mock.Mock() mock_adapter_obj.get_endpoint.return_value = 'http://test-url' self._assert_client_call(mock_client_init, 'http://test-url') mock_auth.assert_called_once_with('cinder') mock_session.assert_called_once_with('cinder') mock_adapter.assert_called_once_with( 'cinder', session=mock.sentinel.session, auth=mock.sentinel.auth, endpoint_override='http://test-url') self.assertFalse(mock_sauth.called) class TestCinderUtils(db_base.DbTestCase): def setUp(self): super(TestCinderUtils, self).setUp() self.node = object_utils.create_test_node( self.context, instance_uuid=uuidutils.generate_uuid()) def test_is_volume_available(self): available_volumes = [ mock.Mock(status=cinder.AVAILABLE, multiattach=False), mock.Mock(status=cinder.IN_USE, multiattach=True)] unavailable_volumes = [ mock.Mock(status=cinder.IN_USE, multiattach=False), 
mock.Mock(status='fake-non-status', multiattach=True)] for vol in available_volumes: result = cinder.is_volume_available(vol) self.assertTrue(result, msg="Failed for status '%s'." % vol.status) for vol in unavailable_volumes: result = cinder.is_volume_available(vol) self.assertFalse(result, msg="Failed for status '%s'." % vol.status) def test_is_volume_attached(self): attached_vol = mock.Mock(id='foo', attachments=[ {'server_id': self.node.uuid, 'attachment_id': 'meow'}]) attached_vol2 = mock.Mock(id='bar', attachments=[ {'server_id': self.node.instance_uuid, 'attachment_id': 'meow'}],) unattached = mock.Mock(attachments=[]) self.assertTrue(cinder.is_volume_attached(self.node, attached_vol)) self.assertTrue(cinder.is_volume_attached(self.node, attached_vol2)) self.assertFalse(cinder.is_volume_attached(self.node, unattached)) def test__get_attachment_id(self): expectation = 'meow' attached_vol = mock.Mock(attachments=[ {'server_id': self.node.instance_uuid, 'attachment_id': 'meow'}]) attached_vol2 = mock.Mock(attachments=[ {'server_id': self.node.uuid, 'attachment_id': 'meow'}]) unattached = mock.Mock(attachments=[]) no_attachment = mock.Mock(attachments=[ {'server_id': 'cat', 'id': 'cat'}]) self.assertEqual(expectation, cinder._get_attachment_id(self.node, attached_vol)) self.assertEqual(expectation, cinder._get_attachment_id(self.node, attached_vol2)) self.assertIsNone(cinder._get_attachment_id(self.node, unattached)) self.assertIsNone(cinder._get_attachment_id(self.node, no_attachment)) @mock.patch.object(datetime, 'datetime', autospec=True) def test__create_metadata_dictionary(self, mock_datetime): fake_time = '2017-06-05T00:33:26.574676' mock_utcnow = mock.Mock() mock_datetime.utcnow.return_value = mock_utcnow mock_utcnow.isoformat.return_value = fake_time expected_key = ("ironic_node_%s" % self.node.uuid) expected_data = { 'instance_uuid': self.node.instance_uuid, 'last_seen': fake_time, 'last_action': 'meow' } result = 
cinder._create_metadata_dictionary(self.node, 'meow') data = json.loads(result[expected_key]) self.assertEqual(expected_data, data) @mock.patch.object(cinder, '_get_cinder_session', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'set_metadata', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'get', autospec=True) class TestCinderActions(db_base.DbTestCase): def setUp(self): super(TestCinderActions, self).setUp() self.node = object_utils.create_test_node( self.context, instance_uuid=uuidutils.generate_uuid()) self.mount_point = 'ironic_mountpoint' @mock.patch.object(cinderclient.volumes.VolumeManager, 'attach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'initialize_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'reserve', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_attach_volumes(self, mock_create_meta, mock_is_attached, mock_reserve, mock_init, mock_attach, mock_get, mock_set_meta, mock_session): """Iterate once on a single volume with success.""" volume_id = '111111111-0000-0000-0000-000000000003' expected = [{ 'driver_volume_type': 'iscsi', 'data': { 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000002', 'target_portal': '127.0.0.0.1:3260', 'volume_id': volume_id, 'target_lun': 2, 'ironic_volume_uuid': '000-001'}}] volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = False mock_get.return_value = mock.Mock(attachments=[], id='000-001') mock_init.return_value = { 'driver_volume_type': 'iscsi', 'data': { 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000002', 'target_portal': '127.0.0.0.1:3260', 'target_lun': 2}} with task_manager.acquire(self.context, self.node.uuid) as task: attachments = cinder.attach_volumes(task, volumes, connector) self.assertEqual(expected, 
attachments) mock_reserve.assert_called_once_with(mock.ANY, volume_id) mock_init.assert_called_once_with(mock.ANY, volume_id, connector) mock_attach.assert_called_once_with(mock.ANY, volume_id, self.node.instance_uuid, self.mount_point) mock_set_meta.assert_called_once_with(mock.ANY, volume_id, {'bar': 'baz'}) mock_get.assert_called_once_with(mock.ANY, volume_id) @mock.patch.object(cinderclient.volumes.VolumeManager, 'attach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'initialize_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'reserve', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_attach_volumes_one_attached( self, mock_create_meta, mock_reserve, mock_init, mock_attach, mock_get, mock_set_meta, mock_session): """Iterate with two volumes, one already attached.""" volume_id = '111111111-0000-0000-0000-000000000003' expected = [ {'driver_volume_type': 'iscsi', 'data': { 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000002', 'target_portal': '127.0.0.0.1:3260', 'volume_id': volume_id, 'target_lun': 2, 'ironic_volume_uuid': '000-000'}}, {'already_attached': True, 'data': { 'volume_id': 'already_attached', 'ironic_volume_uuid': '000-001'}}] volumes = [volume_id, 'already_attached'] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_get.side_effect = [ mock.Mock(attachments=[], id='000-000'), mock.Mock(attachments=[{'server_id': self.node.uuid}], id='000-001') ] mock_init.return_value = { 'driver_volume_type': 'iscsi', 'data': { 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000002', 'target_portal': '127.0.0.0.1:3260', 'target_lun': 2}} with task_manager.acquire(self.context, self.node.uuid) as task: attachments = cinder.attach_volumes(task, volumes, connector) self.assertEqual(expected, attachments) mock_reserve.assert_called_once_with(mock.ANY, volume_id) mock_init.assert_called_once_with(mock.ANY, volume_id, connector) 
mock_attach.assert_called_once_with(mock.ANY, volume_id, self.node.instance_uuid, self.mount_point) mock_set_meta.assert_called_once_with(mock.ANY, volume_id, {'bar': 'baz'}) @mock.patch.object(cinderclient.Client, '__init__', autospec=True) def test_attach_volumes_client_init_failure( self, mock_client, mock_get, mock_set_meta, mock_session): connector = {'foo': 'bar'} volumes = ['111111111-0000-0000-0000-000000000003'] mock_client.side_effect = cinder_exceptions.BadRequest( http_client.BAD_REQUEST) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.attach_volumes, task, volumes, connector) @mock.patch.object(cinderclient.volumes.VolumeManager, 'attach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'initialize_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'reserve', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_attach_volumes_vol_not_found( self, mock_create_meta, mock_reserve, mock_init, mock_attach, mock_get, mock_set_meta, mock_session): """Raise an error if the volume lookup fails""" def __mock_get_side_effect(client, volume_id): if volume_id == 'not_found': raise cinder_exceptions.NotFound( http_client.NOT_FOUND, message='error') else: return mock.Mock(attachments=[], uuid='000-000') volumes = ['111111111-0000-0000-0000-000000000003', 'not_found', 'not_reached'] connector = {'foo': 'bar'} mock_get.side_effect = __mock_get_side_effect mock_create_meta.return_value = {'bar': 'baz'} with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.attach_volumes, task, volumes, connector) mock_get.assert_any_call(mock.ANY, '111111111-0000-0000-0000-000000000003') mock_get.assert_any_call(mock.ANY, 'not_found') self.assertEqual(2, mock_get.call_count) mock_reserve.assert_called_once_with( mock.ANY, '111111111-0000-0000-0000-000000000003') 
mock_init.assert_called_once_with( mock.ANY, '111111111-0000-0000-0000-000000000003', connector) mock_attach.assert_called_once_with( mock.ANY, '111111111-0000-0000-0000-000000000003', self.node.instance_uuid, self.mount_point) mock_set_meta.assert_called_once_with( mock.ANY, '111111111-0000-0000-0000-000000000003', {'bar': 'baz'}) @mock.patch.object(cinderclient.volumes.VolumeManager, 'reserve', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) def test_attach_volumes_reserve_failure(self, mock_is_attached, mock_reserve, mock_get, mock_set_meta, mock_session): volumes = ['111111111-0000-0000-0000-000000000003'] connector = {'foo': 'bar'} volume = mock.Mock(attachments=[]) mock_get.return_value = volume mock_is_attached.return_value = False mock_reserve.side_effect = cinder_exceptions.NotAcceptable( http_client.NOT_ACCEPTABLE) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.attach_volumes, task, volumes, connector) mock_is_attached.assert_called_once_with(mock.ANY, volume) @mock.patch.object(cinderclient.volumes.VolumeManager, 'initialize_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'reserve', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_attach_volumes_initialize_connection_failure( self, mock_create_meta, mock_is_attached, mock_reserve, mock_init, mock_get, mock_set_meta, mock_session): """Fail attachment upon an initialization failure.""" volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = False mock_get.return_value = mock.Mock(attachments=[]) mock_init.side_effect = cinder_exceptions.NotAcceptable( http_client.NOT_ACCEPTABLE) with task_manager.acquire(self.context, self.node.uuid) as task: 
self.assertRaises(exception.StorageError, cinder.attach_volumes, task, volumes, connector) mock_get.assert_called_once_with(mock.ANY, volume_id) mock_reserve.assert_called_once_with(mock.ANY, volume_id) mock_init.assert_called_once_with(mock.ANY, volume_id, connector) @mock.patch.object(cinderclient.volumes.VolumeManager, 'attach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'initialize_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'reserve', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_attach_volumes_attach_record_failure( self, mock_create_meta, mock_is_attached, mock_reserve, mock_init, mock_attach, mock_get, mock_set_meta, mock_session): """Attach a volume and fail if final record failure occurs""" volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = False mock_get.return_value = mock.Mock(attachments=[], id='000-003') mock_init.return_value = { 'driver_volume_type': 'iscsi', 'data': { 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000002', 'target_portal': '127.0.0.0.1:3260', 'target_lun': 2}} mock_attach.side_effect = cinder_exceptions.ClientException( http_client.NOT_ACCEPTABLE, 'error') with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.attach_volumes, task, volumes, connector) mock_reserve.assert_called_once_with(mock.ANY, volume_id) mock_init.assert_called_once_with(mock.ANY, volume_id, connector) mock_attach.assert_called_once_with(mock.ANY, volume_id, self.node.instance_uuid, self.mount_point) mock_get.assert_called_once_with(mock.ANY, volume_id) mock_is_attached.assert_called_once_with(mock.ANY, mock_get.return_value) @mock.patch.object(cinderclient.volumes.VolumeManager, 'attach', 
autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'initialize_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'reserve', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) @mock.patch.object(cinder, 'LOG', autospec=True) def test_attach_volumes_attach_set_meta_failure( self, mock_log, mock_create_meta, mock_is_attached, mock_reserve, mock_init, mock_attach, mock_get, mock_set_meta, mock_session): """Attach a volume and tolerate set_metadata failure.""" expected = [{ 'driver_volume_type': 'iscsi', 'data': { 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000002', 'target_portal': '127.0.0.0.1:3260', 'volume_id': '111111111-0000-0000-0000-000000000003', 'target_lun': 2, 'ironic_volume_uuid': '000-000'}}] volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = False mock_get.return_value = mock.Mock(attachments=[], id='000-000') mock_init.return_value = { 'driver_volume_type': 'iscsi', 'data': { 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000002', 'target_portal': '127.0.0.0.1:3260', 'target_lun': 2}} mock_set_meta.side_effect = cinder_exceptions.NotAcceptable( http_client.NOT_ACCEPTABLE) with task_manager.acquire(self.context, self.node.uuid) as task: attachments = cinder.attach_volumes(task, volumes, connector) self.assertEqual(expected, attachments) mock_reserve.assert_called_once_with(mock.ANY, volume_id) mock_init.assert_called_once_with(mock.ANY, volume_id, connector) mock_attach.assert_called_once_with(mock.ANY, volume_id, self.node.instance_uuid, self.mount_point) mock_set_meta.assert_called_once_with(mock.ANY, volume_id, {'bar': 'baz'}) mock_get.assert_called_once_with(mock.ANY, volume_id) mock_is_attached.assert_called_once_with(mock.ANY, mock_get.return_value) 
self.assertTrue(mock_log.warning.called) @mock.patch.object(cinderclient.volumes.VolumeManager, 'detach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'terminate_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'begin_detaching', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_detach_volumes( self, mock_create_meta, mock_is_attached, mock_begin, mock_term, mock_detach, mock_get, mock_set_meta, mock_session): """Iterate once and detach a volume without issues.""" volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = True mock_get.return_value = mock.Mock(attachments=[ {'server_id': self.node.uuid, 'attachment_id': 'qux'}]) with task_manager.acquire(self.context, self.node.uuid) as task: cinder.detach_volumes(task, volumes, connector, allow_errors=False) mock_begin.assert_called_once_with(mock.ANY, volume_id) mock_term.assert_called_once_with(mock.ANY, volume_id, {'foo': 'bar'}) mock_detach.assert_called_once_with(mock.ANY, volume_id, 'qux') mock_set_meta.assert_called_once_with(mock.ANY, volume_id, {'bar': 'baz'}) @mock.patch.object(cinderclient.volumes.VolumeManager, 'detach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'terminate_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'begin_detaching', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_detach_volumes_one_detached( self, mock_create_meta, mock_begin, mock_term, mock_detach, mock_get, mock_set_meta, mock_session): """Iterate with two volumes, one already detached.""" volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id, 'detached'] connector = {'foo': 'bar'} mock_create_meta.return_value = 
{'bar': 'baz'} mock_get.side_effect = [ mock.Mock(attachments=[ {'server_id': self.node.uuid, 'attachment_id': 'qux'}]), mock.Mock(attachments=[]) ] with task_manager.acquire(self.context, self.node.uuid) as task: cinder.detach_volumes(task, volumes, connector, allow_errors=False) mock_begin.assert_called_once_with(mock.ANY, volume_id) mock_term.assert_called_once_with(mock.ANY, volume_id, {'foo': 'bar'}) mock_detach.assert_called_once_with(mock.ANY, volume_id, 'qux') mock_set_meta.assert_called_once_with(mock.ANY, volume_id, {'bar': 'baz'}) @mock.patch.object(cinderclient.Client, '__init__', autospec=True) def test_detach_volumes_client_init_failure_bad_request( self, mock_client, mock_get, mock_set_meta, mock_session): connector = {'foo': 'bar'} volumes = ['111111111-0000-0000-0000-000000000003'] with task_manager.acquire(self.context, self.node.uuid) as task: mock_client.side_effect = cinder_exceptions.BadRequest( http_client.BAD_REQUEST) self.assertRaises(exception.StorageError, cinder.detach_volumes, task, volumes, connector) @mock.patch.object(cinderclient.Client, '__init__', autospec=True) def test_detach_volumes_client_init_failure_invalid_parameter_value( self, mock_client, mock_get, mock_set_meta, mock_session): connector = {'foo': 'bar'} volumes = ['111111111-0000-0000-0000-000000000003'] with task_manager.acquire(self.context, self.node.uuid) as task: # While we would be permitting failures, this is an exception that # must be raised since the client cannot be initialized. 
mock_client.side_effect = exception.InvalidParameterValue('error') self.assertRaises(exception.StorageError, cinder.detach_volumes, task, volumes, connector, allow_errors=True) def test_detach_volumes_vol_not_found(self, mock_get, mock_set_meta, mock_session): """Raise an error if the volume lookup fails""" volumes = ['vol1'] connector = {'foo': 'bar'} mock_get.side_effect = cinder_exceptions.NotFound( http_client.NOT_FOUND, message='error') with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.detach_volumes, task, volumes, connector) self.assertFalse(mock_set_meta.called) # We should not raise any exception when issuing a command # with errors being permitted. cinder.detach_volumes(task, volumes, connector, allow_errors=True) self.assertFalse(mock_set_meta.called) @mock.patch.object(cinderclient.volumes.VolumeManager, 'detach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'terminate_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'begin_detaching', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_detach_volumes_begin_detaching_failure( self, mock_create_meta, mock_is_attached, mock_begin, mock_term, mock_detach, mock_get, mock_set_meta, mock_session): volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} volume = mock.Mock(attachments=[]) mock_get.return_value = volume mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = True mock_begin.side_effect = cinder_exceptions.NotAcceptable( http_client.NOT_ACCEPTABLE) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.detach_volumes, task, volumes, connector) mock_is_attached.assert_called_once_with(mock.ANY, volume) cinder.detach_volumes(task, volumes, 
connector, allow_errors=True) mock_term.assert_called_once_with(mock.ANY, volume_id, {'foo': 'bar'}) mock_detach.assert_called_once_with(mock.ANY, volume_id, None) mock_set_meta.assert_called_once_with(mock.ANY, volume_id, {'bar': 'baz'}) @mock.patch.object(cinderclient.volumes.VolumeManager, 'terminate_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'begin_detaching', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_detach_volumes_term_failure( self, mock_create_meta, mock_is_attached, mock_begin, mock_term, mock_get, mock_set_meta, mock_session): volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = True mock_get.return_value = {'id': volume_id, 'attachments': []} mock_term.side_effect = cinder_exceptions.NotAcceptable( http_client.NOT_ACCEPTABLE) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.detach_volumes, task, volumes, connector) mock_begin.assert_called_once_with(mock.ANY, volume_id) mock_term.assert_called_once_with(mock.ANY, volume_id, connector) cinder.detach_volumes(task, volumes, connector, allow_errors=True) self.assertFalse(mock_set_meta.called) @mock.patch.object(cinderclient.volumes.VolumeManager, 'detach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'terminate_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'begin_detaching', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_detach_volumes_detach_failure_errors_not_allowed( self, mock_create_meta, mock_is_attached, mock_begin, mock_term, mock_detach, mock_get, mock_set_meta, mock_session): 
volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = True mock_get.return_value = mock.Mock(attachments=[ {'server_id': self.node.uuid, 'attachment_id': 'qux'}]) mock_detach.side_effect = cinder_exceptions.NotAcceptable( http_client.NOT_ACCEPTABLE) with task_manager.acquire(self.context, self.node.uuid) as task: self.assertRaises(exception.StorageError, cinder.detach_volumes, task, volumes, connector, allow_errors=False) mock_detach.assert_called_once_with(mock.ANY, volume_id, 'qux') self.assertFalse(mock_set_meta.called) @mock.patch.object(cinderclient.volumes.VolumeManager, 'detach', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'terminate_connection', autospec=True) @mock.patch.object(cinderclient.volumes.VolumeManager, 'begin_detaching', autospec=True) @mock.patch.object(cinder, 'is_volume_attached', autospec=True) @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True) def test_detach_volumes_detach_failure_errors_allowed( self, mock_create_meta, mock_is_attached, mock_begin, mock_term, mock_detach, mock_get, mock_set_meta, mock_session): volume_id = '111111111-0000-0000-0000-000000000003' volumes = [volume_id] connector = {'foo': 'bar'} mock_create_meta.return_value = {'bar': 'baz'} mock_is_attached.return_value = True mock_get.return_value = mock.Mock(attachments=[ {'server_id': self.node.uuid, 'attachment_id': 'qux'}]) mock_set_meta.side_effect = cinder_exceptions.NotAcceptable( http_client.NOT_ACCEPTABLE) with task_manager.acquire(self.context, self.node.uuid) as task: cinder.detach_volumes(task, volumes, connector, allow_errors=True) mock_detach.assert_called_once_with(mock.ANY, volume_id, 'qux') mock_set_meta.assert_called_once_with(mock.ANY, volume_id, {'bar': 'baz'}) @mock.patch.object(cinderclient.volumes.VolumeManager, 'detach', autospec=True) 
    @mock.patch.object(cinderclient.volumes.VolumeManager,
                       'terminate_connection', autospec=True)
    @mock.patch.object(cinderclient.volumes.VolumeManager, 'begin_detaching',
                       autospec=True)
    @mock.patch.object(cinder, 'is_volume_attached', autospec=True)
    @mock.patch.object(cinder, '_create_metadata_dictionary', autospec=True)
    def test_detach_volumes_detach_meta_failure_errors_not_allowed(
            self, mock_create_meta, mock_is_attached, mock_begin, mock_term,
            mock_detach, mock_get, mock_set_meta, mock_session):
        volume_id = '111111111-0000-0000-0000-000000000003'
        volumes = [volume_id]
        connector = {'foo': 'bar'}
        mock_create_meta.return_value = {'bar': 'baz'}
        mock_is_attached.return_value = True
        mock_get.return_value = mock.Mock(attachments=[
            {'server_id': self.node.uuid, 'attachment_id': 'qux'}])
        mock_set_meta.side_effect = cinder_exceptions.NotAcceptable(
            http_client.NOT_ACCEPTABLE)
        with task_manager.acquire(self.context, self.node.uuid) as task:
            cinder.detach_volumes(task, volumes, connector,
                                  allow_errors=False)
            mock_detach.assert_called_once_with(mock.ANY, volume_id, 'qux')
            mock_set_meta.assert_called_once_with(mock.ANY, volume_id,
                                                  {'bar': 'baz'})

ironic-15.0.0/ironic/tests/unit/common/test_wsgi_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
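# The cinder detach tests above all exercise one contract: with
# allow_errors=False a per-volume failure surfaces as StorageError, while
# allow_errors=True records the failure and keeps going. A minimal,
# self-contained sketch of that pattern follows — `detach_all`, `StorageError`
# and the client shape are hypothetical stand-ins, not ironic's real
# `cinder.detach_volumes` API; the side_effect list plays the same role it
# does in the tests above (one scripted outcome per call).

```python
from unittest import mock


class StorageError(Exception):
    """Stand-in for ironic.common.exception.StorageError."""


def detach_all(client, volume_ids, allow_errors=False):
    """Detach each volume; collect failures instead of raising if allowed."""
    failed = []
    for vol_id in volume_ids:
        try:
            client.detach(vol_id)
        except Exception:
            if not allow_errors:
                raise StorageError('detach failed for %s' % vol_id)
            failed.append(vol_id)
    return failed


client = mock.Mock()
# side_effect as a list: the first call raises, the second succeeds.
client.detach.side_effect = [RuntimeError('boom'), None]
assert detach_all(client, ['vol-1', 'vol-2'], allow_errors=True) == ['vol-1']
```

# With allow_errors=False the same scripted failure propagates as
# StorageError, which is exactly what the assertRaises checks above verify.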
import mock
from oslo_concurrency import processutils
from oslo_config import cfg

from ironic.common import exception
from ironic.common import wsgi_service
from ironic.tests import base

CONF = cfg.CONF


class TestWSGIService(base.TestCase):

    @mock.patch.object(wsgi_service.wsgi, 'Server', autospec=True)
    def test_workers_set_default(self, mock_server):
        service_name = "ironic_api"
        test_service = wsgi_service.WSGIService(service_name)
        self.assertEqual(processutils.get_worker_count(),
                         test_service.workers)
        mock_server.assert_called_once_with(CONF, service_name,
                                            test_service.app,
                                            host='0.0.0.0', port=6385,
                                            use_ssl=False)

    @mock.patch.object(wsgi_service.wsgi, 'Server', autospec=True)
    def test_workers_set_correct_setting(self, mock_server):
        self.config(api_workers=8, group='api')
        test_service = wsgi_service.WSGIService("ironic_api")
        self.assertEqual(8, test_service.workers)

    @mock.patch.object(wsgi_service.wsgi, 'Server', autospec=True)
    def test_workers_set_zero_setting(self, mock_server):
        self.config(api_workers=0, group='api')
        test_service = wsgi_service.WSGIService("ironic_api")
        self.assertEqual(processutils.get_worker_count(),
                         test_service.workers)

    @mock.patch.object(wsgi_service.wsgi, 'Server', autospec=True)
    def test_workers_set_negative_setting(self, mock_server):
        self.config(api_workers=-2, group='api')
        self.assertRaises(exception.ConfigInvalid,
                          wsgi_service.WSGIService,
                          'ironic_api')
        self.assertFalse(mock_server.called)

    @mock.patch.object(wsgi_service.wsgi, 'Server', autospec=True)
    def test_wsgi_service_with_ssl_enabled(self, mock_server):
        self.config(enable_ssl_api=True, group='api')
        service_name = 'ironic_api'
        srv = wsgi_service.WSGIService('ironic_api', CONF.api.enable_ssl_api)
        mock_server.assert_called_once_with(CONF, service_name, srv.app,
                                            host='0.0.0.0', port=6385,
                                            use_ssl=True)

ironic-15.0.0/ironic/tests/unit/common/test_nova.py

# Licensed under the Apache License, Version 2.0 (the
"License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt from keystoneauth1 import exceptions as kaexception import mock import requests from ironic.common import context from ironic.common import keystone from ironic.common import nova from ironic.tests import base @mock.patch.object(keystone, 'get_session', autospec=True) @mock.patch.object(keystone, 'get_adapter', autospec=True) class TestNovaAdapter(base.TestCase): def test_get_nova_adapter(self, mock_adapter, mock_nova_session): nova._NOVA_ADAPTER = None mock_session_obj = mock.Mock() expected = {'session': mock_session_obj, 'auth': None, 'version': "2.1"} mock_nova_session.return_value = mock_session_obj nova._get_nova_adapter() mock_nova_session.assert_called_once_with('nova') mock_adapter.assert_called_once_with(group='nova', **expected) """Check if existing adapter is used.""" mock_nova_session.reset_mock() nova._get_nova_adapter() mock_nova_session.assert_not_called() @ddt.ddt @mock.patch.object(nova, 'LOG', autospec=True) class NovaApiTestCase(base.TestCase): def setUp(self): super(NovaApiTestCase, self).setUp() self.api = nova self.ctx = context.get_admin_context() @ddt.unpack # one @ddt.data element comprises: # - nova_result: POST response JSON dict # - resp_status: POST response status_code # - exp_ret: Expected bool return value from power_update() @ddt.data([{'events': [{'status': 'completed', 'tag': 'POWER_OFF', 'name': 'power-update', 'server_uuid': '1234', 'code': 200}]}, 200, True], [{'events': [{'code': 422}]}, 207, False], [{'events': [{'code': 404}]}, 207, 
False], [{'events': [{'code': 400}]}, 207, False], # This (response 207, event code 200) will never happen IRL [{'events': [{'code': 200}]}, 207, True]) @mock.patch.object(nova, '_get_nova_adapter') def test_power_update(self, nova_result, resp_status, exp_ret, mock_adapter, mock_log): server_ids = ['server-id-1', 'server-id-2'] nova_adapter = mock.Mock() with mock.patch.object(nova_adapter, 'post') as mock_post_event: post_resp_mock = requests.Response() def json_func(): return nova_result post_resp_mock.json = json_func post_resp_mock.status_code = resp_status mock_adapter.return_value = nova_adapter mock_post_event.return_value = post_resp_mock for server in server_ids: result = self.api.power_update(self.ctx, server, 'power on') self.assertEqual(exp_ret, result) mock_adapter.assert_has_calls([mock.call(), mock.call()]) req_url = '/os-server-external-events' mock_post_event.assert_has_calls([ mock.call(req_url, json={'events': [{'name': 'power-update', 'server_uuid': 'server-id-1', 'tag': 'POWER_ON'}]}, microversion='2.76', global_request_id=self.ctx.global_id, raise_exc=False), mock.call(req_url, json={'events': [{'name': 'power-update', 'server_uuid': 'server-id-2', 'tag': 'POWER_ON'}]}, microversion='2.76', global_request_id=self.ctx.global_id, raise_exc=False) ]) if not exp_ret: expected = ('Nova event: %s returned with failed status.', nova_result['events'][0]) mock_log.warning.assert_called_with(*expected) else: expected = ("Nova event response: %s.", nova_result['events'][0]) mock_log.debug.assert_called_with(*expected) @mock.patch.object(nova, '_get_nova_adapter') def test_invalid_power_update(self, mock_adapter, mock_log): nova_adapter = mock.Mock() with mock.patch.object(nova_adapter, 'post') as mock_post_event: result = self.api.power_update(self.ctx, 'server', None) self.assertFalse(result) expected = ('Invalid Power State %s.', None) mock_log.error.assert_called_once_with(*expected) mock_adapter.assert_not_called() 
mock_post_event.assert_not_called() def test_power_update_failed(self, mock_log): nova_adapter = nova._get_nova_adapter() event = [{'name': 'power-update', 'server_uuid': 'server-id-1', 'tag': 'POWER_OFF'}] nova_result = requests.Response() with mock.patch.object(nova_adapter, 'post') as mock_post_event: for stat_code in (500, 404, 400): mock_log.reset_mock() nova_result.status_code = stat_code type(nova_result).text = mock.PropertyMock(return_value="blah") mock_post_event.return_value = nova_result result = self.api.power_update( self.ctx, 'server-id-1', 'power off') self.assertFalse(result) expected = ("Failed to notify nova on event: %s. %s.", event[0], "blah") mock_log.warning.assert_called_once_with(*expected) mock_post_event.assert_has_calls([ mock.call('/os-server-external-events', json={'events': event}, microversion='2.76', global_request_id=self.ctx.global_id, raise_exc=False) ]) @ddt.data({'events': [{}]}, {'events': []}, {'events': None}, {}) @mock.patch.object(nova, '_get_nova_adapter') def test_power_update_invalid_reponse_format(self, nova_result, mock_adapter, mock_log): nova_adapter = mock.Mock() with mock.patch.object(nova_adapter, 'post') as mock_post_event: post_resp_mock = requests.Response() def json_func(): return nova_result post_resp_mock.json = json_func post_resp_mock.status_code = 207 mock_adapter.return_value = nova_adapter mock_post_event.return_value = post_resp_mock result = self.api.power_update(self.ctx, 'server-id-1', 'power on') self.assertFalse(result) mock_adapter.assert_has_calls([mock.call()]) req_url = '/os-server-external-events' mock_post_event.assert_has_calls([ mock.call(req_url, json={'events': [{'name': 'power-update', 'server_uuid': 'server-id-1', 'tag': 'POWER_ON'}]}, microversion='2.76', global_request_id=self.ctx.global_id, raise_exc=False), ]) self.assertIn('Invalid response', mock_log.error.call_args[0][0]) @mock.patch.object(keystone, 'get_adapter', autospec=True) def test_power_update_failed_no_nova(self, 
                                                mock_adapter, mock_log):
        self.config(send_power_notifications=False, group="nova")
        result = self.api.power_update(self.ctx, 'server-id-1', 'power off')
        self.assertFalse(result)
        mock_adapter.assert_not_called()

    @mock.patch.object(nova, '_get_nova_adapter')
    def test_power_update_failed_no_nova_auth_url(self, mock_adapter,
                                                  mock_log):
        server = 'server-id-1'
        emsg = 'An auth plugin is required to determine endpoint URL'
        side_effect = kaexception.MissingAuthPlugin(emsg)
        mock_nova = mock.Mock()
        mock_adapter.return_value = mock_nova
        mock_nova.post.side_effect = side_effect
        result = self.api.power_update(self.ctx, server, 'power off')
        msg = ('Could not connect to Nova to send a power notification, '
               'please check configuration. %s', side_effect)
        self.assertFalse(result)
        mock_log.warning.assert_called_once_with(*msg)
        mock_adapter.assert_called_once_with()

ironic-15.0.0/ironic/tests/unit/common/test_images.py

# coding=utf-8
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
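# The power_update tests above stub a requests.Response (status_code plus a
# hand-rolled json()) so that both the overall HTTP status and the per-event
# code from nova's os-server-external-events API can be inspected. A minimal,
# self-contained sketch of that decision logic follows — `FakeResponse` and
# `event_succeeded` are hypothetical stand-ins for illustration only, not
# ironic's real helpers; the case table mirrors the @ddt.data tuples above
# (payload, HTTP status, expected result).

```python
class FakeResponse:
    """Stand-in for the requests.Response objects stubbed in the tests."""

    def __init__(self, status_code, payload):
        self.status_code = status_code
        self._payload = payload

    def json(self):
        return self._payload


def event_succeeded(resp):
    """Return True only when the first power-update event was accepted."""
    if resp.status_code not in (200, 207):
        # e.g. 500/404/400: the notification never reached nova.
        return False
    events = resp.json().get('events') or []
    return bool(events) and events[0].get('code') == 200


# Mirrors the @ddt.data cases: (payload, HTTP status, expected result).
cases = [
    ({'events': [{'code': 200, 'tag': 'POWER_OFF'}]}, 200, True),
    ({'events': [{'code': 422}]}, 207, False),
    ({'events': [{'code': 404}]}, 207, False),
    ({}, 207, False),  # malformed body: treated as failure
]
for payload, status, expected in cases:
    assert event_succeeded(FakeResponse(status, payload)) is expected
```

# Patching json() directly on a real requests.Response, as the tests above
# do, achieves the same effect without defining a fake class.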
import builtins import io import os import shutil from ironic_lib import disk_utils from ironic_lib import utils as ironic_utils import mock from oslo_concurrency import processutils from oslo_config import cfg from ironic.common import exception from ironic.common.glance_service import service_utils as glance_utils from ironic.common import image_service from ironic.common import images from ironic.common import utils from ironic.tests import base CONF = cfg.CONF class IronicImagesTestCase(base.TestCase): class FakeImgInfo(object): pass @mock.patch.object(image_service, 'get_image_service', autospec=True) @mock.patch.object(builtins, 'open', autospec=True) def test_fetch_image_service(self, open_mock, image_service_mock): mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'file' open_mock.return_value = mock_file_handle images.fetch('context', 'image_href', 'path') open_mock.assert_called_once_with('path', 'wb') image_service_mock.assert_called_once_with('image_href', context='context') image_service_mock.return_value.download.assert_called_once_with( 'image_href', 'file') @mock.patch.object(image_service, 'get_image_service', autospec=True) @mock.patch.object(images, 'image_to_raw', autospec=True) @mock.patch.object(builtins, 'open', autospec=True) def test_fetch_image_service_force_raw(self, open_mock, image_to_raw_mock, image_service_mock): mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'file' open_mock.return_value = mock_file_handle images.fetch('context', 'image_href', 'path', force_raw=True) open_mock.assert_called_once_with('path', 'wb') image_service_mock.return_value.download.assert_called_once_with( 'image_href', 'file') image_to_raw_mock.assert_called_once_with( 'image_href', 'path', 'path.part') @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_no_file_format(self, qemu_img_info_mock): info = self.FakeImgInfo() info.file_format = 
None qemu_img_info_mock.return_value = info e = self.assertRaises(exception.ImageUnacceptable, images.image_to_raw, 'image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_called_once_with('path_tmp') self.assertIn("'qemu-img info' parsing failed.", str(e)) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_backing_file_present(self, qemu_img_info_mock): info = self.FakeImgInfo() info.file_format = 'raw' info.backing_file = 'backing_file' qemu_img_info_mock.return_value = info e = self.assertRaises(exception.ImageUnacceptable, images.image_to_raw, 'image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_called_once_with('path_tmp') self.assertIn("fmt=raw backed by: backing_file", str(e)) @mock.patch.object(os, 'rename', autospec=True) @mock.patch.object(os, 'unlink', autospec=True) @mock.patch.object(disk_utils, 'convert_image', autospec=True) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw(self, qemu_img_info_mock, convert_image_mock, unlink_mock, rename_mock): CONF.set_override('force_raw_images', True) info = self.FakeImgInfo() info.file_format = 'fmt' info.backing_file = None qemu_img_info_mock.return_value = info def convert_side_effect(source, dest, out_format): info.file_format = 'raw' convert_image_mock.side_effect = convert_side_effect images.image_to_raw('image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_has_calls([mock.call('path_tmp'), mock.call('path.converted')]) convert_image_mock.assert_called_once_with('path_tmp', 'path.converted', 'raw') unlink_mock.assert_called_once_with('path_tmp') rename_mock.assert_called_once_with('path.converted', 'path') @mock.patch.object(os, 'unlink', autospec=True) @mock.patch.object(disk_utils, 'convert_image', autospec=True) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_not_raw_after_conversion(self, qemu_img_info_mock, convert_image_mock, unlink_mock): CONF.set_override('force_raw_images', 
True) info = self.FakeImgInfo() info.file_format = 'fmt' info.backing_file = None qemu_img_info_mock.return_value = info self.assertRaises(exception.ImageConvertFailed, images.image_to_raw, 'image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_has_calls([mock.call('path_tmp'), mock.call('path.converted')]) convert_image_mock.assert_called_once_with('path_tmp', 'path.converted', 'raw') unlink_mock.assert_called_once_with('path_tmp') @mock.patch.object(os, 'rename', autospec=True) @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_image_to_raw_already_raw_format(self, qemu_img_info_mock, rename_mock): info = self.FakeImgInfo() info.file_format = 'raw' info.backing_file = None qemu_img_info_mock.return_value = info images.image_to_raw('image_href', 'path', 'path_tmp') qemu_img_info_mock.assert_called_once_with('path_tmp') rename_mock.assert_called_once_with('path_tmp', 'path') @mock.patch.object(image_service, 'get_image_service', autospec=True) def test_image_show_no_image_service(self, image_service_mock): images.image_show('context', 'image_href') image_service_mock.assert_called_once_with('image_href', context='context') image_service_mock.return_value.show.assert_called_once_with( 'image_href') def test_image_show_image_service(self): image_service_mock = mock.MagicMock() images.image_show('context', 'image_href', image_service_mock) image_service_mock.show.assert_called_once_with('image_href') @mock.patch.object(images, 'image_show', autospec=True) def test_download_size(self, show_mock): show_mock.return_value = {'size': 123456} size = images.download_size('context', 'image_href', 'image_service') self.assertEqual(123456, size) show_mock.assert_called_once_with('context', 'image_href', 'image_service') @mock.patch.object(disk_utils, 'qemu_img_info', autospec=True) def test_converted_size(self, qemu_img_info_mock): info = self.FakeImgInfo() info.virtual_size = 1 qemu_img_info_mock.return_value = info size = 
images.converted_size('path') qemu_img_info_mock.assert_called_once_with('path') self.assertEqual(1, size) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_no_img_src(self, mock_igi, mock_gip): instance_info = {'image_source': ''} iwdi = images.is_whole_disk_image('context', instance_info) self.assertIsNone(iwdi) self.assertFalse(mock_igi.called) self.assertFalse(mock_gip.called) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_partition_image(self, mock_igi, mock_gip): mock_igi.return_value = True mock_gip.return_value = {'kernel_id': 'kernel', 'ramdisk_id': 'ramdisk'} instance_info = {'image_source': 'glance://partition_image'} image_source = instance_info['image_source'] is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertFalse(is_whole_disk_image) mock_igi.assert_called_once_with(image_source) mock_gip.assert_called_once_with('context', image_source) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_whole_disk_image(self, mock_igi, mock_gip): mock_igi.return_value = True mock_gip.return_value = {} instance_info = {'image_source': 'glance://whole_disk_image'} image_source = instance_info['image_source'] is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertTrue(is_whole_disk_image) mock_igi.assert_called_once_with(image_source) mock_gip.assert_called_once_with('context', image_source) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_partition_non_glance(self, mock_igi, mock_gip): mock_igi.return_value = False instance_info = {'image_source': 
'partition_image', 'kernel': 'kernel', 'ramdisk': 'ramdisk'} is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertFalse(is_whole_disk_image) self.assertFalse(mock_gip.called) mock_igi.assert_called_once_with(instance_info['image_source']) @mock.patch.object(images, 'get_image_properties', autospec=True) @mock.patch.object(glance_utils, 'is_glance_image', autospec=True) def test_is_whole_disk_image_whole_disk_non_glance(self, mock_igi, mock_gip): mock_igi.return_value = False instance_info = {'image_source': 'whole_disk_image'} is_whole_disk_image = images.is_whole_disk_image('context', instance_info) self.assertTrue(is_whole_disk_image) self.assertFalse(mock_gip.called) mock_igi.assert_called_once_with(instance_info['image_source']) class FsImageTestCase(base.TestCase): @mock.patch.object(shutil, 'copyfile', autospec=True) @mock.patch.object(os, 'makedirs', autospec=True) @mock.patch.object(os.path, 'dirname', autospec=True) @mock.patch.object(os.path, 'exists', autospec=True) def test__create_root_fs(self, path_exists_mock, dirname_mock, mkdir_mock, cp_mock): def path_exists_mock_func(path): return path == 'root_dir' files_info = { 'a1': 'b1', 'a2': 'b2', 'a3': 'sub_dir/b3'} path_exists_mock.side_effect = path_exists_mock_func dirname_mock.side_effect = ['root_dir', 'root_dir', 'root_dir/sub_dir', 'root_dir/sub_dir'] images._create_root_fs('root_dir', files_info) cp_mock.assert_any_call('a1', 'root_dir/b1') cp_mock.assert_any_call('a2', 'root_dir/b2') cp_mock.assert_any_call('a3', 'root_dir/sub_dir/b3') path_exists_mock.assert_any_call('root_dir/sub_dir') dirname_mock.assert_any_call('root_dir/b1') dirname_mock.assert_any_call('root_dir/b2') dirname_mock.assert_any_call('root_dir/sub_dir/b3') mkdir_mock.assert_called_once_with('root_dir/sub_dir') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(utils, 'write_to_file', autospec=True) 
@mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(utils, 'umount', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image( self, mkfs_mock, mount_mock, umount_mock, dd_mock, write_mock, tempdir_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle parameters = {'p1': 'v1'} files_info = {'a': 'b'} images.create_vfat_image('tgt_file', parameters=parameters, files_info=files_info, parameters_file='qwe', fs_size_kib=1000) dd_mock.assert_called_once_with('/dev/zero', 'tgt_file', 'count=1', 'bs=1000KiB') mkfs_mock.assert_called_once_with('vfat', 'tgt_file', label="ir-vfd-dev") mount_mock.assert_called_once_with('tgt_file', 'tempdir', '-o', 'umask=0') parameters_file_path = os.path.join('tempdir', 'qwe') write_mock.assert_called_once_with(parameters_file_path, 'p1=v1') create_root_fs_mock.assert_called_once_with('tempdir', files_info) umount_mock.assert_called_once_with('tempdir') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(utils, 'umount', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image_always_umount( self, mkfs_mock, mount_mock, umount_mock, dd_mock, tempdir_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle files_info = {'a': 'b'} create_root_fs_mock.side_effect = OSError() self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file', files_info=files_info) umount_mock.assert_called_once_with('tempdir') @mock.patch.object(ironic_utils, 'dd', 
autospec=True) def test_create_vfat_image_dd_fails(self, dd_mock): dd_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file') @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image_mkfs_fails(self, mkfs_mock, dd_mock, tempdir_mock): mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle mkfs_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file') @mock.patch.object(images, '_create_root_fs', autospec=True) @mock.patch.object(utils, 'tempdir', autospec=True) @mock.patch.object(ironic_utils, 'dd', autospec=True) @mock.patch.object(utils, 'umount', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) @mock.patch.object(ironic_utils, 'mkfs', autospec=True) def test_create_vfat_image_umount_fails( self, mkfs_mock, mount_mock, umount_mock, dd_mock, tempdir_mock, create_root_fs_mock): mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'tempdir' tempdir_mock.return_value = mock_file_handle umount_mock.side_effect = processutils.ProcessExecutionError self.assertRaises(exception.ImageCreationFailed, images.create_vfat_image, 'tgt_file') @mock.patch.object(utils, 'umount', autospec=True) def test__umount_without_raise(self, umount_mock): umount_mock.side_effect = processutils.ProcessExecutionError images._umount_without_raise('mountdir') umount_mock.assert_called_once_with('mountdir') def test__generate_isolinux_cfg(self): kernel_params = ['key1=value1', 'key2'] options = {'kernel': '/vmlinuz', 'ramdisk': '/initrd'} expected_cfg = ("default boot\n" "\n" "label boot\n" "kernel /vmlinuz\n" "append initrd=/initrd text key1=value1 key2 
--") cfg = images._generate_cfg(kernel_params, CONF.isolinux_config_template, options) self.assertEqual(expected_cfg, cfg) def test__generate_grub_cfg(self): kernel_params = ['key1=value1', 'key2'] options = {'linux': '/vmlinuz', 'initrd': '/initrd'} expected_cfg = ("set default=0\n" "set timeout=5\n" "set hidden_timeout_quiet=false\n" "\n" "menuentry \"boot_partition\" {\n" "linuxefi /vmlinuz key1=value1 key2 --\n" "initrdefi /initrd\n" "}") cfg = images._generate_cfg(kernel_params, CONF.grub_config_template, options) self.assertEqual(expected_cfg, cfg) @mock.patch.object(images, 'os', autospec=True) def test__read_dir(self, mock_os): mock_os.path.join = os.path.join mock_os.path.isdir.side_effect = (False, True, False) mock_os.listdir.side_effect = [['a', 'b'], ['c']] file_info = images._read_dir('/mnt') expected = { '/mnt/a': 'a', '/mnt/b/c': 'b/c' } self.assertEqual(expected, file_info) @mock.patch.object(os.path, 'relpath', autospec=True) @mock.patch.object(os, 'walk', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) def test__mount_deploy_iso(self, mount_mock, walk_mock, relpath_mock): walk_mock.return_value = [('/tmpdir1/EFI/ubuntu', [], ['grub.cfg']), ('/tmpdir1/isolinux', [], ['efiboot.img', 'isolinux.bin', 'isolinux.cfg'])] relpath_mock.side_effect = ['EFI/ubuntu/grub.cfg', 'isolinux/efiboot.img'] images._mount_deploy_iso('path/to/deployiso', 'tmpdir1') mount_mock.assert_called_once_with('path/to/deployiso', 'tmpdir1', '-o', 'loop') walk_mock.assert_called_once_with('tmpdir1') @mock.patch.object(images, '_umount_without_raise', autospec=True) @mock.patch.object(os.path, 'relpath', autospec=True) @mock.patch.object(os, 'walk', autospec=True) @mock.patch.object(utils, 'mount', autospec=True) def test__mount_deploy_iso_fail_no_esp_imageimg(self, mount_mock, walk_mock, relpath_mock, umount_mock): walk_mock.return_value = [('/tmpdir1/EFI/ubuntu', [], ['grub.cfg']), ('/tmpdir1/isolinux', [], ['isolinux.bin', 'isolinux.cfg'])] 
        relpath_mock.side_effect = 'EFI/ubuntu/grub.cfg'

        self.assertRaises(exception.ImageCreationFailed,
                          images._mount_deploy_iso,
                          'path/to/deployiso', 'tmpdir1')
        mount_mock.assert_called_once_with('path/to/deployiso',
                                           'tmpdir1', '-o', 'loop')
        walk_mock.assert_called_once_with('tmpdir1')
        umount_mock.assert_called_once_with('tmpdir1')

    @mock.patch.object(images, '_umount_without_raise', autospec=True)
    @mock.patch.object(os.path, 'relpath', autospec=True)
    @mock.patch.object(os, 'walk', autospec=True)
    @mock.patch.object(utils, 'mount', autospec=True)
    def test__mount_deploy_iso_fails_no_grub_cfg(
            self, mount_mock, walk_mock, relpath_mock, umount_mock):
        walk_mock.return_value = [('/tmpdir1/EFI/ubuntu', '', []),
                                  ('/tmpdir1/isolinux', '',
                                   ['efiboot.img', 'isolinux.bin',
                                    'isolinux.cfg'])]
        relpath_mock.side_effect = 'isolinux/efiboot.img'

        self.assertRaises(exception.ImageCreationFailed,
                          images._mount_deploy_iso,
                          'path/to/deployiso', 'tmpdir1')
        mount_mock.assert_called_once_with('path/to/deployiso',
                                           'tmpdir1', '-o', 'loop')
        walk_mock.assert_called_once_with('tmpdir1')
        umount_mock.assert_called_once_with('tmpdir1')

    @mock.patch.object(utils, 'mount', autospec=True)
    def test__mount_deploy_iso_fail_with_ExecutionError(self, mount_mock):
        mount_mock.side_effect = processutils.ProcessExecutionError
        self.assertRaises(exception.ImageCreationFailed,
                          images._mount_deploy_iso,
                          'path/to/deployiso', 'tmpdir1')

    @mock.patch.object(images, '_umount_without_raise', autospec=True)
    @mock.patch.object(images, '_create_root_fs', autospec=True)
    @mock.patch.object(utils, 'write_to_file', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    @mock.patch.object(images, '_mount_deploy_iso', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    @mock.patch.object(images, '_generate_cfg', autospec=True)
    def test_create_esp_image_for_uefi_with_deploy_iso(
            self, gen_cfg_mock, tempdir_mock, mount_mock, execute_mock,
            write_to_file_mock, create_root_fs_mock, umount_mock):
        files_info = {
            'path/to/kernel': 'vmlinuz',
            'path/to/ramdisk': 'initrd',
            'sourceabspath/to/efiboot.img': 'path/to/efiboot.img',
            'path/to/grub': 'relpath/to/grub.cfg'
        }
        grubcfg = "grubcfg"
        grub_file = 'tmpdir/relpath/to/grub.cfg'
        gen_cfg_mock.side_effect = (grubcfg,)

        params = ['a=b', 'c']
        grub_options = {'linux': '/vmlinuz', 'initrd': '/initrd'}

        uefi_path_info = {
            'sourceabspath/to/efiboot.img': 'path/to/efiboot.img',
            'path/to/grub': 'relpath/to/grub.cfg'}
        grub_rel_path = 'relpath/to/grub.cfg'
        e_img_rel_path = 'path/to/efiboot.img'
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        mock_file_handle1 = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle1.__enter__.return_value = 'mountdir'
        tempdir_mock.side_effect = mock_file_handle, mock_file_handle1
        mount_mock.return_value = (uefi_path_info,
                                   e_img_rel_path, grub_rel_path)

        images.create_esp_image_for_uefi('tgt_file',
                                         'path/to/kernel',
                                         'path/to/ramdisk',
                                         deploy_iso='path/to/deploy_iso',
                                         kernel_params=params)
        mount_mock.assert_called_once_with('path/to/deploy_iso', 'mountdir')
        create_root_fs_mock.assert_called_once_with('tmpdir', files_info)
        gen_cfg_mock.assert_any_call(params,
                                     CONF.grub_config_template, grub_options)
        write_to_file_mock.assert_any_call(grub_file, grubcfg)
        execute_mock.assert_called_once_with(
            'mkisofs', '-r', '-V', 'VMEDIA_BOOT_ISO', '-l', '-e',
            'path/to/efiboot.img', '-no-emul-boot', '-o', 'tgt_file',
            'tmpdir')
        umount_mock.assert_called_once_with('mountdir')

    @mock.patch.object(utils, 'write_to_file', autospec=True)
    @mock.patch.object(images, '_create_root_fs', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    @mock.patch.object(images, '_generate_cfg', autospec=True)
    def test_create_esp_image_for_uefi_with_esp_image(
            self, gen_cfg_mock, tempdir_mock, execute_mock,
            create_root_fs_mock, write_to_file_mock):
        files_info = {
            'path/to/kernel': 'vmlinuz',
            'path/to/ramdisk': 'initrd',
            'sourceabspath/to/efiboot.img': 'boot/grub/efiboot.img',
            '/dev/null': 'EFI/MYBOOT/grub.cfg',
        }

        grub_cfg_file = '/EFI/MYBOOT/grub.cfg'
        CONF.set_override('grub_config_path', grub_cfg_file)
        grubcfg = "grubcfg"
        gen_cfg_mock.side_effect = (grubcfg,)

        params = ['a=b', 'c']
        grub_options = {'linux': '/vmlinuz', 'initrd': '/initrd'}

        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        mock_file_handle1 = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle1.__enter__.return_value = 'mountdir'
        tempdir_mock.side_effect = mock_file_handle, mock_file_handle1
        mountdir_grub_cfg_path = 'tmpdir' + grub_cfg_file

        images.create_esp_image_for_uefi(
            'tgt_file', 'path/to/kernel', 'path/to/ramdisk',
            esp_image='sourceabspath/to/efiboot.img',
            kernel_params=params)

        create_root_fs_mock.assert_called_once_with('tmpdir', files_info)
        gen_cfg_mock.assert_any_call(params,
                                     CONF.grub_config_template, grub_options)
        write_to_file_mock.assert_any_call(mountdir_grub_cfg_path, grubcfg)
        execute_mock.assert_called_once_with(
            'mkisofs', '-r', '-V', 'VMEDIA_BOOT_ISO', '-l', '-e',
            'boot/grub/efiboot.img', '-no-emul-boot', '-o', 'tgt_file',
            'tmpdir')

    @mock.patch.object(images, '_create_root_fs', autospec=True)
    @mock.patch.object(utils, 'write_to_file', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    @mock.patch.object(images, '_generate_cfg', autospec=True)
    def _test_create_isolinux_image_for_bios(
            self, gen_cfg_mock, execute_mock, tempdir_mock,
            write_to_file_mock, create_root_fs_mock, ldlinux_path=None):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle

        cfg = "cfg"
        cfg_file = 'tmpdir/isolinux/isolinux.cfg'
        gen_cfg_mock.return_value = cfg

        params = ['a=b', 'c']
        isolinux_options = {'kernel': '/vmlinuz',
                            'ramdisk': '/initrd'}

        images.create_isolinux_image_for_bios('tgt_file',
                                              'path/to/kernel',
                                              'path/to/ramdisk',
                                              kernel_params=params)

        files_info = {
            'path/to/kernel': 'vmlinuz',
            'path/to/ramdisk': 'initrd',
            CONF.isolinux_bin: 'isolinux/isolinux.bin'
        }
        if ldlinux_path:
            files_info[ldlinux_path] = 'isolinux/ldlinux.c32'
        create_root_fs_mock.assert_called_once_with('tmpdir', files_info)
        gen_cfg_mock.assert_called_once_with(params,
                                             CONF.isolinux_config_template,
                                             isolinux_options)
        write_to_file_mock.assert_called_once_with(cfg_file, cfg)
        execute_mock.assert_called_once_with(
            'mkisofs', '-r', '-V', "VMEDIA_BOOT_ISO", '-cache-inodes',
            '-J', '-l', '-no-emul-boot', '-boot-load-size',
            '4', '-boot-info-table', '-b', 'isolinux/isolinux.bin',
            '-o', 'tgt_file', 'tmpdir')

    @mock.patch.object(os.path, 'isfile', autospec=True)
    def test_create_isolinux_image_for_bios(self, mock_isfile):
        mock_isfile.return_value = False
        self._test_create_isolinux_image_for_bios()

    def test_create_isolinux_image_for_bios_conf_ldlinux(self):
        CONF.set_override('ldlinux_c32', 'path/to/ldlinux.c32')
        self._test_create_isolinux_image_for_bios(
            ldlinux_path='path/to/ldlinux.c32')

    @mock.patch.object(os.path, 'isfile', autospec=True)
    def test_create_isolinux_image_for_bios_default_ldlinux(self,
                                                            mock_isfile):
        mock_isfile.side_effect = [False, True]
        self._test_create_isolinux_image_for_bios(
            ldlinux_path='/usr/share/syslinux/ldlinux.c32')

    @mock.patch.object(images, '_umount_without_raise', autospec=True)
    @mock.patch.object(images, '_create_root_fs', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    @mock.patch.object(os, 'walk', autospec=True)
    def test_create_esp_image_uefi_rootfs_fails(
            self, walk_mock, utils_mock, tempdir_mock,
            create_root_fs_mock, umount_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        mock_file_handle1 = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle1.__enter__.return_value = 'mountdir'
        tempdir_mock.side_effect = mock_file_handle, mock_file_handle1
        create_root_fs_mock.side_effect = IOError

        self.assertRaises(exception.ImageCreationFailed,
                          images.create_esp_image_for_uefi,
                          'tgt_file',
                          'path/to/kernel',
                          'path/to/ramdisk',
                          deploy_iso='path/to/deployiso')
        umount_mock.assert_called_once_with('mountdir')

    @mock.patch.object(images, '_create_root_fs', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    @mock.patch.object(os, 'walk', autospec=True)
    def test_create_isolinux_image_bios_rootfs_fails(self, walk_mock,
                                                     utils_mock,
                                                     tempdir_mock,
                                                     create_root_fs_mock):
        create_root_fs_mock.side_effect = IOError

        self.assertRaises(exception.ImageCreationFailed,
                          images.create_isolinux_image_for_bios,
                          'tgt_file', 'path/to/kernel', 'path/to/ramdisk')

    @mock.patch.object(images, '_umount_without_raise', autospec=True)
    @mock.patch.object(images, '_create_root_fs', autospec=True)
    @mock.patch.object(utils, 'write_to_file', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    @mock.patch.object(images, '_mount_deploy_iso', autospec=True)
    @mock.patch.object(images, '_generate_cfg', autospec=True)
    def test_create_esp_image_mkisofs_fails(
            self, gen_cfg_mock, mount_mock, utils_mock, tempdir_mock,
            write_to_file_mock, create_root_fs_mock, umount_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        mock_file_handle1 = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle1.__enter__.return_value = 'mountdir'
        tempdir_mock.side_effect = mock_file_handle, mock_file_handle1
        mount_mock.return_value = ({'a': 'a'}, 'b', 'c')
        utils_mock.side_effect = processutils.ProcessExecutionError

        self.assertRaises(exception.ImageCreationFailed,
                          images.create_esp_image_for_uefi,
                          'tgt_file',
                          'path/to/kernel',
                          'path/to/ramdisk',
                          deploy_iso='path/to/deployiso')
        umount_mock.assert_called_once_with('mountdir')

    @mock.patch.object(images, '_create_root_fs', autospec=True)
    @mock.patch.object(utils, 'write_to_file', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    @mock.patch.object(utils, 'execute', autospec=True)
    @mock.patch.object(images, '_generate_cfg', autospec=True)
    def test_create_isolinux_image_bios_mkisofs_fails(self,
                                                      gen_cfg_mock,
                                                      utils_mock,
                                                      tempdir_mock,
                                                      write_to_file_mock,
                                                      create_root_fs_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle
        utils_mock.side_effect = processutils.ProcessExecutionError

        self.assertRaises(exception.ImageCreationFailed,
                          images.create_isolinux_image_for_bios,
                          'tgt_file', 'path/to/kernel', 'path/to/ramdisk')

    @mock.patch.object(images, 'create_esp_image_for_uefi', autospec=True)
    @mock.patch.object(images, 'fetch', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    def test_create_boot_iso_for_uefi_deploy_iso(
            self, tempdir_mock, fetch_images_mock, create_isolinux_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle

        images.create_boot_iso(
            'ctx', 'output_file', 'kernel-uuid', 'ramdisk-uuid',
            deploy_iso_href='deploy_iso-uuid',
            root_uuid='root-uuid', kernel_params='kernel-params',
            boot_mode='uefi')

        fetch_images_mock.assert_any_call(
            'ctx', 'kernel-uuid', 'tmpdir/kernel-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'ramdisk-uuid', 'tmpdir/ramdisk-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'deploy_iso-uuid', 'tmpdir/deploy_iso-uuid')

        params = ['root=UUID=root-uuid', 'kernel-params']
        create_isolinux_mock.assert_called_once_with(
            'output_file', 'tmpdir/kernel-uuid', 'tmpdir/ramdisk-uuid',
            deploy_iso='tmpdir/deploy_iso-uuid', esp_image=None,
            kernel_params=params, configdrive=None)

    @mock.patch.object(images, 'create_esp_image_for_uefi', autospec=True)
    @mock.patch.object(images, 'fetch', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    def test_create_boot_iso_for_uefi_esp_image(
            self, tempdir_mock, fetch_images_mock, create_isolinux_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle

        images.create_boot_iso(
            'ctx', 'output_file', 'kernel-uuid', 'ramdisk-uuid',
            esp_image_href='efiboot-uuid',
            root_uuid='root-uuid', kernel_params='kernel-params',
            boot_mode='uefi')

        fetch_images_mock.assert_any_call(
            'ctx', 'kernel-uuid', 'tmpdir/kernel-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'ramdisk-uuid', 'tmpdir/ramdisk-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'efiboot-uuid', 'tmpdir/efiboot-uuid')

        params = ['root=UUID=root-uuid', 'kernel-params']
        create_isolinux_mock.assert_called_once_with(
            'output_file', 'tmpdir/kernel-uuid', 'tmpdir/ramdisk-uuid',
            deploy_iso=None, esp_image='tmpdir/efiboot-uuid',
            kernel_params=params, configdrive=None)

    @mock.patch.object(images, 'create_esp_image_for_uefi', autospec=True)
    @mock.patch.object(images, 'fetch', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    def test_create_boot_iso_for_uefi_deploy_iso_for_hrefs(
            self, tempdir_mock, fetch_images_mock, create_isolinux_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle

        images.create_boot_iso(
            'ctx', 'output_file', 'http://kernel-href', 'http://ramdisk-href',
            deploy_iso_href='http://deploy_iso-href',
            root_uuid='root-uuid', kernel_params='kernel-params',
            boot_mode='uefi')

        expected_calls = [mock.call('ctx', 'http://kernel-href',
                                    'tmpdir/kernel-href'),
                          mock.call('ctx', 'http://ramdisk-href',
                                    'tmpdir/ramdisk-href'),
                          mock.call('ctx', 'http://deploy_iso-href',
                                    'tmpdir/deploy_iso-href')]
        fetch_images_mock.assert_has_calls(expected_calls)

        params = ['root=UUID=root-uuid', 'kernel-params']
        create_isolinux_mock.assert_called_once_with(
            'output_file', 'tmpdir/kernel-href', 'tmpdir/ramdisk-href',
            deploy_iso='tmpdir/deploy_iso-href', esp_image=None,
            kernel_params=params, configdrive=None)

    @mock.patch.object(images, 'create_esp_image_for_uefi', autospec=True)
    @mock.patch.object(images, 'fetch', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    def test_create_boot_iso_for_uefi_esp_image_for_hrefs(
            self, tempdir_mock, fetch_images_mock, create_isolinux_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle

        images.create_boot_iso(
            'ctx', 'output_file', 'http://kernel-href', 'http://ramdisk-href',
            esp_image_href='http://efiboot-href',
            root_uuid='root-uuid', kernel_params='kernel-params',
            boot_mode='uefi')

        expected_calls = [mock.call('ctx', 'http://kernel-href',
                                    'tmpdir/kernel-href'),
                          mock.call('ctx', 'http://ramdisk-href',
                                    'tmpdir/ramdisk-href'),
                          mock.call('ctx', 'http://efiboot-href',
                                    'tmpdir/efiboot-href')]
        fetch_images_mock.assert_has_calls(expected_calls)

        params = ['root=UUID=root-uuid', 'kernel-params']
        create_isolinux_mock.assert_called_once_with(
            'output_file', 'tmpdir/kernel-href', 'tmpdir/ramdisk-href',
            deploy_iso=None, esp_image='tmpdir/efiboot-href',
            kernel_params=params, configdrive=None)

    @mock.patch.object(images, 'create_isolinux_image_for_bios',
                       autospec=True)
    @mock.patch.object(images, 'fetch', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    def test_create_boot_iso_for_bios(
            self, tempdir_mock, fetch_images_mock, create_isolinux_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle

        images.create_boot_iso('ctx', 'output_file', 'kernel-uuid',
                               'ramdisk-uuid', 'deploy_iso-uuid',
                               'efiboot-uuid', 'root-uuid',
                               'kernel-params', 'bios', 'configdrive')

        fetch_images_mock.assert_any_call(
            'ctx', 'kernel-uuid', 'tmpdir/kernel-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'ramdisk-uuid', 'tmpdir/ramdisk-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'configdrive', 'tmpdir/configdrive')

        # Note (NobodyCam): the original assert asserted that fetch_images
        #                   was not called with parameters, this did not
        #                   work, so I instead assert that there were only
        #                   three calls to the mock, validating the above
        #                   asserts.
        self.assertEqual(3, fetch_images_mock.call_count)

        params = ['root=UUID=root-uuid', 'kernel-params']
        create_isolinux_mock.assert_called_once_with(
            'output_file', 'tmpdir/kernel-uuid', 'tmpdir/ramdisk-uuid',
            kernel_params=params, configdrive='tmpdir/configdrive')

    @mock.patch.object(images, 'create_isolinux_image_for_bios',
                       autospec=True)
    @mock.patch.object(images, 'fetch', autospec=True)
    @mock.patch.object(utils, 'tempdir', autospec=True)
    def test_create_boot_iso_for_bios_with_no_boot_mode(self, tempdir_mock,
                                                        fetch_images_mock,
                                                        create_isolinux_mock):
        mock_file_handle = mock.MagicMock(spec=io.BytesIO)
        mock_file_handle.__enter__.return_value = 'tmpdir'
        tempdir_mock.return_value = mock_file_handle

        images.create_boot_iso('ctx', 'output_file', 'kernel-uuid',
                               'ramdisk-uuid', 'deploy_iso-uuid',
                               'efiboot-uuid', 'root-uuid',
                               'kernel-params', None, 'http://configdrive')

        fetch_images_mock.assert_any_call(
            'ctx', 'kernel-uuid', 'tmpdir/kernel-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'ramdisk-uuid', 'tmpdir/ramdisk-uuid')
        fetch_images_mock.assert_any_call(
            'ctx', 'http://configdrive', 'tmpdir/configdrive')

        params = ['root=UUID=root-uuid', 'kernel-params']
        create_isolinux_mock.assert_called_once_with(
            'output_file', 'tmpdir/kernel-uuid', 'tmpdir/ramdisk-uuid',
            configdrive='tmpdir/configdrive', kernel_params=params)

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test_get_glance_image_properties_no_such_prop(self,
                                                      image_service_mock):
        prop_dict = {'properties': {'p1': 'v1', 'p2': 'v2'}}
        image_service_obj_mock = image_service_mock.return_value
        image_service_obj_mock.show.return_value = prop_dict

        ret_val = images.get_image_properties('con', 'uuid',
                                              ['p1', 'p2', 'p3'])
        image_service_mock.assert_called_once_with('uuid', context='con')
        image_service_obj_mock.show.assert_called_once_with('uuid')
        self.assertEqual({'p1': 'v1',
                          'p2': 'v2',
                          'p3': None}, ret_val)

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test_get_glance_image_properties_default_all(
            self, image_service_mock):
        prop_dict = {'properties': {'p1': 'v1', 'p2': 'v2'}}
        image_service_obj_mock = image_service_mock.return_value
        image_service_obj_mock.show.return_value = prop_dict

        ret_val = images.get_image_properties('con', 'uuid')
        image_service_mock.assert_called_once_with('uuid', context='con')
        image_service_obj_mock.show.assert_called_once_with('uuid')
        self.assertEqual({'p1': 'v1', 'p2': 'v2'}, ret_val)

    @mock.patch.object(image_service, 'get_image_service', autospec=True)
    def test_get_glance_image_properties_with_prop_subset(
            self, image_service_mock):
        prop_dict = {'properties': {'p1': 'v1', 'p2': 'v2', 'p3': 'v3'}}
        image_service_obj_mock = image_service_mock.return_value
        image_service_obj_mock.show.return_value = prop_dict

        ret_val = images.get_image_properties('con', 'uuid',
                                              ['p1', 'p3'])
        image_service_mock.assert_called_once_with('uuid', context='con')
        image_service_obj_mock.show.assert_called_once_with('uuid')
        self.assertEqual({'p1': 'v1', 'p3': 'v3'}, ret_val)

    @mock.patch.object(image_service, 'GlanceImageService', autospec=True)
    def test_get_temp_url_for_glance_image(self, image_service_mock):
        direct_url = 'swift+http://host/v1/AUTH_xx/con/obj'
        image_info = {'id': 'qwe', 'properties': {'direct_url': direct_url}}
        glance_service_mock = image_service_mock.return_value
        glance_service_mock.swift_temp_url.return_value = 'temp-url'
        glance_service_mock.show.return_value = image_info

        temp_url = images.get_temp_url_for_glance_image('context',
                                                        'glance_uuid')

        glance_service_mock.show.assert_called_once_with('glance_uuid')
        self.assertEqual('temp-url', temp_url)
ironic-15.0.0/ironic/tests/unit/common/test_rpc_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_config import cfg
import oslo_messaging
from oslo_service import service as base_service

from ironic.common import context
from ironic.common import rpc
from ironic.common import rpc_service
from ironic.conductor import manager
from ironic.objects import base as objects_base
from ironic.tests import base

CONF = cfg.CONF


@mock.patch.object(base_service.Service, '__init__', lambda *_, **__: None)
class TestRPCService(base.TestCase):

    def setUp(self):
        super(TestRPCService, self).setUp()
        host = "fake_host"
        mgr_module = "ironic.conductor.manager"
        mgr_class = "ConductorManager"
        self.rpc_svc = rpc_service.RPCService(host, mgr_module, mgr_class)

    @mock.patch.object(oslo_messaging, 'Target', autospec=True)
    @mock.patch.object(objects_base, 'IronicObjectSerializer', autospec=True)
    @mock.patch.object(rpc, 'get_server', autospec=True)
    @mock.patch.object(manager.ConductorManager, 'init_host', autospec=True)
    @mock.patch.object(context, 'get_admin_context', autospec=True)
    def test_start(self, mock_ctx, mock_init_method,
                   mock_rpc, mock_ios, mock_target):
        mock_rpc.return_value.start = mock.MagicMock()
        self.rpc_svc.handle_signal = mock.MagicMock()
        self.rpc_svc.start()

        mock_ctx.assert_called_once_with()
        mock_target.assert_called_once_with(topic=self.rpc_svc.topic,
                                            server="fake_host")
        mock_ios.assert_called_once_with(is_server=True)
        mock_init_method.assert_called_once_with(self.rpc_svc.manager,
                                                 mock_ctx.return_value)

ironic-15.0.0/ironic/tests/unit/common/test_hash_ring.py

# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from oslo_config import cfg

from ironic.common import exception
from ironic.common import hash_ring
from ironic.tests.unit.db import base as db_base

CONF = cfg.CONF


class HashRingManagerTestCase(db_base.DbTestCase):
    use_groups = False

    def setUp(self):
        super(HashRingManagerTestCase, self).setUp()
        self.ring_manager = hash_ring.HashRingManager(
            use_groups=self.use_groups)

    def register_conductors(self):
        c1 = self.dbapi.register_conductor({
            'hostname': 'host1',
            'drivers': ['driver1', 'driver2'],
        })
        c2 = self.dbapi.register_conductor({
            'hostname': 'host2',
            'drivers': ['driver1'],
        })
        c3 = self.dbapi.register_conductor({
            'hostname': 'host3',
            'drivers': ['driver1', 'driver2'],
            'conductor_group': 'foogroup',
        })
        c4 = self.dbapi.register_conductor({
            'hostname': 'host4',
            'drivers': ['driver1'],
            'conductor_group': 'foogroup',
        })
        c5 = self.dbapi.register_conductor({
            'hostname': 'host5',
            'drivers': ['driver1'],
            'conductor_group': 'bargroup',
        })
        for c in (c1, c2, c3, c4, c5):
            self.dbapi.register_conductor_hardware_interfaces(
                c.id, 'hardware-type', 'deploy', ['iscsi', 'direct'],
                'iscsi')

    def test_hash_ring_manager_hardware_type_success(self):
        self.register_conductors()
        ring = self.ring_manager.get_ring('hardware-type', '')
        self.assertEqual(sorted(['host1', 'host2', 'host3', 'host4',
                                 'host5']),
                         sorted(ring.nodes))

    def test_hash_ring_manager_hardware_type_success_groups(self):
        # groupings should be ignored here
        self.register_conductors()
        ring = self.ring_manager.get_ring('hardware-type', 'foogroup')
        self.assertEqual(sorted(['host1', 'host2', 'host3', 'host4',
                                 'host5']),
                         sorted(ring.nodes))

    def test_hash_ring_manager_driver_not_found(self):
        self.register_conductors()
        self.assertRaises(exception.DriverNotFound,
                          self.ring_manager.get_ring,
                          'driver3', '')

    def test_hash_ring_manager_automatic_retry(self):
        self.assertRaises(exception.TemporaryFailure,
                          self.ring_manager.get_ring,
                          'hardware-type', '')
        self.register_conductors()
        self.ring_manager.get_ring('hardware-type', '')

    def test_hash_ring_manager_reset_interval(self):
        CONF.set_override('hash_ring_reset_interval', 30)
        # Just to simplify calculations
        CONF.set_override('hash_partition_exponent', 0)
        c1 = self.dbapi.register_conductor({
            'hostname': 'host1',
            'drivers': ['driver1', 'driver2'],
        })
        c2 = self.dbapi.register_conductor({
            'hostname': 'host2',
            'drivers': ['driver1'],
        })
        self.dbapi.register_conductor_hardware_interfaces(
            c1.id, 'hardware-type', 'deploy', ['iscsi', 'direct'], 'iscsi')

        ring = self.ring_manager.get_ring('hardware-type', '')
        self.assertEqual(1, len(ring))

        self.dbapi.register_conductor_hardware_interfaces(
            c2.id, 'hardware-type', 'deploy', ['iscsi', 'direct'], 'iscsi')
        ring = self.ring_manager.get_ring('hardware-type', '')
        # The new conductor is not known yet. Automatic retry does not kick
        # in, since there is an active conductor for the requested hardware
        # type.
        self.assertEqual(1, len(ring))

        self.ring_manager.updated_at = time.time() - 31
        ring = self.ring_manager.get_ring('hardware-type', '')
        self.assertEqual(2, len(ring))

    def test_hash_ring_manager_uncached(self):
        ring_mgr = hash_ring.HashRingManager(cache=False,
                                             use_groups=self.use_groups)
        ring = ring_mgr.ring
        self.assertIsNotNone(ring)
        self.assertIsNone(hash_ring.HashRingManager._hash_rings)


class HashRingManagerWithGroupsTestCase(HashRingManagerTestCase):
    use_groups = True

    def test_hash_ring_manager_hardware_type_success(self):
        self.register_conductors()
        ring = self.ring_manager.get_ring('hardware-type', '')
        self.assertEqual(sorted(['host1', 'host2']),
                         sorted(ring.nodes))

    def test_hash_ring_manager_hardware_type_success_groups(self):
        self.register_conductors()
        ring = self.ring_manager.get_ring('hardware-type', 'foogroup')
        self.assertEqual(sorted(['host3', 'host4']),
                         sorted(ring.nodes))

ironic-15.0.0/ironic/tests/unit/common/test_fsm.py

# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ironic.common import exception as excp
from ironic.common import fsm
from ironic.tests import base


class FSMTest(base.TestCase):
    def setUp(self):
        super(FSMTest, self).setUp()
        m = fsm.FSM()
        m.add_state('working', stable=True)
        m.add_state('daydream')
        m.add_state('wakeup', target='working')
        m.add_state('play', stable=True)
        m.add_transition('wakeup', 'working', 'walk')
        self.fsm = m

    def test_is_stable(self):
        self.assertTrue(self.fsm.is_stable('working'))

    def test_is_stable_not(self):
        self.assertFalse(self.fsm.is_stable('daydream'))

    def test_is_stable_invalid_state(self):
        self.assertRaises(excp.InvalidState,
                          self.fsm.is_stable, 'foo')

    def test_target_state_stable(self):
        # Test to verify that adding a new state with a 'target' state
        # pointing to a 'stable' state does not raise an exception
        self.fsm.add_state('foo', target='working')
        self.fsm.default_start_state = 'working'
        self.fsm.initialize()

    def test__validate_target_state(self):
        # valid
        self.fsm._validate_target_state('working')

        # target doesn't exist
        self.assertRaisesRegex(excp.InvalidState, "does not exist",
                               self.fsm._validate_target_state, 'new state')

        # target isn't a stable state
        self.assertRaisesRegex(excp.InvalidState, "stable",
                               self.fsm._validate_target_state, 'daydream')

    def test_initialize(self):
        # no start state
        self.assertRaises(excp.InvalidState, self.fsm.initialize)

        # no target state
        self.fsm.initialize('working')
        self.assertEqual('working', self.fsm.current_state)
        self.assertIsNone(self.fsm.target_state)

        # default target state
        self.fsm.initialize('wakeup')
        self.assertEqual('wakeup', self.fsm.current_state)
        self.assertEqual('working', self.fsm.target_state)

        # specify (it overrides default) target state
        self.fsm.initialize('wakeup', 'play')
        self.assertEqual('wakeup', self.fsm.current_state)
        self.assertEqual('play', self.fsm.target_state)

        # specify an invalid target state
        self.assertRaises(excp.InvalidState, self.fsm.initialize,
                          'wakeup', 'daydream')

    def test_process_event(self):
        # default target state
        self.fsm.initialize('wakeup')
        self.fsm.process_event('walk')
        self.assertEqual('working', self.fsm.current_state)
        self.assertIsNone(self.fsm.target_state)

        # specify (it overrides default) target state
        self.fsm.initialize('wakeup')
        self.fsm.process_event('walk', 'play')
        self.assertEqual('working', self.fsm.current_state)
        self.assertEqual('play', self.fsm.target_state)

        # specify an invalid target state
        self.fsm.initialize('wakeup')
        self.assertRaises(excp.InvalidState, self.fsm.process_event,
                          'walk', 'daydream')

ironic-15.0.0/ironic/tests/unit/common/test_rpc.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from oslo_config import cfg import oslo_messaging as messaging from ironic.common import context as ironic_context from ironic.common import rpc from ironic.tests import base CONF = cfg.CONF class TestUtils(base.TestCase): @mock.patch.object(messaging, 'Notifier', autospec=True) @mock.patch.object(messaging, 'JsonPayloadSerializer', autospec=True) @mock.patch.object(messaging, 'get_notification_transport', autospec=True) @mock.patch.object(messaging, 'get_rpc_transport', autospec=True) def test_init_globals_notifications_disabled(self, mock_get_rpc_transport, mock_get_notification, mock_json_serializer, mock_notifier): self._test_init_globals(False, mock_get_rpc_transport, mock_get_notification, mock_json_serializer, mock_notifier) @mock.patch.object(messaging, 'Notifier', autospec=True) @mock.patch.object(messaging, 'JsonPayloadSerializer', autospec=True) @mock.patch.object(messaging, 'get_notification_transport', autospec=True) @mock.patch.object(messaging, 'get_rpc_transport', autospec=True) def test_init_globals_notifications_enabled(self, mock_get_rpc_transport, mock_get_notification, mock_json_serializer, mock_notifier): self.config(notification_level='debug') self._test_init_globals(True, mock_get_rpc_transport, mock_get_notification, mock_json_serializer, mock_notifier) @mock.patch.object(messaging, 'Notifier', autospec=True) @mock.patch.object(messaging, 'JsonPayloadSerializer', autospec=True) @mock.patch.object(messaging, 'get_notification_transport', autospec=True) @mock.patch.object(messaging, 'get_rpc_transport', autospec=True) def test_init_globals_with_custom_topics(self, mock_get_rpc_transport, mock_get_notification, mock_json_serializer, mock_notifier): self._test_init_globals( False, mock_get_rpc_transport, mock_get_notification, mock_json_serializer, mock_notifier, versioned_notifications_topics=['custom_topic1', 'custom_topic2']) def _test_init_globals( self, notifications_enabled, mock_get_rpc_transport, mock_get_notification, 
mock_json_serializer, mock_notifier, versioned_notifications_topics=['ironic_versioned_notifications']): rpc.TRANSPORT = None rpc.NOTIFICATION_TRANSPORT = None rpc.SENSORS_NOTIFIER = None rpc.VERSIONED_NOTIFIER = None mock_request_serializer = mock.Mock() mock_request_serializer.return_value = mock.Mock() rpc.RequestContextSerializer = mock_request_serializer # Make sure that two separate Notifiers are instantiated: one for the # regular RPC transport, one for the notification transport mock_notifiers = [mock.Mock()] * 2 mock_notifier.side_effect = mock_notifiers rpc.init(CONF) self.assertEqual(mock_get_rpc_transport.return_value, rpc.TRANSPORT) self.assertEqual(mock_get_notification.return_value, rpc.NOTIFICATION_TRANSPORT) self.assertTrue(mock_json_serializer.called) if not notifications_enabled: notifier_calls = [ mock.call( rpc.NOTIFICATION_TRANSPORT, serializer=mock_request_serializer.return_value), mock.call( rpc.NOTIFICATION_TRANSPORT, serializer=mock_request_serializer.return_value, driver='noop') ] else: notifier_calls = [ mock.call( rpc.NOTIFICATION_TRANSPORT, serializer=mock_request_serializer.return_value), mock.call( rpc.NOTIFICATION_TRANSPORT, serializer=mock_request_serializer.return_value, topics=versioned_notifications_topics) ] mock_notifier.assert_has_calls(notifier_calls) self.assertEqual(mock_notifiers[0], rpc.SENSORS_NOTIFIER) self.assertEqual(mock_notifiers[1], rpc.VERSIONED_NOTIFIER) def test_get_sensors_notifier(self): rpc.SENSORS_NOTIFIER = mock.Mock(autospec=True) rpc.get_sensors_notifier(service='conductor', host='my_conductor', publisher_id='a_great_publisher') rpc.SENSORS_NOTIFIER.prepare.assert_called_once_with( publisher_id='a_great_publisher') def test_get_sensors_notifier_no_publisher_id(self): rpc.SENSORS_NOTIFIER = mock.Mock(autospec=True) rpc.get_sensors_notifier(service='conductor', host='my_conductor') rpc.SENSORS_NOTIFIER.prepare.assert_called_once_with( publisher_id='conductor.my_conductor') def 
test_get_sensors_notifier_no_notifier(self): rpc.SENSORS_NOTIFIER = None self.assertRaises(AssertionError, rpc.get_sensors_notifier) def test_get_versioned_notifier(self): rpc.VERSIONED_NOTIFIER = mock.Mock(autospec=True) rpc.get_versioned_notifier(publisher_id='a_great_publisher') rpc.VERSIONED_NOTIFIER.prepare.assert_called_once_with( publisher_id='a_great_publisher') def test_get_versioned_notifier_no_publisher_id(self): rpc.VERSIONED_NOTIFIER = mock.Mock() self.assertRaises(AssertionError, rpc.get_versioned_notifier, publisher_id=None) def test_get_versioned_notifier_no_notifier(self): rpc.VERSIONED_NOTIFIER = None self.assertRaises( AssertionError, rpc.get_versioned_notifier, publisher_id='a_great_publisher') class TestRequestContextSerializer(base.TestCase): def setUp(self): super(TestRequestContextSerializer, self).setUp() self.mock_serializer = mock.MagicMock() self.serializer = rpc.RequestContextSerializer(self.mock_serializer) self.context = ironic_context.RequestContext() self.entity = {'foo': 'bar'} def test_serialize_entity(self): self.serializer.serialize_entity(self.context, self.entity) self.mock_serializer.serialize_entity.assert_called_with( self.context, self.entity) def test_serialize_entity_empty_base(self): # NOTE(viktors): Return False for check `if self.serializer._base:` bool_args = {'__bool__': lambda *args: False, '__nonzero__': lambda *args: False} self.mock_serializer.configure_mock(**bool_args) entity = self.serializer.serialize_entity(self.context, self.entity) self.assertFalse(self.mock_serializer.serialize_entity.called) # If self.serializer._base is empty, return entity directly self.assertEqual(self.entity, entity) def test_deserialize_entity(self): self.serializer.deserialize_entity(self.context, self.entity) self.mock_serializer.deserialize_entity.assert_called_with( self.context, self.entity) def test_deserialize_entity_empty_base(self): # NOTE(viktors): Return False for check `if self.serializer._base:` bool_args = 
{'__bool__': lambda *args: False, '__nonzero__': lambda *args: False} self.mock_serializer.configure_mock(**bool_args) entity = self.serializer.deserialize_entity(self.context, self.entity) self.assertFalse(self.mock_serializer.serialize_entity.called) self.assertEqual(self.entity, entity) def test_serialize_context(self): serialize_values = self.serializer.serialize_context(self.context) self.assertEqual(self.context.to_dict(), serialize_values) def test_deserialize_context(self): serialize_values = self.context.to_dict() new_context = self.serializer.deserialize_context(serialize_values) self.assertEqual(serialize_values, new_context.to_dict()) self.assertIsInstance(new_context, ironic_context.RequestContext) ironic-15.0.0/ironic/tests/unit/common/test_glance_service.py # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
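The round-trip behavior exercised by TestRequestContextSerializer above can be sketched as a thin wrapper class. This is an illustrative re-implementation of the pattern under test, not the real `ironic.common.rpc.RequestContextSerializer` (which subclasses an oslo.messaging serializer base and may differ in details):

```python
class RequestContextSerializer:
    """Delegate entity (de)serialization to a base serializer when one is
    set, and flatten the request context to a plain dict for the wire."""

    def __init__(self, base=None):
        self._base = base

    def serialize_entity(self, context, entity):
        if not self._base:
            # No base serializer configured: return the entity untouched,
            # which is the behavior test_serialize_entity_empty_base asserts.
            return entity
        return self._base.serialize_entity(context, entity)

    def deserialize_entity(self, context, entity):
        if not self._base:
            return entity
        return self._base.deserialize_entity(context, entity)

    def serialize_context(self, context):
        # The context travels as a plain dict, reconstructed on the far side.
        return context.to_dict()
```

The empty-base short-circuit is why the tests above force the mock serializer's `__bool__`/`__nonzero__` to return False: they verify the base is never consulted when it is falsy.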
import datetime import importlib import time from glanceclient import client as glance_client from glanceclient import exc as glance_exc from keystoneauth1 import loading as kaloading import mock from oslo_config import cfg from oslo_utils import uuidutils import retrying import testtools from ironic.common import context from ironic.common import exception from ironic.common.glance_service import image_service from ironic.common.glance_service import service_utils from ironic.tests import base from ironic.tests.unit import stubs CONF = cfg.CONF class NullWriter(object): """Used to test ImageService.get which takes a writer object.""" def write(self, *arg, **kwargs): pass class TestGlanceSerializer(testtools.TestCase): def test_serialize(self): metadata = {'name': 'image1', 'foo': 'bar', 'properties': { 'prop1': 'propvalue1', 'mappings': '[' '{"virtual":"aaa","device":"bbb"},' '{"virtual":"xxx","device":"yyy"}]', 'block_device_mapping': '[' '{"virtual_device":"fake","device_name":"/dev/fake"},' '{"virtual_device":"ephemeral0",' '"device_name":"/dev/fake0"}]'}} expected = { 'name': 'image1', 'foo': 'bar', 'properties': {'prop1': 'propvalue1', 'mappings': [ {'virtual': 'aaa', 'device': 'bbb'}, {'virtual': 'xxx', 'device': 'yyy'}, ], 'block_device_mapping': [ {'virtual_device': 'fake', 'device_name': '/dev/fake'}, {'virtual_device': 'ephemeral0', 'device_name': '/dev/fake0'} ] } } converted = service_utils._convert(metadata) self.assertEqual(expected, converted) class TestGlanceImageService(base.TestCase): NOW_GLANCE_OLD_FORMAT = "2010-10-11T10:30:22" NOW_GLANCE_FORMAT = "2010-10-11T10:30:22.000000" NOW_DATETIME = datetime.datetime(2010, 10, 11, 10, 30, 22) def setUp(self): super(TestGlanceImageService, self).setUp() self.client = stubs.StubGlanceClient() self.context = context.RequestContext(auth_token=True) self.context.user_id = 'fake' self.context.project_id = 'fake' self.service = image_service.GlanceImageService(self.client, self.context) @staticmethod def 
_make_fixture(**kwargs): fixture = {'name': None, 'status': "active"} fixture.update(kwargs) return stubs.FakeImage(fixture) @property def endpoint(self): # For glanceclient versions >= 0.13, the endpoint is located # under http_client (blueprint common-client-library-2) # I5addc38eb2e2dd0be91b566fda7c0d81787ffa75 # Test both options to keep backward compatibility if getattr(self.service.client, 'endpoint', None): endpoint = self.service.client.endpoint else: endpoint = self.service.client.http_client.endpoint return endpoint def _make_datetime_fixture(self): return self._make_fixture(created_at=self.NOW_GLANCE_FORMAT, updated_at=self.NOW_GLANCE_FORMAT, deleted_at=self.NOW_GLANCE_FORMAT) def test_show_passes_through_to_client(self): image_id = uuidutils.generate_uuid() image = self._make_fixture(name='image1', id=image_id) expected = { 'checksum': None, 'container_format': None, 'created_at': None, 'deleted': None, 'deleted_at': None, 'disk_format': None, 'file': None, 'id': image_id, 'min_disk': None, 'min_ram': None, 'name': 'image1', 'owner': None, 'properties': {}, 'protected': None, 'schema': None, 'size': None, 'status': "active", 'tags': None, 'updated_at': None, 'visibility': None, 'os_hash_algo': None, 'os_hash_value': None, } with mock.patch.object(self.service, 'call', return_value=image, autospec=True): image_meta = self.service.show(image_id) self.service.call.assert_called_once_with('get', image_id) self.assertEqual(expected, image_meta) def test_show_makes_datetimes(self): image_id = uuidutils.generate_uuid() image = self._make_datetime_fixture() with mock.patch.object(self.service, 'call', return_value=image, autospec=True): image_meta = self.service.show(image_id) self.service.call.assert_called_once_with('get', image_id) self.assertEqual(self.NOW_DATETIME, image_meta['created_at']) self.assertEqual(self.NOW_DATETIME, image_meta['updated_at']) def test_show_raises_when_no_authtoken_in_the_context(self): self.context.auth_token = False 
self.assertRaises(exception.ImageNotFound, self.service.show, uuidutils.generate_uuid()) def test_show_raises_when_image_not_active(self): image_id = uuidutils.generate_uuid() image = self._make_fixture(name='image1', id=image_id, status="queued") with mock.patch.object(self.service, 'call', return_value=image, autospec=True): self.assertRaises(exception.ImageUnacceptable, self.service.show, image_id) @mock.patch.object(retrying.time, 'sleep', autospec=True) def test_download_with_retries(self, mock_sleep): tries = [0] class MyGlanceStubClient(stubs.StubGlanceClient): """A client that fails the first time, then succeeds.""" def get(self, image_id): if tries[0] == 0: tries[0] = 1 raise glance_exc.ServiceUnavailable('') else: return {} stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = image_service.GlanceImageService(stub_client, stub_context) image_id = uuidutils.generate_uuid() writer = NullWriter() # When retries are disabled, we should get an exception self.config(num_retries=0, group='glance') self.assertRaises(exception.GlanceConnectionFailed, stub_service.download, image_id, writer) # Now let's enable retries. No exception should be raised now.
self.config(num_retries=1, group='glance') importlib.reload(image_service) stub_service = image_service.GlanceImageService(stub_client, stub_context) tries = [0] stub_service.download(image_id, writer) self.assertTrue(mock_sleep.called) def test_download_no_data(self): self.client.fake_wrapped = None image_id = uuidutils.generate_uuid() image = self._make_datetime_fixture() with mock.patch.object(self.client, 'get', return_value=image, autospec=True): self.assertRaisesRegex(exception.ImageDownloadFailed, 'image contains no data', self.service.download, image_id) @mock.patch('sendfile.sendfile', autospec=True) @mock.patch('os.path.getsize', autospec=True) @mock.patch('%s.open' % __name__, new=mock.mock_open(), create=True) def test_download_file_url(self, mock_getsize, mock_sendfile): # NOTE: only in v2 API class MyGlanceStubClient(stubs.StubGlanceClient): """A client that returns a file url.""" s_tmpfname = '/whatever/source' def get(self, image_id): return type('GlanceTestDirectUrlMeta', (object,), {'direct_url': 'file://' + self.s_tmpfname}) stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_client = MyGlanceStubClient() stub_service = image_service.GlanceImageService(stub_client, context=stub_context) image_id = uuidutils.generate_uuid() self.config(allowed_direct_url_schemes=['file'], group='glance') # patching open in image_service module namespace # to make call-spec assertions with mock.patch('ironic.common.glance_service.image_service.open', new=mock.mock_open(), create=True) as mock_ironic_open: with open('/whatever/target', 'w') as mock_target_fd: stub_service.download(image_id, mock_target_fd) # assert the image data was neither read nor written # but rather sendfiled mock_ironic_open.assert_called_once_with(MyGlanceStubClient.s_tmpfname, 'r') mock_source_fd = mock_ironic_open() self.assertFalse(mock_source_fd.read.called) self.assertFalse(mock_target_fd.write.called)
mock_sendfile.assert_called_once_with( mock_target_fd.fileno(), mock_source_fd.fileno(), 0, mock_getsize(MyGlanceStubClient.s_tmpfname)) def test_client_forbidden_converts_to_imagenotauthed(self): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises a Forbidden exception.""" def get(self, image_id): raise glance_exc.Forbidden(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = image_service.GlanceImageService(stub_client, stub_context) image_id = uuidutils.generate_uuid() writer = NullWriter() self.assertRaises(exception.ImageNotAuthorized, stub_service.download, image_id, writer) def test_client_httpforbidden_converts_to_imagenotauthed(self): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises a HTTPForbidden exception.""" def get(self, image_id): raise glance_exc.HTTPForbidden(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = image_service.GlanceImageService(stub_client, stub_context) image_id = uuidutils.generate_uuid() writer = NullWriter() self.assertRaises(exception.ImageNotAuthorized, stub_service.download, image_id, writer) def test_client_notfound_converts_to_imagenotfound(self): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises a NotFound exception.""" def get(self, image_id): raise glance_exc.NotFound(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = image_service.GlanceImageService(stub_client, stub_context) image_id = uuidutils.generate_uuid() writer = NullWriter() self.assertRaises(exception.ImageNotFound, stub_service.download, image_id, writer) def test_client_httpnotfound_converts_to_imagenotfound(self): 
class MyGlanceStubClient(stubs.StubGlanceClient): """A client that raises a HTTPNotFound exception.""" def get(self, image_id): raise glance_exc.HTTPNotFound(image_id) stub_client = MyGlanceStubClient() stub_context = context.RequestContext(auth_token=True) stub_context.user_id = 'fake' stub_context.project_id = 'fake' stub_service = image_service.GlanceImageService(stub_client, stub_context) image_id = uuidutils.generate_uuid() writer = NullWriter() self.assertRaises(exception.ImageNotFound, stub_service.download, image_id, writer) @mock.patch('ironic.common.keystone.get_auth', autospec=True, return_value=mock.sentinel.auth) @mock.patch('ironic.common.keystone.get_service_auth', autospec=True, return_value=mock.sentinel.sauth) @mock.patch('ironic.common.keystone.get_adapter', autospec=True) @mock.patch('ironic.common.keystone.get_session', autospec=True, return_value=mock.sentinel.session) @mock.patch.object(glance_client, 'Client', autospec=True) class CheckImageServiceTestCase(base.TestCase): def setUp(self): super(CheckImageServiceTestCase, self).setUp() self.context = context.RequestContext(global_request_id='global') self.service = image_service.GlanceImageService(None, self.context) # NOTE(pas-ha) register keystoneauth dynamic options manually plugin = kaloading.get_plugin_loader('password') opts = kaloading.get_auth_plugin_conf_options(plugin) self.cfg_fixture.register_opts(opts, group='glance') self.config(auth_type='password', auth_url='viking', username='spam', password='ham', project_name='parrot', service_type='image', region_name='SomeRegion', interface='internal', group='glance') image_service._GLANCE_SESSION = None def test_check_image_service_client_already_set(self, mock_gclient, mock_sess, mock_adapter, mock_sauth, mock_auth): def func(self): return True self.service.client = True wrapped_func = image_service.check_image_service(func) self.assertTrue(wrapped_func(self.service)) self.assertEqual(0, mock_gclient.call_count) self.assertEqual(0, 
mock_sess.call_count) self.assertEqual(0, mock_adapter.call_count) self.assertEqual(0, mock_auth.call_count) self.assertEqual(0, mock_sauth.call_count) def _assert_client_call(self, mock_gclient, url, user=False): mock_gclient.assert_called_once_with( 2, session=mock.sentinel.session, global_request_id='global', auth=mock.sentinel.sauth if user else mock.sentinel.auth, endpoint_override=url) def test_check_image_service__config_auth(self, mock_gclient, mock_sess, mock_adapter, mock_sauth, mock_auth): def func(service, *args, **kwargs): return args, kwargs mock_adapter.return_value = adapter = mock.Mock() adapter.get_endpoint.return_value = 'glance_url' uuid = uuidutils.generate_uuid() params = {'image_href': uuid} wrapped_func = image_service.check_image_service(func) self.assertEqual(((), params), wrapped_func(self.service, **params)) self._assert_client_call(mock_gclient, 'glance_url') mock_auth.assert_called_once_with('glance') mock_sess.assert_called_once_with('glance') mock_adapter.assert_called_once_with('glance', session=mock.sentinel.session, auth=mock.sentinel.auth) adapter.get_endpoint.assert_called_once_with() self.assertEqual(0, mock_sauth.call_count) def test_check_image_service__token_auth(self, mock_gclient, mock_sess, mock_adapter, mock_sauth, mock_auth): def func(service, *args, **kwargs): return args, kwargs self.service.context = context.RequestContext( auth_token='token', global_request_id='global') mock_adapter.return_value = adapter = mock.Mock() adapter.get_endpoint.return_value = 'glance_url' uuid = uuidutils.generate_uuid() params = {'image_href': uuid} wrapped_func = image_service.check_image_service(func) self.assertEqual(((), params), wrapped_func(self.service, **params)) self._assert_client_call(mock_gclient, 'glance_url', user=True) mock_sess.assert_called_once_with('glance') mock_adapter.assert_called_once_with('glance', session=mock.sentinel.session, auth=mock.sentinel.auth) mock_sauth.assert_called_once_with(self.service.context, 
'glance_url', mock.sentinel.auth) mock_auth.assert_called_once_with('glance') def test_check_image_service__no_auth(self, mock_gclient, mock_sess, mock_adapter, mock_sauth, mock_auth): def func(service, *args, **kwargs): return args, kwargs self.config(endpoint_override='foo', auth_type='none', group='glance') mock_adapter.return_value = adapter = mock.Mock() adapter.get_endpoint.return_value = 'foo' uuid = uuidutils.generate_uuid() params = {'image_href': uuid} wrapped_func = image_service.check_image_service(func) self.assertEqual(((), params), wrapped_func(self.service, **params)) self.assertEqual('none', image_service.CONF.glance.auth_type) self._assert_client_call(mock_gclient, 'foo') mock_sess.assert_called_once_with('glance') mock_adapter.assert_called_once_with('glance', session=mock.sentinel.session, auth=mock.sentinel.auth) self.assertEqual(0, mock_sauth.call_count) def _create_failing_glance_client(info): class MyGlanceStubClient(stubs.StubGlanceClient): """A client that fails the first time, then succeeds.""" def get(self, image_id): info['num_calls'] += 1 if info['num_calls'] == 1: raise glance_exc.ServiceUnavailable('') return {} return MyGlanceStubClient() class TestGlanceSwiftTempURL(base.TestCase): def setUp(self): super(TestGlanceSwiftTempURL, self).setUp() client = stubs.StubGlanceClient() self.context = context.RequestContext() self.context.auth_token = 'fake' self.service = image_service.GlanceImageService(client, self.context) self.config(swift_temp_url_key='correcthorsebatterystaple', group='glance') self.config(swift_endpoint_url='https://swift.example.com', group='glance') self.config(swift_account='AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30', group='glance') self.config(swift_api_version='v1', group='glance') self.config(swift_container='glance', group='glance') self.config(swift_temp_url_duration=1200, group='glance') self.config(swift_store_multiple_containers_seed=0, group='glance') self.fake_image = { 'id': 
'757274c4-2856-4bd2-bb20-9a4a231e187b' } @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url(self, tempurl_mock): path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') @mock.patch('ironic.common.keystone.get_adapter', autospec=True) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_endpoint_detected(self, tempurl_mock, adapter_mock): self.config(swift_endpoint_url=None, group='glance') path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') endpoint = 'http://another.example.com:8080' adapter_mock.return_value.get_endpoint.return_value = endpoint self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(endpoint + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') @mock.patch('ironic.common.keystone.get_adapter', autospec=True) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_endpoint_with_suffix(self, tempurl_mock, adapter_mock): self.config(swift_endpoint_url=None, group='glance') path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + 
'?temp_url_sig=hmacsig&temp_url_expires=1400001200') endpoint = 'http://another.example.com:8080' adapter_mock.return_value.get_endpoint.return_value = ( endpoint + '/v1/AUTH_foobar') self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(endpoint + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') @mock.patch('ironic.common.swift.get_swift_session', autospec=True) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_account_detected(self, tempurl_mock, swift_mock): self.config(swift_account=None, group='glance') path = ('/v1/AUTH_42/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') auth_ref = swift_mock.return_value.auth.get_auth_ref.return_value auth_ref.project_id = '42' self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') swift_mock.assert_called_once_with() @mock.patch('ironic.common.swift.SwiftAPI', autospec=True) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_key_detected(self, tempurl_mock, swift_mock): self.config(swift_temp_url_key=None, group='glance') path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') conn = swift_mock.return_value.connection conn.head_account.return_value = { 'x-account-meta-temp-url-key': 'secret' } 
self.service._validate_temp_url_config = mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key='secret', method='GET') conn.head_account.assert_called_once_with() @mock.patch('ironic.common.swift.SwiftAPI', autospec=True) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_no_key_detected(self, tempurl_mock, swift_mock): self.config(swift_temp_url_key=None, group='glance') path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') conn = swift_mock.return_value.connection conn.head_account.return_value = {} self.service._validate_temp_url_config = mock.Mock() self.assertRaises(exception.InvalidParameterValue, self.service.swift_temp_url, image_info=self.fake_image) conn.head_account.assert_called_once_with() @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_invalid_image_info(self, tempurl_mock): self.service._validate_temp_url_config = mock.Mock() image_info = {} self.assertRaises(exception.ImageUnacceptable, self.service.swift_temp_url, image_info) image_info = {'id': 'not an id'} self.assertRaises(exception.ImageUnacceptable, self.service.swift_temp_url, image_info) self.assertFalse(tempurl_mock.called) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_swift_temp_url_multiple_containers(self, tempurl_mock): self.config(swift_store_multiple_containers_seed=8, group='glance') path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance_757274c4' '/757274c4-2856-4bd2-bb20-9a4a231e187b') tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.service._validate_temp_url_config = 
mock.Mock() temp_url = self.service.swift_temp_url(image_info=self.fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') def test_swift_temp_url_url_bad_no_info(self): self.assertRaises(exception.ImageUnacceptable, self.service.swift_temp_url, image_info={}) def test__validate_temp_url_config(self): self.service._validate_temp_url_config() def test__validate_temp_url_no_key_no_exception(self): self.config(swift_temp_url_key=None, group='glance') self.service._validate_temp_url_config() def test__validate_temp_url_endpoint_less_than_download_delay(self): self.config(swift_temp_url_expected_download_start_delay=1000, group='glance') self.config(swift_temp_url_duration=15, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) def test__validate_temp_url_multiple_containers(self): self.config(swift_store_multiple_containers_seed=-1, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) self.config(swift_store_multiple_containers_seed=None, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) self.config(swift_store_multiple_containers_seed=33, group='glance') self.assertRaises(exception.InvalidParameterValue, self.service._validate_temp_url_config) class TestSwiftTempUrlCache(base.TestCase): def setUp(self): super(TestSwiftTempUrlCache, self).setUp() client = stubs.StubGlanceClient() self.context = context.RequestContext() self.context.auth_token = 'fake' self.config(swift_temp_url_expected_download_start_delay=100, group='glance') self.config(swift_temp_url_key='correcthorsebatterystaple', group='glance') self.config(swift_endpoint_url='https://swift.example.com', group='glance') 
self.config(swift_account='AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30', group='glance') self.config(swift_api_version='v1', group='glance') self.config(swift_container='glance', group='glance') self.config(swift_temp_url_duration=1200, group='glance') self.config(swift_temp_url_cache_enabled=True, group='glance') self.config(swift_store_multiple_containers_seed=0, group='glance') self.glance_service = image_service.GlanceImageService( client, context=self.context) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_add_items_to_cache(self, tempurl_mock): fake_image = { 'id': uuidutils.generate_uuid() } path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%s' % fake_image['id']) exp_time = int(time.time()) + 1200 tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=%s' % exp_time) cleanup_mock = mock.Mock() self.glance_service._remove_expired_items_from_cache = cleanup_mock self.glance_service._validate_temp_url_config = mock.Mock() temp_url = self.glance_service.swift_temp_url( image_info=fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) cleanup_mock.assert_called_once_with() tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') self.assertEqual((temp_url, exp_time), self.glance_service._cache[fake_image['id']]) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_return_cached_tempurl(self, tempurl_mock): fake_image = { 'id': uuidutils.generate_uuid() } exp_time = int(time.time()) + 1200 temp_url = CONF.glance.swift_endpoint_url + ( '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%(uuid)s' '?temp_url_sig=hmacsig&temp_url_expires=%(exp_time)s' % {'uuid': fake_image['id'], 'exp_time': exp_time} ) self.glance_service._cache[fake_image['id']] = ( image_service.TempUrlCacheElement(url=temp_url, url_expires_at=exp_time) ) cleanup_mock = 
mock.Mock() self.glance_service._remove_expired_items_from_cache = cleanup_mock self.glance_service._validate_temp_url_config = mock.Mock() self.assertEqual( temp_url, self.glance_service.swift_temp_url(image_info=fake_image) ) cleanup_mock.assert_called_once_with() self.assertFalse(tempurl_mock.called) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def test_do_not_return_expired_tempurls(self, tempurl_mock): fake_image = { 'id': uuidutils.generate_uuid() } old_exp_time = int(time.time()) + 99 path = ( '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%s' % fake_image['id'] ) query = '?temp_url_sig=hmacsig&temp_url_expires=%s' self.glance_service._cache[fake_image['id']] = ( image_service.TempUrlCacheElement( url=(CONF.glance.swift_endpoint_url + path + query % old_exp_time), url_expires_at=old_exp_time) ) new_exp_time = int(time.time()) + 1200 tempurl_mock.return_value = ( path + query % new_exp_time) self.glance_service._validate_temp_url_config = mock.Mock() fresh_temp_url = self.glance_service.swift_temp_url( image_info=fake_image) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, fresh_temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') self.assertEqual( (fresh_temp_url, new_exp_time), self.glance_service._cache[fake_image['id']]) def test_remove_expired_items_from_cache(self): expired_items = { uuidutils.generate_uuid(): image_service.TempUrlCacheElement( 'fake-url-1', int(time.time()) - 10 ), uuidutils.generate_uuid(): image_service.TempUrlCacheElement( 'fake-url-2', int(time.time()) + 90 # Agent won't be able to start in time ) } valid_items = { uuidutils.generate_uuid(): image_service.TempUrlCacheElement( 'fake-url-3', int(time.time()) + 1000 ), uuidutils.generate_uuid(): image_service.TempUrlCacheElement( 'fake-url-4', int(time.time()) + 2000 ) } self.glance_service._cache.update(expired_items) 
self.glance_service._cache.update(valid_items) self.glance_service._remove_expired_items_from_cache() for uuid in valid_items: self.assertEqual(valid_items[uuid], self.glance_service._cache[uuid]) for uuid in expired_items: self.assertNotIn(uuid, self.glance_service._cache) @mock.patch('swiftclient.utils.generate_temp_url', autospec=True) def _test__generate_temp_url(self, fake_image, tempurl_mock): path = ('/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30' '/glance' '/%s' % fake_image['id']) tempurl_mock.return_value = ( path + '?temp_url_sig=hmacsig&temp_url_expires=1400001200') self.glance_service._validate_temp_url_config = mock.Mock() temp_url = self.glance_service._generate_temp_url( path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET', endpoint=CONF.glance.swift_endpoint_url, image_id=fake_image['id'] ) self.assertEqual(CONF.glance.swift_endpoint_url + tempurl_mock.return_value, temp_url) tempurl_mock.assert_called_with( path=path, seconds=CONF.glance.swift_temp_url_duration, key=CONF.glance.swift_temp_url_key, method='GET') def test_swift_temp_url_cache_enabled(self): fake_image = { 'id': uuidutils.generate_uuid() } rm_expired = mock.Mock() self.glance_service._remove_expired_items_from_cache = rm_expired self._test__generate_temp_url(fake_image) rm_expired.assert_called_once_with() self.assertIn(fake_image['id'], self.glance_service._cache) def test_swift_temp_url_cache_disabled(self): self.config(swift_temp_url_cache_enabled=False, group='glance') fake_image = { 'id': uuidutils.generate_uuid() } rm_expired = mock.Mock() self.glance_service._remove_expired_items_from_cache = rm_expired self._test__generate_temp_url(fake_image) self.assertFalse(rm_expired.called) self.assertNotIn(fake_image['id'], self.glance_service._cache) class TestServiceUtils(base.TestCase): def setUp(self): super(TestServiceUtils, self).setUp() service_utils._GLANCE_API_SERVER = None def test_parse_image_id_from_uuid(self): image_href = 
uuidutils.generate_uuid() parsed_id = service_utils.parse_image_id(image_href) self.assertEqual(image_href, parsed_id) def test_parse_image_id_from_glance(self): uuid = uuidutils.generate_uuid() image_href = u'glance://some-stuff/%s' % uuid parsed_id = service_utils.parse_image_id(image_href) self.assertEqual(uuid, parsed_id) def test_parse_image_id_from_glance_fail(self): self.assertRaises(exception.InvalidImageRef, service_utils.parse_image_id, u'glance://not-a-uuid') def test_parse_image_id_fail(self): self.assertRaises(exception.InvalidImageRef, service_utils.parse_image_id, u'http://spam.ham/eggs') def test_is_glance_image(self): image_href = u'uui\u0111' self.assertFalse(service_utils.is_glance_image(image_href)) image_href = u'733d1c44-a2ea-414b-aca7-69decf20d810' self.assertTrue(service_utils.is_glance_image(image_href)) image_href = u'glance://uui\u0111' self.assertTrue(service_utils.is_glance_image(image_href)) image_href = 'http://aaa/bbb' self.assertFalse(service_utils.is_glance_image(image_href)) image_href = None self.assertFalse(service_utils.is_glance_image(image_href)) ironic-15.0.0/ironic/tests/unit/common/test_swift.py0000664000175000017500000001770313652514273022563 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
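The glance temp-URL cache tests above (expired entries evicted, fresh entries reused) reduce to a small caching policy. A standalone sketch of that pattern, with hypothetical names — the real implementation lives in `ironic.common.glance_service` and signs URLs via `swiftclient.utils.generate_temp_url`:

```python
# Standalone sketch (hypothetical names) of the temp-URL caching policy the
# tests above exercise: reuse a cached URL only while it still has enough
# lifetime left, otherwise generate and cache a fresh one.
import time
from collections import namedtuple

TempUrlCacheElement = namedtuple('TempUrlCacheElement', ['url', 'url_expires_at'])

_cache = {}


def get_temp_url(image_id, generate, duration=1200, margin=120):
    """Return a cached URL if it has at least `margin` seconds left."""
    now = int(time.time())
    entry = _cache.get(image_id)
    if entry and entry.url_expires_at - now >= margin:
        return entry.url  # still fresh: no new signature generated
    url = generate(image_id)  # stand-in for swiftclient's URL signing
    _cache[image_id] = TempUrlCacheElement(url, now + duration)
    return url


fresh = get_temp_url('img-1', lambda i: '/v1/AUTH_x/glance/%s?sig=a' % i)
cached = get_temp_url('img-1', lambda i: '/v1/AUTH_x/glance/%s?sig=b' % i)
print(fresh == cached)  # second call served from cache
```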
import builtins from http import client as http_client import io import mock from oslo_config import cfg from swiftclient import client as swift_client from swiftclient import exceptions as swift_exception from swiftclient import utils as swift_utils from ironic.common import exception from ironic.common import swift from ironic.tests import base CONF = cfg.CONF @mock.patch.object(swift, 'get_swift_session', autospec=True, return_value=mock.Mock(verify=False, cert=('spam', 'ham'), timeout=42)) @mock.patch.object(swift_client, 'Connection', autospec=True) class SwiftTestCase(base.TestCase): def setUp(self): super(SwiftTestCase, self).setUp() self.swift_exception = swift_exception.ClientException('', '') def test___init__(self, connection_mock, keystone_mock): """Check if client is properly initialized with swift""" self.config(group='swift', endpoint_override='http://example.com/objects') swift.SwiftAPI() connection_mock.assert_called_once_with( retries=2, session=keystone_mock.return_value, timeout=42, insecure=True, cert='spam', cert_key='ham', os_options={'object_storage_url': 'http://example.com/objects'} ) @mock.patch.object(builtins, 'open', autospec=True) def test_create_object(self, open_mock, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'file-object' open_mock.return_value = mock_file_handle connection_obj_mock.put_object.return_value = 'object-uuid' object_uuid = swiftapi.create_object('container', 'object', 'some-file-location') connection_obj_mock.put_container.assert_called_once_with('container') connection_obj_mock.put_object.assert_called_once_with( 'container', 'object', 'file-object', headers=None) self.assertEqual('object-uuid', object_uuid) @mock.patch.object(builtins, 'open', autospec=True) def test_create_object_create_container_fails(self, open_mock, connection_mock, keystone_mock): 
swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value connection_obj_mock.put_container.side_effect = self.swift_exception self.assertRaises(exception.SwiftOperationError, swiftapi.create_object, 'container', 'object', 'some-file-location') connection_obj_mock.put_container.assert_called_once_with('container') self.assertFalse(connection_obj_mock.put_object.called) @mock.patch.object(builtins, 'open', autospec=True) def test_create_object_put_object_fails(self, open_mock, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() mock_file_handle = mock.MagicMock(spec=io.BytesIO) mock_file_handle.__enter__.return_value = 'file-object' open_mock.return_value = mock_file_handle connection_obj_mock = connection_mock.return_value connection_obj_mock.head_account.side_effect = None connection_obj_mock.put_object.side_effect = self.swift_exception self.assertRaises(exception.SwiftOperationError, swiftapi.create_object, 'container', 'object', 'some-file-location') connection_obj_mock.put_container.assert_called_once_with('container') connection_obj_mock.put_object.assert_called_once_with( 'container', 'object', 'file-object', headers=None) @mock.patch.object(swift_utils, 'generate_temp_url', autospec=True) def test_get_temp_url(self, gen_temp_url_mock, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value connection_obj_mock.url = 'http://host/v1/AUTH_tenant_id' head_ret_val = {'x-account-meta-temp-url-key': 'secretkey'} connection_obj_mock.head_account.return_value = head_ret_val gen_temp_url_mock.return_value = 'temp-url-path' temp_url_returned = swiftapi.get_temp_url('container', 'object', 10) connection_obj_mock.head_account.assert_called_once_with() object_path_expected = '/v1/AUTH_tenant_id/container/object' gen_temp_url_mock.assert_called_once_with(object_path_expected, 10, 'secretkey', 'GET') self.assertEqual('http://host/temp-url-path', temp_url_returned) def 
test_delete_object(self, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value swiftapi.delete_object('container', 'object') connection_obj_mock.delete_object.assert_called_once_with('container', 'object') def test_delete_object_exc_resource_not_found(self, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() exc = swift_exception.ClientException( "Resource not found", http_status=http_client.NOT_FOUND) connection_obj_mock = connection_mock.return_value connection_obj_mock.delete_object.side_effect = exc self.assertRaises(exception.SwiftObjectNotFoundError, swiftapi.delete_object, 'container', 'object') connection_obj_mock.delete_object.assert_called_once_with('container', 'object') def test_delete_object_exc(self, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() exc = swift_exception.ClientException("Operation error") connection_obj_mock = connection_mock.return_value connection_obj_mock.delete_object.side_effect = exc self.assertRaises(exception.SwiftOperationError, swiftapi.delete_object, 'container', 'object') connection_obj_mock.delete_object.assert_called_once_with('container', 'object') def test_head_object(self, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value expected_head_result = {'a': 'b'} connection_obj_mock.head_object.return_value = expected_head_result actual_head_result = swiftapi.head_object('container', 'object') connection_obj_mock.head_object.assert_called_once_with('container', 'object') self.assertEqual(expected_head_result, actual_head_result) def test_update_object_meta(self, connection_mock, keystone_mock): swiftapi = swift.SwiftAPI() connection_obj_mock = connection_mock.return_value headers = {'a': 'b'} swiftapi.update_object_meta('container', 'object', headers) connection_obj_mock.post_object.assert_called_once_with( 'container', 'object', headers) 
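The `SwiftTestCase` methods above all follow the same pattern: patch the swiftclient dependency with `autospec=True` so call signatures are validated, then assert on the mock rather than touching the network. A minimal self-contained sketch of that pattern (the `Client` class here is hypothetical, standing in for `swiftclient.client.Connection`):

```python
# Minimal sketch of the autospec mocking pattern used throughout these
# tests: patch a dependency so no real call happens, then assert on it.
from unittest import mock


class Client:
    def put_container(self, name):
        raise RuntimeError("would hit the network")


def ensure_container(client, name):
    client.put_container(name)
    return name


with mock.patch.object(Client, "put_container", autospec=True) as put_mock:
    client = Client()
    result = ensure_container(client, "container")

# autospec on an unbound method means the mock records `self` too.
put_mock.assert_called_once_with(client, "container")
print(result)
```

Because `autospec=True` copies the real signature, a test that calls the patched method with the wrong arguments fails immediately instead of silently recording the bad call.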
ironic-15.0.0/ironic/tests/unit/common/test_context.py0000664000175000017500000000605613652514273023112 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_context import context as oslo_context from ironic.common import context from ironic.tests import base as tests_base class RequestContextTestCase(tests_base.TestCase): def setUp(self): super(RequestContextTestCase, self).setUp() self.context_dict = { 'auth_token': 'auth_token1', "user": "user1", "project_id": "project1", "project_name": "somename", 'is_admin': True, 'read_only': True, 'show_deleted': True, 'request_id': 'id1', "is_public_api": True, "domain": "domain_id2", "user_domain": "domain_id3", "user_domain_name": "TreeDomain", "project_domain": "domain_id4", "roles": None, "overwrite": True } @mock.patch.object(oslo_context.RequestContext, "__init__", autospec=True) def test_create_context(self, context_mock): test_context = context.RequestContext() context_mock.assert_called_once_with(mock.ANY) self.assertFalse(test_context.is_public_api) def test_from_dict(self): test_context = context.RequestContext.from_dict( {'project_name': 'demo', 'is_public_api': True, 'domain_id': 'meow'}) self.assertEqual('demo', test_context.project_name) self.assertEqual('meow', test_context.user_domain) self.assertTrue(test_context.is_public_api) def test_to_policy_values(self): ctx = context.RequestContext(**self.context_dict) ctx_dict = ctx.to_policy_values() self.assertEqual('somename', 
ctx_dict['project_name']) self.assertTrue(ctx_dict['is_public_api']) def test_get_admin_context(self): admin_context = context.get_admin_context() self.assertTrue(admin_context.is_admin) @mock.patch.object(oslo_context, 'get_current', autospec=True) def test_thread_without_context(self, context_get_mock): self.context.update_store = mock.Mock() context_get_mock.return_value = None self.context.ensure_thread_contain_context() self.context.update_store.assert_called_once_with() @mock.patch.object(oslo_context, 'get_current', autospec=True) def test_thread_with_context(self, context_get_mock): self.context.update_store = mock.Mock() context_get_mock.return_value = self.context self.context.ensure_thread_contain_context() self.assertFalse(self.context.update_store.called) ironic-15.0.0/ironic/tests/unit/common/__init__.py0000664000175000017500000000000013652514273022105 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/tests/unit/common/test_driver_factory.py0000664000175000017500000006224013652514273024445 0ustar zuulzuul00000000000000# coding=utf-8 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
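The `from_dict`/`to_policy_values` tests above depend on a context surviving a dict round-trip. A self-contained sketch of that contract, with a hypothetical simplified `RequestContext` (the real one extends `oslo_context.context.RequestContext`):

```python
# Sketch (hypothetical, simplified) of the dict round-trip the context
# tests rely on: serialize a context, rebuild it, and keep service-specific
# flags like is_public_api across the boundary.
class RequestContext:
    def __init__(self, project_name=None, is_public_api=False,
                 user_domain=None):
        self.project_name = project_name
        self.is_public_api = is_public_api
        self.user_domain = user_domain

    def to_dict(self):
        return {'project_name': self.project_name,
                'is_public_api': self.is_public_api,
                'user_domain': self.user_domain}

    @classmethod
    def from_dict(cls, values):
        # Unknown keys are ignored so older payloads keep deserializing.
        return cls(project_name=values.get('project_name'),
                   is_public_api=values.get('is_public_api', False),
                   user_domain=values.get('user_domain'))


ctx = RequestContext.from_dict({'project_name': 'demo',
                                'is_public_api': True})
print(ctx.project_name, ctx.is_public_api)
```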
import mock from oslo_utils import uuidutils from stevedore import named from ironic.common import driver_factory from ironic.common import exception from ironic.conductor import task_manager from ironic.drivers import base as drivers_base from ironic.drivers import fake_hardware from ironic.drivers import hardware_type from ironic.drivers.modules import fake from ironic.drivers.modules import noop from ironic.tests import base from ironic.tests.unit.db import base as db_base from ironic.tests.unit.objects import utils as obj_utils class FakeEp(object): name = 'fake-hardware' class DriverLoadTestCase(db_base.DbTestCase): def _fake_init_name_err(self, *args, **kwargs): kwargs['on_load_failure_callback'](None, FakeEp, NameError('aaa')) def _fake_init_driver_err(self, *args, **kwargs): kwargs['on_load_failure_callback'](None, FakeEp, exception.DriverLoadError( driver='aaa', reason='bbb')) def test_driver_load_error_if_driver_enabled(self): self.config(enabled_hardware_types=['fake-hardware']) with mock.patch.object(named.NamedExtensionManager, '__init__', self._fake_init_driver_err): self.assertRaises( exception.DriverLoadError, driver_factory.HardwareTypesFactory._init_extension_manager) def test_wrap_in_driver_load_error_if_driver_enabled(self): self.config(enabled_hardware_types=['fake-hardware']) with mock.patch.object(named.NamedExtensionManager, '__init__', self._fake_init_name_err): self.assertRaises( exception.DriverLoadError, driver_factory.HardwareTypesFactory._init_extension_manager) @mock.patch.object(named.NamedExtensionManager, 'names', autospec=True) def test_no_driver_load_error_if_driver_disabled(self, mock_em): self.config(enabled_hardware_types=[]) with mock.patch.object(named.NamedExtensionManager, '__init__', self._fake_init_driver_err): driver_factory.HardwareTypesFactory._init_extension_manager() self.assertEqual(1, mock_em.call_count) @mock.patch.object(driver_factory.LOG, 'warning', autospec=True) def test_driver_duplicated_entry(self, 
mock_log): self.config(enabled_hardware_types=['fake-hardware', 'fake-hardware']) driver_factory.HardwareTypesFactory._init_extension_manager() self.assertEqual( ['fake-hardware'], driver_factory.HardwareTypesFactory._extension_manager.names()) self.assertTrue(mock_log.called) @mock.patch.object(driver_factory.LOG, 'warning', autospec=True) def test_driver_empty_entry(self, mock_log): self.config(enabled_hardware_types=['fake-hardware', '']) driver_factory.HardwareTypesFactory._init_extension_manager() self.assertEqual( ['fake-hardware'], driver_factory.HardwareTypesFactory._extension_manager.names()) self.assertTrue(mock_log.called) @mock.patch.object(driver_factory, '_warn_if_unsupported', autospec=True) def test_driver_init_checks_unsupported(self, mock_warn): self.config(enabled_hardware_types=['fake-hardware']) driver_factory.HardwareTypesFactory._init_extension_manager() self.assertEqual( ['fake-hardware'], driver_factory.HardwareTypesFactory._extension_manager.names()) self.assertTrue(mock_warn.called) class WarnUnsupportedDriversTestCase(base.TestCase): @mock.patch.object(driver_factory.LOG, 'warning', autospec=True) def _test__warn_if_unsupported(self, supported, mock_log): ext = mock.Mock() ext.obj = mock.Mock() ext.obj.supported = supported driver_factory._warn_if_unsupported(ext) if supported: self.assertFalse(mock_log.called) else: self.assertTrue(mock_log.called) def test__warn_if_unsupported_with_supported(self): self._test__warn_if_unsupported(True) def test__warn_if_unsupported_with_unsupported(self): self._test__warn_if_unsupported(False) class NetworkInterfaceFactoryTestCase(db_base.DbTestCase): @mock.patch.object(driver_factory, '_warn_if_unsupported', autospec=True) def test_build_driver_for_task(self, mock_warn): # flat, neutron, and noop network interfaces are enabled in base test # case factory = driver_factory.NetworkInterfaceFactory node = obj_utils.create_test_node(self.context, network_interface='flat') with 
task_manager.acquire(self.context, node.id) as task: extension_mgr = factory._extension_manager self.assertIn('flat', extension_mgr) self.assertIn('neutron', extension_mgr) self.assertIn('noop', extension_mgr) self.assertEqual(extension_mgr['flat'].obj, task.driver.network) self.assertEqual('ironic.hardware.interfaces.network', factory._entrypoint_name) self.assertEqual(['flat', 'neutron', 'noop'], sorted(factory._enabled_driver_list)) # NOTE(TheJulia) We should only check that the warn check is called, # as opposed to that the check is called a specific number of times, # during driver/interface loading in ironic. This is due to the fact # each activated interface or driver causes the number to increment. self.assertTrue(mock_warn.called) def test_build_driver_for_task_default_is_flat(self): # flat, neutron, and noop network interfaces are enabled in base test # case factory = driver_factory.NetworkInterfaceFactory node = obj_utils.create_test_node(self.context) with task_manager.acquire(self.context, node.id) as task: extension_mgr = factory._extension_manager self.assertIn('flat', extension_mgr) self.assertIn('neutron', extension_mgr) self.assertIn('noop', extension_mgr) self.assertEqual(extension_mgr['flat'].obj, task.driver.network) def test_build_driver_for_task_unknown_network_interface(self): node = obj_utils.create_test_node(self.context, network_interface='meow') self.assertRaises(exception.InterfaceNotFoundInEntrypoint, task_manager.acquire, self.context, node.id) class StorageInterfaceFactoryTestCase(db_base.DbTestCase): def test_build_interface_for_task(self): """Validate a node has no default storage interface.""" factory = driver_factory.StorageInterfaceFactory node = obj_utils.create_test_node(self.context, driver='fake-hardware') with task_manager.acquire(self.context, node.id) as task: manager = factory._extension_manager self.assertIn('noop', manager) self.assertEqual('noop', task.node.storage_interface) class 
NewDriverFactory(driver_factory.BaseDriverFactory): _entrypoint_name = 'woof' class NewFactoryTestCase(db_base.DbTestCase): def test_new_driver_factory_unknown_entrypoint(self): factory = NewDriverFactory() self.assertEqual('woof', factory._entrypoint_name) self.assertEqual([], factory._enabled_driver_list) class CheckAndUpdateNodeInterfacesTestCase(db_base.DbTestCase): def test_no_network_interface(self): node = obj_utils.get_test_node(self.context) self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) self.assertEqual('flat', node.network_interface) def test_none_network_interface(self): node = obj_utils.get_test_node(self.context, network_interface=None) self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) self.assertEqual('flat', node.network_interface) def test_no_network_interface_default_from_conf(self): self.config(default_network_interface='noop') node = obj_utils.get_test_node(self.context) self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) self.assertEqual('noop', node.network_interface) def test_create_node_valid_interfaces(self): node = obj_utils.get_test_node(self.context, network_interface='noop', storage_interface='noop') self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) self.assertEqual('noop', node.network_interface) self.assertEqual('noop', node.storage_interface) def test_create_node_invalid_network_interface(self): node = obj_utils.get_test_node(self.context, network_interface='banana') self.assertRaises(exception.InterfaceNotFoundInEntrypoint, driver_factory.check_and_update_node_interfaces, node) def _get_valid_default_interface_name(self, iface): i_name = 'fake' # there is no 'fake' network interface if iface == 'network': i_name = 'noop' return i_name def _set_config_interface_options_hardware_type(self): for iface in drivers_base.ALL_INTERFACES: i_name = self._get_valid_default_interface_name(iface) config_kwarg = {'enabled_%s_interfaces' % iface: [i_name], 
'default_%s_interface' % iface: i_name} self.config(**config_kwarg) def test_create_node_dynamic_driver_interfaces_set(self): self._set_config_interface_options_hardware_type() for iface in drivers_base.ALL_INTERFACES: iface_name = '%s_interface' % iface i_name = self._get_valid_default_interface_name(iface) node_kwargs = {'uuid': uuidutils.generate_uuid(), iface_name: i_name} node = obj_utils.get_test_node( self.context, driver='fake-hardware', **node_kwargs) driver_factory.check_and_update_node_interfaces(node) self.assertEqual(i_name, getattr(node, iface_name)) def test_node_update_dynamic_driver_set_interfaces(self): """Update interfaces for node with dynamic driver""" self._set_config_interface_options_hardware_type() for iface in drivers_base.ALL_INTERFACES: iface_name = '%s_interface' % iface node_kwargs = {'uuid': uuidutils.generate_uuid()} node = obj_utils.create_test_node(self.context, driver='fake-hardware', **node_kwargs) i_name = self._get_valid_default_interface_name(iface) setattr(node, iface_name, i_name) driver_factory.check_and_update_node_interfaces(node) self.assertEqual(i_name, getattr(node, iface_name)) class DefaultInterfaceTestCase(db_base.DbTestCase): def setUp(self): super(DefaultInterfaceTestCase, self).setUp() self.config(enabled_hardware_types=['manual-management']) self.driver = driver_factory.get_hardware_type('manual-management') def test_from_config(self): self.config(default_deploy_interface='direct') iface = driver_factory.default_interface(self.driver, 'deploy') self.assertEqual('direct', iface) def test_from_additional_defaults(self): self.config(default_storage_interface=None) iface = driver_factory.default_interface(self.driver, 'storage') self.assertEqual('noop', iface) def test_network_from_additional_defaults_hardware_type(self): self.config(default_network_interface=None) self.config(dhcp_provider='none', group='dhcp') self.config(enabled_network_interfaces=['neutron']) iface = driver_factory.default_interface(self.driver, 
'network') self.assertEqual('neutron', iface) def test_calculated_with_one(self): self.config(default_deploy_interface=None) self.config(enabled_deploy_interfaces=['direct']) iface = driver_factory.default_interface(self.driver, 'deploy') self.assertEqual('direct', iface) def test_calculated_with_two(self): self.config(default_deploy_interface=None) self.config(enabled_deploy_interfaces=['iscsi', 'direct']) iface = driver_factory.default_interface(self.driver, 'deploy') self.assertEqual('iscsi', iface) def test_calculated_with_unsupported(self): self.config(default_deploy_interface=None) # manual-management doesn't support fake deploy self.config(enabled_deploy_interfaces=['fake', 'direct']) iface = driver_factory.default_interface(self.driver, 'deploy') self.assertEqual('direct', iface) def test_calculated_no_answer(self): # manual-management supports no power interfaces self.config(default_power_interface=None) self.config(enabled_power_interfaces=[]) self.assertRaisesRegex( exception.NoValidDefaultForInterface, "For hardware type 'ManualManagementHardware', no default " "value found for power interface.", driver_factory.default_interface, self.driver, 'power') def test_calculated_no_answer_drivername(self): # manual-management instance (of entry-point driver named 'foo') # supports no power interfaces self.config(default_power_interface=None) self.config(enabled_power_interfaces=[]) self.assertRaisesRegex( exception.NoValidDefaultForInterface, "For hardware type 'foo', no default value found for power " "interface.", driver_factory.default_interface, self.driver, 'power', driver_name='foo') def test_calculated_no_answer_drivername_node(self): # for a node with manual-management instance (of entry-point driver # named 'foo'), no default power interface is supported self.config(default_power_interface=None) self.config(enabled_power_interfaces=[]) self.assertRaisesRegex( exception.NoValidDefaultForInterface, "For node bar with hardware type 'foo', no default " 
"value found for power interface.", driver_factory.default_interface, self.driver, 'power', driver_name='foo', node='bar') class TestFakeHardware(hardware_type.AbstractHardwareType): @property def supported_bios_interfaces(self): """List of supported bios interfaces.""" return [fake.FakeBIOS] @property def supported_boot_interfaces(self): """List of supported boot interfaces.""" return [fake.FakeBoot] @property def supported_console_interfaces(self): """List of supported console interfaces.""" return [fake.FakeConsole] @property def supported_deploy_interfaces(self): """List of supported deploy interfaces.""" return [fake.FakeDeploy] @property def supported_inspect_interfaces(self): """List of supported inspect interfaces.""" return [fake.FakeInspect] @property def supported_management_interfaces(self): """List of supported management interfaces.""" return [fake.FakeManagement] @property def supported_power_interfaces(self): """List of supported power interfaces.""" return [fake.FakePower] @property def supported_raid_interfaces(self): """List of supported raid interfaces.""" return [fake.FakeRAID] @property def supported_rescue_interfaces(self): """List of supported rescue interfaces.""" return [fake.FakeRescue] @property def supported_vendor_interfaces(self): """List of supported rescue interfaces.""" return [fake.FakeVendorB, fake.FakeVendorA] OPTIONAL_INTERFACES = (drivers_base.BareDriver().optional_interfaces + ['vendor']) class HardwareTypeLoadTestCase(db_base.DbTestCase): def setUp(self): super(HardwareTypeLoadTestCase, self).setUp() self.config(dhcp_provider=None, group='dhcp') self.ifaces = {} self.node_kwargs = {} for iface in drivers_base.ALL_INTERFACES: if iface == 'network': self.ifaces[iface] = 'noop' enabled = ['noop'] elif iface == 'storage': self.ifaces[iface] = 'noop' enabled = ['noop'] else: self.ifaces[iface] = 'fake' enabled = ['fake'] if iface in OPTIONAL_INTERFACES: enabled.append('no-%s' % iface) self.config(**{'enabled_%s_interfaces' % 
iface: enabled}) self.node_kwargs['%s_interface' % iface] = self.ifaces[iface] def test_get_hardware_type_existing(self): hw_type = driver_factory.get_hardware_type('fake-hardware') self.assertIsInstance(hw_type, fake_hardware.FakeHardware) def test_get_hardware_type_missing(self): self.assertRaises(exception.DriverNotFound, driver_factory.get_hardware_type, 'fake_agent') def test_build_driver_for_task(self): node = obj_utils.create_test_node(self.context, driver='fake-hardware', **self.node_kwargs) with task_manager.acquire(self.context, node.id) as task: for iface in drivers_base.ALL_INTERFACES: impl = getattr(task.driver, iface) self.assertIsNotNone(impl) def test_build_driver_for_task_incorrect(self): self.node_kwargs['power_interface'] = 'foobar' node = obj_utils.create_test_node(self.context, driver='fake-hardware', **self.node_kwargs) self.assertRaises(exception.InterfaceNotFoundInEntrypoint, task_manager.acquire, self.context, node.id) def test_build_driver_for_task_fake(self): # Checks that fake driver is compatible with any interfaces, even those # which are not declared in supported__interfaces result. self.node_kwargs['raid_interface'] = 'no-raid' node = obj_utils.create_test_node(self.context, driver='fake-hardware', **self.node_kwargs) with task_manager.acquire(self.context, node.id) as task: for iface in drivers_base.ALL_INTERFACES: impl = getattr(task.driver, iface) self.assertIsNotNone(impl) self.assertIsInstance(task.driver.raid, noop.NoRAID) @mock.patch.object(driver_factory, 'get_hardware_type', autospec=True, return_value=TestFakeHardware()) def test_build_driver_for_task_not_fake(self, mock_get_hw_type): # Checks that other hardware types do check compatibility. 
self.node_kwargs['raid_interface'] = 'no-raid' node = obj_utils.create_test_node(self.context, driver='fake-2', **self.node_kwargs) self.assertRaises(exception.IncompatibleInterface, task_manager.acquire, self.context, node.id) mock_get_hw_type.assert_called_once_with('fake-2') def test_build_driver_for_task_no_defaults(self): self.config(dhcp_provider=None, group='dhcp') for iface in drivers_base.ALL_INTERFACES: if iface not in ['network', 'storage']: self.config(**{'enabled_%s_interfaces' % iface: []}) self.config(**{'default_%s_interface' % iface: None}) node = obj_utils.create_test_node(self.context, driver='fake-hardware') self.assertRaises(exception.NoValidDefaultForInterface, task_manager.acquire, self.context, node.id) def test_build_driver_for_task_calculated_defaults(self): self.config(dhcp_provider=None, group='dhcp') node = obj_utils.create_test_node(self.context, driver='fake-hardware') with task_manager.acquire(self.context, node.id) as task: for iface in drivers_base.ALL_INTERFACES: impl = getattr(task.driver, iface) self.assertIsNotNone(impl) def test_build_driver_for_task_configured_defaults(self): for iface in drivers_base.ALL_INTERFACES: self.config(**{'default_%s_interface' % iface: self.ifaces[iface]}) node = obj_utils.create_test_node(self.context, driver='fake-hardware') with task_manager.acquire(self.context, node.id) as task: for iface in drivers_base.ALL_INTERFACES: impl = getattr(task.driver, iface) self.assertIsNotNone(impl) self.assertEqual(self.ifaces[iface], getattr(task.node, '%s_interface' % iface)) def test_build_driver_for_task_bad_default(self): self.config(default_power_interface='foobar') node = obj_utils.create_test_node(self.context, driver='fake-hardware') self.assertRaises(exception.InterfaceNotFoundInEntrypoint, task_manager.acquire, self.context, node.id) def test_no_storage_interface(self): node = obj_utils.get_test_node(self.context) self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) 
self.assertEqual('noop', node.storage_interface) def test_none_storage_interface(self): node = obj_utils.get_test_node(self.context, storage_interface=None) self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) self.assertEqual('noop', node.storage_interface) def test_no_storage_interface_default_from_conf(self): self.config(enabled_storage_interfaces=['noop', 'fake']) self.config(default_storage_interface='fake') node = obj_utils.get_test_node(self.context) self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) self.assertEqual('fake', node.storage_interface) def test_invalid_storage_interface(self): node = obj_utils.get_test_node(self.context, storage_interface='scoop') self.assertRaises(exception.InterfaceNotFoundInEntrypoint, driver_factory.check_and_update_node_interfaces, node) def test_no_rescue_interface_default_from_conf(self): self.config(enabled_rescue_interfaces=['fake']) self.config(default_rescue_interface='fake') node = obj_utils.get_test_node(self.context, driver='fake-hardware') self.assertTrue(driver_factory.check_and_update_node_interfaces(node)) self.assertEqual('fake', node.rescue_interface) def test_invalid_rescue_interface(self): node = obj_utils.get_test_node(self.context, driver='fake-hardware', rescue_interface='scoop') self.assertRaises(exception.InterfaceNotFoundInEntrypoint, driver_factory.check_and_update_node_interfaces, node) def test_no_raid_interface_no_default(self): # NOTE(rloo): It doesn't seem possible to not have a default interface # for storage, so we'll test this case with raid. 
self.config(enabled_raid_interfaces=[]) node = obj_utils.get_test_node(self.context, driver='fake-hardware') self.assertRaisesRegex( exception.NoValidDefaultForInterface, "raid interface", driver_factory.check_and_update_node_interfaces, node) def _test_enabled_supported_interfaces(self, enable_storage): ht = fake_hardware.FakeHardware() expected = { 'bios': set(['fake', 'no-bios']), 'boot': set(['fake']), 'console': set(['fake', 'no-console']), 'deploy': set(['fake']), 'inspect': set(['fake', 'no-inspect']), 'management': set(['fake']), 'network': set(['noop']), 'power': set(['fake']), 'raid': set(['fake', 'no-raid']), 'rescue': set(['fake', 'no-rescue']), 'storage': set(['noop']), 'vendor': set(['fake', 'no-vendor']) } if enable_storage: self.config(enabled_storage_interfaces=['fake']) expected['storage'] = set(['fake']) mapping = driver_factory.enabled_supported_interfaces(ht) self.assertEqual(expected, mapping) def test_enabled_supported_interfaces(self): self._test_enabled_supported_interfaces(False) def test_enabled_supported_interfaces_non_default(self): self._test_enabled_supported_interfaces(True) ironic-15.0.0/ironic/tests/unit/common/test_states.py0000664000175000017500000000263413652514273022727 0ustar zuulzuul00000000000000# Copyright (C) 2015 Intel Corporation. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
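The `DefaultInterfaceTestCase` methods above (explicit config default wins; otherwise the first enabled interface the hardware type supports; error when nothing overlaps) can be sketched as a small standalone function. All names here are hypothetical simplifications of `ironic.common.driver_factory.default_interface`:

```python
# Hypothetical sketch of the default-interface calculation exercised by
# DefaultInterfaceTestCase: an explicit configured default wins; otherwise
# take the first enabled interface in the hardware type's supported order.
def default_interface(supported, enabled, configured=None):
    if configured is not None:
        if configured not in enabled:
            raise LookupError('%s is not an enabled interface' % configured)
        return configured
    for name in supported:  # supported order expresses preference
        if name in enabled:
            return name
    raise LookupError('no valid default found')


print(default_interface(['iscsi', 'direct'], ['direct']))
print(default_interface(['iscsi', 'direct'], ['iscsi', 'direct']))
```

This mirrors `test_calculated_with_one` and `test_calculated_with_two` above: with only `direct` enabled the answer is `direct`; with both enabled, the supported ordering makes it `iscsi`.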
from ironic.common import states
from ironic.tests import base


class StatesTest(base.TestCase):

    def test_state_values_length(self):
        """test_state_values_length

        State values can be a maximum of 15 characters because they are
        stored in the database and the size of the database entry is 15
        characters. This is specified in db/sqlalchemy/models.py
        """
        for key, value in states.__dict__.items():
            # Assumption: A state variable name is all UPPERCASE and
            # contents are a string.
            if key.upper() == key and isinstance(value, str):
                self.assertLessEqual(
                    len(value), 15,
                    "Value for state: {} is greater than 15 "
                    "characters".format(key))

ironic-15.0.0/ironic/tests/unit/__init__.py

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
:mod:`ironic.tests.unit` -- ironic unit tests
=====================================================

.. automodule:: ironic.tests.unit
   :platform: Unix
"""

# TODO(tenbrae): move eventlet imports to ironic.__init__ once we move to PBR
import eventlet
from oslo_config import cfg
from oslo_log import log

from ironic import objects

eventlet.monkey_patch(os=False)

log.register_options(cfg.CONF)
log.setup(cfg.CONF, 'ironic')

# NOTE(comstud): Make sure we have all of the objects loaded.
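The 15-character check in `test_state_values_length` can be distilled into a small standalone helper. A sketch, assuming a stand-in namespace rather than the real `ironic.common.states` module (names here are illustrative):

```python
# The DB column for a node's state is 15 characters wide, so every
# UPPERCASE string constant in the states module must fit.
MAX_STATE_LENGTH = 15


class states:  # illustrative stand-in for ironic.common.states
    ACTIVE = 'active'
    DEPLOYWAIT = 'wait call-back'
    VERBS = {}  # non-string attribute: ignored by the scan


def oversized_states(namespace, limit=MAX_STATE_LENGTH):
    # Same heuristic as the test: all-UPPERCASE names holding strings
    # are assumed to be state values.
    return [key for key, value in vars(namespace).items()
            if key.upper() == key and isinstance(value, str)
            and len(value) > limit]
```

Running `oversized_states` against the real module is exactly what the unit test asserts is empty.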
# We do this at module import time, because we may be using mock decorators
# in our tests that run at import time.
objects.register_all()

# NOTE(dtantsur): this module creates mocks which may be used at random points
# of time, so it must be imported as early as possible.
from ironic.tests.unit.drivers import third_party_driver_mocks  # noqa

ironic-15.0.0/ironic/tests/base.py

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Base classes for our unit tests.

Allows overriding of config for use of fakes, and some black magic for
inline callbacks.
""" import copy import os import subprocess import sys import tempfile import eventlet eventlet.monkey_patch(os=False) # noqa E402 import fixtures from ironic_lib import utils from oslo_concurrency import processutils from oslo_config import fixture as config_fixture from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import uuidutils from oslotest import base as oslo_test_base from ironic.common import config as ironic_config from ironic.common import context as ironic_context from ironic.common import driver_factory from ironic.common import hash_ring from ironic.common import utils as common_utils from ironic.conf import CONF from ironic.drivers import base as drivers_base from ironic.objects import base as objects_base from ironic.tests.unit import policy_fixture logging.register_options(CONF) logging.setup(CONF, 'ironic') class ReplaceModule(fixtures.Fixture): """Replace a module with a fake module.""" def __init__(self, name, new_value): self.name = name self.new_value = new_value def _restore(self, old_value): sys.modules[self.name] = old_value def setUp(self): super(ReplaceModule, self).setUp() old_value = sys.modules.get(self.name) sys.modules[self.name] = self.new_value self.addCleanup(self._restore, old_value) class TestingException(Exception): pass class TestCase(oslo_test_base.BaseTestCase): """Test case base class for all unit tests.""" # By default block execution of utils.execute() and related functions. block_execute = True def setUp(self): """Run before each test method to initialize test environment.""" super(TestCase, self).setUp() self.context = ironic_context.get_admin_context() self._set_config() # NOTE(danms): Make sure to reset us back to non-remote objects # for each test to avoid interactions. 
Also, backup the object # registry self._base_test_obj_backup = copy.copy( objects_base.IronicObjectRegistry.obj_classes()) self.addCleanup(self._restore_obj_registry) self.addCleanup(self._clear_attrs) self.addCleanup(hash_ring.HashRingManager().reset) self.useFixture(fixtures.EnvironmentVariable('http_proxy')) self.policy = self.useFixture(policy_fixture.PolicyFixture()) driver_factory.HardwareTypesFactory._extension_manager = None for factory in driver_factory._INTERFACE_LOADERS.values(): factory._extension_manager = None # Ban running external processes via 'execute' like functions. If the # patched function is called, an exception is raised to warn the # tester. if self.block_execute: # NOTE(jlvillal): Intentionally not using mock as if you mock a # mock it causes things to not work correctly. As doing an # autospec=True causes strangeness. By using a simple function we # can then mock it without issue. self.patch(processutils, 'execute', do_not_call) self.patch(subprocess, 'call', do_not_call) self.patch(subprocess, 'check_call', do_not_call) self.patch(subprocess, 'check_output', do_not_call) self.patch(utils, 'execute', do_not_call) # subprocess.Popen is a class self.patch(subprocess, 'Popen', DoNotCallPopen) def _set_config(self): self.cfg_fixture = self.useFixture(config_fixture.Config(CONF)) self.config(use_stderr=False, fatal_exception_format_errors=True, tempdir=tempfile.tempdir) self.config(cleaning_network=uuidutils.generate_uuid(), group='neutron') self.config(provisioning_network=uuidutils.generate_uuid(), group='neutron') self.config(rescuing_network=uuidutils.generate_uuid(), group='neutron') self.config(enabled_hardware_types=['fake-hardware', 'manual-management']) for iface in drivers_base.ALL_INTERFACES: default = None # Restore some reasonable defaults if iface == 'network': values = ['flat', 'noop', 'neutron'] else: values = ['fake'] if iface == 'deploy': values.extend(['iscsi', 'direct']) elif iface == 'boot': values.append('pxe') elif 
iface == 'storage': default = 'noop' values.append('noop') elif iface not in {'network', 'power', 'management'}: values.append('no-%s' % iface) self.config(**{'enabled_%s_interfaces' % iface: values, 'default_%s_interface' % iface: default}) self.set_defaults(host='fake-mini', debug=True) self.set_defaults(connection="sqlite://", sqlite_synchronous=False, group='database') ironic_config.parse_args([], default_config_files=[]) def _restore_obj_registry(self): objects_base.IronicObjectRegistry._registry._obj_classes = ( self._base_test_obj_backup) def _clear_attrs(self): # Delete attributes that don't start with _ so they don't pin # memory around unnecessarily for the duration of the test # suite for key in [k for k in self.__dict__ if k[0] != '_']: del self.__dict__[key] def config(self, **kw): """Override config options for a test.""" self.cfg_fixture.config(**kw) def config_temp_dir(self, option, group=None): """Override a config option with a temporary directory.""" temp_dir = tempfile.mkdtemp() self.addCleanup(lambda: common_utils.rmtree_without_raise(temp_dir)) self.config(**{option: temp_dir, 'group': group}) def set_defaults(self, **kw): """Set default values of config options.""" group = kw.pop('group', None) for o, v in kw.items(): self.cfg_fixture.set_default(o, v, group=group) def path_get(self, project_file=None): """Get the absolute path to a file. Used for testing the API. :param project_file: File whose path to return. Default: None. :returns: path to the specified file, or path to project root. 
""" root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', ) ) if project_file: return os.path.join(root, project_file) else: return root def assertJsonEqual(self, expected, observed): """Asserts that 2 complex data structures are json equivalent.""" self.assertEqual(jsonutils.dumps(expected, sort_keys=True), jsonutils.dumps(observed, sort_keys=True)) def assertNotificationEqual(self, notif_args, service, host, event_type, level): """Asserts properties of arguments passed when creating a notification. :param notif_args: dict of arguments notification instantiated with :param service: expected service that emits the notification :param host: expected host that emits the notification :param event_type: expected value of EventType field of notification as a string :param level: expected NotificationLevel """ self.assertEqual(service, notif_args['publisher'].service) self.assertEqual(host, notif_args['publisher'].host) self.assertEqual(event_type, notif_args['event_type']. to_event_type_field()) self.assertEqual(level, notif_args['level']) def do_not_call(*args, **kwargs): """Helper function to raise an exception if it is called""" raise Exception( "Don't call ironic_lib.utils.execute() / " "processutils.execute() or similar functions in tests!") class DoNotCallPopen(object): """Helper class to mimic subprocess.popen() It's job is to raise an exception if it is called. We create stub functions so mocks that use autospec=True will work. """ def __init__(self, *args, **kwargs): do_not_call(*args, **kwargs) def communicate(input=None): pass def kill(): pass def poll(): pass def terminate(): pass def wait(): pass ironic-15.0.0/ironic/tests/__init__.py0000664000175000017500000000000013652514273017636 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/version.py0000664000175000017500000000130113652514273016427 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pbr.version

version_info = pbr.version.VersionInfo('ironic')

ironic-15.0.0/ironic/conf/pxe.py

# Copyright 2016 Intel Corporation
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('pxe_append_params', default='nofb nomodeset vga=normal', help=_('Additional append parameters for baremetal PXE boot.')), cfg.StrOpt('default_ephemeral_format', default='ext4', help=_('Default file system format for ephemeral partition, ' 'if one is created.')), cfg.StrOpt('images_path', default='/var/lib/ironic/images/', help=_('On the ironic-conductor node, directory where images ' 'are stored on disk.')), cfg.StrOpt('instance_master_path', default='/var/lib/ironic/master_images', help=_('On the ironic-conductor node, directory where master ' 'instance images are stored on disk. ' 'Setting to the empty string disables image caching.')), cfg.IntOpt('image_cache_size', default=20480, help=_('Maximum size (in MiB) of cache for master images, ' 'including those in use.')), # 10080 here is 1 week - 60*24*7. It is entirely arbitrary in the absence # of a facility to disable the ttl entirely. cfg.IntOpt('image_cache_ttl', default=10080, help=_('Maximum TTL (in minutes) for old master images in ' 'cache.')), cfg.StrOpt('pxe_config_template', default=os.path.join( '$pybasedir', 'drivers/modules/pxe_config.template'), help=_('On ironic-conductor node, template file for PXE ' 'configuration.')), cfg.StrOpt('uefi_pxe_config_template', default=os.path.join( '$pybasedir', 'drivers/modules/pxe_grub_config.template'), help=_('On ironic-conductor node, template file for PXE ' 'configuration for UEFI boot loader.')), cfg.DictOpt('pxe_config_template_by_arch', default={}, help=_('On ironic-conductor node, template file for PXE ' 'configuration per node architecture. ' 'For example: ' 'aarch64:/opt/share/grubaa64_pxe_config.template')), cfg.StrOpt('tftp_server', default='$my_ip', help=_("IP address of ironic-conductor node's TFTP server.")), cfg.StrOpt('tftp_root', default='/tftpboot', help=_("ironic-conductor node's TFTP root path. 
The " "ironic-conductor must have read/write access to this " "path.")), cfg.StrOpt('tftp_master_path', default='/tftpboot/master_images', help=_('On ironic-conductor node, directory where master TFTP ' 'images are stored on disk. ' 'Setting to the empty string disables image caching.')), cfg.IntOpt('dir_permission', help=_("The permission that will be applied to the TFTP " "folders upon creation. This should be set to the " "permission such that the tftpserver has access to " "read the contents of the configured TFTP folder. This " "setting is only required when the operating system's " "umask is restrictive such that ironic-conductor is " "creating files that cannot be read by the TFTP server. " "Setting to will result in the operating " "system's umask to be utilized for the creation of new " "tftp folders. It is recommended that an octal " "representation is specified. For example: 0o755")), cfg.StrOpt('pxe_bootfile_name', default='pxelinux.0', help=_('Bootfile DHCP parameter.')), cfg.StrOpt('pxe_config_subdir', default='pxelinux.cfg', help=_('Directory in which to create symbolic links which ' 'represent the MAC or IP address of the ports on ' 'a node and allow boot loaders to load the PXE ' 'file for the node. This directory name is relative ' 'to the PXE or iPXE folders.')), cfg.StrOpt('uefi_pxe_bootfile_name', default='bootx64.efi', help=_('Bootfile DHCP parameter for UEFI boot mode.')), cfg.DictOpt('pxe_bootfile_name_by_arch', default={}, help=_('Bootfile DHCP parameter per node architecture. ' 'For example: aarch64:grubaa64.efi')), cfg.StrOpt('ipxe_boot_script', default=os.path.join( '$pybasedir', 'drivers/modules/boot.ipxe'), help=_('On ironic-conductor node, the path to the main iPXE ' 'script file.')), cfg.IntOpt('ipxe_timeout', default=0, help=_('Timeout value (in seconds) for downloading an image ' 'via iPXE. Defaults to 0 (no timeout)')), cfg.IntOpt('boot_retry_timeout', min=60, help=_('Timeout (in seconds) after which PXE boot should be ' 'retried. 
Must be less than [conductor]' 'deploy_callback_timeout. Disabled by default.')), cfg.IntOpt('boot_retry_check_interval', default=90, min=1, help=_('Interval (in seconds) between periodic checks on PXE ' 'boot retry. Has no effect if boot_retry_timeout ' 'is not set.')), cfg.StrOpt('ip_version', default='4', choices=[('4', _('IPv4')), ('6', _('IPv6'))], help=_('The IP version that will be used for PXE booting. ' 'Defaults to 4. EXPERIMENTAL')), cfg.BoolOpt('ipxe_use_swift', default=False, help=_("Download deploy and rescue images directly from swift " "using temporary URLs. " "If set to false (default), images are downloaded " "to the ironic-conductor node and served over its " "local HTTP server. " "Applicable only when 'ipxe' compatible boot interface " "is used.")), ] def register_opts(conf): conf.register_opts(opts, group='pxe') ironic-15.0.0/ironic/conf/irmc.py0000664000175000017500000001230213652514273016624 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2015 FUJITSU LIMITED # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
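The `[pxe]` options above define both a generic `pxe_config_template` and a per-architecture `pxe_config_template_by_arch` map (e.g. `aarch64:/opt/share/grubaa64_pxe_config.template`). The documented lookup is a dict get with a fallback; a sketch under that assumption (function name and paths are illustrative, not ironic's actual helper):

```python
# Prefer an arch-specific template, else fall back to the global one.
def pick_pxe_template(node_arch, by_arch, default_template):
    return by_arch.get(node_arch, default_template)


# Illustrative config values mirroring the option help text.
by_arch = {'aarch64': '/opt/share/grubaa64_pxe_config.template'}
default = '/etc/ironic/pxe_config.template'
```

A node whose architecture has no dedicated entry simply gets the generic template.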
from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('remote_image_share_root', default='/remote_image_share_root', help=_('Ironic conductor node\'s "NFS" or "CIFS" root path')), cfg.StrOpt('remote_image_server', help=_('IP of remote image server')), cfg.StrOpt('remote_image_share_type', default='CIFS', choices=[('CIFS', _('CIFS (Common Internet File System) ' 'protocol')), ('NFS', _('NFS (Network File System) protocol'))], ignore_case=True, help=_('Share type of virtual media')), cfg.StrOpt('remote_image_share_name', default='share', help=_('share name of remote_image_server')), cfg.StrOpt('remote_image_user_name', help=_('User name of remote_image_server')), cfg.StrOpt('remote_image_user_password', secret=True, help=_('Password of remote_image_user_name')), cfg.StrOpt('remote_image_user_domain', default='', help=_('Domain name of remote_image_user_name')), cfg.PortOpt('port', default=443, choices=[(443, _('port 443')), (80, _('port 80'))], help=_('Port to be used for iRMC operations')), cfg.StrOpt('auth_method', default='basic', choices=[('basic', _('Basic authentication')), ('digest', _('Digest authentication'))], help=_('Authentication method to be used for iRMC ' 'operations')), cfg.IntOpt('client_timeout', default=60, help=_('Timeout (in seconds) for iRMC operations')), cfg.StrOpt('sensor_method', default='ipmitool', choices=[('ipmitool', _('IPMItool')), ('scci', _('Fujitsu SCCI (ServerView Common Command ' 'Interface)'))], help=_('Sensor data retrieval method.')), cfg.StrOpt('snmp_version', default='v2c', choices=[('v1', _('SNMPv1')), ('v2c', _('SNMPv2c')), ('v3', _('SNMPv3'))], help=_('SNMP protocol version')), cfg.PortOpt('snmp_port', default=161, help=_('SNMP port')), cfg.StrOpt('snmp_community', default='public', help=_('SNMP community. Required for versions "v1" and "v2c"')), cfg.StrOpt('snmp_security', help=_('SNMP security name. 
Required for version "v3"')), cfg.IntOpt('snmp_polling_interval', default=10, help='SNMP polling interval in seconds'), cfg.IntOpt('clean_priority_restore_irmc_bios_config', default=0, help=_('Priority for restore_irmc_bios_config clean step.')), cfg.ListOpt('gpu_ids', default=[], help=_('List of vendor IDs and device IDs for GPU device to ' 'inspect. List items are in format vendorID/deviceID ' 'and separated by commas. GPU inspection will use this ' 'value to count the number of GPU device in a node. If ' 'this option is not defined, then leave out ' 'pci_gpu_devices in capabilities property. ' 'Sample gpu_ids value: 0x1000/0x0079,0x2100/0x0080')), cfg.ListOpt('fpga_ids', default=[], help=_('List of vendor IDs and device IDs for CPU FPGA to ' 'inspect. List items are in format vendorID/deviceID ' 'and separated by commas. CPU inspection will use this ' 'value to find existence of CPU FPGA in a node. If ' 'this option is not defined, then leave out ' 'CUSTOM_CPU_FPGA in node traits. ' 'Sample fpga_ids value: 0x1000/0x0079,0x2100/0x0080')), cfg.IntOpt('query_raid_config_fgi_status_interval', min=1, default=300, help=_('Interval (in seconds) between periodic RAID status ' 'checks to determine whether the asynchronous RAID ' 'configuration was successfully finished or not. ' 'Foreground Initialization (FGI) will start 5 minutes ' 'after creating virtual drives.')), ] def register_opts(conf): conf.register_opts(opts, group='irmc') ironic-15.0.0/ironic/conf/snmp.py0000664000175000017500000000341413652514273016653 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2013,2014 Cray Inc # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.IntOpt('power_timeout', default=10, help=_('Seconds to wait for power action to be completed')), # NOTE(yuriyz): some of SNMP-enabled hardware have own options for pause # between off and on. This option guarantees minimal value. cfg.IntOpt('reboot_delay', default=0, min=0, help=_('Time (in seconds) to sleep between when rebooting ' '(powering off and on again)')), cfg.FloatOpt('udp_transport_timeout', default=1.0, min=0.0, help=_('Response timeout in seconds used for UDP transport. ' 'Timeout should be a multiple of 0.5 seconds and ' 'is applicable to each retry.')), cfg.IntOpt('udp_transport_retries', default=5, min=0, help=_('Maximum number of UDP request retries, ' '0 means no retries.')), ] def register_opts(conf): conf.register_opts(opts, group='snmp') ironic-15.0.0/ironic/conf/console.py0000664000175000017500000000564013652514273017343 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
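Per the `[snmp]` help text above, `udp_transport_timeout` applies to each attempt and `udp_transport_retries` counts retries beyond the first try, so the worst-case wait per request is `timeout * (retries + 1)`. A sketch of that arithmetic (the helper name is illustrative; in ironic these values are handed to the SNMP library):

```python
# Worst-case SNMP UDP wait: the first attempt plus each retry can each
# block for up to `timeout` seconds.
def worst_case_udp_wait(timeout=1.0, retries=5):
    if timeout % 0.5 != 0:
        # The option docs require a multiple of 0.5 seconds.
        raise ValueError('timeout should be a multiple of 0.5 seconds')
    return timeout * (retries + 1)
```

With the defaults (1.0 s timeout, 5 retries) an unreachable device can stall a power action for up to 6 seconds before the driver gives up.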
See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('terminal', default='shellinaboxd', help=_('Path to serial console terminal program. Used only ' 'by Shell In A Box console.')), cfg.StrOpt('terminal_cert_dir', help=_('Directory containing the terminal SSL cert (PEM) for ' 'serial console access. Used only by Shell In A Box ' 'console.')), cfg.StrOpt('terminal_pid_dir', help=_('Directory for holding terminal pid files. ' 'If not specified, the temporary directory ' 'will be used.')), cfg.IntOpt('terminal_timeout', default=600, min=0, help=_('Timeout (in seconds) for the terminal session to be ' 'closed on inactivity. Set to 0 to disable timeout. ' 'Used only by Socat console.')), cfg.IntOpt('subprocess_checking_interval', default=1, help=_('Time interval (in seconds) for checking the status of ' 'console subprocess.')), cfg.IntOpt('subprocess_timeout', default=10, help=_('Time (in seconds) to wait for the console subprocess ' 'to start.')), cfg.IntOpt('kill_timeout', default=1, help=_('Time (in seconds) to wait for the shellinabox console ' 'subprocess to exit before sending SIGKILL signal.')), cfg.IPOpt('socat_address', default='$my_ip', help=_('IP address of Socat service running on the host of ' 'ironic conductor. Used only by Socat console.')), cfg.StrOpt('port_range', regex=r'^\d+:\d+$', sample_default='10000:20000', help=_('A range of ports available to be used for the console ' 'proxy service running on the host of ironic ' 'conductor, in the form of :. This option ' 'is used by both Shellinabox and Socat console')), ] def register_opts(conf): conf.register_opts(opts, group='console') ironic-15.0.0/ironic/conf/api.py0000664000175000017500000000660613652514273016455 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. 
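The `port_range` option above is validated with the regex `^\d+:\d+$` (e.g. `10000:20000`). A sketch of parsing it into a usable pair, reusing that same pattern (the helper and the extra start/stop ordering check are illustrative, not ironic's code):

```python
import re

# Same shape the oslo.config option enforces: "<start>:<stop>".
PORT_RANGE_RE = re.compile(r'^\d+:\d+$')


def parse_port_range(value):
    if not PORT_RANGE_RE.match(value):
        raise ValueError('port_range must look like "<start>:<stop>"')
    start, stop = (int(part) for part in value.split(':'))
    if start > stop:
        raise ValueError('start must not exceed stop')
    return start, stop
```

The console proxy can then allocate per-node terminal ports from the returned interval.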
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.HostAddressOpt('host_ip', default='0.0.0.0', help=_('The IP address or hostname on which ironic-api ' 'listens.')), cfg.PortOpt('port', default=6385, help=_('The TCP port on which ironic-api listens.')), cfg.IntOpt('max_limit', default=1000, help=_('The maximum number of items returned in a single ' 'response from a collection resource.')), cfg.StrOpt('public_endpoint', help=_("Public URL to use when building the links to the API " "resources (for example, \"https://ironic.rocks:6384\")." " If None the links will be built using the request's " "host URL. If the API is operating behind a proxy, you " "will want to change this to represent the proxy's URL. " "Defaults to None. " "Ignored when proxy headers parsing is enabled via " "[oslo_middleware]enable_proxy_headers_parsing option.") ), cfg.IntOpt('api_workers', help=_('Number of workers for OpenStack Ironic API service. ' 'The default is equal to the number of CPUs available ' 'if that can be determined, else a default worker ' 'count of 1 is returned.')), cfg.BoolOpt('enable_ssl_api', default=False, help=_("Enable the integrated stand-alone API to service " "requests via HTTPS instead of HTTP. 
If there is a " "front-end service performing HTTPS offloading from " "the service, this option should be False; note, you " "will want to enable proxy headers parsing with " "[oslo_middleware]enable_proxy_headers_parsing " "option or configure [api]public_endpoint option " "to set URLs in responses to the SSL terminated one.")), cfg.BoolOpt('restrict_lookup', default=True, help=_('Whether to restrict the lookup API to only nodes ' 'in certain states.')), cfg.IntOpt('ramdisk_heartbeat_timeout', default=300, help=_('Maximum interval (in seconds) for agent heartbeats.')), ] opt_group = cfg.OptGroup(name='api', title='Options for the ironic-api service') def register_opts(conf): conf.register_group(opt_group) conf.register_opts(opts, group=opt_group) ironic-15.0.0/ironic/conf/redfish.py0000664000175000017500000000671613652514273017332 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.IntOpt('connection_attempts', min=1, default=5, help=_('Maximum number of attempts to try to connect ' 'to Redfish')), cfg.IntOpt('connection_retry_interval', min=1, default=4, help=_('Number of seconds to wait between attempts to ' 'connect to Redfish')), cfg.IntOpt('connection_cache_size', min=0, default=1000, help=_('Maximum Redfish client connection cache size. 
' 'Redfish driver would strive to reuse authenticated ' 'BMC connections (obtained through Redfish Session ' 'Service). This option caps the maximum number of ' 'connections to maintain. The value of `0` disables ' 'client connection caching completely.')), cfg.StrOpt('auth_type', choices=[('basic', _('Use HTTP basic authentication')), ('session', _('Use HTTP session authentication')), ('auto', _('Try HTTP session authentication first, ' 'fall back to basic HTTP authentication'))], default='auto', help=_('Redfish HTTP client authentication method.')), cfg.BoolOpt('use_swift', default=True, help=_('Upload generated ISO images for virtual media boot to ' 'Swift, then pass temporary URL to BMC for booting the ' 'node. If set to false, images are placed on the ' 'ironic-conductor node and served over its ' 'local HTTP server.')), cfg.StrOpt('swift_container', default='ironic_redfish_container', help=_('The Swift container to store Redfish driver data. ' 'Applies only when `use_swift` is enabled.')), cfg.IntOpt('swift_object_expiry_timeout', default=900, help=_('Amount of time in seconds for Swift objects to ' 'auto-expire. Applies only when `use_swift` is ' 'enabled.')), cfg.StrOpt('kernel_append_params', default='nofb nomodeset vga=normal', help=_('Additional kernel parameters to pass down to the ' 'instance kernel. These parameters can be consumed by ' 'the kernel or by the applications by reading ' '/proc/cmdline. Mind severe cmdline size limit! Can be ' 'overridden by `instance_info/kernel_append_params` ' 'property.')), ] def register_opts(conf): conf.register_opts(opts, group='redfish') ironic-15.0.0/ironic/conf/ansible.py0000664000175000017500000001650713652514273017322 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
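The `[redfish]kernel_append_params` help text above says the config default can be overridden by a node's `instance_info/kernel_append_params`. A sketch of that precedence, modelling the node as a plain dict for illustration (the function is a stand-in, not ironic's actual lookup):

```python
# Config-level default from the option definition above.
CONF_DEFAULT = 'nofb nomodeset vga=normal'


def kernel_append_params(node, conf_default=CONF_DEFAULT):
    # Per-node instance_info wins over the [redfish] config default.
    return node.get('instance_info', {}).get(
        'kernel_append_params', conf_default)
```

Operators can thus tune kernel command lines per node without touching the global configuration.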
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('ansible_extra_args', help=_('Extra arguments to pass on every ' 'invocation of Ansible.')), cfg.IntOpt('verbosity', min=0, max=4, help=_('Set ansible verbosity level requested when invoking ' '"ansible-playbook" command. ' '4 includes detailed SSH session logging. ' 'Default is 4 when global debug is enabled ' 'and 0 otherwise.')), cfg.StrOpt('ansible_playbook_script', default='ansible-playbook', help=_('Path to "ansible-playbook" script. ' 'Default will search the $PATH configured for user ' 'running ironic-conductor process. ' 'Provide the full path when ansible-playbook is not in ' '$PATH or installed in not default location.')), cfg.StrOpt('playbooks_path', default=os.path.join('$pybasedir', 'drivers/modules/ansible/playbooks'), help=_('Path to directory with playbooks, roles and ' 'local inventory.')), cfg.StrOpt('config_file_path', default=os.path.join( '$pybasedir', 'drivers/modules/ansible/playbooks/ansible.cfg'), help=_('Path to ansible configuration file. If set to empty, ' 'system default will be used.')), cfg.IntOpt('post_deploy_get_power_state_retries', min=0, default=6, help=_('Number of times to retry getting power state to check ' 'if bare metal node has been powered off after a soft ' 'power off. 
Value of 0 means do not retry on failure.')), cfg.IntOpt('post_deploy_get_power_state_retry_interval', min=0, default=5, help=_('Amount of time (in seconds) to wait between polling ' 'power state after trigger soft poweroff.')), cfg.IntOpt('extra_memory', default=10, help=_('Extra amount of memory in MiB expected to be consumed ' 'by Ansible-related processes on the node. Affects ' 'decision whether image will fit into RAM.')), cfg.BoolOpt('image_store_insecure', default=False, help=_('Skip verifying SSL connections to the image store ' 'when downloading the image. ' 'Setting it to "True" is only recommended for testing ' 'environments that use self-signed certificates.')), cfg.StrOpt('image_store_cafile', help=_('Specific CA bundle to use for validating ' 'SSL connections to the image store. ' 'If not specified, CA available in the ramdisk ' 'will be used. ' 'Is not used by default playbooks included with ' 'the driver. ' 'Suitable for environments that use self-signed ' 'certificates.')), cfg.StrOpt('image_store_certfile', help=_('Client cert to use for SSL connections ' 'to image store. ' 'Is not used by default playbooks included with ' 'the driver.')), cfg.StrOpt('image_store_keyfile', help=_('Client key to use for SSL connections ' 'to image store. ' 'Is not used by default playbooks included with ' 'the driver.')), cfg.StrOpt('default_username', default='ansible', help=_("Name of the user to use for Ansible when connecting " "to the ramdisk over SSH. It may be overridden " "by per-node 'ansible_username' option " "in node's 'driver_info' field.")), cfg.StrOpt('default_key_file', help=_("Absolute path to the private SSH key file to use " "by Ansible by default when connecting to the ramdisk " "over SSH. Default is to use default SSH keys " "configured for the user running the ironic-conductor " "service. Private keys with password must be pre-loaded " "into 'ssh-agent'. 
It may be overridden by per-node " "'ansible_key_file' option in node's " "'driver_info' field.")), cfg.StrOpt('default_deploy_playbook', default='deploy.yaml', help=_("Path (relative to $playbooks_path or absolute) " "to the default playbook used for deployment. " "It may be overridden by per-node " "'ansible_deploy_playbook' option in node's " "'driver_info' field.")), cfg.StrOpt('default_shutdown_playbook', default='shutdown.yaml', help=_("Path (relative to $playbooks_path or absolute) " "to the default playbook used for graceful in-band " "shutdown of the node. " "It may be overridden by per-node " "'ansible_shutdown_playbook' option in node's " "'driver_info' field.")), cfg.StrOpt('default_clean_playbook', default='clean.yaml', help=_("Path (relative to $playbooks_path or absolute) " "to the default playbook used for node cleaning. " "It may be overridden by per-node " "'ansible_clean_playbook' option in node's " "'driver_info' field.")), cfg.StrOpt('default_clean_steps_config', default='clean_steps.yaml', help=_("Path (relative to $playbooks_path or absolute) " "to the default auxiliary cleaning steps file used " "during the node cleaning. " "It may be overridden by per-node " "'ansible_clean_steps_config' option in node's " "'driver_info' field.")), cfg.StrOpt('default_python_interpreter', help=_("Absolute path to the python interpreter on the " "managed machines. It may be overridden by per-node " "'ansible_python_interpreter' option in node's " "'driver_info' field. " "By default, ansible uses /usr/bin/python")), ] def register_opts(conf): conf.register_opts(opts, group='ansible') ironic-15.0.0/ironic/conf/opts.py0000664000175000017500000000670413652514273016670 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may not # use this file except in compliance with the License. 
You may obtain a copy # of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools from oslo_log import log import ironic.conf _default_opt_lists = [ ironic.conf.default.api_opts, ironic.conf.default.driver_opts, ironic.conf.default.exc_log_opts, ironic.conf.default.hash_opts, ironic.conf.default.image_opts, ironic.conf.default.img_cache_opts, ironic.conf.default.netconf_opts, ironic.conf.default.notification_opts, ironic.conf.default.path_opts, ironic.conf.default.portgroup_opts, ironic.conf.default.service_opts, ironic.conf.default.utils_opts, ] _opts = [ ('DEFAULT', itertools.chain(*_default_opt_lists)), ('agent', ironic.conf.agent.opts), ('ansible', ironic.conf.ansible.opts), ('api', ironic.conf.api.opts), ('audit', ironic.conf.audit.opts), ('cinder', ironic.conf.cinder.list_opts()), ('conductor', ironic.conf.conductor.opts), ('console', ironic.conf.console.opts), ('database', ironic.conf.database.opts), ('deploy', ironic.conf.deploy.opts), ('dhcp', ironic.conf.dhcp.opts), ('drac', ironic.conf.drac.opts), ('glance', ironic.conf.glance.list_opts()), ('healthcheck', ironic.conf.healthcheck.opts), ('ilo', ironic.conf.ilo.opts), ('inspector', ironic.conf.inspector.list_opts()), ('ipmi', ironic.conf.ipmi.opts), ('irmc', ironic.conf.irmc.opts), ('iscsi', ironic.conf.iscsi.opts), ('json_rpc', ironic.conf.json_rpc.list_opts()), ('metrics', ironic.conf.metrics.opts), ('metrics_statsd', ironic.conf.metrics_statsd.opts), ('neutron', ironic.conf.neutron.list_opts()), ('nova', ironic.conf.nova.list_opts()), ('pxe', ironic.conf.pxe.opts), ('service_catalog', ironic.conf.service_catalog.list_opts()), ('snmp', ironic.conf.snmp.opts), 
('swift', ironic.conf.swift.list_opts()), ('xclarity', ironic.conf.xclarity.opts), ] def list_opts(): """Return a list of oslo.config options available in Ironic code. The returned list includes all oslo.config options. Each element of the list is a tuple. The first element is the name of the group, the second element is the options. The function is discoverable via the 'ironic' entry point under the 'oslo.config.opts' namespace. The function is used by Oslo sample config file generator to discover the options. :returns: a list of (group, options) tuples """ return _opts def update_opt_defaults(): log.set_defaults( default_log_levels=[ 'amqp=WARNING', 'amqplib=WARNING', 'qpid.messaging=INFO', 'oslo.messaging=INFO', 'sqlalchemy=WARNING', 'stevedore=INFO', 'eventlet.wsgi.server=INFO', 'iso8601=WARNING', 'requests=WARNING', 'neutronclient=WARNING', 'glanceclient=WARNING', 'urllib3.connectionpool=WARNING', 'keystonemiddleware.auth_token=INFO', 'keystoneauth.session=INFO', ] ) ironic-15.0.0/ironic/conf/agent.py0000664000175000017500000001576313652514273017006 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 Rackspace, Inc. # Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.BoolOpt('manage_agent_boot', default=True, help=_('Whether Ironic will manage booting of the agent ' 'ramdisk. 
If set to False, you will need to configure ' 'your mechanism to allow booting the agent ' 'ramdisk.')), cfg.IntOpt('memory_consumed_by_agent', default=0, help=_('The memory size in MiB consumed by agent when it is ' 'booted on a bare metal node. This is used for ' 'checking if the image can be downloaded and deployed ' 'on the bare metal node after booting agent ramdisk. ' 'This may be set according to the memory consumed by ' 'the agent ramdisk image.')), cfg.BoolOpt('stream_raw_images', default=True, help=_('Whether the agent ramdisk should stream raw images ' 'directly onto the disk or not. By streaming raw ' 'images directly onto the disk the agent ramdisk will ' 'not spend time copying the image to a tmpfs partition ' '(therefore consuming less memory) prior to writing it ' 'to the disk. Unless the disk where the image will be ' 'copied to is really slow, this option should be set ' 'to True. Defaults to True.')), cfg.IntOpt('post_deploy_get_power_state_retries', default=6, help=_('Number of times to retry getting power state to check ' 'if bare metal node has been powered off after a soft ' 'power off.')), cfg.IntOpt('post_deploy_get_power_state_retry_interval', default=5, help=_('Amount of time (in seconds) to wait between polling ' 'power state after trigger soft poweroff.')), cfg.StrOpt('agent_api_version', default='v1', help=_('API version to use for communicating with the ramdisk ' 'agent.')), cfg.StrOpt('deploy_logs_collect', choices=[('always', _('always collect the logs')), ('on_failure', _('only collect logs if there is a ' 'failure')), ('never', _('never collect logs'))], default='on_failure', help=_('Whether Ironic should collect the deployment logs on ' 'deployment failure (on_failure), always or never.')), cfg.StrOpt('deploy_logs_storage_backend', choices=[('local', _('store the logs locally')), ('swift', _('store the logs in Object Storage ' 'service'))], default='local', help=_('The name of the storage backend where the logs ' 'will be 
stored.')), cfg.StrOpt('deploy_logs_local_path', default='/var/log/ironic/deploy', help=_('The path to the directory where the logs should be ' 'stored, used when the deploy_logs_storage_backend ' 'is configured to "local".')), cfg.StrOpt('deploy_logs_swift_container', default='ironic_deploy_logs_container', help=_('The name of the Swift container to store the logs, ' 'used when the deploy_logs_storage_backend is ' 'configured to "swift".')), cfg.IntOpt('deploy_logs_swift_days_to_expire', default=30, help=_('Number of days before a log object is marked as ' 'expired in Swift. If None, the logs will be kept ' 'forever or until manually deleted. Used when the ' 'deploy_logs_storage_backend is configured to ' '"swift".')), cfg.StrOpt('image_download_source', choices=[('swift', _('IPA ramdisk retrieves instance image ' 'from the Object Storage service.')), ('http', _('IPA ramdisk retrieves instance image ' 'from HTTP service served at conductor ' 'nodes.'))], default='swift', help=_('Specifies whether direct deploy interface should try ' 'to use the image source directly or if ironic should ' 'cache the image on the conductor and serve it from ' 'ironic\'s own http server. This option takes effect ' 'only when instance image is provided from the Image ' 'service.')), cfg.IntOpt('command_timeout', default=60, help=_('Timeout (in seconds) for IPA commands. ' 'Please note, the bootloader installation command ' 'to the agent is permitted a timeout of twice the ' 'value set here as these are IO heavy operations ' 'depending on the configuration of the instance.')), cfg.IntOpt('max_command_attempts', default=3, help=_('This is the maximum number of attempts that will be ' 'done for IPA commands that fails due to network ' 'problems.')), cfg.IntOpt('neutron_agent_poll_interval', default=2, help=_('The number of seconds Neutron agent will wait between ' 'polling for device changes. 
This value should be ' 'the same as CONF.AGENT.polling_interval in Neutron ' 'configuration.')), cfg.IntOpt('neutron_agent_max_attempts', default=100, help=_('Max number of attempts to validate a Neutron agent ' 'status before raising network error for a ' 'dead agent.')), cfg.IntOpt('neutron_agent_status_retry_interval', default=10, help=_('Wait time in seconds between attempts for validating ' 'Neutron agent status.')), ] def register_opts(conf): conf.register_opts(opts, group='agent') ironic-15.0.0/ironic/conf/drac.py0000664000175000017500000000317413652514273016612 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
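The option groups defined in these conf modules map one-to-one onto sections of `ironic.conf`. Ironic itself loads them through oslo.config, which coerces each value to its declared `Opt` type; the layout can be illustrated with a stdlib-only sketch (section and option names are taken from the `[agent]` and `[drac]` opts in this file, but the use of `configparser` here is only an illustration, not how ironic actually parses its configuration):

```python
import configparser

# A minimal ironic.conf fragment: each conf module registers its opts
# under a named group, which becomes an INI section of the same name.
SAMPLE = """
[agent]
manage_agent_boot = true
post_deploy_get_power_state_retries = 6
deploy_logs_collect = on_failure

[drac]
query_raid_config_job_status_interval = 120
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

# oslo.config would coerce these according to the declared Opt classes
# (BoolOpt, IntOpt, StrOpt); configparser offers similar typed getters.
retries = parser.getint("agent", "post_deploy_get_power_state_retries")
manage_boot = parser.getboolean("agent", "manage_agent_boot")
interval = parser.getint("drac", "query_raid_config_job_status_interval")

print(retries, manage_boot, interval)
```

In the real service these values are reached as `CONF.agent.post_deploy_get_power_state_retries` and so on, after each module's `register_opts(conf)` has run.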
from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.IntOpt('query_raid_config_job_status_interval', default=120, min=1, help=_('Interval (in seconds) between periodic RAID job status ' 'checks to determine whether the asynchronous RAID ' 'configuration was successfully finished or not.')), cfg.IntOpt('boot_device_job_status_timeout', default=30, min=1, help=_('Maximum amount of time (in seconds) to wait for ' 'the boot device configuration job to transition ' 'to the correct state to allow a reboot or power ' 'on to complete.')), cfg.IntOpt('config_job_max_retries', default=240, min=1, help=_('Maximum number of retries for ' 'the configuration job to complete ' 'successfully.')) ] def register_opts(conf): conf.register_opts(opts, group='drac') ironic-15.0.0/ironic/conf/auth.py0000664000175000017500000000543513652514273016644 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from keystoneauth1 import loading as kaloading from oslo_config import cfg from oslo_log import log LOG = log.getLogger(__name__) DEFAULT_VALID_INTERFACES = ['internal', 'public'] def register_auth_opts(conf, group, service_type=None): """Register session- and auth-related options Registers only basic auth options shared by all auth plugins. The rest are registered at runtime depending on auth plugin used. 
""" kaloading.register_session_conf_options(conf, group) kaloading.register_auth_conf_options(conf, group) kaloading.register_adapter_conf_options(conf, group) conf.set_default('valid_interfaces', DEFAULT_VALID_INTERFACES, group=group) # TODO(pas-ha) use os-service-type to try find the service_type by the # config group name assuming it is a project name (e.g. 'glance') if service_type: conf.set_default('service_type', service_type, group=group) def add_auth_opts(options, service_type=None): """Add auth options to sample config As these are dynamically registered at runtime, this adds options for most used auth_plugins when generating sample config. """ def add_options(opts, opts_to_add): for new_opt in opts_to_add: for opt in opts: if opt.name == new_opt.name: break else: opts.append(new_opt) opts = copy.deepcopy(options) opts.insert(0, kaloading.get_auth_common_conf_options()[0]) # NOTE(dims): There are a lot of auth plugins, we just generate # the config options for a few common ones plugins = ['password', 'v2password', 'v3password'] for name in plugins: plugin = kaloading.get_plugin_loader(name) add_options(opts, kaloading.get_auth_plugin_conf_options(plugin)) add_options(opts, kaloading.get_session_conf_options()) if service_type: adapter_opts = kaloading.get_adapter_conf_options( include_deprecated=False) # adding defaults for valid interfaces cfg.set_defaults(adapter_opts, service_type=service_type, valid_interfaces=DEFAULT_VALID_INTERFACES) add_options(opts, adapter_opts) opts.sort(key=lambda x: x.name) return opts ironic-15.0.0/ironic/conf/json_rpc.py0000664000175000017500000000315513652514273017515 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth opts = [ cfg.StrOpt('auth_strategy', choices=[('noauth', _('no authentication')), ('keystone', _('use the Identity service for ' 'authentication'))], help=_('Authentication strategy used by JSON RPC. Defaults to ' 'the global auth_strategy setting.')), cfg.HostAddressOpt('host_ip', default='::', help=_('The IP address or hostname on which JSON RPC ' 'will listen.')), cfg.PortOpt('port', default=8089, help=_('The port to use for JSON RPC')), cfg.BoolOpt('use_ssl', default=False, help=_('Whether to use TLS for JSON RPC')), ] def register_opts(conf): conf.register_opts(opts, group='json_rpc') auth.register_auth_opts(conf, 'json_rpc') def list_opts(): return opts + auth.add_auth_opts([]) ironic-15.0.0/ironic/conf/inspector.py0000664000175000017500000000421613652514273017705 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth opts = [ cfg.IntOpt('status_check_period', default=60, help=_('period (in seconds) to check status of nodes ' 'on inspection')), cfg.StrOpt('extra_kernel_params', default='', help=_('extra kernel parameters to pass to the inspection ' 'ramdisk when boot is managed by ironic (not ' 'ironic-inspector). Pairs key=value separated by ' 'spaces.')), cfg.BoolOpt('power_off', default=True, help=_('whether to power off a node after inspection ' 'finishes')), cfg.StrOpt('callback_endpoint_override', help=_('endpoint to use as a callback for posting back ' 'introspection data when boot is managed by ironic. ' 'Standard keystoneauth options are used by default.')), cfg.BoolOpt('require_managed_boot', default=False, help=_('require that the in-band inspection boot is fully ' 'managed by ironic. Set this to True if your ' 'installation of ironic-inspector does not have a ' 'separate PXE boot environment.')), ] def register_opts(conf): conf.register_opts(opts, group='inspector') auth.register_auth_opts(conf, 'inspector', service_type='baremetal-introspection') def list_opts(): return auth.add_auth_opts(opts, service_type='baremetal-introspection') ironic-15.0.0/ironic/conf/neutron.py0000664000175000017500000001506713652514273017377 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
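`add_auth_opts` in `auth.py` above merges dynamically generated keystoneauth options into a module's static option list, skipping any option whose name is already present (the nested `for`/`else` in its local `add_options` helper). That dedup pattern can be sketched with plain tuples standing in for oslo.config `Opt` objects — the `Opt` namedtuple below is a hypothetical stand-in, not the real class:

```python
from collections import namedtuple

# Hypothetical stand-in for oslo_config.cfg.Opt; only .name matters
# for the deduplication logic mirrored from auth.add_auth_opts.
Opt = namedtuple("Opt", ["name", "default"])

def add_options(opts, opts_to_add):
    # Append each new option unless one with the same name already
    # exists in the list; the for/else fires only when no match broke
    # out of the inner loop.
    for new_opt in opts_to_add:
        for opt in opts:
            if opt.name == new_opt.name:
                break
        else:
            opts.append(new_opt)

static = [Opt("status_check_period", 60)]
dynamic = [Opt("auth_url", None), Opt("status_check_period", 30)]
add_options(static, dynamic)

names = [o.name for o in static]
print(names)  # the duplicate status_check_period is not re-added
```

This is why a statically declared option always wins over the generic auth-plugin version when the sample config is generated.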
from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth opts = [ cfg.IntOpt('port_setup_delay', default=0, min=0, help=_('Delay value to wait for Neutron agents to setup ' 'sufficient DHCP configuration for port.')), cfg.IntOpt('retries', default=3, help=_('Client retries in the case of a failed request.')), cfg.StrOpt('cleaning_network', help=_('Neutron network UUID or name for the ramdisk to be ' 'booted into for cleaning nodes. Required for "neutron" ' 'network interface. It is also required if cleaning ' 'nodes when using "flat" network interface or "neutron" ' 'DHCP provider. If a name is provided, it must be ' 'unique among all networks or cleaning will fail.'), deprecated_name='cleaning_network_uuid'), cfg.StrOpt('provisioning_network', help=_('Neutron network UUID or name for the ramdisk to be ' 'booted into for provisioning nodes. Required for ' '"neutron" network interface. If a name is provided, ' 'it must be unique among all networks or deploy will ' 'fail.'), deprecated_name='provisioning_network_uuid'), cfg.ListOpt('provisioning_network_security_groups', default=[], help=_('List of Neutron Security Group UUIDs to be ' 'applied during provisioning of the nodes. ' 'Optional for the "neutron" network interface and not ' 'used for the "flat" or "noop" network interfaces. ' 'If not specified, default security group ' 'is used.')), cfg.ListOpt('cleaning_network_security_groups', default=[], help=_('List of Neutron Security Group UUIDs to be ' 'applied during cleaning of the nodes. ' 'Optional for the "neutron" network interface and not ' 'used for the "flat" or "noop" network interfaces. ' 'If not specified, default security group ' 'is used.')), cfg.StrOpt('rescuing_network', help=_('Neutron network UUID or name for booting the ramdisk ' 'for rescue mode. This is not the network that the ' 'rescue ramdisk will use post-boot -- the tenant ' 'network is used for that. 
Required for "neutron" ' 'network interface, if rescue mode will be used. It ' 'is not used for the "flat" or "noop" network ' 'interfaces. If a name is provided, it must be unique ' 'among all networks or rescue will fail.')), cfg.ListOpt('rescuing_network_security_groups', default=[], help=_('List of Neutron Security Group UUIDs to be applied ' 'during the node rescue process. Optional for the ' '"neutron" network interface and not used for the ' '"flat" or "noop" network interfaces. If not ' 'specified, the default security group is used.')), cfg.IntOpt('request_timeout', default=45, help=_('Timeout for request processing when interacting ' 'with Neutron. This value should be increased if ' 'neutron port action timeouts are observed as neutron ' 'performs pre-commit validation prior returning to ' 'the API client which can take longer than normal ' 'client/server interactions.')), cfg.BoolOpt('add_all_ports', default=False, help=_('Option to enable transmission of all ports ' 'to neutron when creating ports for provisioning, ' 'cleaning, or rescue. This is done without IP ' 'addresses assigned to the port, and may be useful ' 'in some bonded network configurations.')), cfg.StrOpt('inspection_network', help=_('Neutron network UUID or name for the ramdisk to be ' 'booted into for in-band inspection of nodes. ' 'If a name is provided, it must be unique among all ' 'networks or inspection will fail.')), cfg.ListOpt('inspection_network_security_groups', default=[], help=_('List of Neutron Security Group UUIDs to be applied ' 'during the node inspection process. Optional for the ' '"neutron" network interface and not used for the ' '"flat" or "noop" network interfaces. If not ' 'specified, the default security group is used.')), cfg.IntOpt('dhcpv6_stateful_address_count', default=4, help=_('Number of IPv6 addresses to allocate for ports created ' 'for provisioning, cleaning, rescue or inspection on ' 'DHCPv6-stateful networks. 
Different stages of the ' 'chain-loading process will request addresses with ' 'different CLID/IAID. Due to non-identical identifiers ' 'multiple addresses must be reserved for the host to ' 'ensure each step of the boot process can successfully ' 'lease addresses.')) ] def register_opts(conf): conf.register_opts(opts, group='neutron') auth.register_auth_opts(conf, 'neutron', service_type='network') def list_opts(): return auth.add_auth_opts(opts, service_type='network') ironic-15.0.0/ironic/conf/nova.py0000664000175000017500000000231413652514273016637 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth opts = [ cfg.BoolOpt('send_power_notifications', default=True, help=_('When set to True, it will enable the support ' 'for power state change callbacks to nova. This ' 'option should be set to False in deployments ' 'that do not have the openstack compute service.')) ] def register_opts(conf): conf.register_opts(opts, group='nova') auth.register_auth_opts(conf, 'nova', service_type='compute') def list_opts(): return auth.add_auth_opts(opts, service_type='compute') ironic-15.0.0/ironic/conf/service_catalog.py0000664000175000017500000000223613652514273021031 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth SERVICE_CATALOG_GROUP = cfg.OptGroup( 'service_catalog', title='Access info for Ironic service user', help=_('Holds credentials and session options to access ' 'Keystone catalog for Ironic API endpoint resolution.')) def register_opts(conf): auth.register_auth_opts(conf, SERVICE_CATALOG_GROUP.name, service_type='baremetal') def list_opts(): return auth.add_auth_opts([], service_type='baremetal') ironic-15.0.0/ironic/conf/database.py0000664000175000017500000000164013652514273017441 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
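Each conf module above exposes either a plain `opts` list or a `list_opts()` helper, and `ironic/conf/opts.py` stitches them into `(group, options)` tuples that the oslo.config sample-config generator discovers through the `oslo.config.opts` entry point. A stdlib sketch of that discovery contract, with bare strings standing in for real `Opt` objects (the group contents here are illustrative, not the full ironic lists):

```python
import itertools

# Strings stand in for oslo_config Opt objects; the sample-config
# generator only needs an iterable of (group_name, options) pairs.
default_opt_lists = [["debug"], ["host"]]

_opts = [
    # DEFAULT-group options are chained from several per-topic lists,
    # as _default_opt_lists is in ironic/conf/opts.py.
    ("DEFAULT", itertools.chain(*default_opt_lists)),
    ("agent", ["manage_agent_boot", "stream_raw_images"]),
    ("database", ["mysql_engine"]),
]

def list_opts():
    """Entry point the sample-config generator would call."""
    return _opts

# A consumer materializes each group's options exactly once
# (the chain object is an iterator and must not be re-consumed).
flat = {group: list(options) for group, options in list_opts()}
print(flat["DEFAULT"])
```

The same shape is what `tox -e genconfig`-style tooling consumes to emit a documented `ironic.conf.sample`.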
from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('mysql_engine', default='InnoDB', help=_('MySQL engine to use.')) ] def register_opts(conf): conf.register_opts(opts, group='database') ironic-15.0.0/ironic/conf/deploy.py0000664000175000017500000001653313652514273017200 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright (c) 2012 NTT DOCOMO, INC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common import boot_modes from ironic.common.i18n import _ opts = [ cfg.StrOpt('http_url', help=_("ironic-conductor node's HTTP server URL. " "Example: http://192.1.2.3:8080")), cfg.StrOpt('http_root', default='/httpboot', help=_("ironic-conductor node's HTTP root path.")), cfg.BoolOpt('enable_ata_secure_erase', default=True, help=_('Whether to support the use of ATA Secure Erase ' 'during the cleaning process. Defaults to True.')), cfg.IntOpt('erase_devices_priority', help=_('Priority to run in-band erase devices via the Ironic ' 'Python Agent ramdisk. If unset, will use the priority ' 'set in the ramdisk (defaults to 10 for the ' 'GenericHardwareManager). If set to 0, will not run ' 'during cleaning.')), cfg.IntOpt('erase_devices_metadata_priority', help=_('Priority to run in-band clean step that erases ' 'metadata from devices, via the Ironic Python Agent ' 'ramdisk. If unset, will use the priority set in the ' 'ramdisk (defaults to 99 for the ' 'GenericHardwareManager). 
If set to 0, will not run ' 'during cleaning.')), cfg.IntOpt('shred_random_overwrite_iterations', default=1, min=0, help=_('During shred, overwrite all block devices N times with ' 'random data. This is only used if a device could not ' 'be ATA Secure Erased. Defaults to 1.')), cfg.BoolOpt('shred_final_overwrite_with_zeros', default=True, help=_("Whether to write zeros to a node's block devices " "after writing random data. This will write zeros to " "the device even when " "deploy.shred_random_overwrite_iterations is 0. This " "option is only used if a device could not be ATA " "Secure Erased. Defaults to True.")), cfg.BoolOpt('continue_if_disk_secure_erase_fails', default=False, help=_('Defines what to do if an ATA secure erase operation ' 'fails during cleaning in the Ironic Python Agent. ' 'If False, the cleaning operation will fail and the ' 'node will be put in ``clean failed`` state. ' 'If True, shred will be invoked and cleaning will ' 'continue.')), cfg.IntOpt('disk_erasure_concurrency', default=1, min=1, help=_('Defines the target pool size used by Ironic Python ' 'Agent ramdisk to erase disk devices. The number of ' 'threads created to erase disks will not exceed this ' 'value or the number of disks to be erased.')), cfg.BoolOpt('power_off_after_deploy_failure', default=True, help=_('Whether to power off a node after deploy failure. ' 'Defaults to True.')), cfg.StrOpt('default_boot_option', choices=[('netboot', _('boot from a network')), ('local', _('local boot'))], default='local', help=_('Default boot option to use when no boot option is ' 'requested in node\'s driver_info. Defaults to ' '"local". 
Prior to the Ussuri release, the default ' 'was "netboot".')), cfg.StrOpt('default_boot_mode', choices=[(boot_modes.UEFI, _('UEFI boot mode')), (boot_modes.LEGACY_BIOS, _('Legacy BIOS boot mode'))], default=boot_modes.LEGACY_BIOS, help=_('Default boot mode to use when no boot mode is ' 'requested in node\'s driver_info, capabilities or ' 'in the `instance_info` configuration. Currently the ' 'default boot mode is "%(bios)s", but it will be ' 'changed to "%(uefi)s in the future. It is recommended ' 'to set an explicit value for this option. This option ' 'only has effect when management interface supports ' 'boot mode management') % { 'bios': boot_modes.LEGACY_BIOS, 'uefi': boot_modes.UEFI}), cfg.BoolOpt('configdrive_use_object_store', default=False, deprecated_group='conductor', deprecated_name='configdrive_use_swift', help=_('Whether to upload the config drive to object store. ' 'Set this option to True to store config drive ' 'in a swift endpoint.')), cfg.StrOpt('http_image_subdir', default='agent_images', help=_('The name of subdirectory under ironic-conductor ' 'node\'s HTTP root path which is used to place instance ' 'images for the direct deploy interface, when local ' 'HTTP service is incorporated to provide instance image ' 'instead of swift tempurls.')), cfg.BoolOpt('fast_track', default=False, help=_('Whether to allow deployment agents to perform lookup, ' 'heartbeat operations during initial states of a ' 'machine lifecycle and by-pass the normal setup ' 'procedures for a ramdisk. This feature also enables ' 'power operations which are part of deployment ' 'processes to be bypassed if the ramdisk has performed ' 'a heartbeat operation using the fast_track_timeout ' 'setting.')), cfg.IntOpt('fast_track_timeout', default=300, min=0, max=300, help=_('Seconds for which the last heartbeat event is to be ' 'considered valid for the purpose of a fast ' 'track sequence. 
This setting should generally be ' 'less than the number of seconds for "Power-On Self ' 'Test" and typical ramdisk start-up. This value should ' 'not exceed the [api]ramdisk_heartbeat_timeout ' 'setting.')), ] def register_opts(conf): conf.register_opts(opts, group='deploy') ironic-15.0.0/ironic/conf/ipmi.py0000664000175000017500000000614613652514273016641 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # # Copyright 2013 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.IntOpt('command_retry_timeout', default=60, help=_('Maximum time in seconds to retry retryable IPMI ' 'operations. (An operation is retryable, for ' 'example, if the requested operation fails ' 'because the BMC is busy.) Setting this too high ' 'can cause the sync power state ' 'periodic task to hang when there are slow or ' 'unresponsive BMCs.')), cfg.IntOpt('min_command_interval', default=5, help=_('Minimum time, in seconds, between IPMI operations ' 'sent to a server. There is a risk with some hardware ' 'that setting this too low may cause the BMC to crash. ' 'Recommended setting is 5 seconds.')), cfg.BoolOpt('kill_on_timeout', default=True, help=_('Kill `ipmitool` process invoked by ironic to read ' 'node power state if `ipmitool` process does not exit ' 'after `command_retry_timeout` timeout expires. 
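The `[ipmi]` pair `command_retry_timeout` and `min_command_interval` together bound how many attempts fit into the retry window; roughly `timeout // interval` tries. This division is an approximation for illustration only, not the exact way ironic invokes `ipmitool`:

```python
def max_ipmi_attempts(command_retry_timeout=60, min_command_interval=5):
    """Approximate how many retryable IPMI attempts fit in the window.

    Assumption: one attempt per min_command_interval, at least one try.
    """
    return max(command_retry_timeout // min_command_interval, 1)

print(max_ipmi_attempts())        # 12 with the defaults shown above
print(max_ipmi_attempts(30, 10))  # 3
```

Raising `command_retry_timeout` without also raising `min_command_interval` therefore increases the number of retries, which is why the help warns about slow BMCs hanging the sync power state task.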
' 'Recommended setting is True')), cfg.BoolOpt('disable_boot_timeout', default=True, help=_('Default timeout behavior whether ironic sends a raw ' 'IPMI command to disable the 60 second timeout for ' 'booting. Setting this option to False will NOT send ' 'that command, the default value is True. It may be ' 'overridden by per-node \'ipmi_disable_boot_timeout\' ' 'option in node\'s \'driver_info\' field.')), cfg.MultiStrOpt('additional_retryable_ipmi_errors', default=[], help=_('Additional errors ipmitool may encounter, ' 'specific to the environment it is run in.')), cfg.BoolOpt('debug', default=False, help=_('Enables all ipmi commands to be executed with an ' 'additional debugging output. This is a separate ' 'option as ipmitool can log a substantial amount ' 'of misleading text when in this mode.')), ] def register_opts(conf): conf.register_opts(opts, group='ipmi') ironic-15.0.0/ironic/conf/glance.py0000664000175000017500000001422713652514273017133 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2010 OpenStack Foundation # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth opts = [ cfg.ListOpt('allowed_direct_url_schemes', default=[], help=_('A list of URL schemes that can be downloaded directly ' 'via the direct_url. 
Currently supported schemes: ' '[file].')), # To upload this key to Swift: # swift post -m Temp-Url-Key:secretkey # When using radosgw, temp url key could be uploaded via the above swift # command, or with: # radosgw-admin user modify --uid=user --temp-url-key=secretkey cfg.StrOpt('swift_temp_url_key', help=_('The secret token given to Swift to allow temporary URL ' 'downloads. Required for temporary URLs. For the ' 'Swift backend, the key on the service project (as set ' 'in the [swift] section) is used by default.'), secret=True), cfg.IntOpt('swift_temp_url_duration', default=1200, help=_('The length of time in seconds that the temporary URL ' 'will be valid for. Defaults to 20 minutes. If some ' 'deploys get a 401 response code when trying to ' 'download from the temporary URL, try raising this ' 'duration. This value must be greater than or equal to ' 'the value for ' 'swift_temp_url_expected_download_start_delay')), cfg.BoolOpt('swift_temp_url_cache_enabled', default=False, help=_('Whether to cache generated Swift temporary URLs. ' 'Setting it to true is only useful when an image ' 'caching proxy is used. Defaults to False.')), cfg.IntOpt('swift_temp_url_expected_download_start_delay', default=0, min=0, help=_('This is the delay (in seconds) from the time of the ' 'deploy request (when the Swift temporary URL is ' 'generated) to when the IPA ramdisk starts up and URL ' 'is used for the image download. This value is used to ' 'check if the Swift temporary URL duration is large ' 'enough to let the image download begin. Also if ' 'temporary URL caching is enabled this will determine ' 'if a cached entry will still be valid when the ' 'download starts. swift_temp_url_duration value must be ' 'greater than or equal to this option\'s value. ' 'Defaults to 0.')), cfg.StrOpt( 'swift_endpoint_url', help=_('The "endpoint" (scheme, hostname, optional port) for ' 'the Swift URL of the form ' '"endpoint_url/api_version/account/container/object_id". 
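The `swift_temp_url_*` options above rely on Swift's standard TempURL scheme: an HMAC-SHA1 over the HTTP method, expiry timestamp, and object path, keyed with the temp URL key. A stdlib-only sketch of that signature (illustrative; a real deployment would normally use a helper such as swiftclient's `generate_temp_url` rather than hand-rolling it):

```python
import hmac
import time
from hashlib import sha1

def swift_temp_url(key, path, duration=1200, method='GET'):
    """Build a Swift TempURL query string for *path*, valid for *duration* s.

    *path* is the full object path, e.g. the
    "/api_version/account/container/object_id" form described above.
    """
    expires = int(time.time()) + duration
    body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)

url = swift_temp_url('secretkey', '/v1/AUTH_acct/glance/image-uuid')
print(url.split('temp_url_sig=')[1][:40])  # the 40-hex-char HMAC-SHA1
```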
' 'Do not include trailing "/". ' 'For example, use "https://swift.example.com". If using RADOS ' 'Gateway, endpoint may also contain /swift path; if it does ' 'not, it will be appended. Used for temporary URLs, will ' 'be fetched from the service catalog, if not provided.')), cfg.StrOpt( 'swift_api_version', default='v1', help=_('The Swift API version to create a temporary URL for. ' 'Defaults to "v1". Swift temporary URL format: ' '"endpoint_url/api_version/account/container/object_id"')), cfg.StrOpt( 'swift_account', help=_('The account that Glance uses to communicate with ' 'Swift. The format is "AUTH_uuid". "uuid" is the ' 'UUID for the account configured in the glance-api.conf. ' 'For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". ' 'If not set, the default value is calculated based on the ID ' 'of the project used to access Swift (as set in the [swift] ' 'section). Swift temporary URL format: ' '"endpoint_url/api_version/account/container/object_id"')), cfg.StrOpt( 'swift_container', default='glance', help=_('The Swift container Glance is configured to store its ' 'images in. Defaults to "glance", which is the default ' 'in glance-api.conf. ' 'Swift temporary URL format: ' '"endpoint_url/api_version/account/container/object_id"')), cfg.IntOpt('swift_store_multiple_containers_seed', default=0, help=_('This should match a config by the same name in the ' 'Glance configuration file. When set to 0, a ' 'single-tenant store will only use one ' 'container to store all images. 
When set to an integer ' 'value between 1 and 32, a single-tenant store will use ' 'multiple containers to store images, and this value ' 'will determine how many containers are created.')), cfg.IntOpt('num_retries', default=0, help=_('Number of retries when downloading an image from ' 'glance.')), ] def register_opts(conf): conf.register_opts(opts, group='glance') auth.register_auth_opts(conf, 'glance', service_type='image') def list_opts(): return auth.add_auth_opts(opts, service_type='image') ironic-15.0.0/ironic/conf/default.py0000664000175000017500000004405613652514273017331 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2013 Hewlett-Packard Development Company, L.P. # Copyright 2013 Red Hat, Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import socket import tempfile from oslo_config import cfg from oslo_utils import netutils from ironic.common.i18n import _ from ironic.common import release_mappings as versions _ENABLED_IFACE_HELP = _('Specify the list of {0} interfaces to load during ' 'service initialization. Missing {0} interfaces, ' 'or {0} interfaces which fail to initialize, will ' 'prevent the ironic-conductor service from starting. 
' 'At least one {0} interface that is supported by each ' 'enabled hardware type must be enabled here, or the ' 'ironic-conductor service will not start. ' 'Must not be an empty list. ' 'The default value is a recommended set of ' 'production-oriented {0} interfaces. A complete ' 'list of {0} interfaces present on your system may ' 'be found by enumerating the ' '"ironic.hardware.interfaces.{0}" entrypoint. ' 'When setting this value, please make sure that ' 'every enabled hardware type will have the same ' 'set of enabled {0} interfaces on every ' 'ironic-conductor service.') _DEFAULT_IFACE_HELP = _('Default {0} interface to be used for nodes that ' 'do not have {0}_interface field set. A complete ' 'list of {0} interfaces present on your system may ' 'be found by enumerating the ' '"ironic.hardware.interfaces.{0}" entrypoint.') api_opts = [ cfg.StrOpt( 'auth_strategy', default='keystone', choices=[('noauth', _('no authentication')), ('keystone', _('use the Identity service for ' 'authentication'))], help=_('Authentication strategy used by ironic-api. "noauth" should ' 'not be used in a production environment because all ' 'authentication will be disabled.')), cfg.BoolOpt('debug_tracebacks_in_api', default=False, help=_('Return server tracebacks in the API response for any ' 'error responses. WARNING: this is insecure ' 'and should not be used in a production environment.')), cfg.BoolOpt('pecan_debug', default=False, help=_('Enable pecan debug mode. WARNING: this is insecure ' 'and should not be used in a production environment.')), cfg.StrOpt('default_resource_class', help=_('Resource class to use for new nodes when no resource ' 'class is provided in the creation request.')), ] driver_opts = [ cfg.ListOpt('enabled_hardware_types', default=['ipmi'], help=_('Specify the list of hardware types to load during ' 'service initialization. Missing hardware types, or ' 'hardware types which fail to initialize, will prevent ' 'the conductor service from starting. 
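A common operator mistake with the `enabled_*_interfaces` / `default_*_interface` pairs described by `_ENABLED_IFACE_HELP` and `_DEFAULT_IFACE_HELP` is setting a default that is absent from the enabled list. A small sanity-check sketch (hypothetical helper, not part of ironic):

```python
def check_interface_default(enabled, default):
    """Raise if a configured default interface is not in the enabled list.

    A default of None is fine: ironic then derives the default from the
    hardware type.
    """
    if default is not None and default not in enabled:
        raise ValueError('default interface %r not in enabled list %r'
                         % (default, enabled))
    return True

print(check_interface_default(['flat', 'noop'], 'noop'))  # True
```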
This option ' 'defaults to a recommended set of production-oriented ' 'hardware types. ' 'A complete list of hardware types present on your ' 'system may be found by enumerating the ' '"ironic.hardware.types" entrypoint.')), cfg.ListOpt('enabled_bios_interfaces', default=['no-bios'], help=_ENABLED_IFACE_HELP.format('bios')), cfg.StrOpt('default_bios_interface', help=_DEFAULT_IFACE_HELP.format('bios')), cfg.ListOpt('enabled_boot_interfaces', default=['pxe'], help=_ENABLED_IFACE_HELP.format('boot')), cfg.StrOpt('default_boot_interface', help=_DEFAULT_IFACE_HELP.format('boot')), cfg.ListOpt('enabled_console_interfaces', default=['no-console'], help=_ENABLED_IFACE_HELP.format('console')), cfg.StrOpt('default_console_interface', help=_DEFAULT_IFACE_HELP.format('console')), cfg.ListOpt('enabled_deploy_interfaces', default=['iscsi', 'direct'], help=_ENABLED_IFACE_HELP.format('deploy')), cfg.StrOpt('default_deploy_interface', help=_DEFAULT_IFACE_HELP.format('deploy')), cfg.ListOpt('enabled_inspect_interfaces', default=['no-inspect'], help=_ENABLED_IFACE_HELP.format('inspect')), cfg.StrOpt('default_inspect_interface', help=_DEFAULT_IFACE_HELP.format('inspect')), cfg.ListOpt('enabled_management_interfaces', default=['ipmitool'], help=_ENABLED_IFACE_HELP.format('management')), cfg.StrOpt('default_management_interface', help=_DEFAULT_IFACE_HELP.format('management')), cfg.ListOpt('enabled_network_interfaces', default=['flat', 'noop'], help=_ENABLED_IFACE_HELP.format('network')), cfg.StrOpt('default_network_interface', help=_DEFAULT_IFACE_HELP.format('network')), cfg.ListOpt('enabled_power_interfaces', default=['ipmitool'], help=_ENABLED_IFACE_HELP.format('power')), cfg.StrOpt('default_power_interface', help=_DEFAULT_IFACE_HELP.format('power')), cfg.ListOpt('enabled_raid_interfaces', default=['agent', 'no-raid'], help=_ENABLED_IFACE_HELP.format('raid')), cfg.StrOpt('default_raid_interface', help=_DEFAULT_IFACE_HELP.format('raid')), cfg.ListOpt('enabled_rescue_interfaces', 
default=['no-rescue'], help=_ENABLED_IFACE_HELP.format('rescue')), cfg.StrOpt('default_rescue_interface', help=_DEFAULT_IFACE_HELP.format('rescue')), cfg.ListOpt('enabled_storage_interfaces', default=['cinder', 'noop'], help=_ENABLED_IFACE_HELP.format('storage')), cfg.StrOpt('default_storage_interface', default='noop', help=_DEFAULT_IFACE_HELP.format('storage')), cfg.ListOpt('enabled_vendor_interfaces', default=['ipmitool', 'no-vendor'], help=_ENABLED_IFACE_HELP.format('vendor')), cfg.StrOpt('default_vendor_interface', help=_DEFAULT_IFACE_HELP.format('vendor')), ] exc_log_opts = [ cfg.BoolOpt('fatal_exception_format_errors', default=False, help=_('Used if there is a formatting error when generating ' 'an exception message (a programming error). If True, ' 'raise an exception; if False, use the unformatted ' 'message.'), deprecated_for_removal=True, deprecated_reason=_('Same option in the ironic_lib section ' 'should be used instead.')), cfg.IntOpt('log_in_db_max_size', default=4096, help=_('Max number of characters of any node ' 'last_error/maintenance_reason pushed to database.')), ] hash_opts = [ cfg.IntOpt('hash_partition_exponent', default=5, help=_('Exponent to determine number of hash partitions to use ' 'when distributing load across conductors. Larger ' 'values will result in more even distribution of load ' 'and less load when rebalancing the ring, but more ' 'memory usage. Number of partitions per conductor is ' '(2^hash_partition_exponent). This determines the ' 'granularity of rebalancing: given 10 hosts, and an ' 'exponent of 2, there are 40 partitions in the ring. ' 'A few thousand partitions should make rebalancing ' 'smooth in most cases. The default is suitable for up ' 'to a few hundred conductors. 
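The `hash_partition_exponent` arithmetic in the help text is easy to check: each conductor claims 2^exponent partitions, so 10 hosts with an exponent of 2 yield 40 partitions in the ring:

```python
def ring_partitions(num_conductors, hash_partition_exponent=5):
    """Total partitions in the hash ring: conductors * 2**exponent."""
    return num_conductors * 2 ** hash_partition_exponent

print(ring_partitions(10, 2))  # 40, the example from the help text
print(ring_partitions(10))     # 320 with the default exponent of 5
```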
Configuring for too many ' 'partitions has a negative impact on CPU usage.')), cfg.IntOpt('hash_ring_reset_interval', default=15, help=_('Time (in seconds) after which the hash ring is ' 'considered outdated and is refreshed on the next ' 'access.')), ] image_opts = [ cfg.BoolOpt('force_raw_images', default=True, help=_('If True, convert backing images to "raw" disk image ' 'format.')), cfg.StrOpt('isolinux_bin', default='/usr/lib/syslinux/isolinux.bin', help=_('Path to isolinux binary file.')), cfg.StrOpt('isolinux_config_template', default=os.path.join('$pybasedir', 'common/isolinux_config.template'), help=_('Template file for isolinux configuration file.')), cfg.StrOpt('grub_config_path', default='/boot/grub/grub.cfg', help=_('GRUB2 configuration file location on the UEFI ISO ' 'images produced by ironic.')), cfg.StrOpt('grub_config_template', default=os.path.join('$pybasedir', 'common/grub_conf.template'), help=_('Template file for grub configuration file.')), cfg.StrOpt('ldlinux_c32', help=_('Path to ldlinux.c32 file. This file is required for ' 'syslinux 5.0 or later. If not specified, the file is ' 'looked for in ' '"/usr/lib/syslinux/modules/bios/ldlinux.c32" and ' '"/usr/share/syslinux/ldlinux.c32".')), cfg.StrOpt('esp_image', help=_('Path to EFI System Partition image file. This file is ' 'recommended for creating UEFI bootable ISO images ' 'efficiently. ESP image should contain a ' 'FAT12/16/32-formatted file system holding EFI boot ' 'loaders (e.g. GRUB2) for each hardware architecture ' 'ironic needs to boot. 
This option is only used when ' 'neither ESP nor ISO deploy image is configured to ' 'the node being deployed in which case ironic will ' 'attempt to fetch ESP image from the configured ' 'location or extract ESP image from UEFI-bootable ' 'deploy ISO image.')), ] img_cache_opts = [ cfg.BoolOpt('parallel_image_downloads', default=False, help=_('Run image downloads and raw format conversions in ' 'parallel.')), ] netconf_opts = [ cfg.StrOpt('my_ip', default=netutils.get_my_ipv4(), sample_default='127.0.0.1', help=_('IPv4 address of this host. If unset, will determine ' 'the IP programmatically. If unable to do so, will use ' '"127.0.0.1". NOTE: This field does accept an IPv6 ' 'address as an override for templates and URLs, ' 'however it is recommended that [DEFAULT]my_ipv6 ' 'is used along with DNS names for service URLs for ' 'dual-stack environments.')), cfg.StrOpt('my_ipv6', default=None, sample_default='2001:db8::1', help=_('IP address of this host using IPv6. This value must ' 'be supplied via the configuration and cannot be ' 'adequately programmatically determined like the ' '[DEFAULT]my_ip parameter for IPv4.')), ] notification_opts = [ # NOTE(mariojv) By default, accessing this option when it's unset will # return None, indicating no notifications will be sent. oslo.config # returns None by default for options without set defaults that aren't # required. cfg.StrOpt('notification_level', choices=[('debug', _('"debug" level')), ('info', _('"info" level')), ('warning', _('"warning" level')), ('error', _('"error" level')), ('critical', _('"critical" level'))], help=_('Specifies the minimum level for which to send ' 'notifications. If not set, no notifications will ' 'be sent. The default is for this option to be unset.')), cfg.ListOpt( 'versioned_notifications_topics', default=['ironic_versioned_notifications'], help=_(""" Specifies the topics for the versioned notifications issued by Ironic. 
The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Ironic will send a message containing a versioned notification payload to each topic queue in this list. The list of versioned notifications is visible in https://docs.openstack.org/ironic/latest/admin/notifications.html """)), ] path_opts = [ cfg.StrOpt('pybasedir', default=os.path.abspath(os.path.join(os.path.dirname(__file__), '../')), sample_default='/usr/lib/python/site-packages/ironic/ironic', help=_('Directory where the ironic python module is ' 'installed.')), cfg.StrOpt('bindir', default='$pybasedir/bin', help=_('Directory where ironic binaries are installed.')), cfg.StrOpt('state_path', default='$pybasedir', help=_("Top-level directory for maintaining ironic's state.")), ] portgroup_opts = [ cfg.StrOpt( 'default_portgroup_mode', default='active-backup', help=_( 'Default mode for portgroups. Allowed values can be found in the ' 'linux kernel documentation on bonding: ' 'https://www.kernel.org/doc/Documentation/networking/bonding.txt.') ), ] service_opts = [ cfg.StrOpt('host', default=socket.getfqdn(), sample_default='localhost', help=_('Name of this node. This can be an opaque identifier. ' 'It is not necessarily a hostname, FQDN, or IP address. ' 'However, the node name must be valid within ' 'an AMQP key, and if using ZeroMQ (will be removed in ' 'the Stein release), a valid hostname, FQDN, ' 'or IP address.')), cfg.StrOpt('pin_release_version', choices=versions.RELEASE_VERSIONS_DESCS, mutable=True, help=_('Used for rolling upgrades. Setting this option ' 'downgrades (or pins) the Bare Metal API, ' 'the internal ironic RPC communication, and ' 'the database objects to their respective ' 'versions, so they are compatible with older services. ' 'When doing a rolling upgrade from version N to version ' 'N+1, set (to pin) this to N. 
To unpin (default), leave ' 'it unset and the latest versions will be used.')), cfg.StrOpt('rpc_transport', default='oslo', choices=[('oslo', _('use oslo.messaging transport')), ('json-rpc', _('use JSON RPC transport'))], help=_('Which RPC transport implementation to use between ' 'conductor and API services')), cfg.BoolOpt('require_agent_token', default=False, help=_('Used to require the use of agent tokens. These ' 'tokens are used to guard the api lookup endpoint and ' 'conductor heartbeat processing logic to authenticate ' 'transactions with the ironic-python-agent. Tokens ' 'are provided only upon the first lookup of a node ' 'and may be provided via out of band means through ' 'the use of virtual media.')), ] utils_opts = [ cfg.StrOpt('rootwrap_config', default="/etc/ironic/rootwrap.conf", help=_('Path to the rootwrap configuration file to use for ' 'running commands as root.')), cfg.StrOpt('tempdir', default=tempfile.gettempdir(), sample_default=tempfile.gettempdir(), help=_('Temporary working directory, default is Python temp ' 'dir.')), ] def register_opts(conf): conf.register_opts(api_opts) conf.register_opts(driver_opts) conf.register_opts(exc_log_opts) conf.register_opts(hash_opts) conf.register_opts(image_opts) conf.register_opts(img_cache_opts) conf.register_opts(netconf_opts) conf.register_opts(notification_opts) conf.register_opts(path_opts) conf.register_opts(portgroup_opts) conf.register_opts(service_opts) conf.register_opts(utils_opts) ironic-15.0.0/ironic/conf/ilo.py0000664000175000017500000001064513652514273016465 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.IntOpt('client_timeout', default=60, help=_('Timeout (in seconds) for iLO operations')), cfg.PortOpt('client_port', default=443, help=_('Port to be used for iLO operations')), cfg.StrOpt('swift_ilo_container', default='ironic_ilo_container', help=_('The Swift iLO container to store data.')), cfg.IntOpt('swift_object_expiry_timeout', default=900, help=_('Amount of time in seconds for Swift objects to ' 'auto-expire.')), cfg.BoolOpt('use_web_server_for_images', default=False, help=_('Set this to True to use http web server to host ' 'floppy images and generated boot ISO. This ' 'requires http_root and http_url to be configured ' 'in the [deploy] section of the config file. If this ' 'is set to False, then Ironic will use Swift ' 'to host the floppy images and generated ' 'boot_iso.')), cfg.IntOpt('clean_priority_reset_ilo', default=0, help=_('Priority for reset_ilo clean step.')), cfg.IntOpt('clean_priority_reset_bios_to_default', default=10, help=_('Priority for reset_bios_to_default clean step.')), cfg.IntOpt('clean_priority_reset_secure_boot_keys_to_default', default=20, help=_('Priority for reset_secure_boot_keys clean step. This ' 'step will reset the secure boot keys to manufacturing ' 'defaults.')), cfg.IntOpt('clean_priority_clear_secure_boot_keys', default=0, help=_('Priority for clear_secure_boot_keys clean step. This ' 'step is not enabled by default. 
It can be enabled to ' 'clear all secure boot keys enrolled with iLO.')), cfg.IntOpt('clean_priority_reset_ilo_credential', default=30, help=_('Priority for reset_ilo_credential clean step. This ' 'step requires "ilo_change_password" parameter to be ' 'updated in nodes\'s driver_info with the new ' 'password.')), cfg.IntOpt('power_wait', default=2, help=_('Amount of time in seconds to wait in between power ' 'operations')), cfg.IntOpt('oob_erase_devices_job_status_interval', min=10, default=300, help=_('Interval (in seconds) between periodic erase-devices ' 'status checks to determine whether the asynchronous ' 'out-of-band erase-devices was successfully finished or ' 'not.')), cfg.StrOpt('ca_file', help=_('CA certificate file to validate iLO.')), cfg.StrOpt('default_boot_mode', default='auto', choices=[('auto', _('based on boot mode settings on the ' 'system')), ('bios', _('BIOS boot mode')), ('uefi', _('UEFI boot mode'))], help=_('Default boot mode to be used in provisioning when ' '"boot_mode" capability is not provided in the ' '"properties/capabilities" of the node. The default is ' '"auto" for backward compatibility. When "auto" is ' 'specified, default boot mode will be selected based ' 'on boot mode settings on the system.')), ] def register_opts(conf): conf.register_opts(opts, group='ilo') ironic-15.0.0/ironic/conf/metrics.py0000664000175000017500000000451713652514273017351 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 Rackspace, Inc. # Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ # IPA config options: used by IPA to configure how it reports metric data cfg.StrOpt('agent_backend', default='noop', help=_('Backend for the agent ramdisk to use for metrics. ' 'Default possible backends are "noop" and "statsd".')), cfg.BoolOpt('agent_prepend_host', default=False, help=_('Prepend the hostname to all metric names sent by the ' 'agent ramdisk. The format of metric names is ' '[global_prefix.][uuid.][host_name.]prefix.' 'metric_name.')), cfg.BoolOpt('agent_prepend_uuid', default=False, help=_('Prepend the node\'s Ironic uuid to all metric names ' 'sent by the agent ramdisk. The format of metric names ' 'is [global_prefix.][uuid.][host_name.]prefix.' 'metric_name.')), cfg.BoolOpt('agent_prepend_host_reverse', default=True, help=_('Split the prepended host value by "." and reverse it ' 'for metrics sent by the agent ramdisk (to better ' 'match the reverse hierarchical form of domain ' 'names).')), cfg.StrOpt('agent_global_prefix', help=_('Prefix all metric names sent by the agent ramdisk ' 'with this value. The format of metric names is ' '[global_prefix.][uuid.][host_name.]prefix.' 'metric_name.')) ] def register_opts(conf): conf.register_opts(opts, group='metrics') ironic-15.0.0/ironic/conf/iscsi.py0000664000175000017500000000307613652514273017014 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
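The `[metrics]` options above all feed one naming scheme, `[global_prefix.][uuid.][host_name.]prefix.metric_name`, with `agent_prepend_host_reverse` splitting the host on "." and reversing the labels. A sketch of that composition (illustrative, not IPA's actual code):

```python
def metric_name(prefix, name, global_prefix=None, uuid=None,
                host=None, reverse_host=True):
    """Compose [global_prefix.][uuid.][host.]prefix.name as documented."""
    parts = []
    if global_prefix:
        parts.append(global_prefix)
    if uuid:
        parts.append(uuid)
    if host:
        if reverse_host:
            # "node1.example.com" becomes "com.example.node1", matching
            # the reverse hierarchical form of domain names.
            host = '.'.join(reversed(host.split('.')))
        parts.append(host)
    parts += [prefix, name]
    return '.'.join(parts)

print(metric_name('ipa', 'heartbeat', host='node1.example.com'))
# com.example.node1.ipa.heartbeat
```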
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.PortOpt('portal_port', default=3260, help=_('The port number on which the iSCSI portal listens ' 'for incoming connections.')), cfg.StrOpt('conv_flags', help=_('Flags that need to be sent to the dd command, ' 'to control the conversion of the original file ' 'when copying to the host. It can contain several ' 'options separated by commas.')), cfg.IntOpt('verify_attempts', default=3, min=1, help=_('Maximum attempts to verify an iSCSI connection is ' 'active, sleeping 1 second between attempts. Defaults ' 'to 3.')), ] def register_opts(conf): conf.register_opts(opts, group='iscsi') ironic-15.0.0/ironic/conf/cinder.py0000664000175000017500000000410413652514273017137 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company LP. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
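The `[iscsi]` option `verify_attempts` above describes a bounded retry loop with one-second sleeps between tries. A generic sketch of that pattern (the sleep is injectable so the example is testable; `check_fn` is a hypothetical probe, not an ironic API):

```python
import time

def verify_with_retries(check_fn, attempts=3, sleep=time.sleep):
    """Call check_fn up to *attempts* times, sleeping 1 s between tries.

    Returns the 1-based attempt number that succeeded.
    """
    for attempt in range(1, attempts + 1):
        if check_fn():
            return attempt
        if attempt < attempts:
            sleep(1)
    raise RuntimeError('connection not active after %d attempts' % attempts)

# A probe that succeeds on the third try; sleeping is stubbed out:
results = iter([False, False, True])
print(verify_with_retries(lambda: next(results), sleep=lambda s: None))  # 3
```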
from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth opts = [ cfg.URIOpt('url', schemes=('http', 'https'), deprecated_for_removal=True, deprecated_reason=_('Use [cinder]/endpoint_override option ' 'to set a specific cinder API URL to ' 'connect to.'), help=_('URL for connecting to cinder. If set, the value must ' 'start with either http:// or https://.')), cfg.IntOpt('retries', default=3, help=_('Client retries in the case of a failed request ' 'connection.')), cfg.IntOpt('action_retries', default=3, help=_('Number of retries in the case of a failed ' 'action (currently only used when detaching ' 'volumes).')), cfg.IntOpt('action_retry_interval', default=5, help=_('Retry interval in seconds in the case of a failed ' 'action (only specific actions are retried).')), ] # NOTE(pas-ha) cinder V3 which ironic requires is registered as volumev3 # service type ATM def register_opts(conf): conf.register_opts(opts, group='cinder') auth.register_auth_opts(conf, 'cinder', service_type='volumev3') def list_opts(): return auth.add_auth_opts(opts, service_type='volumev3') ironic-15.0.0/ironic/conf/conductor.py0000664000175000017500000003543413652514273017705 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2013 Hewlett-Packard Development Company, L.P. # Copyright 2013 International Business Machines Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.IntOpt('workers_pool_size', default=100, min=3, help=_('The size of the workers greenthread pool. ' 'Note that 2 threads will be reserved by the conductor ' 'itself for handling heart beats and periodic tasks. ' 'On top of that, `sync_power_state_workers` will take ' 'up to 7 green threads with the default value of 8.')), cfg.IntOpt('heartbeat_interval', default=10, help=_('Seconds between conductor heart beats.')), cfg.URIOpt('api_url', schemes=('http', 'https'), deprecated_for_removal=True, deprecated_reason=_("Use [service_catalog]endpoint_override " "option instead if required to use " "a specific ironic api address, " "for example in noauth mode."), help=_('URL of Ironic API service. If not set ironic can ' 'get the current value from the keystone service ' 'catalog. If set, the value must start with either ' 'http:// or https://.')), cfg.IntOpt('heartbeat_timeout', default=60, # We're using timedelta which can overflow if somebody sets this # too high, so limit to a sane value of 10 years. max=315576000, help=_('Maximum time (in seconds) since the last check-in ' 'of a conductor. A conductor is considered inactive ' 'when this time has been exceeded.')), cfg.IntOpt('sync_power_state_interval', default=60, help=_('Interval between syncing the node power state to the ' 'database, in seconds. Set to 0 to disable syncing.')), cfg.IntOpt('check_provision_state_interval', default=60, min=0, help=_('Interval between checks of provision timeouts, ' 'in seconds. Set to 0 to disable checks.')), cfg.IntOpt('check_rescue_state_interval', default=60, min=1, help=_('Interval (seconds) between checks of rescue ' 'timeouts.')), cfg.IntOpt('check_allocations_interval', default=60, min=0, help=_('Interval between checks of orphaned allocations, ' 'in seconds. 
Set to 0 to disable checks.')), cfg.IntOpt('deploy_callback_timeout', default=1800, help=_('Timeout (seconds) to wait for a callback from ' 'a deploy ramdisk. Set to 0 to disable timeout.')), cfg.BoolOpt('force_power_state_during_sync', default=True, help=_('During sync_power_state, should the hardware power ' 'state be set to the state recorded in the database ' '(True) or should the database be updated based on ' 'the hardware state (False).')), cfg.IntOpt('power_state_sync_max_retries', default=3, help=_('During sync_power_state failures, limit the ' 'number of times Ironic should try syncing the ' 'hardware node power state with the node power state ' 'in DB')), cfg.IntOpt('sync_power_state_workers', default=8, min=1, help=_('The maximum number of worker threads that can be ' 'started simultaneously to sync nodes power states from ' 'the periodic task.')), cfg.IntOpt('periodic_max_workers', default=8, help=_('Maximum number of worker threads that can be started ' 'simultaneously by a periodic task. Should be less ' 'than RPC thread pool size.')), cfg.IntOpt('node_locked_retry_attempts', default=3, help=_('Number of attempts to grab a node lock.')), cfg.IntOpt('node_locked_retry_interval', default=1, help=_('Seconds to sleep between node lock attempts.')), cfg.BoolOpt('send_sensor_data', default=False, help=_('Enable sending sensor data message via the ' 'notification bus')), cfg.IntOpt('send_sensor_data_interval', default=600, min=1, help=_('Seconds between conductor sending sensor data message ' 'to ceilometer via the notification bus.')), cfg.IntOpt('send_sensor_data_workers', default=4, min=1, help=_('The maximum number of workers that can be started ' 'simultaneously for send data from sensors periodic ' 'task.')), cfg.IntOpt('send_sensor_data_wait_timeout', default=300, help=_('The time in seconds to wait for send sensors data ' 'periodic task to be finished before allowing periodic ' 'call to happen again. 
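The `power_state_sync_max_retries` option caps how many consecutive sync failures a node may accumulate; the conductor manager tracks these counts per node (its `power_state_sync_count` is a `collections.defaultdict(int)`). A sketch of that bookkeeping under the assumption that exhausting the retries moves the node into maintenance — the helper name and return convention here are illustrative, not ironic's actual internals:

```python
import collections

MAX_RETRIES = 3  # mirrors [conductor]power_state_sync_max_retries

sync_failures = collections.defaultdict(int)

def record_sync_result(node_uuid, ok):
    """Track consecutive sync failures for a node; return True once the
    retry budget is exhausted and the node should enter maintenance."""
    if ok:
        # A successful sync resets the failure streak.
        sync_failures.pop(node_uuid, None)
        return False
    sync_failures[node_uuid] += 1
    return sync_failures[node_uuid] > MAX_RETRIES
```

A node that fails three syncs in a row is still within budget; the fourth consecutive failure trips the maintenance threshold, and any success in between clears the count.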
Should be less than ' 'send_sensor_data_interval value.')), cfg.ListOpt('send_sensor_data_types', default=['ALL'], help=_('List of comma separated meter types which need to be' ' sent to Ceilometer. The default value, "ALL", is a ' 'special value meaning send all the sensor data.')), cfg.BoolOpt('send_sensor_data_for_undeployed_nodes', default=False, help=_('The default for sensor data collection is to only ' 'collect data for machines that are deployed, however ' 'operators may desire to know if there are failures ' 'in hardware that is not presently in use. ' 'When set to true, the conductor will collect sensor ' 'information from all nodes when sensor data ' 'collection is enabled via the send_sensor_data ' 'setting.')), cfg.IntOpt('sync_local_state_interval', default=180, help=_('When conductors join or leave the cluster, existing ' 'conductors may need to update any persistent ' 'local state as nodes are moved around the cluster. ' 'This option controls how often, in seconds, each ' 'conductor will check for nodes that it should ' '"take over". Set it to 0 (or a negative value) to ' 'disable the check entirely.')), cfg.StrOpt('configdrive_swift_container', default='ironic_configdrive_container', help=_('Name of the Swift container to store config drive ' 'data. Used when configdrive_use_object_store is ' 'True.')), cfg.IntOpt('configdrive_swift_temp_url_duration', min=60, help=_('The timeout (in seconds) after which a configdrive ' 'temporary URL becomes invalid. Defaults to ' 'deploy_callback_timeout if it is set, otherwise to ' '1800 seconds. Used when ' 'configdrive_use_object_store is True.')), cfg.IntOpt('inspect_wait_timeout', default=1800, help=_('Timeout (seconds) for waiting for node inspection. ' '0 - unlimited.')), cfg.BoolOpt('automated_clean', default=True, help=_('Enables or disables automated cleaning. 
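The `send_sensor_data_types` option defaults to `['ALL']`, a special value meaning no filtering. A sketch of the filtering semantics this implies — the sensor payload shape and field names below are hypothetical, not the exact structure ironic emits:

```python
def filter_sensor_data(sensors, allowed_types):
    """Keep only sensor groups whose meter type is listed in
    ``allowed_types``; the special value 'ALL' disables filtering."""
    allowed = {t.lower() for t in allowed_types}
    if 'all' in allowed:
        return dict(sensors)
    return {k: v for k, v in sensors.items() if k.lower() in allowed}

# Hypothetical sensor payload keyed by meter type.
sample = {'Temperature': {'Sensor1': '25 C'},
          'Fan': {'Fan1': '4000 RPM'}}
```

Configuring `send_sensor_data_types = temperature` would then forward only the temperature group to the notification bus, while the default passes everything through.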
Automated ' 'cleaning is a configurable set of steps, ' 'such as erasing disk drives, that are performed on ' 'the node to ensure it is in a baseline state and ' 'ready to be deployed to. This is ' 'done after instance deletion as well as during ' 'the transition from a "manageable" to "available" ' 'state. When enabled, the particular steps ' 'performed to clean a node depend on which driver ' 'that node is managed by; see the individual ' 'driver\'s documentation for details. ' 'NOTE: The introduction of the cleaning operation ' 'causes instance deletion to take significantly ' 'longer. In an environment where all tenants are ' 'trusted (eg, because there is only one tenant), ' 'this option could be safely disabled.')), cfg.BoolOpt('allow_provisioning_in_maintenance', default=True, mutable=True, help=_('Whether to allow nodes to enter or undergo deploy or ' 'cleaning when in maintenance mode. If this option is ' 'set to False, and a node enters maintenance during ' 'deploy or cleaning, the process will be aborted ' 'after the next heartbeat. Automated cleaning or ' 'making a node available will also fail. If True ' '(the default), the process will begin and will pause ' 'after the node starts heartbeating. Moving it from ' 'maintenance will make the process continue.')), cfg.IntOpt('clean_callback_timeout', default=1800, help=_('Timeout (seconds) to wait for a callback from the ' 'ramdisk doing the cleaning. If the timeout is reached ' 'the node will be put in the "clean failed" provision ' 'state. Set to 0 to disable timeout.')), cfg.IntOpt('rescue_callback_timeout', default=1800, min=0, help=_('Timeout (seconds) to wait for a callback from the ' 'rescue ramdisk. If the timeout is reached the node ' 'will be put in the "rescue failed" provision state. ' 'Set to 0 to disable timeout.')), cfg.IntOpt('soft_power_off_timeout', default=600, min=1, help=_('Timeout (in seconds) of soft reboot and soft power ' 'off operation. 
This value always has to be positive.')), cfg.IntOpt('power_state_change_timeout', min=2, default=60, help=_('Number of seconds to wait for power operations to ' 'complete, i.e., so that a baremetal node is in the ' 'desired power state. If timed out, the power operation ' 'is considered a failure.')), cfg.IntOpt('power_failure_recovery_interval', min=0, default=300, help=_('Interval (in seconds) between checking the power ' 'state for nodes previously put into maintenance mode ' 'due to power synchronization failure. A node is ' 'automatically moved out of maintenance mode once its ' 'power state is retrieved successfully. Set to 0 to ' 'disable this check.')), cfg.StrOpt('conductor_group', default='', help=_('Name of the conductor group to join. Can be up to ' '255 characters and is case insensitive. This ' 'conductor will only manage nodes with a matching ' '"conductor_group" field set on the node.')), cfg.BoolOpt('allow_deleting_available_nodes', default=True, mutable=True, help=_('Allow deleting nodes which are in state ' '\'available\'. 
Defaults to True.')), cfg.BoolOpt('enable_mdns', default=False, help=_('Whether to enable publishing the baremetal API ' 'endpoint via multicast DNS.')), cfg.StrOpt('deploy_kernel', mutable=True, help=_('Glance ID, http:// or file:// URL of the kernel of ' 'the default deploy image.')), cfg.StrOpt('deploy_ramdisk', mutable=True, help=_('Glance ID, http:// or file:// URL of the initramfs of ' 'the default deploy image.')), cfg.StrOpt('rescue_kernel', mutable=True, help=_('Glance ID, http:// or file:// URL of the kernel of ' 'the default rescue image.')), cfg.StrOpt('rescue_ramdisk', mutable=True, help=_('Glance ID, http:// or file:// URL of the initramfs of ' 'the default rescue image.')), cfg.StrOpt('rescue_password_hash_algorithm', default='sha256', choices=['sha256', 'sha512'], help=_('Password hash algorithm to be used for the rescue ' 'password.')), cfg.BoolOpt('require_rescue_password_hashed', # TODO(TheJulia): Change this to True in Victoria. default=False, help=_('Option to cause the conductor to not fallback to ' 'an un-hashed version of the rescue password, ' 'permitting rescue with older ironic-python-agent ' 'ramdisks.')), cfg.StrOpt('bootloader', mutable=True, help=_('Glance ID, http:// or file:// URL of the EFI system ' 'partition image containing EFI boot loader. This image ' 'will be used by ironic when building UEFI-bootable ISO ' 'out of kernel and ramdisk. Required for UEFI boot from ' 'partition images.')), ] def register_opts(conf): conf.register_opts(opts, group='conductor') ironic-15.0.0/ironic/conf/ibmc.py0000664000175000017500000000212113652514273016602 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
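All of the options registered above land in the `[conductor]` group of `ironic.conf`. An illustrative fragment showing a few of them together — the values are examples for readability, not tuning recommendations, and the image paths are hypothetical:

```ini
[conductor]
workers_pool_size = 100
sync_power_state_interval = 60
power_state_sync_max_retries = 3
automated_clean = true
conductor_group = rack1
deploy_kernel = file:///httpboot/deploy.kernel
deploy_ramdisk = file:///httpboot/deploy.ramdisk
```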
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Version 1.0.0 from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.IntOpt('connection_attempts', min=1, default=5, help=_('Maximum number of attempts to try to connect ' 'to iBMC')), cfg.IntOpt('connection_retry_interval', min=1, default=4, help=_('Number of seconds to wait between attempts to ' 'connect to iBMC')) ] def register_opts(conf): conf.register_opts(opts, group='ibmc') ironic-15.0.0/ironic/conf/metrics_statsd.py0000664000175000017500000000234213652514273020725 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 Rackspace, Inc. # Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('agent_statsd_host', default='localhost', help=_('Host for the agent ramdisk to use with the statsd ' 'backend. 
This must be accessible from networks the ' 'agent is booted on.')), cfg.PortOpt('agent_statsd_port', default=8125, help=_('Port for the agent ramdisk to use with the statsd ' 'backend.')), ] def register_opts(conf): conf.register_opts(opts, group='metrics_statsd') ironic-15.0.0/ironic/conf/swift.py0000664000175000017500000000223313652514273017030 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ from ironic.conf import auth opts = [ cfg.IntOpt('swift_max_retries', default=2, help=_('Maximum number of times to retry a Swift request, ' 'before failing.')) ] def register_opts(conf): conf.register_opts(opts, group='swift') auth.register_auth_opts(conf, 'swift', service_type='object-store') def list_opts(): return auth.add_auth_opts(opts, service_type='object-store') ironic-15.0.0/ironic/conf/audit.py0000664000175000017500000000277513652514273017015 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.BoolOpt('enabled', default=False, help=_('Enable auditing of API requests' ' (for ironic-api service).')), cfg.StrOpt('audit_map_file', default='/etc/ironic/api_audit_map.conf', help=_('Path to audit map file for ironic-api service. ' 'Used only when API audit is enabled.')), cfg.StrOpt('ignore_req_list', default='', help=_('Comma separated list of Ironic REST API HTTP methods ' 'to be ignored during audit logging. For example: ' 'auditing will not be done on any GET or POST ' 'requests if this is set to "GET,POST". It is used ' 'only when API audit is enabled.')), ] def register_opts(conf): conf.register_opts(opts, group='audit') ironic-15.0.0/ironic/conf/dhcp.py0000664000175000017500000000173113652514273016614 0ustar zuulzuul00000000000000# Copyright 2016 Intel Corporation # Copyright 2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('dhcp_provider', default='neutron', help=_('DHCP provider to use. 
"neutron" uses Neutron, and ' '"none" uses a no-op provider.')), ] def register_opts(conf): conf.register_opts(opts, group='dhcp') ironic-15.0.0/ironic/conf/__init__.py0000664000175000017500000000471313652514273017440 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.conf import agent from ironic.conf import ansible from ironic.conf import api from ironic.conf import audit from ironic.conf import cinder from ironic.conf import conductor from ironic.conf import console from ironic.conf import database from ironic.conf import default from ironic.conf import deploy from ironic.conf import dhcp from ironic.conf import drac from ironic.conf import glance from ironic.conf import healthcheck from ironic.conf import ibmc from ironic.conf import ilo from ironic.conf import inspector from ironic.conf import ipmi from ironic.conf import irmc from ironic.conf import iscsi from ironic.conf import json_rpc from ironic.conf import metrics from ironic.conf import metrics_statsd from ironic.conf import neutron from ironic.conf import nova from ironic.conf import pxe from ironic.conf import redfish from ironic.conf import service_catalog from ironic.conf import snmp from ironic.conf import swift from ironic.conf import xclarity CONF = cfg.CONF agent.register_opts(CONF) ansible.register_opts(CONF) api.register_opts(CONF) audit.register_opts(CONF) cinder.register_opts(CONF) 
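The `dhcp_provider` option selects between the "neutron" provider and "none", a no-op provider. A sketch of the name-to-class lookup this implies — the class and function names are hypothetical stand-ins, since ironic really resolves providers via entrypoints rather than a hard-coded dict:

```python
class NoneDHCPApi:
    """No-op provider: DHCP is managed outside ironic."""
    def update_dhcp(self, task, options):
        pass

class NeutronDHCPApi:
    """Placeholder for the provider that talks to Neutron."""
    def update_dhcp(self, task, options):
        raise NotImplementedError('calls Neutron in real ironic')

_PROVIDERS = {'none': NoneDHCPApi, 'neutron': NeutronDHCPApi}

def get_dhcp_provider(name='neutron'):
    """Instantiate the provider named by [dhcp]dhcp_provider."""
    try:
        return _PROVIDERS[name]()
    except KeyError:
        raise ValueError('Unknown dhcp_provider: %s' % name)
```

The default mirrors the option's default of `neutron`; an unknown name fails loudly at lookup time rather than at first use.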
conductor.register_opts(CONF) console.register_opts(CONF) database.register_opts(CONF) default.register_opts(CONF) deploy.register_opts(CONF) drac.register_opts(CONF) dhcp.register_opts(CONF) glance.register_opts(CONF) healthcheck.register_opts(CONF) ibmc.register_opts(CONF) ilo.register_opts(CONF) inspector.register_opts(CONF) ipmi.register_opts(CONF) irmc.register_opts(CONF) iscsi.register_opts(CONF) json_rpc.register_opts(CONF) metrics.register_opts(CONF) metrics_statsd.register_opts(CONF) neutron.register_opts(CONF) nova.register_opts(CONF) pxe.register_opts(CONF) redfish.register_opts(CONF) service_catalog.register_opts(CONF) snmp.register_opts(CONF) swift.register_opts(CONF) xclarity.register_opts(CONF) ironic-15.0.0/ironic/conf/xclarity.py0000664000175000017500000000353013652514273017534 0ustar zuulzuul00000000000000# Copyright 2017 LENOVO Development Company, LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.StrOpt('manager_ip', help=_('IP address of the XClarity Controller. ' 'Configuration here is deprecated and will be removed ' 'in the Stein release. Please update the driver_info ' 'field to use "xclarity_manager_ip" instead')), cfg.StrOpt('username', help=_('Username for the XClarity Controller. ' 'Configuration here is deprecated and will be removed ' 'in the Stein release. 
Please update the driver_info ' 'field to use "xclarity_username" instead')), cfg.StrOpt('password', secret=True, help=_('Password for XClarity Controller username. ' 'Configuration here is deprecated and will be removed ' 'in the Stein release. Please update the driver_info ' 'field to use "xclarity_password" instead')), cfg.PortOpt('port', default=443, help=_('Port to be used for XClarity Controller ' 'connection.')), ] def register_opts(conf): conf.register_opts(opts, group='xclarity') ironic-15.0.0/ironic/conf/healthcheck.py0000664000175000017500000000213613652514273020141 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common.i18n import _ opts = [ cfg.BoolOpt('enabled', default=False, help=_('Enable the health check endpoint at /healthcheck. ' 'Note that this is unauthenticated. More information ' 'is available at ' 'https://docs.openstack.org/oslo.middleware/latest/' 'reference/healthcheck_plugins.html.')), ] def register_opts(conf): conf.register_opts(opts, group='healthcheck') ironic-15.0.0/ironic/conductor/0000775000175000017500000000000013652514443016374 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/conductor/manager.py0000664000175000017500000054035113652514273020371 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # Copyright 2013 International Business Machines Corporation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Conduct all activity related to bare-metal deployments. A single instance of :py:class:`ironic.conductor.manager.ConductorManager` is created within the *ironic-conductor* process, and is responsible for performing all actions on bare metal resources (Chassis, Nodes, and Ports). Commands are received via RPCs. The conductor service also performs periodic tasks, eg. to monitor the status of active deployments. Drivers are loaded via entrypoints by the :py:class:`ironic.common.driver_factory` class. Each driver is instantiated only once, when the ConductorManager service starts. In this way, a single ConductorManager may use multiple drivers, and manage heterogeneous hardware. When multiple :py:class:`ConductorManager` are run on different hosts, they are all active and cooperatively manage all nodes in the deployment. Nodes are locked by each conductor when performing actions which change the state of that node; these locks are represented by the :py:class:`ironic.conductor.task_manager.TaskManager` class. A `tooz.hashring.HashRing `_ is used to distribute nodes across the set of active conductors which support each node's driver. Rebalancing this ring can trigger various actions by each conductor, such as building or tearing down the TFTP environment for a node, notifying Neutron of a change, etc. 
""" import collections import datetime import queue import eventlet from futurist import periodics from futurist import waiters from ironic_lib import metrics_utils from oslo_log import log import oslo_messaging as messaging from oslo_utils import excutils from oslo_utils import uuidutils from ironic.common import driver_factory from ironic.common import exception from ironic.common import faults from ironic.common.i18n import _ from ironic.common import images from ironic.common import network from ironic.common import nova from ironic.common import states from ironic.conductor import allocations from ironic.conductor import base_manager from ironic.conductor import cleaning from ironic.conductor import deployments from ironic.conductor import notification_utils as notify_utils from ironic.conductor import steps as conductor_steps from ironic.conductor import task_manager from ironic.conductor import utils from ironic.conf import CONF from ironic.drivers import base as drivers_base from ironic import objects from ironic.objects import base as objects_base from ironic.objects import fields MANAGER_TOPIC = 'ironic.conductor_manager' LOG = log.getLogger(__name__) METRICS = metrics_utils.get_metrics_logger(__name__) SYNC_EXCLUDED_STATES = (states.DEPLOYWAIT, states.CLEANWAIT, states.ENROLL) class ConductorManager(base_manager.BaseConductorManager): """Ironic Conductor manager main class.""" # NOTE(rloo): This must be in sync with rpcapi.ConductorAPI's. 
# NOTE(pas-ha): This also must be in sync with # ironic.common.release_mappings.RELEASE_MAPPING['master'] RPC_API_VERSION = '1.50' target = messaging.Target(version=RPC_API_VERSION) def __init__(self, host, topic): super(ConductorManager, self).__init__(host, topic) self.power_state_sync_count = collections.defaultdict(int) @METRICS.timer('ConductorManager.create_node') # No need to add these since they are subclasses of InvalidParameterValue: # InterfaceNotFoundInEntrypoint # IncompatibleInterface, # NoValidDefaultForInterface @messaging.expected_exceptions(exception.InvalidParameterValue, exception.DriverNotFound) def create_node(self, context, node_obj): """Create a node in database. :param context: an admin context :param node_obj: a created (but not saved to the database) node object. :returns: created node object. :raises: InterfaceNotFoundInEntrypoint if validation fails for any dynamic interfaces (e.g. network_interface). :raises: IncompatibleInterface if one or more of the requested interfaces are not compatible with the hardware type. :raises: NoValidDefaultForInterface if no default can be calculated for some interfaces, and explicit values must be provided. :raises: InvalidParameterValue if some fields fail validation. :raises: DriverNotFound if the driver or hardware type is not found. 
""" LOG.debug("RPC create_node called for node %s.", node_obj.uuid) driver_factory.check_and_update_node_interfaces(node_obj) node_obj.create() return node_obj def _check_update_protected(self, node_obj, delta): if 'protected' in delta: if not node_obj.protected: node_obj.protected_reason = None elif node_obj.provision_state not in (states.ACTIVE, states.RESCUE): raise exception.InvalidState( "Node %(node)s can only be made protected in provision " "states 'active' or 'rescue', the current state is " "'%(state)s'" % {'node': node_obj.uuid, 'state': node_obj.provision_state}) if ('protected_reason' in delta and node_obj.protected_reason and not node_obj.protected): raise exception.InvalidParameterValue( "The protected_reason field can only be set when " "protected is True") def _check_update_retired(self, node_obj, delta): if 'retired' in delta: if not node_obj.retired: node_obj.retired_reason = None elif node_obj.provision_state == states.AVAILABLE: raise exception.InvalidState( "Node %(node)s can not have 'retired' set in provision " "state 'available', the current state is '%(state)s'" % {'node': node_obj.uuid, 'state': node_obj.provision_state}) if ('retired_reason' in delta and node_obj.retired_reason and not node_obj.retired): raise exception.InvalidParameterValue( "The retired_reason field can only be set when " "retired is True") @METRICS.timer('ConductorManager.update_node') # No need to add these since they are subclasses of InvalidParameterValue: # InterfaceNotFoundInEntrypoint # IncompatibleInterface, # NoValidDefaultForInterface @messaging.expected_exceptions(exception.InvalidParameterValue, exception.NodeLocked, exception.InvalidState, exception.DriverNotFound) def update_node(self, context, node_obj, reset_interfaces=False): """Update a node with the supplied data. This method is the main "hub" for PUT and PATCH requests in the API. It ensures that the requested change is safe to perform, validates the parameters with the node's driver, if necessary. 
:param context: an admin context :param node_obj: a changed (but not saved) node object. :param reset_interfaces: whether to reset hardware interfaces to their defaults. :raises: NoValidDefaultForInterface if no default can be calculated for some interfaces, and explicit values must be provided. """ node_id = node_obj.uuid LOG.debug("RPC update_node called for node %s.", node_id) # NOTE(jroll) clear maintenance_reason if node.update sets # maintenance to False for backwards compatibility, for tools # not using the maintenance endpoint. # NOTE(kaifeng) also clear fault when out of maintenance. delta = node_obj.obj_what_changed() if 'maintenance' in delta and not node_obj.maintenance: node_obj.maintenance_reason = None node_obj.fault = None self._check_update_protected(node_obj, delta) self._check_update_retired(node_obj, delta) # TODO(dtantsur): reconsider allowing changing some (but not all) # interfaces for active nodes in the future. # NOTE(kaifeng): INSPECTING is allowed to keep backwards # compatibility, starting from API 1.39 node update is disallowed # in this state. allowed_update_states = [states.ENROLL, states.INSPECTING, states.INSPECTWAIT, states.MANAGEABLE, states.AVAILABLE] action = _("Node %(node)s can not have %(field)s " "updated unless it is in one of allowed " "(%(allowed)s) states or in maintenance mode.") updating_driver = 'driver' in delta for iface in drivers_base.ALL_INTERFACES: interface_field = '%s_interface' % iface if interface_field not in delta: if updating_driver and reset_interfaces: setattr(node_obj, interface_field, None) continue if not (node_obj.provision_state in allowed_update_states or node_obj.maintenance): raise exception.InvalidState( action % {'node': node_obj.uuid, 'allowed': ', '.join(allowed_update_states), 'field': interface_field}) driver_factory.check_and_update_node_interfaces(node_obj) # NOTE(dtantsur): if we're updating the driver from an invalid value, # loading the old driver may be impossible. 
Since we only need to # update the node record in the database, skip loading the driver # completely. with task_manager.acquire(context, node_id, shared=False, load_driver=False, purpose='node update') as task: # Prevent instance_uuid overwriting if ('instance_uuid' in delta and node_obj.instance_uuid and task.node.instance_uuid): raise exception.NodeAssociated( node=node_id, instance=task.node.instance_uuid) # NOTE(dtantsur): if the resource class is changed for an active # instance, nova will not update its internal record. That will # result in the new resource class exposed on the node as available # for consumption, and nova may try to schedule on this node again. if ('resource_class' in delta and task.node.resource_class and task.node.provision_state not in allowed_update_states): action = _("Node %(node)s can not have resource_class " "updated unless it is in one of allowed " "(%(allowed)s) states.") raise exception.InvalidState( action % {'node': node_obj.uuid, 'allowed': ', '.join(allowed_update_states)}) if ('instance_uuid' in delta and task.node.allocation_id and not node_obj.instance_uuid): if (not task.node.maintenance and task.node.provision_state not in allowed_update_states): action = _("Node %(node)s with an allocation can not have " "instance_uuid removed unless it is in one of " "allowed (%(allowed)s) states or in " "maintenance mode.") raise exception.InvalidState( action % {'node': node_obj.uuid, 'allowed': ', '.join(allowed_update_states)}) try: allocation = objects.Allocation.get_by_id( context, task.node.allocation_id) allocation.destroy() except exception.AllocationNotFound: pass node_obj.save() return node_obj @METRICS.timer('ConductorManager.change_node_power_state') @messaging.expected_exceptions(exception.InvalidParameterValue, exception.NoFreeConductorWorker, exception.NodeLocked) def change_node_power_state(self, context, node_id, new_state, timeout=None): """RPC method to encapsulate changes to a node's state. 
Perform actions such as power on, power off. The validation is performed synchronously, and if successful, the power action is updated in the background (asynchronously). Once the power action is finished and successful, it updates the power_state for the node with the new power state. :param context: an admin context. :param node_id: the id or uuid of a node. :param new_state: the desired power state of the node. :param timeout: timeout (in seconds) positive integer (> 0) for any power state. ``None`` indicates to use default timeout. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: InvalidParameterValue :raises: MissingParameterValue """ LOG.debug("RPC change_node_power_state called for node %(node)s. " "The desired new state is %(state)s.", {'node': node_id, 'state': new_state}) with task_manager.acquire(context, node_id, shared=False, purpose='changing node power state') as task: task.driver.power.validate(task) if (new_state not in task.driver.power.get_supported_power_states(task)): # FIXME(naohirot): # After driver composition, we should print power interface # name here instead of driver. raise exception.InvalidParameterValue( _('The driver %(driver)s does not support the power state,' ' %(state)s') % {'driver': task.node.driver, 'state': new_state}) if new_state in (states.SOFT_REBOOT, states.SOFT_POWER_OFF): power_timeout = (timeout or CONF.conductor.soft_power_off_timeout) else: power_timeout = timeout # Set the target_power_state and clear any last_error, since we're # starting a new operation. This will expose to other processes # and clients that work is in progress. 
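As the docstring above describes, `change_node_power_state` validates synchronously, records `target_power_state` so other clients can see work in progress, and only then hands the actual power action to a background worker. A condensed sketch of that ordering — the `Node` class, state strings, and `run_async` hook are simplified stand-ins for ironic's task manager and worker pool:

```python
SUPPORTED = {'power on', 'power off', 'rebooting'}

class Node:
    def __init__(self):
        self.power_state = 'power off'
        self.target_power_state = None
        self.last_error = None

def change_power_state(node, new_state, run_async):
    # 1) Synchronous validation: fail fast, before touching the node.
    if new_state not in SUPPORTED:
        raise ValueError('unsupported power state: %s' % new_state)
    # 2) Expose work-in-progress and clear any stale error.
    node.target_power_state = new_state
    node.last_error = None
    # 3) The real power action runs asynchronously in a worker.
    def worker():
        node.power_state = new_state
        node.target_power_state = None
    run_async(worker)

node = Node()
# For illustration, "spawn" the worker by running it inline.
change_power_state(node, 'power on', run_async=lambda fn: fn())
```

The key property is that an invalid request never mutates the node, while a valid one is visible as in-progress (`target_power_state` set) until the worker finishes.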
if new_state in (states.POWER_ON, states.REBOOT, states.SOFT_REBOOT): task.node.target_power_state = states.POWER_ON else: task.node.target_power_state = states.POWER_OFF task.node.last_error = None task.node.save() task.set_spawn_error_hook(utils.power_state_error_handler, task.node, task.node.power_state) task.spawn_after(self._spawn_worker, utils.node_power_action, task, new_state, timeout=power_timeout) @METRICS.timer('ConductorManager.vendor_passthru') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.InvalidParameterValue, exception.UnsupportedDriverExtension) def vendor_passthru(self, context, node_id, driver_method, http_method, info): """RPC method to encapsulate vendor action. Synchronously validate driver specific info, and if successful invoke the vendor method. If the method mode is 'async' the conductor will start background worker to perform vendor action. :param context: an admin context. :param node_id: the id or uuid of a node. :param driver_method: the name of the vendor method. :param http_method: the HTTP method used for the request. :param info: vendor method args. :raises: InvalidParameterValue if supplied info is not valid. :raises: MissingParameterValue if missing supplied info :raises: UnsupportedDriverExtension if current driver does not have vendor interface or method is unsupported. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: NodeLocked if the vendor passthru method requires an exclusive lock but the node is locked by another conductor :returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). 
""" LOG.debug("RPC vendor_passthru called for node %s.", node_id) # NOTE(mariojv): Not all vendor passthru methods require an exclusive # lock on a node, so we acquire a shared lock initially. If a method # requires an exclusive lock, we'll acquire one after checking # vendor_opts before starting validation. with task_manager.acquire(context, node_id, shared=True, purpose='calling vendor passthru') as task: vendor_iface = task.driver.vendor try: vendor_opts = vendor_iface.vendor_routes[driver_method] vendor_func = vendor_opts['func'] except KeyError: raise exception.InvalidParameterValue( _('No handler for method %s') % driver_method) http_method = http_method.upper() if http_method not in vendor_opts['http_methods']: raise exception.InvalidParameterValue( _('The method %(method)s does not support HTTP %(http)s') % {'method': driver_method, 'http': http_method}) # Change shared lock to exclusive if a vendor method requires # it. Vendor methods default to requiring an exclusive lock. if vendor_opts['require_exclusive_lock']: task.upgrade_lock() vendor_iface.validate(task, method=driver_method, http_method=http_method, **info) # Inform the vendor method which HTTP method it was invoked with info['http_method'] = http_method # Invoke the vendor method accordingly with the mode is_async = vendor_opts['async'] ret = None if is_async: task.spawn_after(self._spawn_worker, vendor_func, task, **info) else: ret = vendor_func(task, **info) return {'return': ret, 'async': is_async, 'attach': vendor_opts['attach']} @METRICS.timer('ConductorManager.driver_vendor_passthru') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.InvalidParameterValue, exception.UnsupportedDriverExtension, exception.DriverNotFound, exception.NoValidDefaultForInterface, exception.InterfaceNotFoundInEntrypoint) def driver_vendor_passthru(self, context, driver_name, driver_method, http_method, info): """Handle top-level vendor actions. 
RPC method which handles driver-level vendor passthru calls. These calls don't require a node UUID and are executed on a random conductor with the specified driver. If the method mode is async the conductor will start background worker to perform vendor action. For dynamic drivers, the calculated default vendor interface is used. :param context: an admin context. :param driver_name: name of the driver or hardware type on which to call the method. :param driver_method: name of the vendor method, for use by the driver. :param http_method: the HTTP method used for the request. :param info: user-supplied data to pass through to the driver. :raises: MissingParameterValue if missing supplied info :raises: InvalidParameterValue if supplied info is not valid. :raises: UnsupportedDriverExtension if current driver does not have vendor interface, if the vendor interface does not implement driver-level vendor passthru or if the passthru method is unsupported. :raises: DriverNotFound if the supplied driver is not loaded. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: NoValidDefaultForInterface if no default interface implementation can be found for this driver's vendor interface. :raises: InterfaceNotFoundInEntrypoint if the default interface for a hardware type is invalid. :returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). """ # Any locking in a top-level vendor action will need to be done by the # implementation, as there is little we could reasonably lock on here. 
LOG.debug("RPC driver_vendor_passthru for driver %s.", driver_name) driver = driver_factory.get_hardware_type(driver_name) vendor = None vendor_name = driver_factory.default_interface( driver, 'vendor', driver_name=driver_name) vendor = driver_factory.get_interface(driver, 'vendor', vendor_name) try: vendor_opts = vendor.driver_routes[driver_method] vendor_func = vendor_opts['func'] except KeyError: raise exception.InvalidParameterValue( _('No handler for method %s') % driver_method) http_method = http_method.upper() if http_method not in vendor_opts['http_methods']: raise exception.InvalidParameterValue( _('The method %(method)s does not support HTTP %(http)s') % {'method': driver_method, 'http': http_method}) # Inform the vendor method which HTTP method it was invoked with info['http_method'] = http_method # Invoke the vendor method accordingly with the mode is_async = vendor_opts['async'] ret = None vendor.driver_validate(method=driver_method, **info) if is_async: self._spawn_worker(vendor_func, context, **info) else: ret = vendor_func(context, **info) return {'return': ret, 'async': is_async, 'attach': vendor_opts['attach']} @METRICS.timer('ConductorManager.get_node_vendor_passthru_methods') @messaging.expected_exceptions(exception.UnsupportedDriverExtension) def get_node_vendor_passthru_methods(self, context, node_id): """Retrieve information about vendor methods of the given node. :param context: an admin context. :param node_id: the id or uuid of a node. :returns: dictionary of <method name>:<method metadata> entries. 
""" LOG.debug("RPC get_node_vendor_passthru_methods called for node %s", node_id) lock_purpose = 'listing vendor passthru methods' with task_manager.acquire(context, node_id, shared=True, purpose=lock_purpose) as task: return get_vendor_passthru_metadata( task.driver.vendor.vendor_routes) @METRICS.timer('ConductorManager.get_driver_vendor_passthru_methods') @messaging.expected_exceptions(exception.UnsupportedDriverExtension, exception.DriverNotFound, exception.NoValidDefaultForInterface, exception.InterfaceNotFoundInEntrypoint) def get_driver_vendor_passthru_methods(self, context, driver_name): """Retrieve information about vendor methods of the given driver. For dynamic drivers, the default vendor interface is used. :param context: an admin context. :param driver_name: name of the driver or hardware_type :raises: UnsupportedDriverExtension if current driver does not have vendor interface. :raises: DriverNotFound if the supplied driver is not loaded. :raises: NoValidDefaultForInterface if no default interface implementation can be found for this driver's vendor interface. :raises: InterfaceNotFoundInEntrypoint if the default interface for a hardware type is invalid. :returns: dictionary of <method name>:<method metadata> entries. """ # Any locking in a top-level vendor action will need to be done by the # implementation, as there is little we could reasonably lock on here. 
LOG.debug("RPC get_driver_vendor_passthru_methods for driver %s", driver_name) driver = driver_factory.get_hardware_type(driver_name) vendor = None vendor_name = driver_factory.default_interface( driver, 'vendor', driver_name=driver_name) vendor = driver_factory.get_interface(driver, 'vendor', vendor_name) return get_vendor_passthru_metadata(vendor.driver_routes) @METRICS.timer('ConductorManager.do_node_rescue') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeInMaintenance, exception.NodeLocked, exception.InstanceRescueFailure, exception.InvalidStateRequested, exception.UnsupportedDriverExtension ) def do_node_rescue(self, context, node_id, rescue_password): """RPC method to rescue an existing node deployment. Validate driver specific information synchronously, and then spawn a background worker to rescue the node asynchronously. :param context: an admin context. :param node_id: the id or uuid of a node. :param rescue_password: string to be set as the password inside the rescue environment. :raises: InstanceRescueFailure if the node cannot be placed into rescue mode. :raises: InvalidStateRequested if the state transition is not supported or allowed. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: NodeLocked if the node is locked by another conductor. :raises: NodeInMaintenance if the node is in maintenance mode. :raises: UnsupportedDriverExtension if rescue interface is not supported by the driver. """ LOG.debug("RPC do_node_rescue called for node %s.", node_id) with task_manager.acquire(context, node_id, purpose='node rescue') as task: node = task.node # Record of any pre-existing agent_url should be removed. 
utils.remove_agent_url(node) if node.maintenance: raise exception.NodeInMaintenance(op=_('rescuing'), node=node.uuid) # driver validation may check rescue_password, so save it on the # node early i_info = node.instance_info i_info['rescue_password'] = rescue_password i_info['hashed_rescue_password'] = utils.hash_password( rescue_password) node.instance_info = i_info node.save() try: task.driver.power.validate(task) task.driver.rescue.validate(task) task.driver.network.validate(task) except (exception.InvalidParameterValue, exception.UnsupportedDriverExtension, exception.MissingParameterValue) as e: utils.remove_node_rescue_password(node, save=True) raise exception.InstanceRescueFailure( instance=node.instance_uuid, node=node.uuid, reason=_("Validation failed. Error: %s") % e) try: task.process_event( 'rescue', callback=self._spawn_worker, call_args=(self._do_node_rescue, task), err_handler=utils.spawn_rescue_error_handler) except exception.InvalidState: utils.remove_node_rescue_password(node, save=True) raise exception.InvalidStateRequested( action='rescue', node=node.uuid, state=node.provision_state) def _do_node_rescue(self, task): """Internal RPC method to rescue an existing node deployment.""" node = task.node def handle_failure(e, errmsg, log_func=LOG.error): utils.remove_node_rescue_password(node, save=False) node.last_error = errmsg % e task.process_event('fail') log_func('Error while performing rescue operation for node ' '%(node)s with instance %(instance)s: %(err)s', {'node': node.uuid, 'instance': node.instance_uuid, 'err': e}) try: next_state = task.driver.rescue.rescue(task) except exception.IronicException as e: with excutils.save_and_reraise_exception(): handle_failure(e, _('Failed to rescue: %s')) except Exception as e: with excutils.save_and_reraise_exception(): handle_failure(e, _('Failed to rescue. 
Exception: %s'), log_func=LOG.exception) if next_state == states.RESCUEWAIT: task.process_event('wait') elif next_state == states.RESCUE: task.process_event('done') else: error = (_("Driver returned unexpected state %s") % next_state) handle_failure(error, _('Failed to rescue: %s')) @METRICS.timer('ConductorManager.do_node_unrescue') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeInMaintenance, exception.NodeLocked, exception.InstanceUnrescueFailure, exception.InvalidStateRequested, exception.UnsupportedDriverExtension ) def do_node_unrescue(self, context, node_id): """RPC method to unrescue a node in rescue mode. Validate driver specific information synchronously, and then spawn a background worker to unrescue the node asynchronously. :param context: an admin context. :param node_id: the id or uuid of a node. :raises: InstanceUnrescueFailure if the node fails to be unrescued :raises: InvalidStateRequested if the state transition is not supported or allowed. :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: NodeLocked if the node is locked by another conductor. :raises: NodeInMaintenance if the node is in maintenance mode. :raises: UnsupportedDriverExtension if rescue interface is not supported by the driver. """ LOG.debug("RPC do_node_unrescue called for node %s.", node_id) with task_manager.acquire(context, node_id, purpose='node unrescue') as task: node = task.node # Record of any pre-existing agent_url should be removed, # Not that there should be. utils.remove_agent_url(node) if node.maintenance: raise exception.NodeInMaintenance(op=_('unrescuing'), node=node.uuid) try: task.driver.power.validate(task) except (exception.InvalidParameterValue, exception.MissingParameterValue) as e: raise exception.InstanceUnrescueFailure( instance=node.instance_uuid, node=node.uuid, reason=_("Validation failed. 
Error: %s") % e) try: task.process_event( 'unrescue', callback=self._spawn_worker, call_args=(self._do_node_unrescue, task), err_handler=utils.provisioning_error_handler) except exception.InvalidState: raise exception.InvalidStateRequested( action='unrescue', node=node.uuid, state=node.provision_state) def _do_node_unrescue(self, task): """Internal RPC method to unrescue a node in rescue mode.""" node = task.node def handle_failure(e, errmsg, log_func=LOG.error): node.last_error = errmsg % e task.process_event('fail') log_func('Error while performing unrescue operation for node ' '%(node)s with instance %(instance)s: %(err)s', {'node': node.uuid, 'instance': node.instance_uuid, 'err': e}) try: next_state = task.driver.rescue.unrescue(task) except exception.IronicException as e: with excutils.save_and_reraise_exception(): handle_failure(e, _('Failed to unrescue: %s')) except Exception as e: with excutils.save_and_reraise_exception(): handle_failure(e, _('Failed to unrescue. Exception: %s'), log_func=LOG.exception) if next_state == states.ACTIVE: task.process_event('done') else: error = (_("Driver returned unexpected state %s") % next_state) handle_failure(error, _('Failed to unrescue: %s')) @task_manager.require_exclusive_lock def _do_node_rescue_abort(self, task): """Internal method to abort an ongoing rescue operation. :param task: a TaskManager instance with an exclusive lock """ node = task.node try: task.driver.rescue.clean_up(task) except Exception as e: LOG.exception('Failed to clean up rescue for node %(node)s ' 'after aborting the operation. 
Error: %(err)s', {'node': node.uuid, 'err': e}) error_msg = _('Failed to clean up rescue after aborting ' 'the operation') node.refresh() node.last_error = error_msg node.maintenance = True node.maintenance_reason = error_msg node.fault = faults.RESCUE_ABORT_FAILURE node.save() return info_message = _('Rescue operation aborted for node %s.') % node.uuid last_error = _('By request, the rescue operation was aborted.') node.refresh() utils.remove_agent_url(node) node.last_error = last_error node.save() LOG.info(info_message) @METRICS.timer('ConductorManager.do_node_deploy') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.NodeInMaintenance, exception.InstanceDeployFailure, exception.InvalidStateRequested, exception.NodeProtected) def do_node_deploy(self, context, node_id, rebuild=False, configdrive=None): """RPC method to initiate deployment to a node. Initiate the deployment of a node. Validations are done synchronously and the actual deploy work is performed in background (asynchronously). :param context: an admin context. :param node_id: the id or uuid of a node. :param rebuild: True if this is a rebuild request. A rebuild will recreate the instance on the same node, overwriting all disk. The ephemeral partition, if it exists, can optionally be preserved. :param configdrive: Optional. A gzipped and base64 encoded configdrive. :raises: InstanceDeployFailure :raises: NodeInMaintenance if the node is in maintenance mode. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: InvalidStateRequested when the requested state is not a valid target from the current state. :raises: NodeProtected if the node is protected. """ LOG.debug("RPC do_node_deploy called for node %s.", node_id) event = 'rebuild' if rebuild else 'deploy' # NOTE(comstud): If the _sync_power_states() periodic task happens # to have locked this node, we'll fail to acquire the lock. 
The # client should perhaps retry in this case unless we decide we # want to add retries or extra synchronization here. with task_manager.acquire(context, node_id, shared=False, purpose='node deployment') as task: deployments.validate_node(task, event=event) deployments.start_deploy(task, self, configdrive, event=event) @METRICS.timer('ConductorManager.continue_node_deploy') def continue_node_deploy(self, context, node_id): """RPC method to continue deploying a node. This is useful for deploying tasks that are async. When they complete, they call back via RPC, a new worker and lock are set up, and deploying continues. This can also be used to resume deploying on take_over. :param context: an admin context. :param node_id: the ID or UUID of a node. :raises: InvalidStateRequested if the node is not in DEPLOYWAIT state :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node no longer appears in the database """ LOG.debug("RPC continue_node_deploy called for node %s.", node_id) with task_manager.acquire(context, node_id, shared=False, purpose='continue node deploying') as task: node = task.node # FIXME(rloo): This should be states.DEPLOYWAIT, but we're using # this temporarily to get control back to the conductor, to finish # the deployment. Once we split up the deployment into separate # deploy steps and after we've crossed a rolling-upgrade boundary, # we should be able to check for DEPLOYWAIT only. expected_states = [states.DEPLOYWAIT, states.DEPLOYING] if node.provision_state not in expected_states: raise exception.InvalidStateRequested(_( 'Cannot continue deploying on %(node)s. 
Node is in ' '%(state)s state; should be in one of %(deploy_state)s') % {'node': node.uuid, 'state': node.provision_state, 'deploy_state': ', '.join(expected_states)}) save_required = False info = node.driver_internal_info # Agent is now running, we're ready to validate the remaining steps if not info.get('steps_validated'): conductor_steps.validate_deploy_templates(task) conductor_steps.set_node_deployment_steps( task, reset_current=False) info['steps_validated'] = True save_required = True try: skip_current_step = info.pop('skip_current_deploy_step') except KeyError: skip_current_step = True else: save_required = True if info.pop('deployment_polling', None) is not None: save_required = True if save_required: node.driver_internal_info = info node.save() next_step_index = utils.get_node_next_deploy_steps( task, skip_current_step=skip_current_step) # TODO(rloo): When deprecation period is over and node is in # states.DEPLOYWAIT only, delete the check and always 'resume'. if node.provision_state == states.DEPLOYING: LOG.warning('Node %(node)s was found in the state %(state)s ' 'in the continue_node_deploy RPC call. This is ' 'deprecated, the driver must be updated to leave ' 'nodes in %(new)s state instead.', {'node': node.uuid, 'state': states.DEPLOYING, 'new': states.DEPLOYWAIT}) else: task.process_event('resume') task.set_spawn_error_hook(utils.spawn_deploying_error_handler, task.node) task.spawn_after( self._spawn_worker, deployments.do_next_deploy_step, task, next_step_index, self.conductor.id) @METRICS.timer('ConductorManager.do_node_tear_down') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.InstanceDeployFailure, exception.InvalidStateRequested, exception.NodeProtected) def do_node_tear_down(self, context, node_id): """RPC method to tear down an existing node deployment. Validate driver specific information synchronously, and then spawn a background worker to tear down the node asynchronously. 
:param context: an admin context. :param node_id: the id or uuid of a node. :raises: InstanceDeployFailure :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: InvalidStateRequested when the requested state is not a valid target from the current state. :raises: NodeProtected if the node is protected. """ LOG.debug("RPC do_node_tear_down called for node %s.", node_id) with task_manager.acquire(context, node_id, shared=False, purpose='node tear down') as task: # Record of any pre-existing agent_url should be removed. utils.remove_agent_url(task.node) if task.node.protected: raise exception.NodeProtected(node=task.node.uuid) try: # NOTE(ghe): Valid power driver values are needed to perform # a tear-down. Deploy info is useful to purge the cache but not # required for this method. task.driver.power.validate(task) except exception.InvalidParameterValue as e: raise exception.InstanceDeployFailure(_( "Failed to validate power driver interface. " "Can not delete instance. Error: %(msg)s") % {'msg': e}) try: task.process_event( 'delete', callback=self._spawn_worker, call_args=(self._do_node_tear_down, task, task.node.provision_state), err_handler=utils.provisioning_error_handler) except exception.InvalidState: raise exception.InvalidStateRequested( action='delete', node=task.node.uuid, state=task.node.provision_state) @task_manager.require_exclusive_lock def _do_node_tear_down(self, task, initial_state): """Internal RPC method to tear down an existing node deployment. :param task: a task from TaskManager. :param initial_state: The initial provision state from which node has moved into deleting state. """ node = task.node try: if (initial_state in (states.RESCUEWAIT, states.RESCUE, states.UNRESCUEFAIL, states.RESCUEFAIL)): # Perform rescue clean up. Rescue clean up will remove # rescuing network as well. task.driver.rescue.clean_up(task) # stop the console # do it in this thread since we're already out of the main # conductor thread. 
if node.console_enabled: notify_utils.emit_console_notification( task, 'console_stop', fields.NotificationStatus.START) try: # Keep console_enabled=True for next deployment task.driver.console.stop_console(task) except Exception as err: with excutils.save_and_reraise_exception(): LOG.error('Failed to stop console while tearing down ' 'the node %(node)s: %(err)s.', {'node': node.uuid, 'err': err}) notify_utils.emit_console_notification( task, 'console_stop', fields.NotificationStatus.ERROR) else: notify_utils.emit_console_notification( task, 'console_stop', fields.NotificationStatus.END) task.driver.deploy.clean_up(task) task.driver.deploy.tear_down(task) except Exception as e: with excutils.save_and_reraise_exception(): LOG.exception('Error in tear_down of node %(node)s: %(err)s', {'node': node.uuid, 'err': e}) node.last_error = _("Failed to tear down. Error: %s") % e task.process_event('error') else: # NOTE(tenbrae): When tear_down finishes, the deletion is done, # cleaning will start next LOG.info('Successfully unprovisioned node %(node)s with ' 'instance %(instance)s.', {'node': node.uuid, 'instance': node.instance_uuid}) finally: # NOTE(tenbrae): there is no need to unset conductor_affinity # because it is a reference to the most recent conductor which # deployed a node, and does not limit any future actions. # But we do need to clear the instance-related fields. 
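The instance-field scrub that follows can be modelled in isolation: blank the instance fields and drop the transient deploy/agent keys from `driver_internal_info` before saving. A sketch using plain dicts in place of node objects (names taken from the pops below; the helper itself is invented):

```python
# Keys popped during tear-down, per the code that follows.
TRANSIENT_KEYS = ('agent_secret_token', 'agent_secret_token_pregenerated',
                  'instance', 'clean_steps', 'root_uuid_or_disk_id',
                  'is_whole_disk_image', 'deploy_boot_mode')

def scrub_after_tear_down(node):
    """Clear instance-related fields on a dict standing in for a node."""
    node['instance_info'] = {}
    node['instance_uuid'] = None
    info = dict(node['driver_internal_info'])
    for key in TRANSIENT_KEYS:
        info.pop(key, None)          # absent keys are ignored
    node['driver_internal_info'] = info
    return node

node = {'instance_info': {'image_source': 'img'}, 'instance_uuid': 'u1',
        'driver_internal_info': {'clean_steps': [], 'other': 1}}
scrub_after_tear_down(node)
assert node['driver_internal_info'] == {'other': 1}
assert node['instance_uuid'] is None and node['instance_info'] == {}
```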
node.instance_info = {} node.instance_uuid = None driver_internal_info = node.driver_internal_info driver_internal_info.pop('agent_secret_token', None) driver_internal_info.pop('agent_secret_token_pregenerated', None) driver_internal_info.pop('instance', None) driver_internal_info.pop('clean_steps', None) driver_internal_info.pop('root_uuid_or_disk_id', None) driver_internal_info.pop('is_whole_disk_image', None) driver_internal_info.pop('deploy_boot_mode', None) node.driver_internal_info = driver_internal_info network.remove_vifs_from_node(task) node.save() if node.allocation_id: allocation = objects.Allocation.get_by_id(task.context, node.allocation_id) allocation.destroy() # The destroy() call above removes allocation_id and # instance_uuid, refresh the node to get these changes. node.refresh() # Begin cleaning task.process_event('clean') cleaning.do_node_clean(task) @METRICS.timer('ConductorManager.do_node_clean') @messaging.expected_exceptions(exception.InvalidParameterValue, exception.InvalidStateRequested, exception.NodeInMaintenance, exception.NodeLocked, exception.NoFreeConductorWorker) def do_node_clean(self, context, node_id, clean_steps): """RPC method to initiate manual cleaning. :param context: an admin context. :param node_id: the ID or UUID of a node. :param clean_steps: an ordered list of clean steps that will be performed on the node. A clean step is a dictionary with required keys 'interface' and 'step', and optional key 'args'. If specified, the 'args' arguments are passed to the clean step method.:: { 'interface': <driver_interface>, 'step': <name_of_clean_step>, 'args': {<arg1>: <value1>, ..., <argn>: <valuen>} } For example (this isn't a real example, this clean step doesn't exist):: { 'interface': 'deploy', 'step': 'upgrade_firmware', 'args': {'force': True} } :raises: InvalidParameterValue if power validation fails. :raises: InvalidStateRequested if the node is not in manageable state. :raises: NodeLocked if node is locked by another conductor. 
:raises: NoFreeConductorWorker when there is no free worker to start async task. """ with task_manager.acquire(context, node_id, shared=False, purpose='node manual cleaning') as task: node = task.node # Record of any pre-existing agent_url should be removed. if not utils.is_fast_track(task): # If clean->clean with an online agent, we should honor # the operating agent and not prevent the action. utils.remove_agent_url(node) if node.maintenance: raise exception.NodeInMaintenance(op=_('cleaning'), node=node.uuid) # NOTE(rloo): cleaning.do_node_clean() will also make similar calls # to validate power & network, but we are doing it again here so # that the user gets immediate feedback of any issues. This # behaviour (of validating) is consistent with other methods like # self.do_node_deploy(). try: task.driver.power.validate(task) task.driver.network.validate(task) except exception.InvalidParameterValue as e: msg = (_('Validation failed. Cannot clean node %(node)s. ' 'Error: %(msg)s') % {'node': node.uuid, 'msg': e}) raise exception.InvalidParameterValue(msg) try: task.process_event( 'clean', callback=self._spawn_worker, call_args=(cleaning.do_node_clean, task, clean_steps), err_handler=utils.provisioning_error_handler, target_state=states.MANAGEABLE) except exception.InvalidState: raise exception.InvalidStateRequested( action='manual clean', node=node.uuid, state=node.provision_state) @METRICS.timer('ConductorManager.continue_node_clean') def continue_node_clean(self, context, node_id): """RPC method to continue cleaning a node. This is useful for cleaning tasks that are async. When they complete, they call back via RPC, a new worker and lock are set up, and cleaning continues. This can also be used to resume cleaning on take_over. :param context: an admin context. :param node_id: the id or uuid of a node. 
:raises: InvalidStateRequested if the node is not in CLEANWAIT state :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node no longer appears in the database """ LOG.debug("RPC continue_node_clean called for node %s.", node_id) with task_manager.acquire(context, node_id, shared=False, purpose='continue node cleaning') as task: node = task.node if node.target_provision_state == states.MANAGEABLE: target_state = states.MANAGEABLE else: target_state = None if node.provision_state != states.CLEANWAIT: raise exception.InvalidStateRequested(_( 'Cannot continue cleaning on %(node)s, node is in ' '%(state)s state, should be %(clean_state)s') % {'node': node.uuid, 'state': node.provision_state, 'clean_state': states.CLEANWAIT}) save_required = False info = node.driver_internal_info try: skip_current_step = info.pop('skip_current_clean_step') except KeyError: skip_current_step = True else: save_required = True if info.pop('cleaning_polling', None) is not None: save_required = True if save_required: node.driver_internal_info = info node.save() next_step_index = utils.get_node_next_clean_steps( task, skip_current_step=skip_current_step) # If this isn't the final clean step in the cleaning operation # and it is flagged to abort after the clean step that just # finished, we abort the cleaning operation. if node.clean_step.get('abort_after'): step_name = node.clean_step['step'] if next_step_index is not None: LOG.debug('The cleaning operation for node %(node)s was ' 'marked to be aborted after step "%(step)s ' 'completed. 
Aborting now that it has completed.', {'node': task.node.uuid, 'step': step_name}) task.process_event( 'abort', callback=self._spawn_worker, call_args=(cleaning.do_node_clean_abort, task, step_name), err_handler=utils.provisioning_error_handler, target_state=target_state) return LOG.debug('The cleaning operation for node %(node)s was ' 'marked to be aborted after step "%(step)s" ' 'completed. However, since there are no more ' 'clean steps after this, the abort is not going ' 'to be done.', {'node': node.uuid, 'step': step_name}) task.process_event('resume', target_state=target_state) task.set_spawn_error_hook(utils.spawn_cleaning_error_handler, task.node) task.spawn_after( self._spawn_worker, cleaning.do_next_clean_step, task, next_step_index) @task_manager.require_exclusive_lock def _do_node_verify(self, task): """Internal method to perform power credentials verification.""" node = task.node LOG.debug('Starting power credentials verification for node %s', node.uuid) error = None try: task.driver.power.validate(task) except Exception as e: error = (_('Failed to validate power driver interface for node ' '%(node)s. Error: %(msg)s') % {'node': node.uuid, 'msg': e}) else: try: power_state = task.driver.power.get_power_state(task) except Exception as e: error = (_('Failed to get power state for node ' '%(node)s. 
Error: %(msg)s') % {'node': node.uuid, 'msg': e}) if error is None: if power_state != node.power_state: old_power_state = node.power_state node.power_state = power_state task.process_event('done') notify_utils.emit_power_state_corrected_notification( task, old_power_state) else: task.process_event('done') else: LOG.error(error) node.last_error = error task.process_event('fail') @METRICS.timer('ConductorManager.do_provisioning_action') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.InvalidParameterValue, exception.InvalidStateRequested, exception.UnsupportedDriverExtension, exception.NodeInMaintenance) def do_provisioning_action(self, context, node_id, action): """RPC method to initiate certain provisioning state transitions. Initiate a provisioning state change through the state machine, rather than through an RPC call to do_node_deploy / do_node_tear_down :param context: an admin context. :param node_id: the id or uuid of a node. :param action: an action. One of ironic.common.states.VERBS :raises: InvalidParameterValue :raises: InvalidStateRequested :raises: NoFreeConductorWorker :raises: NodeInMaintenance """ with task_manager.acquire(context, node_id, shared=False, purpose='provision action %s' % action) as task: node = task.node if (action == states.VERBS['provide'] and node.provision_state == states.MANAGEABLE): # NOTE(dtantsur): do this early to avoid entering cleaning. 
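The verb handling in `do_provisioning_action` amounts to a small dispatch: a handful of (verb, current state) pairs get dedicated handlers, and everything else falls through to the generic state machine, which may reject the transition. A toy table, with invented state strings and handler labels:

```python
# Illustrative dispatch table only; real Ironic consults the node state
# machine and spawns workers rather than returning labels.
def pick_action(verb, provision_state):
    table = {
        ('provide', 'manageable'): 'clean',      # provide -> cleaning
        ('manage', 'enroll'): 'verify',          # manage -> credential check
        ('adopt', 'manageable'): 'adopt',
        ('adopt', 'adopt failed'): 'adopt',      # adopt may be retried
    }
    return table.get((verb, provision_state), 'state-machine')

assert pick_action('provide', 'manageable') == 'clean'
assert pick_action('manage', 'enroll') == 'verify'
# Unrecognized pairs defer to the generic state machine.
assert pick_action('deploy', 'available') == 'state-machine'
```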
                if (not CONF.conductor.allow_provisioning_in_maintenance
                        and node.maintenance):
                    raise exception.NodeInMaintenance(op=_('providing'),
                                                      node=node.uuid)

                if node.retired:
                    raise exception.NodeIsRetired(op=_('providing'),
                                                  node=node.uuid)

                task.process_event(
                    'provide',
                    callback=self._spawn_worker,
                    call_args=(cleaning.do_node_clean, task),
                    err_handler=utils.provisioning_error_handler)
                return

            if (action == states.VERBS['manage']
                    and node.provision_state == states.ENROLL):
                task.process_event(
                    'manage',
                    callback=self._spawn_worker,
                    call_args=(self._do_node_verify, task),
                    err_handler=utils.provisioning_error_handler)
                return

            if (action == states.VERBS['adopt']
                    and node.provision_state in (states.MANAGEABLE,
                                                 states.ADOPTFAIL)):
                task.process_event(
                    'adopt',
                    callback=self._spawn_worker,
                    call_args=(self._do_adoption, task),
                    err_handler=utils.provisioning_error_handler)
                return

            if (action == states.VERBS['abort']
                    and node.provision_state in (states.CLEANWAIT,
                                                 states.RESCUEWAIT,
                                                 states.INSPECTWAIT)):
                self._do_abort(task)
                return

            try:
                task.process_event(action)
            except exception.InvalidState:
                raise exception.InvalidStateRequested(
                    action=action, node=node.uuid,
                    state=node.provision_state)

    def _do_abort(self, task):
        """Handle node abort for certain states."""
        node = task.node

        if node.provision_state == states.CLEANWAIT:
            # Check if the clean step is abortable; if so abort it.
            # Otherwise, indicate in that clean step, that cleaning
            # should be aborted after that step is done.
            if (node.clean_step
                    and not node.clean_step.get('abortable')):
                LOG.info('The current clean step "%(clean_step)s" for '
                         'node %(node)s is not abortable. Adding a '
                         'flag to abort the cleaning after the clean '
                         'step is completed.',
                         {'clean_step': node.clean_step['step'],
                          'node': node.uuid})
                clean_step = node.clean_step
                if not clean_step.get('abort_after'):
                    clean_step['abort_after'] = True
                    node.clean_step = clean_step
                    node.save()
                return

            LOG.debug('Aborting the cleaning operation during clean step '
                      '"%(step)s" for node %(node)s in provision state '
                      '"%(prov)s".',
                      {'node': node.uuid,
                       'prov': node.provision_state,
                       'step': node.clean_step.get('step')})
            target_state = None
            if node.target_provision_state == states.MANAGEABLE:
                target_state = states.MANAGEABLE
            task.process_event(
                'abort',
                callback=self._spawn_worker,
                call_args=(cleaning.do_node_clean_abort, task),
                err_handler=utils.provisioning_error_handler,
                target_state=target_state)
            return

        if node.provision_state == states.RESCUEWAIT:
            utils.remove_node_rescue_password(node, save=True)
            task.process_event(
                'abort',
                callback=self._spawn_worker,
                call_args=(self._do_node_rescue_abort, task),
                err_handler=utils.provisioning_error_handler)
            return

        if node.provision_state == states.INSPECTWAIT:
            try:
                task.driver.inspect.abort(task)
            except exception.UnsupportedDriverExtension:
                with excutils.save_and_reraise_exception():
                    intf_name = task.driver.inspect.__class__.__name__
                    LOG.error('Inspect interface %(intf)s does not '
                              'support abort operation when aborting '
                              'inspection of node %(node)s',
                              {'intf': intf_name, 'node': node.uuid})
            except Exception as e:
                with excutils.save_and_reraise_exception():
                    LOG.exception('Error in aborting the inspection of '
                                  'node %(node)s', {'node': node.uuid})
                    node.last_error = _('Failed to abort inspection. '
                                        'Error: %s') % e
                    node.save()

            node.last_error = _('Inspection was aborted by request.')
            task.process_event('abort')
            LOG.info('Successfully aborted inspection of node %(node)s',
                     {'node': node.uuid})
            return

    @METRICS.timer('ConductorManager._sync_power_states')
    @periodics.periodic(spacing=CONF.conductor.sync_power_state_interval,
                        enabled=CONF.conductor.sync_power_state_interval > 0)
    def _sync_power_states(self, context):
        """Periodic task to sync power states for the nodes."""
        filters = {'maintenance': False}

        # NOTE(etingof): prioritize non-responding nodes to fail them fast
        nodes = sorted(
            self.iter_nodes(fields=['id'], filters=filters),
            key=lambda n: -self.power_state_sync_count.get(n[0], 0)
        )

        nodes_queue = queue.Queue()
        for node_info in nodes:
            nodes_queue.put(node_info)

        number_of_workers = min(CONF.conductor.sync_power_state_workers,
                                CONF.conductor.periodic_max_workers,
                                nodes_queue.qsize())
        futures = []

        for worker_number in range(max(0, number_of_workers - 1)):
            try:
                futures.append(
                    self._spawn_worker(self._sync_power_state_nodes_task,
                                       context, nodes_queue))
            except exception.NoFreeConductorWorker:
                LOG.warning("There are no more conductor workers for "
                            "power sync task. %(workers)d workers have "
                            "been already spawned.",
                            {'workers': worker_number})
                break

        try:
            self._sync_power_state_nodes_task(context, nodes_queue)
        finally:
            waiters.wait_for_all(futures)

    def _sync_power_state_nodes_task(self, context, nodes):
        """Invokes power state sync on nodes from synchronized queue.

        Attempt to grab a lock and sync only if the following
        conditions are met:

        1) Node is mapped to this conductor.
        2) Node is not in maintenance mode.
        3) Node is not in DEPLOYWAIT/CLEANWAIT provision state.
        4) Node doesn't have a reservation

        NOTE: Grabbing a lock here can cause other methods to fail to
        grab it. We want to avoid trying to grab a lock while a node
        is in the DEPLOYWAIT/CLEANWAIT state so we don't unnecessarily
        cause a deploy/cleaning callback to fail.
There's not much we can do here to avoid failing
        a brand new deploy to a node that we've locked here, though.
        """
        # FIXME(comstud): Since our initial state checks are outside
        # of the lock (to try to avoid the lock), some checks are
        # repeated after grabbing the lock so we can unlock quickly.
        # The node mapping is not re-checked because it doesn't much
        # matter if things happened to re-balance.
        #
        # This is inefficient and racey. We end up with calling DB API's
        # get_node() twice (once here, and once in acquire(). Ideally we
        # add a way to pass constraints to task_manager.acquire()
        # (through to its DB API call) so that we can eliminate our call
        # and first set of checks below.
        while not self._shutdown:
            try:
                (node_uuid, driver, conductor_group,
                 node_id) = nodes.get_nowait()
            except queue.Empty:
                break
            try:
                # NOTE(dtantsur): start with a shared lock, upgrade if needed
                with task_manager.acquire(context, node_uuid,
                                          purpose='power state sync',
                                          shared=True) as task:
                    # NOTE(tenbrae): we should not acquire a lock on a node in
                    #                DEPLOYWAIT/CLEANWAIT, as this could cause
                    #                an error within a deploy ramdisk POSTing
                    #                back at the same time.
                    # NOTE(dtantsur): it's also pointless (and dangerous) to
                    # sync power state when a power action is in progress
                    if (task.node.provision_state in SYNC_EXCLUDED_STATES
                            or task.node.maintenance
                            or task.node.target_power_state
                            or task.node.reservation):
                        continue
                    count = do_sync_power_state(
                        task, self.power_state_sync_count[node_uuid])
                    if count:
                        self.power_state_sync_count[node_uuid] = count
                    else:
                        # don't bloat the dict with non-failing nodes
                        del self.power_state_sync_count[node_uuid]
            except exception.NodeNotFound:
                LOG.info("During sync_power_state, node %(node)s was not "
                         "found and presumed deleted by another process.",
                         {'node': node_uuid})
            except exception.NodeLocked:
                LOG.info("During sync_power_state, node %(node)s was "
                         "already locked by another process. Skip.",
                         {'node': node_uuid})
            finally:
                # Yield on every iteration
                eventlet.sleep(0)

    @METRICS.timer('ConductorManager._power_failure_recovery')
    @periodics.periodic(
        spacing=CONF.conductor.power_failure_recovery_interval,
        enabled=bool(CONF.conductor.power_failure_recovery_interval))
    def _power_failure_recovery(self, context):
        """Periodic task to check power states for nodes in maintenance.

        Attempt to grab a lock and sync only if the following
        conditions are met:

        1) Node is mapped to this conductor.
        2) Node is in maintenance with maintenance type of power failure.
        3) Node is not reserved.
        4) Node is not in the ENROLL state.
        """
        def should_sync_power_state_for_recovery(task):
            """Check if ironic should sync power state for recovery."""
            # NOTE(dtantsur): it's also pointless (and dangerous) to
            # sync power state when a power action is in progress
            if (task.node.provision_state == states.ENROLL
                    or not task.node.maintenance
                    or task.node.fault != faults.POWER_FAILURE
                    or task.node.target_power_state
                    or task.node.reservation):
                return False
            return True

        def handle_recovery(task, actual_power_state):
            """Handle recovery when power sync is succeeded."""
            task.upgrade_lock()
            node = task.node
            # Update power state
            old_power_state = node.power_state
            node.power_state = actual_power_state
            # Clear maintenance related fields
            node.maintenance = False
            node.maintenance_reason = None
            node.fault = None
            node.save()
            LOG.info("Node %(node)s is recovered from power failure "
                     "with actual power state '%(state)s'.",
                     {'node': node.uuid, 'state': actual_power_state})
            if old_power_state != actual_power_state:
                if node.instance_uuid:
                    nova.power_update(
                        task.context, node.instance_uuid, node.power_state)
                notify_utils.emit_power_state_corrected_notification(
                    task, old_power_state)

        # NOTE(kaifeng) To avoid conflicts with periodic task of the
        # regular power state checking, maintenance is still a required
        # condition.
        filters = {'maintenance': True,
                   'fault': faults.POWER_FAILURE}
        node_iter = self.iter_nodes(fields=['id'], filters=filters)
        for (node_uuid, driver, conductor_group, node_id) in node_iter:
            try:
                with task_manager.acquire(context, node_uuid,
                                          purpose='power failure recovery',
                                          shared=True) as task:
                    if not should_sync_power_state_for_recovery(task):
                        continue
                    try:
                        # Validate driver info in case of parameter changed
                        # in maintenance.
                        task.driver.power.validate(task)
                        # The driver may raise an exception, or may return
                        # ERROR. Handle both the same way.
                        power_state = task.driver.power.get_power_state(task)
                        if power_state == states.ERROR:
                            raise exception.PowerStateFailure(
                                _("Power driver returned ERROR state "
                                  "while trying to get power state."))
                    except Exception as e:
                        LOG.debug("During power_failure_recovery, could "
                                  "not get power state for node %(node)s, "
                                  "Error: %(err)s.",
                                  {'node': task.node.uuid, 'err': e})
                    else:
                        handle_recovery(task, power_state)
            except exception.NodeNotFound:
                LOG.info("During power_failure_recovery, node %(node)s was "
                         "not found and presumed deleted by another process.",
                         {'node': node_uuid})
            except exception.NodeLocked:
                LOG.info("During power_failure_recovery, node %(node)s was "
                         "already locked by another process. Skip.",
                         {'node': node_uuid})
            finally:
                # Yield on every iteration
                eventlet.sleep(0)

    @METRICS.timer('ConductorManager._check_deploy_timeouts')
    @periodics.periodic(
        spacing=CONF.conductor.check_provision_state_interval,
        enabled=CONF.conductor.check_provision_state_interval > 0
        and CONF.conductor.deploy_callback_timeout != 0)
    def _check_deploy_timeouts(self, context):
        """Periodically checks whether a deploy RPC call has timed out.

        If a deploy call has timed out, the deploy failed and we clean up.

        :param context: request context.
        """
        # FIXME(rloo): If the value is < 0, it will be enabled. That doesn't
        # seem right.
        callback_timeout = CONF.conductor.deploy_callback_timeout

        filters = {'reserved': False,
                   'provision_state': states.DEPLOYWAIT,
                   'maintenance': False,
                   'provisioned_before': callback_timeout}
        sort_key = 'provision_updated_at'
        callback_method = utils.cleanup_after_timeout
        err_handler = utils.provisioning_error_handler
        self._fail_if_in_state(context, filters, states.DEPLOYWAIT,
                               sort_key, callback_method, err_handler)

    @METRICS.timer('ConductorManager._check_orphan_nodes')
    @periodics.periodic(
        spacing=CONF.conductor.check_provision_state_interval,
        enabled=CONF.conductor.check_provision_state_interval > 0)
    def _check_orphan_nodes(self, context):
        """Periodically checks the status of nodes that were taken over.

        Periodically checks the nodes that are managed by this conductor
        but have a reservation from a conductor that went offline.

        1. Nodes in DEPLOYING state move to DEPLOY FAIL.
        2. Nodes in CLEANING state move to CLEAN FAIL with maintenance set.
        3. Nodes in a transient power state get the power operation aborted.
        4. Reservation is removed.

        The latter operation happens even for nodes in maintenance mode,
        otherwise it's not possible to move them out of maintenance.

        :param context: request context.
        """
        offline_conductors = self.dbapi.get_offline_conductors()
        if not offline_conductors:
            return

        node_iter = self.iter_nodes(
            fields=['id', 'reservation', 'maintenance', 'provision_state',
                    'target_power_state'],
            filters={'reserved_by_any_of': offline_conductors})

        state_cleanup_required = []

        for (node_uuid, driver, conductor_group, node_id,
             conductor_hostname, maintenance, provision_state,
             target_power_state) in node_iter:
            # NOTE(lucasagomes): Although very rare, this may lead to a
            # race condition. By the time we release the lock the conductor
            # that was previously managing the node could be back online.
            try:
                objects.Node.release(context, conductor_hostname, node_id)
            except exception.NodeNotFound:
                LOG.warning("During checking for deploying state, node "
                            "%s was not found and presumed deleted by "
                            "another process. Skipping.", node_uuid)
                continue
            except exception.NodeLocked:
                LOG.warning("During checking for deploying state, when "
                            "releasing the lock of the node %s, it was "
                            "locked by another process. Skipping.",
                            node_uuid)
                continue
            except exception.NodeNotLocked:
                LOG.warning("During checking for deploying state, when "
                            "releasing the lock of the node %s, it was "
                            "already unlocked.", node_uuid)
            else:
                LOG.warning('Forcibly removed reservation of conductor %(old)s'
                            ' on node %(node)s as that conductor went offline',
                            {'old': conductor_hostname, 'node': node_uuid})

            # TODO(dtantsur): clean up all states that are not stable and
            # are not one of WAIT states.
            if not maintenance and (provision_state in (states.DEPLOYING,
                                                        states.CLEANING)
                                    or target_power_state is not None):
                LOG.debug('Node %(node)s taken over from conductor %(old)s '
                          'requires state clean up: provision state is '
                          '%(state)s, target power state is %(pstate)s',
                          {'node': node_uuid, 'old': conductor_hostname,
                           'state': provision_state,
                           'pstate': target_power_state})
                state_cleanup_required.append(node_uuid)

        for node_uuid in state_cleanup_required:
            with task_manager.acquire(context, node_uuid,
                                      purpose='power state clean up') as task:
                if not task.node.maintenance and task.node.target_power_state:
                    old_state = task.node.target_power_state
                    task.node.target_power_state = None
                    task.node.last_error = _('Pending power operation was '
                                             'aborted due to conductor take '
                                             'over')
                    task.node.save()
                    LOG.warning('Aborted pending power operation %(op)s '
                                'on node %(node)s due to conductor take over',
                                {'op': old_state, 'node': node_uuid})

            self._fail_if_in_state(
                context, {'uuid': node_uuid},
                {states.DEPLOYING, states.CLEANING},
                'provision_updated_at',
                callback_method=utils.abort_on_conductor_take_over,
                err_handler=utils.provisioning_error_handler)

    @METRICS.timer('ConductorManager._do_adoption')
    @task_manager.require_exclusive_lock
    def _do_adoption(self, task):
        """Adopt the node.

        Similar to node takeover, adoption performs a driver boot
        validation and then triggers node takeover in order to make the
        conductor responsible for the node. Upon completion of takeover,
        the node is moved to ACTIVE state.

        The goal of this method is to set the conditions for the node to
        be managed by Ironic as an ACTIVE node without having performed
        a deployment operation.

        :param task: a TaskManager instance
        """
        node = task.node
        LOG.debug('Conductor %(cdr)s attempting to adopt node %(node)s',
                  {'cdr': self.host, 'node': node.uuid})

        try:
            # NOTE(TheJulia): A number of drivers expect to know if a
            # whole disk image was used prior to their takeover logic
            # being triggered, as such we need to populate the
            # internal info based on the configuration the user has
            # supplied.
            iwdi = images.is_whole_disk_image(task.context,
                                              task.node.instance_info)
            driver_internal_info = node.driver_internal_info
            driver_internal_info['is_whole_disk_image'] = iwdi
            node.driver_internal_info = driver_internal_info
            # Calling boot validate to ensure that sufficient information
            # is supplied to allow the node to be able to boot if takeover
            # writes items such as kernel/ramdisk data to disk.
            task.driver.boot.validate(task)
            # NOTE(TheJulia): While task.driver.boot.validate() is called
            # above, and task.driver.power.validate() could be called, it
            # is called as part of the transition from ENROLL to MANAGEABLE
            # states. As such it is redundant to call here.
            self._do_takeover(task)
            LOG.info("Successfully adopted node %(node)s",
                     {'node': node.uuid})
            task.process_event('done')
        except Exception as err:
            msg = (_('Error while attempting to adopt node %(node)s: '
                     '%(err)s.') % {'node': node.uuid, 'err': err})
            LOG.error(msg)
            node.last_error = msg
            task.process_event('fail')

    @METRICS.timer('ConductorManager._do_takeover')
    def _do_takeover(self, task):
        """Take over this node.

        Prepares a node for takeover by this conductor, performs the
        takeover, and changes the conductor associated with the node.
        The node with the new conductor affiliation is saved to the DB.

        :param task: a TaskManager instance
        """
        LOG.debug('Conductor %(cdr)s taking over node %(node)s',
                  {'cdr': self.host, 'node': task.node.uuid})
        task.driver.deploy.prepare(task)
        task.driver.deploy.take_over(task)
        # NOTE(zhenguo): If console enabled, take over the console session
        # as well.
        console_error = None
        if task.node.console_enabled:
            notify_utils.emit_console_notification(
                task, 'console_restore', fields.NotificationStatus.START)
            # NOTE(kaifeng) Clear allocated_ipmi_terminal_port if exists,
            # so current conductor can allocate a new free port from local
            # resources.
            internal_info = task.node.driver_internal_info
            if 'allocated_ipmi_terminal_port' in internal_info:
                internal_info.pop('allocated_ipmi_terminal_port')
                task.node.driver_internal_info = internal_info
            try:
                task.driver.console.start_console(task)
            except Exception as err:
                msg = (_('Failed to start console while taking over the '
                         'node %(node)s: %(err)s.') % {'node': task.node.uuid,
                                                       'err': err})
                LOG.error(msg)
                # If taking over console failed, set node's console_enabled
                # back to False and set node's last error.
                task.node.last_error = msg
                task.node.console_enabled = False
                console_error = True
            else:
                notify_utils.emit_console_notification(
                    task, 'console_restore', fields.NotificationStatus.END)

        # NOTE(lucasagomes): Set the ID of the new conductor managing
        #                    this node
        task.node.conductor_affinity = self.conductor.id
        task.node.save()
        if console_error:
            notify_utils.emit_console_notification(
                task, 'console_restore', fields.NotificationStatus.ERROR)

    @METRICS.timer('ConductorManager._check_cleanwait_timeouts')
    @periodics.periodic(
        spacing=CONF.conductor.check_provision_state_interval,
        enabled=CONF.conductor.check_provision_state_interval > 0
        and CONF.conductor.clean_callback_timeout != 0)
    def _check_cleanwait_timeouts(self, context):
        """Periodically checks for nodes being cleaned.

        If a node doing cleaning is unresponsive (detected when it stops
        heart beating), the operation should be aborted.

        :param context: request context.
        """
        # FIXME(rloo): If the value is < 0, it will be enabled. That doesn't
        # seem right.
        callback_timeout = CONF.conductor.clean_callback_timeout

        filters = {'reserved': False,
                   'provision_state': states.CLEANWAIT,
                   'maintenance': False,
                   'provisioned_before': callback_timeout}
        self._fail_if_in_state(context, filters, states.CLEANWAIT,
                               'provision_updated_at',
                               keep_target_state=True,
                               callback_method=utils.cleanup_cleanwait_timeout)

    @METRICS.timer('ConductorManager._check_rescuewait_timeouts')
    @periodics.periodic(spacing=CONF.conductor.check_rescue_state_interval,
                        enabled=bool(CONF.conductor.rescue_callback_timeout))
    def _check_rescuewait_timeouts(self, context):
        """Periodically checks if rescue has timed out waiting for heartbeat.

        If a rescue call has timed out, fail the rescue and clean up.

        :param context: request context.
        """
        callback_timeout = CONF.conductor.rescue_callback_timeout
        filters = {'reserved': False,
                   'provision_state': states.RESCUEWAIT,
                   'maintenance': False,
                   'provisioned_before': callback_timeout}
        self._fail_if_in_state(context, filters, states.RESCUEWAIT,
                               'provision_updated_at',
                               keep_target_state=True,
                               callback_method=utils.cleanup_rescuewait_timeout
                               )

    @METRICS.timer('ConductorManager._sync_local_state')
    @periodics.periodic(spacing=CONF.conductor.sync_local_state_interval,
                        enabled=CONF.conductor.sync_local_state_interval > 0)
    def _sync_local_state(self, context):
        """Perform any actions necessary to sync local state.

        This is called periodically to refresh the conductor's copy of the
        consistent hash ring. If any mappings have changed, this method then
        determines which, if any, nodes need to be "taken over".
        The ensuing actions could include preparing a PXE environment,
        updating the DHCP server, and so on.
        """
        filters = {'reserved': False,
                   'maintenance': False,
                   'provision_state': states.ACTIVE}
        node_iter = self.iter_nodes(fields=['id', 'conductor_affinity'],
                                    filters=filters)

        workers_count = 0
        for (node_uuid, driver, conductor_group, node_id,
             conductor_affinity) in node_iter:
            if conductor_affinity == self.conductor.id:
                continue

            # Node is mapped here, but not updated by this conductor last
            try:
                with task_manager.acquire(context, node_uuid,
                                          purpose='node take over') as task:
                    # NOTE(tenbrae): now that we have the lock, check again to
                    #                avoid racing with deletes and other state
                    #                changes
                    node = task.node
                    if (node.maintenance
                            or node.conductor_affinity == self.conductor.id
                            or node.provision_state != states.ACTIVE):
                        continue

                    task.spawn_after(self._spawn_worker,
                                     self._do_takeover, task)
            except exception.NoFreeConductorWorker:
                break
            except (exception.NodeLocked, exception.NodeNotFound):
                continue
            workers_count += 1
            if workers_count == CONF.conductor.periodic_max_workers:
                break

    @METRICS.timer('ConductorManager.validate_driver_interfaces')
    @messaging.expected_exceptions(exception.NodeLocked)
    def validate_driver_interfaces(self, context, node_id):
        """Validate the `core` and `standardized` interfaces for drivers.

        :param context: request context.
        :param node_id: node id or uuid.
        :returns: a dictionary containing the results of each
                  interface validation.
        """
        LOG.debug('RPC validate_driver_interfaces called for node %s.',
                  node_id)
        ret_dict = {}
        lock_purpose = 'driver interface validation'
        with task_manager.acquire(context, node_id, shared=True,
                                  purpose=lock_purpose) as task:
            # NOTE(sirushtim): the is_whole_disk_image variable is needed by
            # deploy drivers for doing their validate(). Since the deploy
            # isn't being done yet and the driver information could change in
            # the meantime, we don't know if the is_whole_disk_image value will
            # change or not. It isn't saved to the DB, but only used with this
            # node instance for the current validations.
            iwdi = images.is_whole_disk_image(context,
                                              task.node.instance_info)
            task.node.driver_internal_info['is_whole_disk_image'] = iwdi
            for iface_name in task.driver.non_vendor_interfaces:
                iface = getattr(task.driver, iface_name)
                result = reason = None
                try:
                    iface.validate(task)
                    if iface_name == 'deploy':
                        utils.validate_instance_info_traits(task.node)
                        # NOTE(dtantsur): without the agent running we cannot
                        # have the complete list of steps, so skip ones that we
                        # don't know.
                        conductor_steps.validate_deploy_templates(
                            task, skip_missing=True)
                    result = True
                except (exception.InvalidParameterValue,
                        exception.UnsupportedDriverExtension) as e:
                    result = False
                    reason = str(e)
                except Exception as e:
                    result = False
                    reason = (_('Unexpected exception, traceback saved '
                                'into log by ironic conductor service '
                                'that is running on %(host)s: %(error)s')
                              % {'host': self.host, 'error': e})
                    LOG.exception(
                        'Unexpected exception occurred while validating '
                        '%(iface)s driver interface for driver '
                        '%(driver)s: %(err)s on node %(node)s.',
                        {'iface': iface_name, 'driver': task.node.driver,
                         'err': e, 'node': task.node.uuid})

                ret_dict[iface_name] = {}
                ret_dict[iface_name]['result'] = result
                if reason is not None:
                    ret_dict[iface_name]['reason'] = reason
        return ret_dict

    @METRICS.timer('ConductorManager.destroy_node')
    @messaging.expected_exceptions(exception.NodeLocked,
                                   exception.NodeAssociated,
                                   exception.InvalidState,
                                   exception.NodeProtected)
    def destroy_node(self, context, node_id):
        """Delete a node.

        :param context: request context.
        :param node_id: node id or uuid.
        :raises: NodeLocked if node is locked by another conductor.
        :raises: NodeAssociated if the node contains an instance
            associated with it.
        :raises: InvalidState if the node is in the wrong provision
            state to perform deletion.
        :raises: NodeProtected if the node is protected.
        """
        # NOTE(dtantsur): we allow deleting a node in maintenance mode even if
        # we would disallow it otherwise. That's done for recovering hopelessly
        # broken nodes (e.g. with broken BMC).
        with task_manager.acquire(context, node_id,
                                  purpose='node deletion') as task:
            node = task.node
            if not node.maintenance and node.instance_uuid is not None:
                raise exception.NodeAssociated(node=node.uuid,
                                               instance=node.instance_uuid)
            if task.node.protected:
                raise exception.NodeProtected(node=node.uuid)

            # NOTE(lucasagomes): For the *FAIL states, users should
            # move the node to a safe state prior to deletion. This is
            # because we should try to avoid deleting a node in a
            # dirty/whacky state,
            # e.g: A node in DEPLOYFAIL, if deleted without passing through
            # tear down/cleaning may leave data from the previous tenant
            # in the disk. So nodes in *FAIL states should first be moved to:
            # CLEANFAIL -> MANAGEABLE
            # INSPECTIONFAIL -> MANAGEABLE
            # DEPLOYFAIL -> DELETING
            delete_allowed_states = states.DELETE_ALLOWED_STATES
            if CONF.conductor.allow_deleting_available_nodes:
                delete_allowed_states += (states.AVAILABLE,)
            if (not node.maintenance
                    and node.provision_state not in delete_allowed_states):
                msg = (_('Can not delete node "%(node)s" while it is in '
                         'provision state "%(state)s". Valid provision states '
                         'to perform deletion are: "%(valid_states)s", '
                         'or set the node into maintenance mode') %
                       {'node': node.uuid, 'state': node.provision_state,
                        'valid_states': delete_allowed_states})
                raise exception.InvalidState(msg)
            if node.console_enabled:
                notify_utils.emit_console_notification(
                    task, 'console_set', fields.NotificationStatus.START)
                try:
                    task.driver.console.stop_console(task)
                except Exception as err:
                    LOG.error('Failed to stop console while deleting '
                              'the node %(node)s: %(err)s.',
                              {'node': node.uuid, 'err': err})
                    notify_utils.emit_console_notification(
                        task, 'console_set',
                        fields.NotificationStatus.ERROR)
                else:
                    node.console_enabled = False
                    notify_utils.emit_console_notification(
                        task, 'console_set', fields.NotificationStatus.END)
            node.destroy()
            LOG.info('Successfully deleted node %(node)s.',
                     {'node': node.uuid})

    @METRICS.timer('ConductorManager.destroy_port')
    @messaging.expected_exceptions(exception.NodeLocked,
                                   exception.NodeNotFound,
                                   exception.InvalidState)
    def destroy_port(self, context, port):
        """Delete a port.

        :param context: request context.
        :param port: port object
        :raises: NodeLocked if node is locked by another conductor.
        :raises: NodeNotFound if the node associated with the port does
                 not exist.
        """
        LOG.debug('RPC destroy_port called for port %(port)s',
                  {'port': port.uuid})
        with task_manager.acquire(context, port.node_id,
                                  purpose='port deletion') as task:
            node = task.node
            vif = task.driver.network.get_current_vif(task, port)
            if ((node.provision_state == states.ACTIVE or node.instance_uuid)
                    and not node.maintenance and vif):
                msg = _("Cannot delete the port %(port)s as node "
                        "%(node)s is active or has "
                        "instance UUID assigned or port is bound "
                        "to vif %(vif)s")
                raise exception.InvalidState(msg % {'node': node.uuid,
                                                    'port': port.uuid,
                                                    'vif': vif})
            port.destroy()
            LOG.info('Successfully deleted port %(port)s. '
                     'The node associated with the port was %(node)s',
                     {'port': port.uuid, 'node': task.node.uuid})

    @METRICS.timer('ConductorManager.destroy_portgroup')
    @messaging.expected_exceptions(exception.NodeLocked,
                                   exception.NodeNotFound,
                                   exception.PortgroupNotEmpty)
    def destroy_portgroup(self, context, portgroup):
        """Delete a portgroup.

        :param context: request context.
        :param portgroup: portgroup object
        :raises: NodeLocked if node is locked by another conductor.
        :raises: NodeNotFound if the node associated with the portgroup does
                 not exist.
        :raises: PortgroupNotEmpty if portgroup is not empty
        """
        LOG.debug('RPC destroy_portgroup called for portgroup %(portgroup)s',
                  {'portgroup': portgroup.uuid})
        with task_manager.acquire(context, portgroup.node_id,
                                  purpose='portgroup deletion') as task:
            portgroup.destroy()
            LOG.info('Successfully deleted portgroup %(portgroup)s. '
                     'The node associated with the portgroup was %(node)s',
                     {'portgroup': portgroup.uuid, 'node': task.node.uuid})

    @METRICS.timer('ConductorManager.destroy_volume_connector')
    @messaging.expected_exceptions(exception.NodeLocked,
                                   exception.NodeNotFound,
                                   exception.VolumeConnectorNotFound,
                                   exception.InvalidStateRequested)
    def destroy_volume_connector(self, context, connector):
        """Delete a volume connector.

        :param context: request context
        :param connector: volume connector object
        :raises: NodeLocked if node is locked by another conductor
        :raises: NodeNotFound if the node associated with the connector does
                 not exist
        :raises: VolumeConnectorNotFound if the volume connector cannot be
                 found
        :raises: InvalidStateRequested if the node associated with the
                 connector is not powered off.
        """
        LOG.debug('RPC destroy_volume_connector called for volume connector '
                  '%(connector)s', {'connector': connector.uuid})
        with task_manager.acquire(context, connector.node_id,
                                  purpose='volume connector '
                                          'deletion') as task:
            node = task.node
            if node.power_state != states.POWER_OFF:
                raise exception.InvalidStateRequested(
                    action='volume connector deletion',
                    node=node.uuid,
                    state=node.power_state)
            connector.destroy()
            LOG.info('Successfully deleted volume connector %(connector)s. '
                     'The node associated with the connector was %(node)s',
                     {'connector': connector.uuid, 'node': task.node.uuid})

    @METRICS.timer('ConductorManager.destroy_volume_target')
    @messaging.expected_exceptions(exception.NodeLocked,
                                   exception.NodeNotFound,
                                   exception.VolumeTargetNotFound,
                                   exception.InvalidStateRequested)
    def destroy_volume_target(self, context, target):
        """Delete a volume target.

        :param context: request context
        :param target: volume target object
        :raises: NodeLocked if node is locked by another conductor
        :raises: NodeNotFound if the node associated with the target does
                 not exist
        :raises: VolumeTargetNotFound if the volume target cannot be found
        :raises: InvalidStateRequested if the node associated with the target
                 is not powered off.
        """
        LOG.debug('RPC destroy_volume_target called for volume target '
                  '%(target)s', {'target': target.uuid})
        with task_manager.acquire(context, target.node_id,
                                  purpose='volume target deletion') as task:
            node = task.node
            if node.power_state != states.POWER_OFF:
                raise exception.InvalidStateRequested(
                    action='volume target deletion',
                    node=node.uuid,
                    state=node.power_state)
            target.destroy()
            LOG.info('Successfully deleted volume target %(target)s. '
                     'The node associated with the target was %(node)s',
                     {'target': target.uuid, 'node': task.node.uuid})

    @METRICS.timer('ConductorManager.get_console_information')
    @messaging.expected_exceptions(exception.NodeLocked,
                                   exception.UnsupportedDriverExtension,
                                   exception.NodeConsoleNotEnabled,
                                   exception.InvalidParameterValue)
    def get_console_information(self, context, node_id):
        """Get connection information about the console.

        :param context: request context.
        :param node_id: node id or uuid.
        :raises: UnsupportedDriverExtension if the node's driver doesn't
                 support console.
        :raises: NodeConsoleNotEnabled if the console is not enabled.
        :raises: InvalidParameterValue when the wrong driver info is
                 specified.
        :raises: MissingParameterValue if missing supplied info.
        """
        LOG.debug('RPC get_console_information called for node %s', node_id)
        lock_purpose = 'getting console information'
        with task_manager.acquire(context, node_id, shared=True,
                                  purpose=lock_purpose) as task:
            node = task.node
            if not node.console_enabled:
                raise exception.NodeConsoleNotEnabled(node=node.uuid)
            task.driver.console.validate(task)
            return task.driver.console.get_console(task)

    @METRICS.timer('ConductorManager.set_console_mode')
    @messaging.expected_exceptions(exception.NoFreeConductorWorker,
                                   exception.NodeLocked,
                                   exception.UnsupportedDriverExtension,
                                   exception.InvalidParameterValue)
    def set_console_mode(self, context, node_id, enabled):
        """Enable/Disable the console.
Validate driver specific information synchronously, and then spawn a background worker to set console mode asynchronously. :param context: request context. :param node_id: node id or uuid. :param enabled: Boolean value; whether the console is enabled or disabled. :raises: UnsupportedDriverExtension if the node's driver doesn't support console. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :raises: NoFreeConductorWorker when there is no free worker to start async task """ LOG.debug('RPC set_console_mode called for node %(node)s with ' 'enabled %(enabled)s', {'node': node_id, 'enabled': enabled}) with task_manager.acquire(context, node_id, shared=False, purpose='setting console mode') as task: node = task.node task.driver.console.validate(task) if enabled == node.console_enabled: op = 'enabled' if enabled else 'disabled' LOG.info("No console action was triggered because the " "console is already %s", op) else: node.last_error = None node.save() task.spawn_after(self._spawn_worker, self._set_console_mode, task, enabled) @task_manager.require_exclusive_lock def _set_console_mode(self, task, enabled): """Internal method to set console mode on a node.""" node = task.node notify_utils.emit_console_notification( task, 'console_set', fields.NotificationStatus.START) try: if enabled: task.driver.console.start_console(task) # TODO(tenbrae): We should be updating conductor_affinity here # but there is no support for console sessions in # take_over() right now. else: task.driver.console.stop_console(task) except Exception as e: with excutils.save_and_reraise_exception(): op = _('enabling') if enabled else _('disabling') msg = (_('Error %(op)s the console on node %(node)s. 
' 'Reason: %(error)s') % {'op': op, 'node': node.uuid, 'error': e}) node.last_error = msg LOG.error(msg) node.save() notify_utils.emit_console_notification( task, 'console_set', fields.NotificationStatus.ERROR) node.console_enabled = enabled node.last_error = None node.save() notify_utils.emit_console_notification( task, 'console_set', fields.NotificationStatus.END) @METRICS.timer('ConductorManager.create_port') @messaging.expected_exceptions(exception.NodeLocked, exception.Conflict, exception.MACAlreadyExists, exception.PortgroupPhysnetInconsistent) def create_port(self, context, port_obj): """Create a port. :param context: request context. :param port_obj: a changed (but not saved) port object. :raises: NodeLocked if node is locked by another conductor :raises: MACAlreadyExists if the port has a MAC which is registered on another port already. :raises: Conflict if the port is a member of a portgroup which is on a different physical network. :raises: PortgroupPhysnetInconsistent if the port's portgroup has ports which are not all assigned the same physical network. """ port_uuid = port_obj.uuid LOG.debug("RPC create_port called for port %s.", port_uuid) with task_manager.acquire(context, port_obj.node_id, purpose='port create') as task: utils.validate_port_physnet(task, port_obj) port_obj.create() return port_obj @METRICS.timer('ConductorManager.update_port') @messaging.expected_exceptions(exception.NodeLocked, exception.FailedToUpdateMacOnPort, exception.MACAlreadyExists, exception.InvalidState, exception.FailedToUpdateDHCPOptOnPort, exception.Conflict, exception.InvalidParameterValue, exception.NetworkError, exception.PortgroupPhysnetInconsistent) def update_port(self, context, port_obj): """Update a port. :param context: request context. :param port_obj: a changed (but not saved) port object. :raises: DHCPLoadError if the dhcp_provider cannot be loaded. :raises: FailedToUpdateMacOnPort if MAC address changed and update failed. 
:raises: MACAlreadyExists if the update is setting a MAC which is registered on another port already. :raises: InvalidState if port connectivity attributes are updated while node not in a MANAGEABLE or ENROLL or INSPECTING or INSPECTWAIT state or not in MAINTENANCE mode. :raises: Conflict if trying to set extra/vif_port_id or pxe_enabled=True on port which is a member of portgroup with standalone_ports_supported=False. :raises: Conflict if the port is a member of a portgroup which is on a different physical network. :raises: PortgroupPhysnetInconsistent if the port's portgroup has ports which are not all assigned the same physical network. """ port_uuid = port_obj.uuid LOG.debug("RPC update_port called for port %s.", port_uuid) with task_manager.acquire(context, port_obj.node_id, purpose='port update') as task: node = task.node # Only allow updating MAC addresses for active nodes if maintenance # mode is on. if ((node.provision_state == states.ACTIVE or node.instance_uuid) and 'address' in port_obj.obj_what_changed() and not node.maintenance): action = _("Cannot update hardware address for port " "%(port)s as node %(node)s is active or has " "instance UUID assigned") raise exception.InvalidState(action % {'node': node.uuid, 'port': port_uuid}) # If port update is modifying the portgroup membership of the port # or modifying the local_link_connection, pxe_enabled or physical # network flags then node should be in MANAGEABLE/INSPECTING/ENROLL # provisioning state or in maintenance mode. Otherwise # InvalidState exception is raised. 
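The active-node address guard above can be sketched as a standalone predicate. This is a simplified illustration, not ironic's actual API: `can_update_address` and its plain-string arguments are hypothetical names standing in for the node object's attributes and `port_obj.obj_what_changed()`.

```python
# Illustrative sketch (not ironic code): a port's MAC address may only be
# changed when the node is neither active nor bound to an instance,
# unless the node is in maintenance mode.

def can_update_address(provision_state, instance_uuid, maintenance,
                       changed_fields):
    """Return True if changing 'address' on the port should be allowed."""
    if 'address' not in changed_fields:
        return True  # the MAC is not being touched at all
    node_in_use = provision_state == 'active' or instance_uuid is not None
    return maintenance or not node_in_use
```

For example, `can_update_address('active', None, False, {'address'})` is `False`, while turning maintenance mode on makes the same update permissible.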
connectivity_attr = {'portgroup_id', 'pxe_enabled', 'local_link_connection', 'physical_network'} allowed_update_states = [states.ENROLL, states.INSPECTING, states.INSPECTWAIT, states.MANAGEABLE] if (set(port_obj.obj_what_changed()) & connectivity_attr and not (node.provision_state in allowed_update_states or node.maintenance)): action = _("Port %(port)s can not have any connectivity " "attributes (%(connect)s) updated unless " "node %(node)s is in a %(allowed)s state " "or in maintenance mode.") raise exception.InvalidState( action % {'port': port_uuid, 'node': node.uuid, 'connect': ', '.join(connectivity_attr), 'allowed': ', '.join(allowed_update_states)}) utils.validate_port_physnet(task, port_obj) task.driver.network.validate(task) # Handle mac_address update and VIF attach/detach stuff. task.driver.network.port_changed(task, port_obj) port_obj.save() return port_obj @METRICS.timer('ConductorManager.update_portgroup') @messaging.expected_exceptions(exception.NodeLocked, exception.FailedToUpdateMacOnPort, exception.PortgroupMACAlreadyExists, exception.PortgroupNotEmpty, exception.InvalidState, exception.Conflict, exception.InvalidParameterValue, exception.NetworkError) def update_portgroup(self, context, portgroup_obj): """Update a portgroup. :param context: request context. :param portgroup_obj: a changed (but not saved) portgroup object. :raises: DHCPLoadError if the dhcp_provider cannot be loaded. :raises: FailedToUpdateMacOnPort if MAC address changed and update failed. :raises: PortgroupMACAlreadyExists if the update is setting a MAC which is registered on another portgroup already. :raises: InvalidState if portgroup-node association is updated while node not in a MANAGEABLE or ENROLL or INSPECTING or INSPECTWAIT state or not in MAINTENANCE mode. :raises: PortgroupNotEmpty if there are ports associated with this portgroup. :raises: Conflict when trying to set standalone_ports_supported=False on portgroup with ports that has pxe_enabled=True and vice versa. 
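The connectivity-attribute check above can be reduced to a small set-intersection test. A minimal sketch under simplifying assumptions: state names are plain lowercase strings here rather than constants from `ironic.common.states`, and `connectivity_update_allowed` is an illustrative name, not an ironic function.

```python
# Illustrative sketch (not ironic code) of update_port's connectivity
# check: changing portgroup_id, pxe_enabled, local_link_connection or
# physical_network is only allowed in certain provision states or in
# maintenance mode.

CONNECTIVITY_ATTRS = {'portgroup_id', 'pxe_enabled',
                      'local_link_connection', 'physical_network'}
ALLOWED_STATES = {'enroll', 'inspecting', 'inspect wait', 'manageable'}

def connectivity_update_allowed(changed_fields, provision_state, maintenance):
    """Return True if the changed fields may be updated in this state."""
    if not (set(changed_fields) & CONNECTIVITY_ATTRS):
        return True  # no connectivity attribute is being modified
    return provision_state in ALLOWED_STATES or maintenance
```

The intersection makes the check independent of which or how many connectivity attributes changed: one restricted field in the update is enough to require an allowed state or maintenance mode.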
""" portgroup_uuid = portgroup_obj.uuid LOG.debug("RPC update_portgroup called for portgroup %s.", portgroup_uuid) lock_purpose = 'update portgroup' with task_manager.acquire(context, portgroup_obj.node_id, purpose=lock_purpose) as task: node = task.node if 'node_id' in portgroup_obj.obj_what_changed(): # NOTE(zhenguo): If portgroup update is modifying the # portgroup-node association then node should be in # MANAGEABLE/INSPECTING/INSPECTWAIT/ENROLL provisioning state # or in maintenance mode, otherwise InvalidState is raised. allowed_update_states = [states.ENROLL, states.INSPECTING, states.INSPECTWAIT, states.MANAGEABLE] if (node.provision_state not in allowed_update_states and not node.maintenance): action = _("Portgroup %(portgroup)s can not be associated " "to node %(node)s unless the node is in a " "%(allowed)s state or in maintenance mode.") raise exception.InvalidState( action % {'portgroup': portgroup_uuid, 'node': node.uuid, 'allowed': ', '.join(allowed_update_states)}) # NOTE(zhenguo): If portgroup update is modifying the # portgroup-node association then there should not be # any Port associated to the PortGroup, otherwise # PortgroupNotEmpty exception is raised. associated_ports = self.dbapi.get_ports_by_portgroup_id( portgroup_uuid) if associated_ports: action = _("Portgroup %(portgroup)s can not be associated " "with node %(node)s because there are ports " "associated with this portgroup.") raise exception.PortgroupNotEmpty( action % {'portgroup': portgroup_uuid, 'node': node.uuid}) task.driver.network.validate(task) # Handle mac_address update and VIF attach/detach stuff. 
task.driver.network.portgroup_changed(task, portgroup_obj) portgroup_obj.save() return portgroup_obj @METRICS.timer('ConductorManager.update_volume_connector') @messaging.expected_exceptions( exception.InvalidParameterValue, exception.NodeLocked, exception.NodeNotFound, exception.VolumeConnectorNotFound, exception.VolumeConnectorTypeAndIdAlreadyExists, exception.InvalidStateRequested) def update_volume_connector(self, context, connector): """Update a volume connector. :param context: request context :param connector: a changed (but not saved) volume connector object :returns: an updated volume connector object :raises: InvalidParameterValue if the volume connector's UUID is being changed :raises: NodeLocked if the node is already locked :raises: NodeNotFound if the node associated with the conductor does not exist :raises: VolumeConnectorNotFound if the volume connector cannot be found :raises: VolumeConnectorTypeAndIdAlreadyExists if another connector already exists with the same values for type and connector_id fields :raises: InvalidStateRequested if the node associated with the connector is not powered off. """ LOG.debug("RPC update_volume_connector called for connector " "%(connector)s.", {'connector': connector.uuid}) with task_manager.acquire(context, connector.node_id, purpose='volume connector update') as task: node = task.node if node.power_state != states.POWER_OFF: raise exception.InvalidStateRequested( action='volume connector update', node=node.uuid, state=node.power_state) connector.save() LOG.info("Successfully updated volume connector %(connector)s.", {'connector': connector.uuid}) return connector @METRICS.timer('ConductorManager.update_volume_target') @messaging.expected_exceptions( exception.InvalidParameterValue, exception.NodeLocked, exception.NodeNotFound, exception.VolumeTargetNotFound, exception.VolumeTargetBootIndexAlreadyExists, exception.InvalidStateRequested) def update_volume_target(self, context, target): """Update a volume target. 
:param context: request context :param target: a changed (but not saved) volume target object :returns: an updated volume target object :raises: InvalidParameterValue if the volume target's UUID is being changed :raises: NodeLocked if the node is already locked :raises: NodeNotFound if the node associated with the volume target does not exist :raises: VolumeTargetNotFound if the volume target cannot be found :raises: VolumeTargetBootIndexAlreadyExists if a volume target already exists with the same node ID and boot index values :raises: InvalidStateRequested if the node associated with the target is not powered off. """ LOG.debug("RPC update_volume_target called for target %(target)s.", {'target': target.uuid}) with task_manager.acquire(context, target.node_id, purpose='volume target update') as task: node = task.node if node.power_state != states.POWER_OFF: raise exception.InvalidStateRequested( action='volume target update', node=node.uuid, state=node.power_state) target.save() LOG.info("Successfully updated volume target %(target)s.", {'target': target.uuid}) return target @METRICS.timer('ConductorManager.get_driver_properties') @messaging.expected_exceptions(exception.DriverNotFound) def get_driver_properties(self, context, driver_name): """Get the properties of the driver. :param context: request context. :param driver_name: name of the driver. :returns: a dictionary with : entries. :raises: DriverNotFound if the driver is not loaded. 
""" LOG.debug("RPC get_driver_properties called for driver %s.", driver_name) driver = driver_factory.get_hardware_type(driver_name) return driver.get_properties() @METRICS.timer('ConductorManager._sensors_nodes_task') def _sensors_nodes_task(self, context, nodes): """Sends sensors data for nodes from synchronized queue.""" while not self._shutdown: try: (node_uuid, driver, conductor_group, instance_uuid) = nodes.get_nowait() except queue.Empty: break # populate the message which will be sent to ceilometer message = {'message_id': uuidutils.generate_uuid(), 'instance_uuid': instance_uuid, 'node_uuid': node_uuid, 'timestamp': datetime.datetime.utcnow()} try: lock_purpose = 'getting sensors data' with task_manager.acquire(context, node_uuid, shared=True, purpose=lock_purpose) as task: if task.node.maintenance: LOG.debug('Skipping sending sensors data for node ' '%s as it is in maintenance mode', task.node.uuid) continue # Add the node name, as the name would be handy for other # notifier plugins message['node_name'] = task.node.name # We should convey the proper hardware type, # which previously was hard coded to ipmi, but other # drivers were transmitting other values under the # guise of ipmi. ev_type = 'hardware.{driver}.metrics'.format( driver=task.node.driver) message['event_type'] = ev_type + '.update' task.driver.management.validate(task) sensors_data = task.driver.management.get_sensors_data( task) except NotImplementedError: LOG.warning( 'get_sensors_data is not implemented for driver' ' %(driver)s, node_uuid is %(node)s', {'node': node_uuid, 'driver': driver}) except exception.FailedToParseSensorData as fps: LOG.warning( "During get_sensors_data, could not parse " "sensor data for node %(node)s. Error: %(err)s.", {'node': node_uuid, 'err': str(fps)}) except exception.FailedToGetSensorData as fgs: LOG.warning( "During get_sensors_data, could not get " "sensor data for node %(node)s. 

Error: %(err)s.", {'node': node_uuid, 'err': str(fgs)}) except exception.NodeNotFound: LOG.warning( "During send_sensor_data, node %(node)s was not " "found and presumed deleted by another process.", {'node': node_uuid}) except Exception as e: LOG.warning( "Failed to get sensor data for node %(node)s. " "Error: %(error)s", {'node': node_uuid, 'error': e}) else: message['payload'] = ( self._filter_out_unsupported_types(sensors_data)) if message['payload']: self.sensors_notifier.info( context, ev_type, message) finally: # Yield on every iteration eventlet.sleep(0) @METRICS.timer('ConductorManager._send_sensor_data') @periodics.periodic(spacing=CONF.conductor.send_sensor_data_interval, enabled=CONF.conductor.send_sensor_data) def _send_sensor_data(self, context): """Periodically collects and transmits sensor data notifications.""" filters = {} if not CONF.conductor.send_sensor_data_for_undeployed_nodes: filters['provision_state'] = states.ACTIVE nodes = queue.Queue() for node_info in self.iter_nodes(fields=['instance_uuid'], filters=filters): nodes.put_nowait(node_info) number_of_threads = min(CONF.conductor.send_sensor_data_workers, nodes.qsize()) futures = [] for thread_number in range(number_of_threads): try: futures.append( self._spawn_worker(self._sensors_nodes_task, context, nodes)) except exception.NoFreeConductorWorker: LOG.warning("There are no more conductor workers for " "the task of sending sensors data. %(workers)d " "workers have already been spawned.", {'workers': thread_number}) break done, not_done = waiters.wait_for_all( futures, timeout=CONF.conductor.send_sensor_data_wait_timeout) if not_done: LOG.warning("%d workers for sending sensors data did not complete", len(not_done)) def _filter_out_unsupported_types(self, sensors_data): """Filters out sensor data types that aren't specified in the config. Removes sensor data types that aren't specified in CONF.conductor.send_sensor_data_types. 
:param sensors_data: dict containing sensor types and the associated data :returns: dict with unsupported sensor types removed """ allowed = set(x.lower() for x in CONF.conductor.send_sensor_data_types) if 'all' in allowed: return sensors_data return dict((sensor_type, sensor_value) for (sensor_type, sensor_value) in sensors_data.items() if sensor_type.lower() in allowed) @METRICS.timer('ConductorManager.set_boot_device') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def set_boot_device(self, context, node_id, device, persistent=False): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param context: request context. :param node_id: node id or uuid. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Whether to set next-boot, or make the change permanent. Default: False. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid boot device is specified. :raises: MissingParameterValue if missing supplied info. """ LOG.debug('RPC set_boot_device called for node %(node)s with ' 'device %(device)s', {'node': node_id, 'device': device}) with task_manager.acquire(context, node_id, purpose='setting boot device') as task: task.driver.management.validate(task) task.driver.management.set_boot_device(task, device, persistent=persistent) @METRICS.timer('ConductorManager.get_boot_device') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def get_boot_device(self, context, node_id): """Get the current boot device. Returns the current boot device of a node. :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. 
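The sensor-type filter above is self-contained enough to sketch standalone. The logic mirrors `_filter_out_unsupported_types`: a case-insensitive allow-list with a special `'all'` value that passes everything through; `filter_sensor_types` is an illustrative name, not ironic's.

```python
# Standalone sketch of the sensor-type filter: keep only the sensor
# categories named in the (case-insensitive) allow-list, with the
# special value 'all' passing the whole payload through unchanged.

def filter_sensor_types(sensors_data, allowed_types):
    """Return sensors_data restricted to the allowed sensor types."""
    allowed = {t.lower() for t in allowed_types}
    if 'all' in allowed:
        return sensors_data
    return {sensor_type: sensor_value
            for sensor_type, sensor_value in sensors_data.items()
            if sensor_type.lower() in allowed}
```

For example, with `allowed_types=['temperature']`, a payload containing `Temperature` and `Fan` categories is reduced to just the `Temperature` entries, regardless of the capitalization the driver reported.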
:raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ LOG.debug('RPC get_boot_device called for node %s', node_id) with task_manager.acquire(context, node_id, purpose='getting boot device') as task: task.driver.management.validate(task) return task.driver.management.get_boot_device(task) @METRICS.timer('ConductorManager.inject_nmi') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def inject_nmi(self, context, node_id): """Inject NMI for a node. Inject NMI (Non Maskable Interrupt) for a node immediately. :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management or management.inject_nmi. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. """ LOG.debug('RPC inject_nmi called for node %s', node_id) with task_manager.acquire(context, node_id, purpose='inject nmi') as task: task.driver.management.validate(task) task.driver.management.inject_nmi(task) @METRICS.timer('ConductorManager.get_supported_boot_devices') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def get_supported_boot_devices(self, context, node_id): """Get the list of supported boot devices. Returns the list of supported boot devices of a node. :param context: request context. :param node_id: node id or uuid. 
:raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. """ LOG.debug('RPC get_supported_boot_devices called for node %s', node_id) lock_purpose = 'getting supported boot devices' with task_manager.acquire(context, node_id, shared=True, purpose=lock_purpose) as task: return task.driver.management.get_supported_boot_devices(task) @METRICS.timer('ConductorManager.set_indicator_state') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def set_indicator_state(self, context, node_id, component, indicator, state): """Set a node hardware component's indicator to the desired state. :param context: request context. :param node_id: node id or uuid. :param component: The hardware component, one of :mod:`ironic.common.components`. :param indicator: Indicator ID, as reported by `get_supported_indicators`. :param state: Indicator state, one of :mod:`ironic.common.indicator_states`. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid indicator state is specified. :raises: MissingParameterValue if missing supplied info. 
""" LOG.debug('RPC set_indicator_state called for node %(node)s with ' 'component %(component)s, indicator %(indicator)s and state ' '%(state)s', {'node': node_id, 'component': component, 'indicator': indicator, 'state': state}) with task_manager.acquire(context, node_id, purpose='setting indicator state') as task: task.driver.management.validate(task) task.driver.management.set_indicator_state( task, component, indicator, state) @METRICS.timer('ConductorManager.get_indicator_state') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def get_indicator_state(self, context, node_id, component, indicator): """Get a node hardware component's indicator state. :param context: request context. :param node_id: node id or uuid. :param component: The hardware component, one of :mod:`ironic.common.components`. :param indicator: Indicator ID, as reported by `get_supported_indicators`. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: Indicator state, one of :mod:`ironic.common.indicator_states`. """ LOG.debug('RPC get_indicator_state called for node %s', node_id) with task_manager.acquire(context, node_id, purpose='getting indicator state') as task: task.driver.management.validate(task) return task.driver.management.get_indicator_state( task, component, indicator) @METRICS.timer('ConductorManager.get_supported_indicators') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def get_supported_indicators(self, context, node_id, component=None): """Get node hardware components and their indicators. :param context: request context. :param node_id: node id or uuid. 
:param component: If not `None`, return indicator information for just this component, otherwise return indicators for all existing components. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: A dictionary of hardware components (:mod:`ironic.common.components`) as keys with indicator IDs as values. :: { 'chassis': { 'enclosure-0': { "readonly": true, "states": [ "OFF", "ON" ] } }, 'system': { 'blade-A': { "readonly": true, "states": [ "OFF", "ON" ] } }, 'drive': { 'ssd0': { "readonly": true, "states": [ "OFF", "ON" ] } } } """ LOG.debug('RPC get_supported_indicators called for node %s', node_id) lock_purpose = 'getting supported indicators' with task_manager.acquire(context, node_id, shared=True, purpose=lock_purpose) as task: return task.driver.management.get_supported_indicators( task, component) @METRICS.timer('ConductorManager.inspect_hardware') @messaging.expected_exceptions(exception.NoFreeConductorWorker, exception.NodeLocked, exception.InvalidParameterValue, exception.InvalidStateRequested, exception.UnsupportedDriverExtension) def inspect_hardware(self, context, node_id): """Inspect hardware to obtain hardware properties. Initiate the inspection of a node. Validations are done synchronously and the actual inspection work is performed in background (asynchronously). :param context: request context. :param node_id: node id or uuid. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support inspect. :raises: NoFreeConductorWorker when there is no free worker to start async task :raises: InvalidParameterValue when unable to get essential scheduling properties from hardware. :raises: MissingParameterValue when required information is not found. 
:raises: InvalidStateRequested if 'inspect' is not a valid action to do in the current state. """ LOG.debug('RPC inspect_hardware called for node %s', node_id) with task_manager.acquire(context, node_id, shared=False, purpose='hardware inspection') as task: task.driver.power.validate(task) task.driver.inspect.validate(task) try: task.process_event( 'inspect', callback=self._spawn_worker, call_args=(_do_inspect_hardware, task), err_handler=utils.provisioning_error_handler) except exception.InvalidState: raise exception.InvalidStateRequested( action='inspect', node=task.node.uuid, state=task.node.provision_state) @METRICS.timer('ConductorManager._check_inspect_wait_timeouts') @periodics.periodic( spacing=CONF.conductor.check_provision_state_interval, enabled=CONF.conductor.check_provision_state_interval > 0 and CONF.conductor.inspect_wait_timeout != 0) def _check_inspect_wait_timeouts(self, context): """Periodically checks inspect_wait_timeout and fails upon reaching it. :param context: request context """ # FIXME(rloo): If the value is < 0, it will be enabled. That doesn't # seem right. callback_timeout = CONF.conductor.inspect_wait_timeout filters = {'reserved': False, 'maintenance': False, 'provision_state': states.INSPECTWAIT, 'inspection_started_before': callback_timeout} sort_key = 'inspection_started_at' last_error = _("timeout reached while inspecting the node") self._fail_if_in_state(context, filters, states.INSPECTWAIT, sort_key, last_error=last_error) @METRICS.timer('ConductorManager.set_target_raid_config') @messaging.expected_exceptions(exception.NodeLocked, exception.UnsupportedDriverExtension, exception.InvalidParameterValue) def set_target_raid_config(self, context, node_id, target_raid_config): """Stores the target RAID configuration on the node. Stores the target RAID configuration on node.target_raid_config :param context: request context. :param node_id: node id or uuid. 
:param target_raid_config: Dictionary containing the target RAID configuration. It may be an empty dictionary as well. :raises: UnsupportedDriverExtension, if the node's driver doesn't support RAID configuration. :raises: InvalidParameterValue, if validation of target raid config fails. :raises: MissingParameterValue, if some required parameters are missing. :raises: NodeLocked if node is locked by another conductor. """ LOG.debug('RPC set_target_raid_config called for node %(node)s with ' 'RAID configuration %(target_raid_config)s', {'node': node_id, 'target_raid_config': target_raid_config}) with task_manager.acquire( context, node_id, purpose='setting target RAID config') as task: node = task.node # Operator may try to unset node.target_raid_config. So, try to # validate only if it is not empty. if target_raid_config: task.driver.raid.validate_raid_config(task, target_raid_config) node.target_raid_config = target_raid_config node.save() @METRICS.timer('ConductorManager.get_raid_logical_disk_properties') @messaging.expected_exceptions(exception.UnsupportedDriverExtension, exception.NoValidDefaultForInterface, exception.InterfaceNotFoundInEntrypoint) def get_raid_logical_disk_properties(self, context, driver_name): """Get the logical disk properties for RAID configuration. Gets the information about logical disk properties which can be specified in the input RAID configuration. For dynamic drivers, the default vendor interface is used. :param context: request context. :param driver_name: name of the driver :raises: UnsupportedDriverExtension, if the driver doesn't support RAID configuration. :raises: NoValidDefaultForInterface if no default interface implementation can be found for this driver's RAID interface. :raises: InterfaceNotFoundInEntrypoint if the default interface for a hardware type is invalid. :returns: A dictionary containing the properties and a textual description for them. 
""" LOG.debug("RPC get_raid_logical_disk_properties " "called for driver %s", driver_name) driver = driver_factory.get_hardware_type(driver_name) raid_iface = None raid_iface_name = driver_factory.default_interface( driver, 'raid', driver_name=driver_name) raid_iface = driver_factory.get_interface(driver, 'raid', raid_iface_name) return raid_iface.get_logical_disk_properties() @METRICS.timer('ConductorManager.heartbeat') @messaging.expected_exceptions(exception.NoFreeConductorWorker) def heartbeat(self, context, node_id, callback_url, agent_version=None, agent_token=None): """Process a heartbeat from the ramdisk. :param context: request context. :param node_id: node id or uuid. :param agent_version: The version of the agent that is heartbeating. If not provided it either indicates that the agent that is heartbeating is a version before sending agent_version was introduced or that we're in the middle of a rolling upgrade and the RPC version is pinned so the API isn't passing us the agent_version, in these cases assume agent v3.0.0 (the last release before sending agent_version was introduced). :param callback_url: URL to reach back to the ramdisk. :raises: NoFreeConductorWorker if there are no conductors to process this heartbeat request. """ LOG.debug('RPC heartbeat called for node %s', node_id) if agent_version is None: agent_version = '3.0.0' token_required = CONF.require_agent_token # NOTE(dtantsur): we acquire a shared lock to begin with, drivers are # free to promote it to an exclusive one. with task_manager.acquire(context, node_id, shared=True, purpose='heartbeat') as task: # NOTE(TheJulia): The "token" line of defense. # either tokens are required and they are present, # or a token is present in general and needs to be # validated. 
if (token_required or (utils.is_agent_token_present(task.node) and agent_token)): if not utils.is_agent_token_valid(task.node, agent_token): LOG.error('Invalid agent_token received for node ' '%(node)s', {'node': node_id}) raise exception.InvalidParameterValue( 'Invalid or missing agent token received.') elif utils.is_agent_token_supported(agent_version): LOG.error('Suspicious activity detected for node %(node)s ' 'when attempting to heartbeat. Heartbeat ' 'request has been rejected as the version of ' 'ironic-python-agent indicated in the heartbeat ' 'operation should support agent token ' 'functionality.', {'node': task.node.uuid}) raise exception.InvalidParameterValue( 'Invalid or missing agent token received.') else: LOG.warning('Out of date agent detected for node ' '%(node)s. Agent version %(version)s ' 'reported. Support for this version is ' 'deprecated.', {'node': task.node.uuid, 'version': agent_version}) # TODO(TheJulia): raise an exception as of the # ?Victoria? development cycle. task.spawn_after( self._spawn_worker, task.driver.deploy.heartbeat, task, callback_url, agent_version) @METRICS.timer('ConductorManager.vif_list') @messaging.expected_exceptions(exception.NetworkError, exception.InvalidParameterValue) def vif_list(self, context, node_id): """List attached VIFs for a node. :param context: request context. :param node_id: node ID or UUID. :returns: List of VIF dictionaries, each dictionary will have an 'id' entry with the ID of the VIF. :raises: NetworkError, if something goes wrong while listing the VIFs. :raises: InvalidParameterValue, if a parameter that's required for VIF list is wrong/missing. 
""" LOG.debug("RPC vif_list called for the node %s", node_id) with task_manager.acquire(context, node_id, purpose='list vifs', shared=True) as task: task.driver.network.validate(task) return task.driver.network.vif_list(task) @METRICS.timer('ConductorManager.vif_attach') @messaging.expected_exceptions(exception.NodeLocked, exception.NetworkError, exception.VifAlreadyAttached, exception.NoFreePhysicalPorts, exception.PortgroupPhysnetInconsistent, exception.VifInvalidForAttach, exception.InvalidParameterValue) def vif_attach(self, context, node_id, vif_info): """Attach a VIF to a node :param context: request context. :param node_id: node ID or UUID. :param vif_info: a dictionary representing VIF object. It must have an 'id' key, whose value is a unique identifier for that VIF. :raises: VifAlreadyAttached, if VIF is already attached to node :raises: NoFreePhysicalPorts, if no free physical ports left to attach :raises: NodeLocked, if node has an exclusive lock held on it :raises: NetworkError, if an error occurs during attaching the VIF. :raises: InvalidParameterValue, if a parameter that's required for VIF attach is wrong/missing. :raises: PortgroupPhysnetInconsistent if one of the node's portgroups has ports which are not all assigned the same physical network. :raises: VifInvalidForAttach if the VIF is not valid for attachment to the node. 
""" LOG.debug("RPC vif_attach called for the node %(node_id)s with " "vif_info %(vif_info)s", {'node_id': node_id, 'vif_info': vif_info}) with task_manager.acquire(context, node_id, purpose='attach vif') as task: task.driver.network.validate(task) task.driver.network.vif_attach(task, vif_info) LOG.info("VIF %(vif_id)s successfully attached to node %(node_id)s", {'vif_id': vif_info['id'], 'node_id': node_id}) @METRICS.timer('ConductorManager.vif_detach') @messaging.expected_exceptions(exception.NodeLocked, exception.NetworkError, exception.VifNotAttached, exception.InvalidParameterValue) def vif_detach(self, context, node_id, vif_id): """Detach a VIF from a node :param context: request context. :param node_id: node ID or UUID. :param vif_id: A VIF ID. :raises: VifNotAttached, if VIF not attached to node :raises: NodeLocked, if node has an exclusive lock held on it :raises: NetworkError, if an error occurs during detaching the VIF. :raises: InvalidParameterValue, if a parameter that's required for VIF detach is wrong/missing. """ LOG.debug("RPC vif_detach called for the node %(node_id)s with " "vif_id %(vif_id)s", {'node_id': node_id, 'vif_id': vif_id}) with task_manager.acquire(context, node_id, purpose='detach vif') as task: task.driver.network.validate(task) task.driver.network.vif_detach(task, vif_id) LOG.info("VIF %(vif_id)s successfully detached from node %(node_id)s", {'vif_id': vif_id, 'node_id': node_id}) def _object_dispatch(self, target, method, context, args, kwargs): """Dispatch a call to an object method. This ensures that object methods get called and any exception that is raised gets wrapped in an ExpectedException for forwarding back to the caller (without spamming the conductor logs). """ try: # NOTE(danms): Keep the getattr inside the try block since # a missing method is really a client problem return getattr(target, method)(context, *args, **kwargs) except Exception: # NOTE(danms): This is oslo.messaging fu. 
ExpectedException() # grabs sys.exc_info here and forwards it along. This allows the # caller to see the exception information, but causes us *not* to # log it as such in this service. This is something that is quite # critical so that things that conductor does on behalf of another # node are not logged as exceptions in conductor logs. Otherwise, # you'd have the same thing logged in both places, even though an # exception here *always* means that the caller screwed up, so # there's no reason to log it here. raise messaging.ExpectedException() @METRICS.timer('ConductorManager.object_class_action_versions') def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): """Perform an action on a VersionedObject class. :param context: The context within which to perform the action :param objname: The registry name of the object :param objmethod: The name of the action method to call :param object_versions: A dict of {objname: version} mappings :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :returns: The result of the action method, which may (or may not) be an instance of the implementing VersionedObject class. """ objclass = objects_base.IronicObject.obj_class_from_name( objname, object_versions[objname]) result = self._object_dispatch(objclass, objmethod, context, args, kwargs) # NOTE(danms): The RPC layer will convert to primitives for us, # but in this case, we need to honor the version the client is # asking for, so we do it before returning here. if isinstance(result, objects_base.IronicObject): result = result.obj_to_primitive( target_version=object_versions[objname], version_manifest=object_versions) return result @METRICS.timer('ConductorManager.object_action') def object_action(self, context, objinst, objmethod, args, kwargs): """Perform an action on a VersionedObject instance. 
:param context: The context within which to perform the action :param objinst: The object instance on which to perform the action :param objmethod: The name of the action method to call :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :returns: A tuple with the updates made to the object and the result of the action method """ oldobj = objinst.obj_clone() result = self._object_dispatch(objinst, objmethod, context, args, kwargs) updates = dict() # NOTE(danms): Diff the object with the one passed to us and # generate a list of changes to forward back for name, field in objinst.fields.items(): if not objinst.obj_attr_is_set(name): # Avoid demand-loading anything continue if (not oldobj.obj_attr_is_set(name) or getattr(oldobj, name) != getattr(objinst, name)): updates[name] = field.to_primitive(objinst, name, getattr(objinst, name)) # This is safe since a field named this would conflict with the # method anyway updates['obj_what_changed'] = objinst.obj_what_changed() return updates, result @METRICS.timer('ConductorManager.object_backport_versions') def object_backport_versions(self, context, objinst, object_versions): """Perform a backport of an object instance. The default behavior of the base VersionedObjectSerializer, upon receiving an object with a version newer than what is in the local registry, is to call this method to request a backport of the object. 
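The diff loop in `object_action` above forwards only the fields that changed during the dispatched call: it snapshots the object beforehand, then compares field by field. A toy version of the same idea, with plain dicts standing in for VersionedObjects (the helper name is illustrative):

```python
# Toy version of the object_action field diff: snapshot before dispatch,
# then forward only fields that were newly set or whose value changed.
def diff_fields(oldobj, newobj):
    updates = {}
    for name, value in newobj.items():
        # Forward a field if it was unset before or its value differs now.
        if name not in oldobj or oldobj[name] != value:
            updates[name] = value
    return updates
```

The real code additionally serializes each changed field with `field.to_primitive()` and appends `obj_what_changed()` so the RPC caller can replay the mutation on its side.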
:param context: The context within which to perform the backport :param objinst: An instance of a VersionedObject to be backported :param object_versions: A dict of {objname: version} mappings :returns: The downgraded instance of objinst """ target = object_versions[objinst.obj_name()] LOG.debug('Backporting %(obj)s to %(ver)s with versions %(manifest)s', {'obj': objinst.obj_name(), 'ver': target, 'manifest': ','.join( ['%s=%s' % (name, ver) for name, ver in object_versions.items()])}) return objinst.obj_to_primitive(target_version=target, version_manifest=object_versions) @METRICS.timer('ConductorManager.add_node_traits') @messaging.expected_exceptions(exception.InvalidParameterValue, exception.NodeLocked, exception.NodeNotFound) def add_node_traits(self, context, node_id, traits, replace=False): """Add or replace traits for a node. :param context: request context. :param node_id: node ID or UUID. :param traits: a list of traits to add to the node. :param replace: True to replace all of the node's traits. :raises: InvalidParameterValue if adding the traits would exceed the per-node traits limit. Traits added prior to reaching the limit will not be removed. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node does not exist. """ LOG.debug("RPC add_node_traits called for the node %(node_id)s with " "traits %(traits)s", {'node_id': node_id, 'traits': traits}) with task_manager.acquire(context, node_id, purpose='add node traits'): if replace: objects.TraitList.create(context, node_id=node_id, traits=traits) else: for trait in traits: trait = objects.Trait(context, node_id=node_id, trait=trait) trait.create() @METRICS.timer('ConductorManager.remove_node_traits') @messaging.expected_exceptions(exception.NodeLocked, exception.NodeNotFound, exception.NodeTraitNotFound) def remove_node_traits(self, context, node_id, traits): """Remove some or all traits from a node. :param context: request context. :param node_id: node ID or UUID. 
:param traits: a list of traits to remove from the node, or None. If None, all traits will be removed from the node. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node does not exist. :raises: NodeTraitNotFound if one of the traits is not found. Traits removed prior to the non-existent trait will not be replaced. """ LOG.debug("RPC remove_node_traits called for the node %(node_id)s " "with traits %(traits)s", {'node_id': node_id, 'traits': traits}) with task_manager.acquire(context, node_id, purpose='remove node traits'): if traits is None: objects.TraitList.destroy(context, node_id=node_id) else: for trait in traits: objects.Trait.destroy(context, node_id=node_id, trait=trait) @METRICS.timer('ConductorManager.create_allocation') @messaging.expected_exceptions(exception.InvalidParameterValue, exception.NodeAssociated, exception.InstanceAssociated, exception.NodeNotFound) def create_allocation(self, context, allocation): """Create an allocation in database. :param context: an admin context :param allocation: a created (but not saved to the database) allocation object. :returns: created allocation object. :raises: InvalidParameterValue if some fields fail validation. :raises: NodeAssociated if allocation backfill is requested for a node that is associated with another instance. :raises: InstanceAssociated if allocation backfill is requested, but the allocation UUID is already used as instance_uuid on another node. :raises: NodeNotFound if allocation backfill is requested for a node that cannot be found. """ LOG.debug("RPC create_allocation called for allocation %s.", allocation.uuid) allocation.conductor_affinity = self.conductor.id # Allocation backfilling is handled separately, remove node_id for now. # Cannot use plain getattr here since oslo.versionedobjects raise # NotImplementedError instead of AttributeError (because life is pain). 
if 'node_id' in allocation and allocation.node_id: node_id = allocation.node_id allocation.node_id = None else: node_id = None allocation.create() if node_id: # This is a fast operation and should be done synchronously allocations.backfill_allocation(context, allocation, node_id) else: # Spawn an asynchronous worker to process the allocation. Copy it # to avoid data races. self._spawn_worker(allocations.do_allocate, context, allocation.obj_clone()) # Return the current status of the allocation return allocation @METRICS.timer('ConductorManager.destroy_allocation') @messaging.expected_exceptions(exception.InvalidState) def destroy_allocation(self, context, allocation): """Delete an allocation. :param context: request context. :param allocation: allocation object. :raises: InvalidState if the associated node is in the wrong provision state to perform deallocation. """ if allocation.node_id: with task_manager.acquire(context, allocation.node_id, purpose='allocation deletion', shared=False) as task: allocations.verify_node_for_deallocation(task.node, allocation) # NOTE(dtantsur): remove the allocation while still holding # the node lock to avoid races. allocation.destroy() else: allocation.destroy() LOG.info('Successfully deleted allocation %s', allocation.uuid) @METRICS.timer('ConductorManager._check_orphan_allocations') @periodics.periodic( spacing=CONF.conductor.check_allocations_interval, enabled=CONF.conductor.check_allocations_interval > 0) def _check_orphan_allocations(self, context): """Periodically checks the status of allocations that were taken over. Periodically checks the allocations assigned to a conductor that went offline, tries to take them over and finish. :param context: request context. 
""" offline_conductors = self.dbapi.get_offline_conductors(field='id') for conductor_id in offline_conductors: filters = {'state': states.ALLOCATING, 'conductor_affinity': conductor_id} for allocation in objects.Allocation.list(context, filters=filters): try: if not self.dbapi.take_over_allocation(allocation.id, conductor_id, self.conductor.id): # Another conductor has taken over, skipping continue LOG.debug('Taking over allocation %s', allocation.uuid) allocations.do_allocate(context, allocation) except Exception: LOG.exception('Unexpected exception when taking over ' 'allocation %s', allocation.uuid) @METRICS.timer('ConductorManager.get_node_with_token') @messaging.expected_exceptions(exception.NodeLocked, exception.Invalid) def get_node_with_token(self, context, node_id): """Add secret agent token to node. :param context: request context. :param node_id: node ID or UUID. :returns: Secret token set for the node. :raises: NodeLocked, if node has an exclusive lock held on it :raises: Invalid, if the node already has a token set. """ LOG.debug("RPC get_node_with_token called for the node %(node_id)s", {'node_id': node_id}) with task_manager.acquire(context, node_id, purpose='generate_token', shared=True) as task: node = task.node if utils.is_agent_token_present(task.node): LOG.warning('An agent token generation request is being ' 'refused as one is already present for ' 'node %(node)s', {'node': node_id}) # Allow lookup to work by returning a value, it is just an # unusable value that can't be verified against. # This is important if the agent lookup has occured with # pre-generation of tokens with virtual media usage. 
node.driver_internal_info['agent_secret_token'] = "******" return node task.upgrade_lock() LOG.debug('Generating agent token for node %(node)s', {'node': task.node.uuid}) utils.add_secret_token(task.node) task.node.save() return task.node @METRICS.timer('get_vendor_passthru_metadata') def get_vendor_passthru_metadata(route_dict): d = {} for method, metadata in route_dict.items(): # 'func' is the vendor method reference, ignore it d[method] = {k: metadata[k] for k in metadata if k != 'func'} return d @task_manager.require_exclusive_lock def handle_sync_power_state_max_retries_exceeded(task, actual_power_state, exception=None): """Handles power state sync exceeding the max retries. When synchronizing the power state between a node and the DB has exceeded the maximum number of retries, change the DB power state to be the actual node power state and place the node in maintenance. :param task: a TaskManager instance with an exclusive lock :param actual_power_state: the actual power state of the node; a power state from ironic.common.states :param exception: the exception object that caused the sync power state to fail, if present. """ node = task.node msg = (_("During sync_power_state, max retries exceeded " "for node %(node)s, node state %(actual)s " "does not match expected state '%(state)s'. 
" "Updating DB state to '%(actual)s' " "Switching node to maintenance mode.") % {'node': node.uuid, 'actual': actual_power_state, 'state': node.power_state}) if exception is not None: msg += _(" Error: %s") % exception old_power_state = node.power_state node.power_state = actual_power_state node.last_error = msg node.maintenance = True node.maintenance_reason = msg node.fault = faults.POWER_FAILURE node.save() if old_power_state != actual_power_state: if node.instance_uuid: nova.power_update( task.context, node.instance_uuid, node.power_state) notify_utils.emit_power_state_corrected_notification( task, old_power_state) LOG.error(msg) @METRICS.timer('do_sync_power_state') def do_sync_power_state(task, count): """Sync the power state for this node, incrementing the counter on failure. When the limit of power_state_sync_max_retries is reached, the node is put into maintenance mode and the error recorded. :param task: a TaskManager instance :param count: number of times this node has previously failed a sync :raises: NodeLocked if unable to upgrade task lock to an exclusive one :returns: Count of failed attempts. On success, the counter is set to 0. On failure, the count is incremented by one """ node = task.node old_power_state = node.power_state power_state = None count += 1 max_retries = CONF.conductor.power_state_sync_max_retries # If power driver info can not be validated, and node has no prior state, # do not attempt to sync the node's power state. if node.power_state is None: try: task.driver.power.validate(task) except exception.InvalidParameterValue: return 0 try: # The driver may raise an exception, or may return ERROR. # Handle both the same way. 
power_state = task.driver.power.get_power_state(task) if power_state == states.ERROR: raise exception.PowerStateFailure( _("Power driver returned ERROR state " "while trying to sync power state.")) except Exception as e: # Stop if any exception is raised when getting the power state if count > max_retries: task.upgrade_lock() handle_sync_power_state_max_retries_exceeded(task, power_state, exception=e) else: LOG.warning("During sync_power_state, could not get power " "state for node %(node)s, attempt %(attempt)s of " "%(retries)s. Error: %(err)s.", {'node': node.uuid, 'attempt': count, 'retries': max_retries, 'err': e}) return count if node.power_state and node.power_state == power_state: # No action is needed return 0 # We will modify a node, so upgrade our lock and use reloaded node. # This call may raise NodeLocked that will be caught on upper level. task.upgrade_lock() node = task.node # Repeat all checks with exclusive lock to avoid races if node.power_state and node.power_state == power_state: # Node power state was updated to the correct value return 0 elif node.provision_state in SYNC_EXCLUDED_STATES or node.maintenance: # Something was done to a node while a shared lock was held return 0 elif node.power_state is None: # If node has no prior state AND we successfully got a state, # simply record that and send a notification. LOG.info("During sync_power_state, node %(node)s has no " "previous known state. Recording current state '%(state)s'.", {'node': node.uuid, 'state': power_state}) node.power_state = power_state node.save() if node.instance_uuid: nova.power_update( task.context, node.instance_uuid, node.power_state) notify_utils.emit_power_state_corrected_notification( task, None) return 0 if count > max_retries: handle_sync_power_state_max_retries_exceeded(task, power_state) return count if CONF.conductor.force_power_state_during_sync: LOG.warning("During sync_power_state, node %(node)s state " "'%(actual)s' does not match expected state. 
" "Changing hardware state to '%(state)s'.", {'node': node.uuid, 'actual': power_state, 'state': node.power_state}) try: # node_power_action will update the node record # so don't do that again here. utils.node_power_action(task, node.power_state) except Exception: LOG.error( "Failed to change power state of node %(node)s " "to '%(state)s', attempt %(attempt)s of %(retries)s.", {'node': node.uuid, 'state': node.power_state, 'attempt': count, 'retries': max_retries}) else: LOG.warning("During sync_power_state, node %(node)s state " "does not match expected state '%(state)s'. " "Updating recorded state to '%(actual)s'.", {'node': node.uuid, 'actual': power_state, 'state': node.power_state}) node.power_state = power_state node.save() if node.instance_uuid: nova.power_update( task.context, node.instance_uuid, node.power_state) notify_utils.emit_power_state_corrected_notification( task, old_power_state) return count @task_manager.require_exclusive_lock def _do_inspect_hardware(task): """Initiates inspection. :param task: a TaskManager instance with an exclusive lock on its node. :raises: HardwareInspectionFailure if driver doesn't return the state as states.MANAGEABLE, states.INSPECTWAIT. """ node = task.node def handle_failure(e, log_func=LOG.error): node.last_error = e task.process_event('fail') log_func("Failed to inspect node %(node)s: %(err)s", {'node': node.uuid, 'err': e}) # Remove agent_url, while not strictly needed for the inspection path, # lets just remove it out of good practice. 
    utils.remove_agent_url(node)
    try:
        new_state = task.driver.inspect.inspect_hardware(task)
    except exception.IronicException as e:
        with excutils.save_and_reraise_exception():
            error = str(e)
            handle_failure(error)
    except Exception as e:
        error = (_('Unexpected exception of type %(type)s: %(msg)s') %
                 {'type': type(e).__name__, 'msg': e})
        handle_failure(error, log_func=LOG.exception)
        raise exception.HardwareInspectionFailure(error=error)

    if new_state == states.MANAGEABLE:
        task.process_event('done')
        LOG.info('Successfully inspected node %(node)s',
                 {'node': node.uuid})
    elif new_state == states.INSPECTWAIT:
        task.process_event('wait')
        LOG.info('Successfully started introspection on node %(node)s',
                 {'node': node.uuid})
    else:
        error = (_("During inspection, driver returned unexpected "
                   "state %(state)s") % {'state': new_state})
        handle_failure(error)
        raise exception.HardwareInspectionFailure(error=error)

ironic-15.0.0/ironic/conductor/allocations.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Functionality related to allocations.""" import random from ironic_lib import metrics_utils from oslo_config import cfg from oslo_log import log from oslo_utils import excutils import retrying from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states from ironic.conductor import task_manager from ironic import objects CONF = cfg.CONF LOG = log.getLogger(__name__) METRICS = metrics_utils.get_metrics_logger(__name__) def do_allocate(context, allocation): """Process the allocation. This call runs in a separate thread on a conductor. It finds suitable nodes for the allocation and reserves one of them. This call does not raise exceptions since it's designed to work asynchronously. :param context: an admin context :param allocation: an allocation object """ try: nodes = _candidate_nodes(context, allocation) _allocate_node(context, allocation, nodes) except exception.AllocationFailed as exc: LOG.error(str(exc)) _allocation_failed(allocation, exc) except Exception as exc: LOG.exception("Unexpected exception during processing of " "allocation %s", allocation.uuid) reason = _("Unexpected exception during allocation: %s") % exc _allocation_failed(allocation, reason) def verify_node_for_deallocation(node, allocation): """Verify that allocation can be removed for the node. :param node: a node object :param allocation: an allocation object associated with the node """ if node.maintenance: # Allocations can always be removed in the maintenance mode. 
return if (node.target_provision_state and node.provision_state not in states.UPDATE_ALLOWED_STATES): msg = (_("Cannot remove allocation %(uuid)s for node %(node)s, " "because the node is in state %(state)s where updates are " "not allowed (and maintenance mode is off)") % {'node': node.uuid, 'uuid': allocation.uuid, 'state': node.provision_state}) raise exception.InvalidState(msg) if node.provision_state == states.ACTIVE: msg = (_("Cannot remove allocation %(uuid)s for node %(node)s, " "because the node is active (and maintenance mode is off)") % {'node': node.uuid, 'uuid': allocation.uuid}) raise exception.InvalidState(msg) def _allocation_failed(allocation, reason): """Failure handler for the allocation.""" try: allocation.node_id = None allocation.state = states.ERROR allocation.last_error = str(reason) allocation.save() except exception.AllocationNotFound as exc: LOG.debug('Not saving a failed allocation: %s', exc) except Exception: LOG.exception('Could not save the failed allocation %s', allocation.uuid) def _traits_match(traits, node): return {t.trait for t in node.traits.objects}.issuperset(traits) def _candidate_nodes(context, allocation): """Get a list of candidate nodes for the allocation.""" filters = {'resource_class': allocation.resource_class, 'provision_state': states.AVAILABLE, 'associated': False, 'with_power_state': True, 'maintenance': False} if allocation.candidate_nodes: # NOTE(dtantsur): we assume that candidate_nodes were converted to # UUIDs on the API level. 
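The `_traits_match` helper above is a plain superset test: a node qualifies only if its trait set contains every requested trait. A standalone sketch, with plain sets standing in for the node's `TraitList` objects:

```python
# Standalone rendering of the _traits_match superset check: sets stand in
# for objects.TraitList here.
def traits_match(requested, node_traits):
    # The node may carry extra traits; an empty request matches any node.
    return set(node_traits).issuperset(requested)
```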
        filters['uuid_in'] = allocation.candidate_nodes
    if allocation.owner:
        filters['project'] = allocation.owner

    nodes = objects.Node.list(context, filters=filters)
    if not nodes:
        if allocation.candidate_nodes:
            error = _("none of the requested nodes are available and match "
                      "the resource class %s") % allocation.resource_class
        else:
            error = _("no available nodes match the resource class %s") % (
                allocation.resource_class)
        raise exception.AllocationFailed(uuid=allocation.uuid, error=error)

    # TODO(dtantsur): database-level filtering?
    if allocation.traits:
        traits = set(allocation.traits)
        nodes = [n for n in nodes if _traits_match(traits, n)]
        if not nodes:
            error = (_("no suitable nodes have the requested traits %s") %
                     ', '.join(traits))
            raise exception.AllocationFailed(uuid=allocation.uuid,
                                             error=error)

    # NOTE(dtantsur): make sure that parallel allocations do not try the nodes
    # in the same order.
    random.shuffle(nodes)

    LOG.debug('%(count)d nodes are candidates for allocation %(uuid)s',
              {'count': len(nodes), 'uuid': allocation.uuid})
    return nodes


def _verify_node(node, allocation):
    """Check that the node still satisfies the request."""
    if node.maintenance:
        LOG.debug('Node %s is now in maintenance, skipping', node.uuid)
        return False

    if node.instance_uuid:
        LOG.debug('Node %(node)s is already associated with instance '
                  '%(inst)s, skipping',
                  {'node': node.uuid, 'inst': node.instance_uuid})
        return False

    if node.provision_state != states.AVAILABLE:
        LOG.debug('Node %s is no longer available, skipping', node.uuid)
        return False

    if node.resource_class != allocation.resource_class:
        LOG.debug('Resource class of node %(node)s no longer '
                  'matches requested resource class %(rsc)s for '
                  'allocation %(uuid)s, skipping',
                  {'node': node.uuid,
                   'rsc': allocation.resource_class,
                   'uuid': allocation.uuid})
        return False

    if allocation.traits and not _traits_match(set(allocation.traits), node):
        LOG.debug('List of traits of node %(node)s no longer '
                  'matches requested traits %(traits)s for '
                  'allocation %(uuid)s, skipping',
                  {'node': node.uuid,
                   'traits': allocation.traits,
                   'uuid': allocation.uuid})
        return False

    return True


# NOTE(dtantsur): instead of trying to allocate each node
# node_locked_retry_attempt times, we try to allocate *any* node the same
# number of times. This avoids getting stuck on a node reserved e.g. for power
# sync periodic task.
@retrying.retry(
    retry_on_exception=lambda e: isinstance(e, exception.AllocationFailed),
    stop_max_attempt_number=CONF.conductor.node_locked_retry_attempts,
    wait_fixed=CONF.conductor.node_locked_retry_interval * 1000)
def _allocate_node(context, allocation, nodes):
    """Go through the list of nodes and try to allocate one of them."""
    retry_nodes = []
    for node in nodes:
        try:
            # NOTE(dtantsur): retries are done for all nodes above, so disable
            # per-node retry. Also disable loading the driver, since the
            # current conductor may not have the required hardware type or
            # interfaces (it's picked at random).
            with task_manager.acquire(context, node.uuid, shared=False,
                                      retry=False, load_driver=False,
                                      purpose='allocating') as task:
                # NOTE(dtantsur): double-check the node details, since they
                # could have changed before we acquired the lock.
                if not _verify_node(task.node, allocation):
                    continue

                allocation.node_id = task.node.id
                allocation.state = states.ACTIVE
                # NOTE(dtantsur): the node.instance_uuid and allocation_id are
                # updated inside of the save() call within the same
                # transaction to avoid races. NodeAssociated can be raised if
                # another process allocates this node first.
allocation.save() LOG.info('Node %(node)s has been successfully reserved for ' 'allocation %(uuid)s', {'node': node.uuid, 'uuid': allocation.uuid}) return allocation except exception.NodeLocked: LOG.debug('Node %s is currently locked, moving to the next one', node.uuid) retry_nodes.append(node) except exception.NodeAssociated: LOG.debug('Node %s is already associated, moving to the next one', node.uuid) # NOTE(dtantsur): rewrite the passed list to only contain the nodes that # are worth retrying. Do not include nodes that are no longer suitable. nodes[:] = retry_nodes if nodes: error = _('could not reserve any of %d suitable nodes') % len(nodes) else: error = _('all nodes were filtered out during reservation') raise exception.AllocationFailed(uuid=allocation.uuid, error=error) def backfill_allocation(context, allocation, node_id): """Assign the previously allocated node to the node allocation. This is not the actual allocation process, but merely backfilling of allocation_uuid for a previously allocated node. :param context: an admin context :param allocation: an allocation object associated with the node :param node_id: An ID of the node. :raises: AllocationFailed if the node does not match the allocation :raises: NodeAssociated if the node is already associated with another instance or allocation. :raises: InstanceAssociated if the allocation's UUID is already used on another node as instance_uuid. :raises: NodeNotFound if the node with the provided ID cannot be found. 
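As the NOTE before `_allocate_node` explains, one retry budget is shared across the whole candidate list rather than retrying each node individually, and nodes that fail for a permanent reason are dropped from later passes. A simplified sketch of that strategy — plain callables stand in for `task_manager.acquire` and the `retrying` decorator, and the names are illustrative:

```python
# Sketch of the "retry any node" strategy: one attempt budget for the
# whole list; permanently unsuitable nodes are dropped between passes.
def allocate_any(nodes, try_reserve, max_attempts):
    """try_reserve(node) -> True (reserved), False (locked, retry),
    None (no longer suitable, drop)."""
    for _ in range(max_attempts):
        retry_nodes = []
        for node in nodes:
            outcome = try_reserve(node)
            if outcome:
                return node               # reserved successfully
            if outcome is False:
                retry_nodes.append(node)  # locked: worth another pass
            # None: node dropped entirely
        nodes = retry_nodes
        if not nodes:
            break
    raise RuntimeError('could not reserve any suitable node')
```

This avoids the pathology the NOTE calls out: burning the entire retry budget on a single node that another task (e.g. periodic power sync) holds locked.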
""" try: _do_backfill_allocation(context, allocation, node_id) except (exception.AllocationFailed, exception.InstanceAssociated, exception.NodeAssociated, exception.NodeNotFound) as exc: with excutils.save_and_reraise_exception(): LOG.error(str(exc)) _allocation_failed(allocation, exc) except Exception as exc: with excutils.save_and_reraise_exception(): LOG.exception("Unexpected exception during backfilling of " "allocation %s", allocation.uuid) reason = _("Unexpected exception during allocation: %s") % exc _allocation_failed(allocation, reason) def _do_backfill_allocation(context, allocation, node_id): with task_manager.acquire(context, node_id, purpose='allocation backfilling') as task: node = task.node errors = [] # NOTE(dtantsur): this feature is not designed to bypass the allocation # mechanism, but to backfill allocations for active nodes, hence this # check. if node.provision_state != states.ACTIVE: errors.append(_('Node must be in the "active" state, but the ' 'current state is "%s"') % node.provision_state) # NOTE(dtantsur): double-check that the node is still suitable. 
        if (allocation.resource_class
                and node.resource_class != allocation.resource_class):
            errors.append(_('Resource class %(curr)s does not match '
                            'the requested resource class %(rsc)s')
                          % {'curr': node.resource_class,
                             'rsc': allocation.resource_class})
        if (allocation.traits
                and not _traits_match(set(allocation.traits), node)):
            errors.append(_('List of traits %(curr)s does not match '
                            'the requested traits %(traits)s')
                          % {'curr': node.traits,
                             'traits': allocation.traits})
        if (allocation.candidate_nodes
                and node.uuid not in allocation.candidate_nodes):
            errors.append(_('Candidate nodes must be empty or contain the '
                            'target node, but got %s')
                          % allocation.candidate_nodes)

        if errors:
            error = _('Cannot backfill an allocation for node %(node)s: '
                      '%(errors)s') % {'node': node.uuid,
                                       'errors': '; '.join(errors)}
            raise exception.AllocationFailed(uuid=allocation.uuid, error=error)

        allocation.node_id = task.node.id
        allocation.state = states.ACTIVE
        # NOTE(dtantsur): the node.instance_uuid and allocation_id are
        # updated inside of the save() call within the same
        # transaction to avoid races. NodeAssociated can be raised if
        # another process allocates this node first.
        allocation.save()
        LOG.info('Node %(node)s has been successfully reserved for '
                 'allocation %(uuid)s', {'node': node.uuid,
                                         'uuid': allocation.uuid})

ironic-15.0.0/ironic/conductor/cleaning.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Functionality related to cleaning.""" from oslo_log import log from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states from ironic.conductor import steps as conductor_steps from ironic.conductor import task_manager from ironic.conductor import utils from ironic.conf import CONF LOG = log.getLogger(__name__) @task_manager.require_exclusive_lock def do_node_clean(task, clean_steps=None): """Internal RPC method to perform cleaning of a node. :param task: a TaskManager instance with an exclusive lock on its node :param clean_steps: For a manual clean, the list of clean steps to perform. Is None For automated cleaning (default). For more information, see the clean_steps parameter of :func:`ConductorManager.do_node_clean`. """ node = task.node manual_clean = clean_steps is not None clean_type = 'manual' if manual_clean else 'automated' LOG.debug('Starting %(type)s cleaning for node %(node)s', {'type': clean_type, 'node': node.uuid}) if not manual_clean and utils.skip_automated_cleaning(node): # Skip cleaning, move to AVAILABLE. node.clean_step = None node.save() task.process_event('done') LOG.info('Automated cleaning is disabled, node %s has been ' 'successfully moved to AVAILABLE state.', node.uuid) return # NOTE(dtantsur): this is only reachable during automated cleaning, # for manual cleaning we verify maintenance mode earlier on. if (not CONF.conductor.allow_provisioning_in_maintenance and node.maintenance): msg = _('Cleaning a node in maintenance mode is not allowed') return utils.cleaning_error_handler(task, msg, tear_down_cleaning=False) try: # NOTE(ghe): Valid power and network values are needed to perform # a cleaning. task.driver.power.validate(task) task.driver.network.validate(task) except exception.InvalidParameterValue as e: msg = (_('Validation failed. Cannot clean node %(node)s. 
' 'Error: %(msg)s') % {'node': node.uuid, 'msg': e}) return utils.cleaning_error_handler(task, msg) if manual_clean: info = node.driver_internal_info info['clean_steps'] = clean_steps node.driver_internal_info = info node.save() # Do caching of bios settings if supported by driver, # this will be called for both manual and automated cleaning. try: task.driver.bios.cache_bios_settings(task) except exception.UnsupportedDriverExtension: LOG.warning('BIOS settings are not supported for node %s, ' 'skipping', task.node.uuid) # TODO(zshi) remove this check when classic drivers are removed except Exception: msg = (_('Caching of bios settings failed on node %(node)s. ' 'Continuing with node cleaning.') % {'node': node.uuid}) LOG.exception(msg) # Allow the deploy driver to set up the ramdisk again (necessary for # IPA cleaning) try: prepare_result = task.driver.deploy.prepare_cleaning(task) except Exception as e: msg = (_('Failed to prepare node %(node)s for cleaning: %(e)s') % {'node': node.uuid, 'e': e}) LOG.exception(msg) return utils.cleaning_error_handler(task, msg) if prepare_result == states.CLEANWAIT: # Prepare is asynchronous, the deploy driver will need to # set node.driver_internal_info['clean_steps'] and # node.clean_step and then make an RPC call to # continue_node_clean to start cleaning. # For manual cleaning, the target provision state is MANAGEABLE, # whereas for automated cleaning, it is AVAILABLE (the default). target_state = states.MANAGEABLE if manual_clean else None task.process_event('wait', target_state=target_state) return try: conductor_steps.set_node_cleaning_steps(task) except (exception.InvalidParameterValue, exception.NodeCleaningFailure) as e: msg = (_('Cannot clean node %(node)s. 
Error: %(msg)s') % {'node': node.uuid, 'msg': e}) return utils.cleaning_error_handler(task, msg) steps = node.driver_internal_info.get('clean_steps', []) step_index = 0 if steps else None do_next_clean_step(task, step_index) @task_manager.require_exclusive_lock def do_next_clean_step(task, step_index): """Do cleaning, starting from the specified clean step. :param task: a TaskManager instance with an exclusive lock :param step_index: The first clean step in the list to execute. This is the index (from 0) into the list of clean steps in the node's driver_internal_info['clean_steps']. Is None if there are no steps to execute. """ node = task.node # For manual cleaning, the target provision state is MANAGEABLE, # whereas for automated cleaning, it is AVAILABLE. manual_clean = node.target_provision_state == states.MANAGEABLE if step_index is None: steps = [] else: steps = node.driver_internal_info['clean_steps'][step_index:] LOG.info('Executing %(state)s on node %(node)s, remaining steps: ' '%(steps)s', {'node': node.uuid, 'steps': steps, 'state': node.provision_state}) # Execute each step until we hit an async step or run out of steps for ind, step in enumerate(steps): # Save which step we're about to start so we can restart # if necessary node.clean_step = step driver_internal_info = node.driver_internal_info driver_internal_info['clean_step_index'] = step_index + ind node.driver_internal_info = driver_internal_info node.save() interface = getattr(task.driver, step.get('interface')) LOG.info('Executing %(step)s on node %(node)s', {'step': step, 'node': node.uuid}) try: result = interface.execute_clean_step(task, step) except Exception as e: if isinstance(e, exception.AgentConnectionFailed): if task.node.driver_internal_info.get('cleaning_reboot'): LOG.info('Agent is not yet running on node %(node)s ' 'after cleaning reboot, waiting for agent to ' 'come up to run next clean step %(step)s.', {'node': node.uuid, 'step': step}) 
driver_internal_info['skip_current_clean_step'] = False node.driver_internal_info = driver_internal_info target_state = (states.MANAGEABLE if manual_clean else None) task.process_event('wait', target_state=target_state) return msg = (_('Node %(node)s failed step %(step)s: ' '%(exc)s') % {'node': node.uuid, 'exc': e, 'step': node.clean_step}) LOG.exception(msg) utils.cleaning_error_handler(task, msg) return # Check if the step is done or not. The step should return # states.CLEANWAIT if the step is still being executed, or # None if the step is done. if result == states.CLEANWAIT: # Kill this worker, the async step will make an RPC call to # continue_node_clean to continue cleaning LOG.info('Clean step %(step)s on node %(node)s being ' 'executed asynchronously, waiting for driver.', {'node': node.uuid, 'step': step}) target_state = states.MANAGEABLE if manual_clean else None task.process_event('wait', target_state=target_state) return elif result is not None: msg = (_('While executing step %(step)s on node ' '%(node)s, step returned invalid value: %(val)s') % {'step': step, 'node': node.uuid, 'val': result}) LOG.error(msg) return utils.cleaning_error_handler(task, msg) LOG.info('Node %(node)s finished clean step %(step)s', {'node': node.uuid, 'step': step}) # Clear clean_step node.clean_step = None driver_internal_info = node.driver_internal_info driver_internal_info['clean_steps'] = None driver_internal_info.pop('clean_step_index', None) driver_internal_info.pop('cleaning_reboot', None) driver_internal_info.pop('cleaning_polling', None) driver_internal_info.pop('agent_secret_token', None) driver_internal_info.pop('agent_secret_token_pregenerated', None) # Remove agent_url if not utils.fast_track_able(task): driver_internal_info.pop('agent_url', None) node.driver_internal_info = driver_internal_info node.save() try: task.driver.deploy.tear_down_cleaning(task) except Exception as e: msg = (_('Failed to tear down from cleaning for node %(node)s, ' 'reason: %(err)s') % 
               {'node': node.uuid, 'err': e})
        LOG.exception(msg)
        return utils.cleaning_error_handler(task, msg,
                                            tear_down_cleaning=False)

    LOG.info('Node %s cleaning complete', node.uuid)
    event = 'manage' if manual_clean or node.retired else 'done'
    # NOTE(rloo): No need to specify target prov. state; we're done
    task.process_event(event)


@task_manager.require_exclusive_lock
def do_node_clean_abort(task, step_name=None):
    """Internal method to abort an ongoing operation.

    :param task: a TaskManager instance with an exclusive lock
    :param step_name: The name of the clean step.
    """
    node = task.node
    try:
        task.driver.deploy.tear_down_cleaning(task)
    except Exception as e:
        LOG.exception('Failed to tear down cleaning for node %(node)s '
                      'after aborting the operation. Error: %(err)s',
                      {'node': node.uuid, 'err': e})
        error_msg = _('Failed to tear down cleaning after aborting '
                      'the operation')
        utils.cleaning_error_handler(task, error_msg,
                                     tear_down_cleaning=False,
                                     set_fail_state=False)
        return

    info_message = _('Clean operation aborted for node %s') % node.uuid
    last_error = _('By request, the clean operation was aborted')
    if step_name:
        msg = _(' after the completion of step "%s"') % step_name
        last_error += msg
        info_message += msg

    node.last_error = last_error
    node.clean_step = None
    info = node.driver_internal_info
    # Clear any leftover metadata about cleaning
    info.pop('clean_step_index', None)
    info.pop('cleaning_reboot', None)
    info.pop('cleaning_polling', None)
    info.pop('skip_current_clean_step', None)
    info.pop('agent_url', None)
    info.pop('agent_secret_token', None)
    info.pop('agent_secret_token_pregenerated', None)
    node.driver_internal_info = info
    node.save()
    LOG.info(info_message)

ironic-15.0.0/ironic/conductor/deployments.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Functionality related to deploying and undeploying.""" import tempfile from ironic_lib import metrics_utils from oslo_db import exception as db_exception from oslo_log import log from oslo_utils import excutils from ironic.common import exception from ironic.common.glance_service import service_utils as glance_utils from ironic.common.i18n import _ from ironic.common import images from ironic.common import states from ironic.common import swift from ironic.conductor import notification_utils as notify_utils from ironic.conductor import steps as conductor_steps from ironic.conductor import task_manager from ironic.conductor import utils from ironic.conf import CONF from ironic.objects import fields LOG = log.getLogger(__name__) METRICS = metrics_utils.get_metrics_logger(__name__) def validate_node(task, event='deploy'): """Validate that a node is suitable for deployment/rebuilding. :param task: a TaskManager instance. :param event: event to process: deploy or rebuild. :raises: NodeInMaintenance, NodeProtected, InvalidStateRequested """ if task.node.maintenance: raise exception.NodeInMaintenance(op=_('provisioning'), node=task.node.uuid) if event == 'rebuild' and task.node.protected: raise exception.NodeProtected(node=task.node.uuid) if not task.fsm.is_actionable_event(event): raise exception.InvalidStateRequested( action=event, node=task.node.uuid, state=task.node.provision_state) @METRICS.timer('start_deploy') @task_manager.require_exclusive_lock def start_deploy(task, manager, configdrive=None, event='deploy'): """Start deployment or rebuilding on a node. 
This function does not check the node suitability for deployment, it's left up to the caller. :param task: a TaskManager instance. :param manager: a ConductorManager to run tasks on. :param configdrive: a configdrive, if requested. :param event: event to process: deploy or rebuild. """ node = task.node # Record of any pre-existing agent_url should be removed # except when we are in fast track conditions. if not utils.is_fast_track(task): utils.remove_agent_url(node) if event == 'rebuild': # Note(gilliard) Clear these to force the driver to # check whether they have been changed in glance # NOTE(vdrok): If image_source is not from Glance we should # not clear kernel and ramdisk as they're input manually if glance_utils.is_glance_image( node.instance_info.get('image_source')): instance_info = node.instance_info instance_info.pop('kernel', None) instance_info.pop('ramdisk', None) node.instance_info = instance_info # Infer the image type to make sure the deploy driver # validates only the necessary variables for different # image types. # NOTE(sirushtim): The iwdi variable can be None. It's up to # the deploy driver to validate this. iwdi = images.is_whole_disk_image(task.context, node.instance_info) driver_internal_info = node.driver_internal_info driver_internal_info['is_whole_disk_image'] = iwdi node.driver_internal_info = driver_internal_info node.save() try: task.driver.power.validate(task) task.driver.deploy.validate(task) utils.validate_instance_info_traits(task.node) conductor_steps.validate_deploy_templates(task) except exception.InvalidParameterValue as e: raise exception.InstanceDeployFailure( _("Failed to validate deploy or power info for node " "%(node_uuid)s. 
Error: %(msg)s") % {'node_uuid': node.uuid, 'msg': e}, code=e.code) try: task.process_event( event, callback=manager._spawn_worker, call_args=(do_node_deploy, task, manager.conductor.id, configdrive), err_handler=utils.provisioning_error_handler) except exception.InvalidState: raise exception.InvalidStateRequested( action=event, node=task.node.uuid, state=task.node.provision_state) @METRICS.timer('do_node_deploy') @task_manager.require_exclusive_lock def do_node_deploy(task, conductor_id=None, configdrive=None): """Prepare the environment and deploy a node.""" node = task.node utils.wipe_deploy_internal_info(node) utils.del_secret_token(node) try: if configdrive: if isinstance(configdrive, dict): configdrive = utils.build_configdrive(node, configdrive) _store_configdrive(node, configdrive) except (exception.SwiftOperationError, exception.ConfigInvalid) as e: with excutils.save_and_reraise_exception(): utils.deploying_error_handler( task, ('Error while uploading the configdrive for %(node)s ' 'to Swift') % {'node': node.uuid}, _('Failed to upload the configdrive to Swift. ' 'Error: %s') % e, clean_up=False) except db_exception.DBDataError as e: with excutils.save_and_reraise_exception(): # NOTE(hshiina): This error happens when the configdrive is # too large. Remove the configdrive from the # object to update DB successfully in handling # the failure. node.obj_reset_changes() utils.deploying_error_handler( task, ('Error while storing the configdrive for %(node)s into ' 'the database: %(err)s') % {'node': node.uuid, 'err': e}, _("Failed to store the configdrive in the database. " "%s") % e, clean_up=False) except Exception as e: with excutils.save_and_reraise_exception(): utils.deploying_error_handler( task, ('Unexpected error while preparing the configdrive for ' 'node %(node)s') % {'node': node.uuid}, _("Failed to prepare the configdrive. 
Exception: %s") % e, traceback=True, clean_up=False) try: task.driver.deploy.prepare(task) except exception.IronicException as e: with excutils.save_and_reraise_exception(): utils.deploying_error_handler( task, ('Error while preparing to deploy to node %(node)s: ' '%(err)s') % {'node': node.uuid, 'err': e}, _("Failed to prepare to deploy: %s") % e, clean_up=False) except Exception as e: with excutils.save_and_reraise_exception(): utils.deploying_error_handler( task, ('Unexpected error while preparing to deploy to node ' '%(node)s') % {'node': node.uuid}, _("Failed to prepare to deploy. Exception: %s") % e, traceback=True, clean_up=False) try: # This gets the deploy steps (if any) and puts them in the node's # driver_internal_info['deploy_steps']. In-band steps are skipped since # we know that an agent is not running yet. conductor_steps.set_node_deployment_steps(task, skip_missing=True) except exception.InstanceDeployFailure as e: with excutils.save_and_reraise_exception(): utils.deploying_error_handler( task, 'Error while getting deploy steps; cannot deploy to node ' '%(node)s. Error: %(err)s' % {'node': node.uuid, 'err': e}, _("Cannot get deploy steps; failed to deploy: %s") % e) if not node.driver_internal_info.get('deploy_steps'): msg = _('Error while getting deploy steps: no steps returned for ' 'node %s') % node.uuid utils.deploying_error_handler( task, msg, _("No deploy steps returned by the driver")) raise exception.InstanceDeployFailure(msg) do_next_deploy_step(task, 0, conductor_id) @task_manager.require_exclusive_lock def do_next_deploy_step(task, step_index, conductor_id): """Do deployment, starting from the specified deploy step. :param task: a TaskManager instance with an exclusive lock :param step_index: The first deploy step in the list to execute. This is the index (from 0) into the list of deploy steps in the node's driver_internal_info['deploy_steps']. Is None if there are no steps to execute. 
""" node = task.node if step_index is None: steps = [] else: steps = node.driver_internal_info['deploy_steps'][step_index:] LOG.info('Executing %(state)s on node %(node)s, remaining steps: ' '%(steps)s', {'node': node.uuid, 'steps': steps, 'state': node.provision_state}) # Execute each step until we hit an async step or run out of steps for ind, step in enumerate(steps): # Save which step we're about to start so we can restart # if necessary node.deploy_step = step driver_internal_info = node.driver_internal_info driver_internal_info['deploy_step_index'] = step_index + ind node.driver_internal_info = driver_internal_info node.save() interface = getattr(task.driver, step.get('interface')) LOG.info('Executing %(step)s on node %(node)s', {'step': step, 'node': node.uuid}) try: result = interface.execute_deploy_step(task, step) except exception.IronicException as e: if isinstance(e, exception.AgentConnectionFailed): if task.node.driver_internal_info.get('deployment_reboot'): LOG.info('Agent is not yet running on node %(node)s after ' 'deployment reboot, waiting for agent to come up ' 'to run next deploy step %(step)s.', {'node': node.uuid, 'step': step}) driver_internal_info['skip_current_deploy_step'] = False node.driver_internal_info = driver_internal_info task.process_event('wait') return log_msg = ('Node %(node)s failed deploy step %(step)s. Error: ' '%(err)s' % {'node': node.uuid, 'step': node.deploy_step, 'err': e}) utils.deploying_error_handler( task, log_msg, _("Failed to deploy: Deploy step %(step)s, " "error: %(err)s.") % { 'step': node.deploy_step, 'err': e}) return except Exception as e: log_msg = ('Node %(node)s failed deploy step %(step)s with ' 'unexpected error: %(err)s' % {'node': node.uuid, 'step': node.deploy_step, 'err': e}) utils.deploying_error_handler( task, log_msg, _("Failed to deploy. Exception: %s") % e, traceback=True) return if ind == 0: # We've done the very first deploy step. 
# Update conductor_affinity to reference this conductor's ID # since there may be local persistent state node.conductor_affinity = conductor_id node.save() # Check if the step is done or not. The step should return # states.DEPLOYWAIT if the step is still being executed, or # None if the step is done. # NOTE(tenbrae): Some drivers may return states.DEPLOYWAIT # eg. if they are waiting for a callback if result == states.DEPLOYWAIT: # Kill this worker, the async step will make an RPC call to # continue_node_deploy() to continue deploying LOG.info('Deploy step %(step)s on node %(node)s being ' 'executed asynchronously, waiting for driver.', {'node': node.uuid, 'step': step}) task.process_event('wait') return elif result is not None: # NOTE(rloo): This is an internal/dev error; shouldn't happen. log_msg = (_('While executing deploy step %(step)s on node ' '%(node)s, step returned unexpected state: %(val)s') % {'step': step, 'node': node.uuid, 'val': result}) utils.deploying_error_handler( task, log_msg, _("Failed to deploy: %s") % node.deploy_step) return LOG.info('Node %(node)s finished deploy step %(step)s', {'node': node.uuid, 'step': step}) # Finished executing the steps. Clear deploy_step. node.deploy_step = None utils.wipe_deploy_internal_info(node) node.save() _start_console_in_deploy(task) task.process_event('done') LOG.info('Successfully deployed node %(node)s with ' 'instance %(instance)s.', {'node': node.uuid, 'instance': node.instance_uuid}) def _get_configdrive_obj_name(node): """Generate the object name for the config drive.""" return 'configdrive-%s' % node.uuid def _store_configdrive(node, configdrive): """Handle the storage of the config drive. If configured, the config drive data are uploaded to a swift endpoint. The Node's instance_info is updated to include either the temporary Swift URL from the upload, or if no upload, the actual config drive data. :param node: an Ironic node object. :param configdrive: A gzipped and base64 encoded configdrive. 
:raises: SwiftOperationError if an error occur when uploading the config drive to the swift endpoint. :raises: ConfigInvalid if required keystone authorization credentials with swift are missing. """ if CONF.deploy.configdrive_use_object_store: # NOTE(lucasagomes): No reason to use a different timeout than # the one used for deploying the node timeout = (CONF.conductor.configdrive_swift_temp_url_duration or CONF.conductor.deploy_callback_timeout # The documented default in ironic.conf.conductor or 1800) container = CONF.conductor.configdrive_swift_container object_name = _get_configdrive_obj_name(node) object_headers = {'X-Delete-After': str(timeout)} with tempfile.NamedTemporaryFile(dir=CONF.tempdir) as fileobj: fileobj.write(configdrive) fileobj.flush() swift_api = swift.SwiftAPI() swift_api.create_object(container, object_name, fileobj.name, object_headers=object_headers) configdrive = swift_api.get_temp_url(container, object_name, timeout) i_info = node.instance_info i_info['configdrive'] = configdrive node.instance_info = i_info node.save() def _start_console_in_deploy(task): """Start console at the end of deployment. Console is stopped at tearing down not to be exposed to an instance user. Then, restart at deployment. 
    :param task: a TaskManager instance with an exclusive lock
    """
    if not task.node.console_enabled:
        return

    notify_utils.emit_console_notification(
        task, 'console_restore', fields.NotificationStatus.START)
    try:
        task.driver.console.start_console(task)
    except Exception as err:
        msg = (_('Failed to start console while deploying the '
                 'node %(node)s: %(err)s.') % {'node': task.node.uuid,
                                               'err': err})
        LOG.error(msg)
        task.node.last_error = msg
        task.node.console_enabled = False
        task.node.save()
        notify_utils.emit_console_notification(
            task, 'console_restore', fields.NotificationStatus.ERROR)
    else:
        notify_utils.emit_console_notification(
            task, 'console_restore', fields.NotificationStatus.END)

ironic-15.0.0/ironic/conductor/steps.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections

from oslo_config import cfg
from oslo_log import log

from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import states
from ironic.objects import deploy_template

LOG = log.getLogger(__name__)

CONF = cfg.CONF

CLEANING_INTERFACE_PRIORITY = {
    # When two clean steps have the same priority, their order is determined
    # by which interface is implementing the clean step. The clean step of the
    # interface with the highest value here, will be executed first in that
    # case.
'power': 5, 'management': 4, 'deploy': 3, 'bios': 2, 'raid': 1, } DEPLOYING_INTERFACE_PRIORITY = { # When two deploy steps have the same priority, their order is determined # by which interface is implementing the step. The step of the interface # with the highest value here, will be executed first in that case. # TODO(rloo): If we think it makes sense to have the interface priorities # the same for cleaning & deploying, replace the two with one e.g. # 'INTERFACE_PRIORITIES'. 'power': 5, 'management': 4, 'deploy': 3, 'bios': 2, 'raid': 1, } def _clean_step_key(step): """Sort by priority, then interface priority in event of tie. :param step: cleaning step dict to get priority for. """ return (step.get('priority'), CLEANING_INTERFACE_PRIORITY[step.get('interface')]) def _deploy_step_key(step): """Sort by priority, then interface priority in event of tie. :param step: deploy step dict to get priority for. """ return (step.get('priority'), DEPLOYING_INTERFACE_PRIORITY[step.get('interface')]) def _sorted_steps(steps, sort_step_key): """Return a sorted list of steps. :param sort_step_key: If set, this is a method (key) used to sort the steps from highest priority to lowest priority. For steps having the same priority, they are sorted from highest interface priority to lowest. :returns: A list of sorted step dictionaries. """ # Sort the steps from higher priority to lower priority return sorted(steps, key=sort_step_key, reverse=True) def is_equivalent(step1, step2): """Compare steps, ignoring their priority.""" return (step1.get('interface') == step2.get('interface') and step1.get('step') == step2.get('step')) def find_step(steps, step): """Find an identical step in the list of steps.""" return next((x for x in steps if is_equivalent(x, step)), None) def _get_steps(task, interfaces, get_method, enabled=False, sort_step_key=None): """Get steps for task.node. :param task: A TaskManager object :param interfaces: A dictionary of (key) interfaces and their (value) priorities. 
These are the interfaces that will have steps of interest. The priorities are used for deciding the priorities of steps having the same priority. :param get_method: The method used to get the steps from the node's interface; a string. :param enabled: If True, returns only enabled (priority > 0) steps. If False, returns all steps. :param sort_step_key: If set, this is a method (key) used to sort the steps from highest priority to lowest priority. For steps having the same priority, they are sorted from highest interface priority to lowest. :raises: NodeCleaningFailure or InstanceDeployFailure if there was a problem getting the steps. :returns: A list of step dictionaries """ # Get steps from each interface steps = list() for interface in interfaces: interface = getattr(task.driver, interface) if interface: interface_steps = [x for x in getattr(interface, get_method)(task) if not enabled or x['priority'] > 0] steps.extend(interface_steps) if sort_step_key: steps = _sorted_steps(steps, sort_step_key) return steps def _get_cleaning_steps(task, enabled=False, sort=True): """Get cleaning steps for task.node. :param task: A TaskManager object :param enabled: If True, returns only enabled (priority > 0) steps. If False, returns all clean steps. :param sort: If True, the steps are sorted from highest priority to lowest priority. For steps having the same priority, they are sorted from highest interface priority to lowest. :raises: NodeCleaningFailure if there was a problem getting the clean steps. :returns: A list of clean step dictionaries """ sort_key = _clean_step_key if sort else None return _get_steps(task, CLEANING_INTERFACE_PRIORITY, 'get_clean_steps', enabled=enabled, sort_step_key=sort_key) def _get_deployment_steps(task, enabled=False, sort=True): """Get deployment steps for task.node. :param task: A TaskManager object :param enabled: If True, returns only enabled (priority > 0) steps. If False, returns all deploy steps. 
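As the sort-key helpers above show, steps are ordered on the tuple `(priority, interface priority)` in reverse, so a higher step priority always wins and the interface priority table only breaks ties. A small self-contained sketch of that ordering (the step names and priority values here are made up for illustration; the interface table mirrors the `DEPLOYING_INTERFACE_PRIORITY` values shown above):

```python
# Sketch of the sort performed by _sorted_steps: reverse sort on
# (priority, interface priority). Step names/priorities are hypothetical.
INTERFACE_PRIORITY = {'power': 5, 'management': 4, 'deploy': 3,
                      'bios': 2, 'raid': 1}


def step_key(step):
    # Tuple comparison: step priority first, interface priority as tiebreak.
    return (step['priority'], INTERFACE_PRIORITY[step['interface']])


steps = [
    {'step': 'erase_devices', 'interface': 'deploy', 'priority': 10},
    {'step': 'reset_bios', 'interface': 'bios', 'priority': 10},
    {'step': 'update_firmware', 'interface': 'management', 'priority': 50},
]
ordered = sorted(steps, key=step_key, reverse=True)
names = [s['step'] for s in ordered]
# update_firmware (priority 50) runs first; the two priority-10 steps are
# tie-broken by interface: deploy (3) ahead of bios (2).
```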
:param sort: If True, the steps are sorted from highest priority to lowest priority. For steps having the same priority, they are sorted from highest interface priority to lowest. :raises: InstanceDeployFailure if there was a problem getting the deploy steps. :returns: A list of deploy step dictionaries """ sort_key = _deploy_step_key if sort else None return _get_steps(task, DEPLOYING_INTERFACE_PRIORITY, 'get_deploy_steps', enabled=enabled, sort_step_key=sort_key) def set_node_cleaning_steps(task): """Set up the node with clean step information for cleaning. For automated cleaning, get the clean steps from the driver. For manual cleaning, the user's clean steps are known but need to be validated against the driver's clean steps. :raises: InvalidParameterValue if there is a problem with the user's clean steps. :raises: NodeCleaningFailure if there was a problem getting the clean steps. """ node = task.node driver_internal_info = node.driver_internal_info # For manual cleaning, the target provision state is MANAGEABLE, whereas # for automated cleaning, it is AVAILABLE. manual_clean = node.target_provision_state == states.MANAGEABLE if not manual_clean: # Get the prioritized steps for automated cleaning driver_internal_info['clean_steps'] = _get_cleaning_steps(task, enabled=True) else: # For manual cleaning, the list of cleaning steps was specified by the # user and already saved in node.driver_internal_info['clean_steps']. # Now that we know what the driver's available clean steps are, we can # do further checks to validate the user's clean steps. steps = node.driver_internal_info['clean_steps'] driver_internal_info['clean_steps'] = ( _validate_user_clean_steps(task, steps)) node.clean_step = {} driver_internal_info['clean_step_index'] = None node.driver_internal_info = driver_internal_info node.save() def _get_deployment_templates(task): """Get deployment templates for task.node. 
Return deployment templates where the name of the deployment template matches one of the node's instance traits (the subset of the node's traits requested by the user via a flavor or image). :param task: A TaskManager object :returns: a list of DeployTemplate objects. """ node = task.node if not node.instance_info.get('traits'): return [] instance_traits = node.instance_info['traits'] return deploy_template.DeployTemplate.list_by_names(task.context, instance_traits) def _get_steps_from_deployment_templates(task, templates): """Get deployment template steps for task.node. Given a list of deploy template objects, return a list of all deploy steps combined. :param task: A TaskManager object :param templates: a list of deploy templates :returns: A list of deploy step dictionaries """ steps = [] # NOTE(mgoddard): The steps from the object include id, created_at, etc., # which we don't want to include when we assign them to # node.driver_internal_info. Include only the relevant fields. step_fields = ('interface', 'step', 'args', 'priority') for template in templates: steps.extend([{key: step[key] for key in step_fields} for step in template.steps]) return steps def _get_validated_steps_from_templates(task, skip_missing=False): """Return a list of validated deploy steps from deploy templates. Deployment template steps are those steps defined in deployment templates where the name of the deployment template matches one of the node's instance traits (the subset of the node's traits requested by the user via a flavor or image). There may be many such matching templates, each with a list of steps to execute. This method gathers the steps from all matching deploy templates for a node, and validates those steps against the node's driver interfaces, raising an error if validation fails. :param task: A TaskManager object :raises: InvalidParameterValue if validation of steps fails. :raises: InstanceDeployFailure if there was a problem getting the deploy steps. 
:returns: A list of validated deploy step dictionaries """ # Gather deploy templates matching the node's instance traits. templates = _get_deployment_templates(task) # Gather deploy steps from deploy templates. user_steps = _get_steps_from_deployment_templates(task, templates) # Validate the steps. error_prefix = (_('Validation of deploy steps from deploy templates ' 'matching this node\'s instance traits failed. Matching ' 'deploy templates: %(templates)s. Errors: ') % {'templates': ','.join(t.name for t in templates)}) return _validate_user_deploy_steps(task, user_steps, error_prefix=error_prefix, skip_missing=skip_missing) def _get_all_deployment_steps(task, skip_missing=False): """Get deployment steps for task.node. Deployment steps from matching deployment templates are combined with those from driver interfaces and all enabled steps returned in priority order. :param task: A TaskManager object :raises: InstanceDeployFailure if there was a problem getting the deploy steps. :returns: A list of deploy step dictionaries """ # Gather deploy steps from deploy templates and validate. # NOTE(mgoddard): although we've probably just validated the templates in # do_node_deploy, they may have changed in the DB since we last checked, so # validate again. user_steps = _get_validated_steps_from_templates(task, skip_missing=skip_missing) # Gather enabled deploy steps from drivers. driver_steps = _get_deployment_steps(task, enabled=True, sort=False) # Remove driver steps that have been disabled or overridden by user steps. user_step_keys = {(s['interface'], s['step']) for s in user_steps} steps = [s for s in driver_steps if (s['interface'], s['step']) not in user_step_keys] # Add enabled user steps. 
    enabled_user_steps = [s for s in user_steps if s['priority'] > 0]
    steps.extend(enabled_user_steps)

    return _sorted_steps(steps, _deploy_step_key)


def set_node_deployment_steps(task, reset_current=True, skip_missing=False):
    """Set up the node with deployment step information for deploying.

    Get the deploy steps from the driver.

    :param reset_current: Whether to reset the current step to the first one.
    :raises: InstanceDeployFailure if there was a problem getting the
        deployment steps.
    """
    node = task.node
    driver_internal_info = node.driver_internal_info
    driver_internal_info['deploy_steps'] = _get_all_deployment_steps(
        task, skip_missing=skip_missing)
    if reset_current:
        node.deploy_step = {}
        driver_internal_info['deploy_step_index'] = None
    node.driver_internal_info = driver_internal_info
    node.save()


def _step_id(step):
    """Return the 'ID' of a deploy step.

    The ID is a string, <interface>.<step>.

    :param step: the step dictionary.
    :return: the step's ID string.
    """
    return '.'.join([step['interface'], step['step']])


def _validate_deploy_steps_unique(user_steps):
    """Validate that deploy steps from deploy templates are unique.

    :param user_steps: a list of user steps. A user step is a dictionary
        with required keys 'interface', 'step', 'args', and 'priority'::

              { 'interface': <driver_interface>,
                'step': <name_of_step>,
                'args': {<arg1>: <value1>, ..., <argN>: <valueN>},
                'priority': <priority_of_step> }

            For example::

              { 'interface': 'deploy',
                'step': 'upgrade_firmware',
                'args': {'force': True},
                'priority': 10 }
    :return: a list of validation error strings for the steps.
    """
    # Check for duplicate steps. Each interface/step combination can be
    # specified at most once.
    errors = []
    counter = collections.Counter(_step_id(step) for step in user_steps)
    duplicates = {step_id for step_id, count in counter.items() if count > 1}
    if duplicates:
        err = (_('deploy steps from all deploy templates matching this '
                 'node\'s instance traits cannot have the same interface '
                 'and step. Duplicate deploy steps for %(duplicates)s')
               % {'duplicates': ', '.join(duplicates)})
        errors.append(err)
    return errors


def _validate_user_step(task, user_step, driver_step, step_type):
    """Validate a user-specified step.

    :param task: A TaskManager object
    :param user_step: a user step dictionary with required keys 'interface'
        and 'step', and optional keys 'args' and 'priority'::

              { 'interface': <driver_interface>,
                'step': <name_of_step>,
                'args': {<arg1>: <value1>, ..., <argN>: <valueN>},
                'priority': <priority_of_step> }

            For example::

              { 'interface': 'deploy',
                'step': 'upgrade_firmware',
                'args': {'force': True} }

    :param driver_step: a driver step dictionary::

              { 'interface': <driver_interface>,
                'step': <name_of_step>,
                'priority': <integer>,
                'abortable': Optional for clean steps, absent for deploy
                             steps. <Boolean>.
                'argsinfo': Optional. A dictionary of
                            {<arg_name>: <arg_info_dict>} entries.
                            <arg_info_dict> is a dictionary with
                            { 'description': <description>,
                              'required': <Boolean> } }

            For example::

              { 'interface': 'deploy',
                'step': 'upgrade_firmware',
                'priority': 10,
                'abortable': True,
                'argsinfo': {
                    'force': { 'description': 'Whether to force the upgrade',
                               'required': False } } }

    :param step_type: either 'clean' or 'deploy'.
    :return: a list of validation error strings for the step.
""" errors = [] # Check that the user-specified arguments are valid argsinfo = driver_step.get('argsinfo') or {} user_args = user_step.get('args') or {} unexpected = set(user_args) - set(argsinfo) if unexpected: error = (_('%(type)s step %(step)s has these unexpected arguments: ' '%(unexpected)s') % {'type': step_type, 'step': user_step, 'unexpected': ', '.join(unexpected)}) errors.append(error) if step_type == 'clean' or user_step['priority'] > 0: # Check that all required arguments were specified by the user missing = [] for (arg_name, arg_info) in argsinfo.items(): if arg_info.get('required', False) and arg_name not in user_args: msg = arg_name if arg_info.get('description'): msg += ' (%(desc)s)' % {'desc': arg_info['description']} missing.append(msg) if missing: error = (_('%(type)s step %(step)s is missing these required ' 'arguments: %(miss)s') % {'type': step_type, 'step': user_step, 'miss': ', '.join(missing)}) errors.append(error) if step_type == 'clean': # Copy fields that should not be provided by a user user_step['abortable'] = driver_step.get('abortable', False) user_step['priority'] = driver_step.get('priority', 0) elif user_step['priority'] > 0: # 'core' deploy steps can only be disabled. # NOTE(mgoddard): we'll need something a little more sophisticated to # track core steps once we split out the single core step. is_core = (driver_step['interface'] == 'deploy' and driver_step['step'] == 'deploy') if is_core: error = (_('deploy step %(step)s on interface %(interface)s is a ' 'core step and cannot be overridden by user steps. It ' 'may be disabled by setting the priority to 0') % {'step': user_step['step'], 'interface': user_step['interface']}) errors.append(error) return errors def _validate_user_steps(task, user_steps, driver_steps, step_type, error_prefix=None, skip_missing=False): """Validate the user-specified steps. :param task: A TaskManager object :param user_steps: a list of user steps. 
        A user step is a dictionary with required keys 'interface' and
        'step', and optional keys 'args' and 'priority'::

              { 'interface': <driver_interface>,
                'step': <name_of_step>,
                'args': {<arg1>: <value1>, ..., <argN>: <valueN>},
                'priority': <priority_of_step> }

            For example::

              { 'interface': 'deploy',
                'step': 'upgrade_firmware',
                'args': {'force': True} }

    :param driver_steps: a list of driver steps::

              { 'interface': <driver_interface>,
                'step': <name_of_step>,
                'priority': <integer>,
                'abortable': Optional for clean steps, absent for deploy
                             steps. <Boolean>.
                'argsinfo': Optional. A dictionary of
                            {<arg_name>: <arg_info_dict>} entries.
                            <arg_info_dict> is a dictionary with
                            { 'description': <description>,
                              'required': <Boolean> } }

            For example::

              { 'interface': 'deploy',
                'step': 'upgrade_firmware',
                'priority': 10,
                'abortable': True,
                'argsinfo': {
                    'force': { 'description': 'Whether to force the upgrade',
                               'required': False } } }

    :param step_type: either 'clean' or 'deploy'.
    :param error_prefix: String to use as a prefix for exception messages,
        or None.
    :raises: InvalidParameterValue if validation of steps fails.
    :raises: NodeCleaningFailure or InstanceDeployFailure if there was a
        problem getting the steps from the driver.
    :return: validated steps updated with information from the driver
    """
    errors = []

    # Convert driver steps to a dict.
    driver_steps = {_step_id(s): s for s in driver_steps}

    result = []

    for user_step in user_steps:
        # Check if this user-specified step isn't supported by the driver
        try:
            driver_step = driver_steps[_step_id(user_step)]
        except KeyError:
            if skip_missing:
                LOG.debug('%(type)s step %(step)s is not currently known for '
                          'node %(node)s, delaying its validation until '
                          'in-band steps are loaded',
                          {'type': step_type.capitalize(),
                           'step': user_step, 'node': task.node.uuid})
            else:
                error = (_('node does not support this %(type)s step: '
                           '%(step)s') % {'type': step_type,
                                          'step': user_step})
                errors.append(error)
            continue

        step_errors = _validate_user_step(task, user_step, driver_step,
                                          step_type)
        errors.extend(step_errors)
        result.append(user_step)

    if step_type == 'deploy':
        # Deploy steps should be unique across all combined templates.
        dup_errors = _validate_deploy_steps_unique(result)
        errors.extend(dup_errors)

    if errors:
        err = error_prefix or ''
        err += '; '.join(errors)
        raise exception.InvalidParameterValue(err=err)

    return result


def _validate_user_clean_steps(task, user_steps):
    """Validate the user-specified clean steps.

    :param task: A TaskManager object
    :param user_steps: a list of clean steps. A clean step is a dictionary
        with required keys 'interface' and 'step', and optional key 'args'::

              { 'interface': <driver_interface>,
                'step': <name_of_clean_step>,
                'args': {<arg1>: <value1>, ..., <argN>: <valueN>} }

            For example::

              { 'interface': 'deploy',
                'step': 'upgrade_firmware',
                'args': {'force': True} }
    :raises: InvalidParameterValue if validation of clean steps fails.
    :raises: NodeCleaningFailure if there was a problem getting the clean
        steps from the driver.
    :return: validated clean steps updated with information from the driver
    """
    driver_steps = _get_cleaning_steps(task, enabled=False, sort=False)
    return _validate_user_steps(task, user_steps, driver_steps, 'clean')


def _validate_user_deploy_steps(task, user_steps, error_prefix=None,
                                skip_missing=False):
    """Validate the user-specified deploy steps.

    :param task: A TaskManager object
    :param user_steps: a list of deploy steps. A deploy step is a dictionary
        with required keys 'interface', 'step', 'args', and 'priority'::

              { 'interface': <driver_interface>,
                'step': <name_of_deploy_step>,
                'args': {<arg1>: <value1>, ..., <argN>: <valueN>},
                'priority': <priority_of_deploy_step> }

            For example::

              { 'interface': 'bios',
                'step': 'apply_configuration',
                'args': { 'settings': [ { 'foo': 'bar' } ] },
                'priority': 150 }

    :param error_prefix: String to use as a prefix for exception messages,
        or None.
    :raises: InvalidParameterValue if validation of deploy steps fails.
    :raises: InstanceDeployFailure if there was a problem getting the deploy
        steps from the driver.
    :return: validated deploy steps updated with information from the driver
    """
    driver_steps = _get_deployment_steps(task, enabled=False, sort=False)
    return _validate_user_steps(task, user_steps, driver_steps, 'deploy',
                                error_prefix=error_prefix,
                                skip_missing=skip_missing)


def validate_deploy_templates(task, skip_missing=False):
    """Validate the deploy templates for a node.

    :param task: A TaskManager object
    :raises: InvalidParameterValue if the instance has traits that map to
        deploy steps that are unsupported by the node's driver interfaces.
    :raises: InstanceDeployFailure if there was a problem getting the deploy
        steps from the driver.
    """
    # Gather deploy steps from matching deploy templates and validate them.
    _get_validated_steps_from_templates(task, skip_missing=skip_missing)

ironic-15.0.0/ironic/conductor/base_manager.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
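The override and ordering behaviour of `_get_all_deployment_steps` above can be sketched standalone. This is a simplified illustration, not ironic's code: the step dictionaries are hypothetical, and the real `_sorted_steps`/`_deploy_step_key` ordering may break priority ties differently; here steps are sorted by priority alone.

```python
# Minimal sketch of deploy-step merging: a template ("user") step overrides
# a driver step with the same (interface, step) key, priority 0 disables a
# step, and the highest priority runs first.

def merge_steps(driver_steps, user_steps):
    """Combine driver and template steps; user steps win on conflicts."""
    user_keys = {(s['interface'], s['step']) for s in user_steps}
    # Drop driver steps overridden (or disabled) by user steps.
    steps = [s for s in driver_steps
             if (s['interface'], s['step']) not in user_keys]
    # Add enabled user steps; priority 0 means "disabled".
    steps.extend(s for s in user_steps if s['priority'] > 0)
    # Sort so the highest-priority step runs first.
    return sorted(steps, key=lambda s: s['priority'], reverse=True)

driver = [{'interface': 'deploy', 'step': 'deploy', 'priority': 100},
          {'interface': 'bios', 'step': 'apply_configuration', 'priority': 0}]
user = [{'interface': 'bios', 'step': 'apply_configuration',
         'priority': 150}]
merged = merge_steps(driver, user)
# The template's bios step replaces the driver's and now runs first.
```

A template step with priority 0 would instead remove the driver step without adding anything, which is how "core steps can only be disabled" works in the validation above.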
"""Base conductor manager functionality.""" import inspect import threading import eventlet import futurist from futurist import periodics from futurist import rejection from ironic_lib import mdns from oslo_db import exception as db_exception from oslo_log import log from oslo_utils import excutils from oslo_utils import versionutils from ironic.common import context as ironic_context from ironic.common import driver_factory from ironic.common import exception from ironic.common import hash_ring from ironic.common.i18n import _ from ironic.common import release_mappings as versions from ironic.common import rpc from ironic.common import states from ironic.conductor import allocations from ironic.conductor import notification_utils as notify_utils from ironic.conductor import task_manager from ironic.conf import CONF from ironic.db import api as dbapi from ironic.drivers import base as driver_base from ironic.drivers.modules import deploy_utils from ironic import objects from ironic.objects import fields as obj_fields LOG = log.getLogger(__name__) def _check_enabled_interfaces(): """Sanity-check enabled_*_interfaces configs. We do this before we even bother to try to load up drivers. If we have any dynamic drivers enabled, then we need interfaces enabled as well. :raises: ConfigInvalid if an enabled interfaces config option is empty. 
""" empty_confs = [] iface_types = ['enabled_%s_interfaces' % i for i in driver_base.ALL_INTERFACES] for iface_type in iface_types: conf_value = getattr(CONF, iface_type) if not conf_value: empty_confs.append(iface_type) if empty_confs: msg = (_('Configuration options %s cannot be an empty list.') % ', '.join(empty_confs)) raise exception.ConfigInvalid(error_msg=msg) class BaseConductorManager(object): def __init__(self, host, topic): super(BaseConductorManager, self).__init__() if not host: host = CONF.host self.host = host self.topic = topic self.sensors_notifier = rpc.get_sensors_notifier() self._started = False self._shutdown = None self._zeroconf = None def init_host(self, admin_context=None): """Initialize the conductor host. :param admin_context: the admin context to pass to periodic tasks. :raises: RuntimeError when conductor is already running. :raises: NoDriversLoaded when no drivers are enabled on the conductor. :raises: DriverNotFound if a driver is enabled that does not exist. :raises: DriverLoadError if an enabled driver cannot be loaded. :raises: DriverNameConflict if a classic driver and a dynamic driver are both enabled and have the same name. """ if self._started: raise RuntimeError(_('Attempt to start an already running ' 'conductor manager')) self._shutdown = False self.dbapi = dbapi.get_instance() self._keepalive_evt = threading.Event() """Event for the keepalive thread.""" # TODO(dtantsur): make the threshold configurable? rejection_func = rejection.reject_when_reached( CONF.conductor.workers_pool_size) self._executor = futurist.GreenThreadPoolExecutor( max_workers=CONF.conductor.workers_pool_size, check_and_reject=rejection_func) """Executor for performing tasks async.""" # TODO(jroll) delete the use_groups argument and use the default # in Stein. 
self.ring_manager = hash_ring.HashRingManager( use_groups=self._use_groups()) """Consistent hash ring which maps drivers to conductors.""" _check_enabled_interfaces() # NOTE(tenbrae): these calls may raise DriverLoadError or # DriverNotFound # NOTE(vdrok): Instantiate network and storage interface factory on # startup so that all the interfaces are loaded at the very # beginning, and failures prevent the conductor from starting. hardware_types = driver_factory.hardware_types() driver_factory.NetworkInterfaceFactory() driver_factory.StorageInterfaceFactory() # NOTE(jroll) this is passed to the dbapi, which requires a list, not # a generator (which keys() returns in py3) hardware_type_names = list(hardware_types) # check that at least one driver is loaded, whether classic or dynamic if not hardware_type_names: msg = ("Conductor %s cannot be started because no hardware types " "were specified in the 'enabled_hardware_types' config " "option.") LOG.error(msg, self.host) raise exception.NoDriversLoaded(conductor=self.host) self._collect_periodic_tasks(admin_context) # clear all target_power_state with locks by this conductor self.dbapi.clear_node_target_power_state(self.host) # clear all locks held by this conductor before registering self.dbapi.clear_node_reservations_for_conductor(self.host) try: # Register this conductor with the cluster self.conductor = objects.Conductor.register( admin_context, self.host, hardware_type_names, CONF.conductor.conductor_group) except exception.ConductorAlreadyRegistered: # This conductor was already registered and did not shut down # properly, so log a warning and update the record. LOG.warning("A conductor with hostname %(hostname)s was " "previously registered. 
Updating registration", {'hostname': self.host}) self.conductor = objects.Conductor.register( admin_context, self.host, hardware_type_names, CONF.conductor.conductor_group, update_existing=True) # register hardware types and interfaces supported by this conductor # and validate them against other conductors try: self._register_and_validate_hardware_interfaces(hardware_types) except (exception.DriverLoadError, exception.DriverNotFound, exception.ConductorHardwareInterfacesAlreadyRegistered, exception.InterfaceNotFoundInEntrypoint, exception.NoValidDefaultForInterface) as e: with excutils.save_and_reraise_exception(): LOG.error('Failed to register hardware types. %s', e) self.del_host() # Start periodic tasks self._periodic_tasks_worker = self._executor.submit( self._periodic_tasks.start, allow_empty=True) self._periodic_tasks_worker.add_done_callback( self._on_periodic_tasks_stop) for state in states.STUCK_STATES_TREATED_AS_FAIL: self._fail_transient_state( state, _("The %(state)s state can't be resumed by conductor " "%(host)s. Moving to fail state.") % {'state': state, 'host': self.host}) # Start consoles if it set enabled in a greenthread. try: self._spawn_worker(self._start_consoles, ironic_context.get_admin_context()) except exception.NoFreeConductorWorker: LOG.warning('Failed to start worker for restarting consoles.') # Spawn a dedicated greenthread for the keepalive try: self._spawn_worker(self._conductor_service_record_keepalive) LOG.info('Successfully started conductor with hostname ' '%(hostname)s.', {'hostname': self.host}) except exception.NoFreeConductorWorker: with excutils.save_and_reraise_exception(): LOG.critical('Failed to start keepalive') self.del_host() # Resume allocations that started before the restart. 
try: self._spawn_worker(self._resume_allocations, ironic_context.get_admin_context()) except exception.NoFreeConductorWorker: LOG.warning('Failed to start worker for resuming allocations.') if CONF.conductor.enable_mdns: self._publish_endpoint() self._started = True def _use_groups(self): release_ver = versions.RELEASE_MAPPING.get(CONF.pin_release_version) # NOTE(jroll) self.RPC_API_VERSION is actually defined in a subclass, # but we only use this class from there. version_cap = (release_ver['rpc'] if release_ver else self.RPC_API_VERSION) return versionutils.is_compatible('1.47', version_cap) def _fail_transient_state(self, state, last_error): """Apply "fail" transition to nodes in a transient state. If the conductor server dies abruptly mid deployment or cleaning (OMM Killer, power outage, etc...) we can not resume the process even if the conductor comes back online. Cleaning the reservation of the nodes (dbapi.clear_node_reservations_for_conductor) is not enough to unstick it, so let's gracefully fail the process. """ filters = {'reserved': False, 'provision_state': state} self._fail_if_in_state(ironic_context.get_admin_context(), filters, state, 'provision_updated_at', last_error=last_error) def _collect_periodic_tasks(self, admin_context): """Collect driver-specific periodic tasks. Conductor periodic tasks accept context argument, driver periodic tasks accept this manager and context. We have to ensure that the same driver interface class is not traversed twice, otherwise we'll have several instances of the same task. :param admin_context: Administrator context to pass to tasks. """ LOG.debug('Collecting periodic tasks') # collected callables periodic_task_callables = [] # list of visited classes to avoid adding the same tasks twice periodic_task_classes = set() def _collect_from(obj, args): """Collect tasks from the given object. :param obj: the object to collect tasks from. :param args: a tuple of arguments to pass to tasks. 
""" if obj and obj.__class__ not in periodic_task_classes: for name, member in inspect.getmembers(obj): if periodics.is_periodic(member): LOG.debug('Found periodic task %(owner)s.%(member)s', {'owner': obj.__class__.__name__, 'member': name}) periodic_task_callables.append((member, args, {})) periodic_task_classes.add(obj.__class__) # First, collect tasks from the conductor itself _collect_from(self, (admin_context,)) # Second, collect tasks from hardware interfaces for ifaces in driver_factory.all_interfaces().values(): for iface in ifaces.values(): _collect_from(iface, args=(self, admin_context)) # TODO(dtantsur): allow periodics on hardware types themselves? if len(periodic_task_callables) > CONF.conductor.workers_pool_size: LOG.warning('This conductor has %(tasks)d periodic tasks ' 'enabled, but only %(workers)d task workers ' 'allowed by [conductor]workers_pool_size option', {'tasks': len(periodic_task_callables), 'workers': CONF.conductor.workers_pool_size}) self._periodic_tasks = periodics.PeriodicWorker( periodic_task_callables, executor_factory=periodics.ExistingExecutor(self._executor)) # This is only used in tests currently. Delete it? self._periodic_task_callables = periodic_task_callables def del_host(self, deregister=True): # Conductor deregistration fails if called on non-initialized # conductor (e.g. when rpc server is unreachable). if not hasattr(self, 'conductor'): return self._shutdown = True self._keepalive_evt.set() # clear all locks held by this conductor before deregistering self.dbapi.clear_node_reservations_for_conductor(self.host) if deregister: try: # Inform the cluster that this conductor is shutting down. # Note that rebalancing will not occur immediately, but when # the periodic sync takes place. 
self.conductor.unregister() LOG.info('Successfully stopped conductor with hostname ' '%(hostname)s.', {'hostname': self.host}) except exception.ConductorNotFound: pass else: LOG.info('Not deregistering conductor with hostname %(hostname)s.', {'hostname': self.host}) # Waiting here to give workers the chance to finish. This has the # benefit of releasing locks workers placed on nodes, as well as # having work complete normally. self._periodic_tasks.stop() self._periodic_tasks.wait() self._executor.shutdown(wait=True) if self._zeroconf is not None: self._zeroconf.close() self._zeroconf = None self._started = False def _register_and_validate_hardware_interfaces(self, hardware_types): """Register and validate hardware interfaces for this conductor. Registers a row in the database for each combination of (hardware type, interface type, interface) that is supported and enabled. TODO: Validates against other conductors to check if the set of registered hardware interfaces for a given hardware type is the same, and warns if not (we can't error out, otherwise all conductors must be restarted at once to change configuration). :param hardware_types: Dictionary mapping hardware type name to hardware type object. 
:raises: ConductorHardwareInterfacesAlreadyRegistered :raises: InterfaceNotFoundInEntrypoint :raises: NoValidDefaultForInterface if the default value cannot be calculated and is not provided in the configuration """ # first unregister, in case we have cruft laying around self.conductor.unregister_all_hardware_interfaces() for ht_name, ht in hardware_types.items(): interface_map = driver_factory.enabled_supported_interfaces(ht) for interface_type, interface_names in interface_map.items(): default_interface = driver_factory.default_interface( ht, interface_type, driver_name=ht_name) self.conductor.register_hardware_interfaces(ht_name, interface_type, interface_names, default_interface) # TODO(jroll) validate against other conductor, warn if different # how do we do this performantly? :| def _on_periodic_tasks_stop(self, fut): try: fut.result() except Exception as exc: LOG.critical('Periodic tasks worker has failed: %s', exc) else: LOG.info('Successfully shut down periodic tasks') def iter_nodes(self, fields=None, **kwargs): """Iterate over nodes mapped to this conductor. Requests node set from and filters out nodes that are not mapped to this conductor. Yields tuples (node_uuid, driver, conductor_group, ...) where ... is derived from fields argument, e.g.: fields=None means yielding ('uuid', 'driver', 'conductor_group'), fields=['foo'] means yielding ('uuid', 'driver', 'conductor_group', 'foo'). :param fields: list of fields to fetch in addition to uuid, driver, and conductor_group :param kwargs: additional arguments to pass to dbapi when looking for nodes :return: generator yielding tuples of requested fields """ columns = ['uuid', 'driver', 'conductor_group'] + list(fields or ()) node_list = self.dbapi.get_nodeinfo_list(columns=columns, **kwargs) for result in node_list: if self._shutdown: break if self._mapped_to_this_conductor(*result[:3]): yield result def _spawn_worker(self, func, *args, **kwargs): """Create a greenthread to run func(*args, **kwargs). 
Spawns a greenthread if there are free slots in pool, otherwise raises exception. Execution control returns immediately to the caller. :returns: Future object. :raises: NoFreeConductorWorker if worker pool is currently full. """ try: return self._executor.submit(func, *args, **kwargs) except futurist.RejectedSubmission: raise exception.NoFreeConductorWorker() def _conductor_service_record_keepalive(self): while not self._keepalive_evt.is_set(): try: self.conductor.touch() except db_exception.DBConnectionError: LOG.warning('Conductor could not connect to database ' 'while heartbeating.') except Exception as e: LOG.exception('Error while heartbeating. Error: %(err)s', {'err': e}) self._keepalive_evt.wait(CONF.conductor.heartbeat_interval) def _mapped_to_this_conductor(self, node_uuid, driver, conductor_group): """Check that node is mapped to this conductor. Note that because mappings are eventually consistent, it is possible for two conductors to simultaneously believe that a node is mapped to them. Any operation that depends on exclusive control of a node should take out a lock. """ try: ring = self.ring_manager.get_ring(driver, conductor_group) except exception.DriverNotFound: return False return self.host in ring.get_nodes(node_uuid.encode('utf-8')) def _fail_if_in_state(self, context, filters, provision_state, sort_key, callback_method=None, err_handler=None, last_error=None, keep_target_state=False): """Fail nodes that are in specified state. Retrieves nodes that satisfy the criteria in 'filters'. If any of these nodes is in 'provision_state', it has failed in whatever provisioning activity it was currently doing. That failure is processed here. :param context: request context :param filters: criteria (as a dictionary) to get the desired list of nodes that satisfy the filter constraints. For example, if filters['provisioned_before'] = 60, this would process nodes whose provision_updated_at field value was 60 or more seconds before 'now'. 
:param provision_state: provision_state that the node is in, for the provisioning activity to have failed, either one string or a set. :param sort_key: the nodes are sorted based on this key. :param callback_method: the callback method to be invoked in a spawned thread, for a failed node. This method must take a :class:`TaskManager` as the first (and only required) parameter. :param err_handler: for a failed node, the error handler to invoke if an error occurs trying to spawn an thread to do the callback_method. :param last_error: the error message to be updated in node.last_error :param keep_target_state: if True, a failed node will keep the same target provision state it had before the failure. Otherwise, the node's target provision state will be determined by the fsm. """ if isinstance(provision_state, str): provision_state = {provision_state} node_iter = self.iter_nodes(filters=filters, sort_key=sort_key, sort_dir='asc') desired_maintenance = filters.get('maintenance') workers_count = 0 for node_uuid, driver, conductor_group in node_iter: try: with task_manager.acquire(context, node_uuid, purpose='node state check') as task: # Check maintenance value since it could have changed # after the filtering was done. 
if (desired_maintenance is not None and desired_maintenance != task.node.maintenance): continue if task.node.provision_state not in provision_state: continue target_state = (None if not keep_target_state else task.node.target_provision_state) # timeout has been reached - process the event 'fail' if callback_method: task.process_event('fail', callback=self._spawn_worker, call_args=(callback_method, task), err_handler=err_handler, target_state=target_state) else: task.node.last_error = last_error task.process_event('fail', target_state=target_state) except exception.NoFreeConductorWorker: break except (exception.NodeLocked, exception.NodeNotFound): continue workers_count += 1 if workers_count >= CONF.conductor.periodic_max_workers: break def _start_consoles(self, context): """Start consoles if set enabled. :param context: request context """ filters = {'console_enabled': True} node_iter = self.iter_nodes(filters=filters) for node_uuid, driver, conductor_group in node_iter: try: with task_manager.acquire(context, node_uuid, shared=False, purpose='start console') as task: notify_utils.emit_console_notification( task, 'console_restore', obj_fields.NotificationStatus.START) try: LOG.debug('Trying to start console of node %(node)s', {'node': node_uuid}) task.driver.console.start_console(task) LOG.info('Successfully started console of node ' '%(node)s', {'node': node_uuid}) notify_utils.emit_console_notification( task, 'console_restore', obj_fields.NotificationStatus.END) except Exception as err: msg = (_('Failed to start console of node %(node)s ' 'while starting the conductor, so changing ' 'the console_enabled status to False, error: ' '%(err)s') % {'node': node_uuid, 'err': err}) LOG.error(msg) # If starting console failed, set node console_enabled # back to False and set node's last error. 
                    task.node.last_error = msg
                    task.node.console_enabled = False
                    task.node.save()
                    notify_utils.emit_console_notification(
                        task, 'console_restore',
                        obj_fields.NotificationStatus.ERROR)
            except exception.NodeLocked:
                LOG.warning('Node %(node)s is locked while trying to '
                            'start console on conductor startup',
                            {'node': node_uuid})
                continue
            except exception.NodeNotFound:
                LOG.warning("During starting console on conductor "
                            "startup, node %(node)s was not found",
                            {'node': node_uuid})
                continue
            finally:
                # Yield on every iteration
                eventlet.sleep(0)

    def _resume_allocations(self, context):
        """Resume unfinished allocations on restart."""
        filters = {'state': states.ALLOCATING,
                   'conductor_affinity': self.conductor.id}
        for allocation in objects.Allocation.list(context, filters=filters):
            LOG.debug('Resuming unfinished allocation %s', allocation.uuid)
            allocations.do_allocate(context, allocation)

    def _publish_endpoint(self):
        params = {}
        if CONF.debug:
            params['ipa_debug'] = True
        self._zeroconf = mdns.Zeroconf()
        self._zeroconf.register_service('baremetal',
                                        deploy_utils.get_ironic_api_url(),
                                        params=params)

ironic-15.0.0/ironic/conductor/rpcapi.py

# coding=utf-8
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Client side of the conductor RPC API.
""" import random import oslo_messaging as messaging from ironic.common import exception from ironic.common import hash_ring from ironic.common.i18n import _ from ironic.common.json_rpc import client as json_rpc from ironic.common import release_mappings as versions from ironic.common import rpc from ironic.conductor import manager from ironic.conf import CONF from ironic.db import api as dbapi from ironic.objects import base as objects_base class ConductorAPI(object): """Client side of the conductor RPC API. API version history: | 1.0 - Initial version. | Included get_node_power_status | 1.1 - Added update_node and start_power_state_change. | 1.2 - Added vendor_passthru. | 1.3 - Rename start_power_state_change to change_node_power_state. | 1.4 - Added do_node_deploy and do_node_tear_down. | 1.5 - Added validate_driver_interfaces. | 1.6 - change_node_power_state, do_node_deploy and do_node_tear_down | accept node id instead of node object. | 1.7 - Added topic parameter to RPC methods. | 1.8 - Added change_node_maintenance_mode. | 1.9 - Added destroy_node. | 1.10 - Remove get_node_power_state | 1.11 - Added get_console_information, set_console_mode. | 1.12 - validate_vendor_action, do_vendor_action replaced by single | vendor_passthru method. | 1.13 - Added update_port. | 1.14 - Added driver_vendor_passthru. | 1.15 - Added rebuild parameter to do_node_deploy. | 1.16 - Added get_driver_properties. | 1.17 - Added set_boot_device, get_boot_device and | get_supported_boot_devices. | 1.18 - Remove change_node_maintenance_mode. | 1.19 - Change return value of vendor_passthru and | driver_vendor_passthru | 1.20 - Added http_method parameter to vendor_passthru and | driver_vendor_passthru | 1.21 - Added get_node_vendor_passthru_methods and | get_driver_vendor_passthru_methods | 1.22 - Added configdrive parameter to do_node_deploy. 
| 1.23 - Added do_provisioning_action | 1.24 - Added inspect_hardware method | 1.25 - Added destroy_port | 1.26 - Added continue_node_clean | 1.27 - Convert continue_node_clean to cast | 1.28 - Change exceptions raised by destroy_node | 1.29 - Change return value of vendor_passthru and | driver_vendor_passthru to a dictionary | 1.30 - Added set_target_raid_config and | get_raid_logical_disk_properties | 1.31 - Added Versioned Objects indirection API methods: | object_class_action_versions, object_action and | object_backport_versions | 1.32 - Add do_node_clean | 1.33 - Added update and destroy portgroup. | 1.34 - Added heartbeat | 1.35 - Added destroy_volume_connector and update_volume_connector | 1.36 - Added create_node | 1.37 - Added destroy_volume_target and update_volume_target | 1.38 - Added vif_attach, vif_detach, vif_list | 1.39 - Added timeout optional parameter to change_node_power_state | 1.40 - Added inject_nmi | 1.41 - Added create_port | 1.42 - Added optional agent_version to heartbeat | 1.43 - Added do_node_rescue, do_node_unrescue and can_send_rescue | 1.44 - Added add_node_traits and remove_node_traits. | 1.45 - Added continue_node_deploy | 1.46 - Added reset_interfaces to update_node | 1.47 - Added support for conductor groups | 1.48 - Added allocation API | 1.49 - Added get_node_with_token and agent_token argument to heartbeat | 1.50 - Added set_indicator_state, get_indicator_state and | get_supported_indicators. """ # NOTE(rloo): This must be in sync with manager.ConductorManager's. 
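The version history above feeds the pinning logic in `ConductorAPI.__init__`: when an operator pins `CONF.pin_release_version` for a rolling upgrade, the RPC version from the release mapping wins; otherwise the client's own `RPC_API_VERSION` is the cap. A minimal sketch of that selection (the `RELEASE_MAPPING` table here is illustrative, not the real `ironic.common.release_mappings` data):

```python
# Illustrative subset of a release-name -> capability mapping; the real
# table lives in ironic.common.release_mappings.
RELEASE_MAPPING = {
    'ussuri': {'rpc': '1.50'},
    'train': {'rpc': '1.48'},
}

RPC_API_VERSION = '1.50'


def pick_version_cap(pin_release_version):
    """Return the RPC version cap, honouring an optional release pin."""
    release_ver = RELEASE_MAPPING.get(pin_release_version)
    return release_ver['rpc'] if release_ver else RPC_API_VERSION
```

With no pin configured the client speaks its newest version; pinning `'train'` would cap all outgoing calls at `1.48` until the conductors are upgraded.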
# NOTE(pas-ha): This also must be in sync with # ironic.common.release_mappings.RELEASE_MAPPING['master'] RPC_API_VERSION = '1.50' def __init__(self, topic=None): super(ConductorAPI, self).__init__() self.topic = topic if self.topic is None: self.topic = manager.MANAGER_TOPIC serializer = objects_base.IronicObjectSerializer() release_ver = versions.RELEASE_MAPPING.get(CONF.pin_release_version) version_cap = (release_ver['rpc'] if release_ver else self.RPC_API_VERSION) if CONF.rpc_transport == 'json-rpc': self.client = json_rpc.Client(serializer=serializer, version_cap=version_cap) self.topic = '' else: target = messaging.Target(topic=self.topic, version='1.0') self.client = rpc.get_client(target, version_cap=version_cap, serializer=serializer) use_groups = self.client.can_send_version('1.47') # NOTE(tenbrae): this is going to be buggy self.ring_manager = hash_ring.HashRingManager(use_groups=use_groups) def get_conductor_for(self, node): """Get the conductor which the node is mapped to. :param node: a node object. :returns: the conductor hostname. :raises: NoValidHost """ try: ring = self.ring_manager.get_ring(node.driver, node.conductor_group) dest = ring.get_nodes(node.uuid.encode('utf-8')) return dest.pop() except exception.DriverNotFound: reason = (_('No conductor service registered which supports ' 'driver %(driver)s for conductor group "%(group)s".') % {'driver': node.driver, 'group': node.conductor_group}) raise exception.NoValidHost(reason=reason) def get_topic_for(self, node): """Get the RPC topic for the conductor service the node is mapped to. :param node: a node object. :returns: an RPC topic string. 
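`get_conductor_for` above delegates to `HashRingManager`, which builds a consistent hash ring per driver/conductor group. The real ring uses replicated hash partitions; a deliberately simplified sketch of the core idea (hash the node UUID, map it onto the registered conductors deterministically) might look like:

```python
import hashlib


def conductor_for(node_uuid, conductors):
    """Map a node UUID onto one of the registered conductors.

    Simplified stand-in for ironic's hash ring: the real implementation
    uses a partitioned ring, but the key property shown here is the
    same -- the mapping depends only on the UUID and the conductor set,
    not on the order conductors were registered in.
    """
    if not conductors:
        raise LookupError('No conductor service registered')
    digest = hashlib.md5(node_uuid.encode('utf-8')).hexdigest()
    return sorted(conductors)[int(digest, 16) % len(conductors)]
```

Because the choice is a pure function of the inputs, every API service computes the same node-to-conductor mapping without coordination.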
:raises: NoValidHost """ hostname = self.get_conductor_for(node) return '%s.%s' % (self.topic, hostname) def get_random_topic(self): """Get an RPC topic for a random conductor service.""" conductors = dbapi.get_instance().get_online_conductors() try: hostname = random.choice(conductors) except IndexError: # There are no conductors - return 503 Service Unavailable raise exception.TemporaryFailure() return '%s.%s' % (self.topic, hostname) def get_topic_for_driver(self, driver_name): """Get RPC topic name for a conductor supporting the given driver. The topic is used to route messages to the conductor supporting the specified driver. A conductor is selected at random from the set of qualified conductors. :param driver_name: the name of the driver to route to. :returns: an RPC topic string. :raises: DriverNotFound """ # NOTE(jroll) we want to be able to route this to any conductor, # regardless of groupings. We use a fresh, uncached hash ring that # does not take groups into account. local_ring_manager = hash_ring.HashRingManager(use_groups=False, cache=False) try: ring = local_ring_manager.get_ring(driver_name, '') except exception.TemporaryFailure: # NOTE(dtantsur): even if no conductors are registered, it makes # sense to report 404 on any driver request. raise exception.DriverNotFound(_("No conductors registered.")) host = random.choice(list(ring.nodes)) return self.topic + "." + host def can_send_create_port(self): """Return whether the RPCAPI supports the create_port method.""" return self.client.can_send_version("1.41") def can_send_rescue(self): """Return whether the RPCAPI supports node rescue methods.""" return self.client.can_send_version("1.43") def create_node(self, context, node_obj, topic=None): """Synchronously, have a conductor validate and create a node. Create the node's information in the database and return a node object. :param context: request context. :param node_obj: a created (but not saved) node object. :param topic: RPC topic. 
Defaults to self.topic. :returns: created node object. :raises: InterfaceNotFoundInEntrypoint if validation fails for any dynamic interfaces (e.g. network_interface). :raises: NoValidDefaultForInterface if no default can be calculated for some interfaces, and explicit values must be provided. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.36') return cctxt.call(context, 'create_node', node_obj=node_obj) def update_node(self, context, node_obj, topic=None, reset_interfaces=False): """Synchronously, have a conductor update the node's information. Update the node's information in the database and return a node object. The conductor will lock the node while it validates the supplied information. If driver_info is passed, it will be validated by the core drivers. If instance_uuid is passed, it will be set or unset only if the node is properly configured. Note that power_state should not be passed via this method. Use change_node_power_state for initiating driver actions. :param context: request context. :param node_obj: a changed (but not saved) node object. :param topic: RPC topic. Defaults to self.topic. :param reset_interfaces: whether to reset hardware interfaces to their defaults. :returns: updated node object, including all fields. :raises: NoValidDefaultForInterface if no default can be calculated for some interfaces, and explicit values must be provided. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.1') return cctxt.call(context, 'update_node', node_obj=node_obj, reset_interfaces=reset_interfaces) def change_node_power_state(self, context, node_id, new_state, topic=None, timeout=None): """Change a node's power state. Synchronously, acquire lock and start the conductor background task to change power state of a node. :param context: request context. :param node_id: node id or uuid. 
:param new_state: one of ironic.common.states power state values :param timeout: timeout (in seconds) positive integer (> 0) for any power state. ``None`` indicates to use default timeout. :param topic: RPC topic. Defaults to self.topic. :raises: NoFreeConductorWorker when there is no free worker to start async task. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.39') return cctxt.call(context, 'change_node_power_state', node_id=node_id, new_state=new_state, timeout=timeout) def vendor_passthru(self, context, node_id, driver_method, http_method, info, topic=None): """Receive requests for vendor-specific actions. Synchronously validate driver specific info or get driver status, and if successful invokes the vendor method. If the method mode is async the conductor will start background worker to perform vendor action. :param context: request context. :param node_id: node id or uuid. :param driver_method: name of method for driver. :param http_method: the HTTP method used for the request. :param info: info for node driver. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue if supplied info is not valid. :raises: MissingParameterValue if a required parameter is missing :raises: UnsupportedDriverExtension if current driver does not have vendor interface. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: NodeLocked if node is locked by another conductor. :returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). 
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.20') return cctxt.call(context, 'vendor_passthru', node_id=node_id, driver_method=driver_method, http_method=http_method, info=info) def driver_vendor_passthru(self, context, driver_name, driver_method, http_method, info, topic=None): """Pass vendor-specific calls which don't specify a node to a driver. Handles driver-level vendor passthru calls. These calls don't require a node UUID and are executed on a random conductor with the specified driver. If the method mode is async the conductor will start background worker to perform vendor action. :param context: request context. :param driver_name: name of the driver on which to call the method. :param driver_method: name of the vendor method, for use by the driver. :param http_method: the HTTP method used for the request. :param info: data to pass through to the driver. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue for parameter errors. :raises: MissingParameterValue if a required parameter is missing :raises: UnsupportedDriverExtension if the driver doesn't have a vendor interface, or if the vendor interface does not support the specified driver_method. :raises: DriverNotFound if the supplied driver is not loaded. :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: InterfaceNotFoundInEntrypoint if the default interface for a hardware type is invalid. :raises: NoValidDefaultForInterface if no default interface implementation can be found for this driver's vendor interface. :returns: A dictionary containing: :return: The response of the invoked vendor method :async: Boolean value. Whether the method was invoked asynchronously (True) or synchronously (False). When invoked asynchronously the response will be always None. :attach: Boolean value. Whether to attach the response of the invoked vendor method to the HTTP response object (True) or return it in the response body (False). 
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.20') return cctxt.call(context, 'driver_vendor_passthru', driver_name=driver_name, driver_method=driver_method, http_method=http_method, info=info) def get_node_vendor_passthru_methods(self, context, node_id, topic=None): """Retrieve information about vendor methods of the given node. :param context: an admin context. :param node_id: the id or uuid of a node. :param topic: RPC topic. Defaults to self.topic. :returns: dictionary of : entries. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.21') return cctxt.call(context, 'get_node_vendor_passthru_methods', node_id=node_id) def get_driver_vendor_passthru_methods(self, context, driver_name, topic=None): """Retrieve information about vendor methods of the given driver. :param context: an admin context. :param driver_name: name of the driver. :param topic: RPC topic. Defaults to self.topic. :raises: UnsupportedDriverExtension if current driver does not have vendor interface. :raises: DriverNotFound if the supplied driver is not loaded. :raises: InterfaceNotFoundInEntrypoint if the default interface for a hardware type is invalid. :raises: NoValidDefaultForInterface if no default interface implementation can be found for this driver's vendor interface. :returns: dictionary of : entries. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.21') return cctxt.call(context, 'get_driver_vendor_passthru_methods', driver_name=driver_name) def do_node_deploy(self, context, node_id, rebuild, configdrive, topic=None): """Signal to conductor service to perform a deployment. :param context: request context. :param node_id: node id or uuid. :param rebuild: True if this is a rebuild request. :param configdrive: A gzipped and base64 encoded configdrive. :param topic: RPC topic. Defaults to self.topic. 
:raises: InstanceDeployFailure :raises: InvalidParameterValue if validation fails :raises: MissingParameterValue if a required parameter is missing :raises: NoFreeConductorWorker when there is no free worker to start async task. The node must already be configured and in the appropriate undeployed state before this method is called. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.22') return cctxt.call(context, 'do_node_deploy', node_id=node_id, rebuild=rebuild, configdrive=configdrive) def do_node_tear_down(self, context, node_id, topic=None): """Signal to conductor service to tear down a deployment. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: InstanceDeployFailure :raises: InvalidParameterValue if validation fails :raises: MissingParameterValue if a required parameter is missing :raises: NoFreeConductorWorker when there is no free worker to start async task. The node must already be configured and in the appropriate deployed state before this method is called. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.6') return cctxt.call(context, 'do_node_tear_down', node_id=node_id) def do_provisioning_action(self, context, node_id, action, topic=None): """Signal to conductor service to perform the given action on a node. :param context: request context. :param node_id: node id or uuid. :param action: an action. One of ironic.common.states.VERBS :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: InvalidStateRequested if the requested action can not be performed. This encapsulates some provisioning actions in a single call. 
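`do_provisioning_action` only accepts verbs from `ironic.common.states.VERBS`, raising `InvalidStateRequested` otherwise. A sketch of that guard, assuming a hypothetical subset of the verb table (the real set lives in `ironic.common.states`):

```python
# Hypothetical subset of ironic.common.states.VERBS, for illustration only.
VERBS = {'manage', 'provide', 'abort', 'clean', 'inspect', 'rescue', 'unrescue'}


def validate_provision_action(action):
    """Reject anything that is not a known provisioning verb."""
    if action not in VERBS:
        raise ValueError('Invalid action %s requested' % action)
    return action
```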
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.23') return cctxt.call(context, 'do_provisioning_action', node_id=node_id, action=action) def continue_node_clean(self, context, node_id, topic=None): """Signal to conductor service to start the next cleaning action. NOTE(JoshNang) this is an RPC cast, there will be no response or exception raised by the conductor for this RPC. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.27') return cctxt.cast(context, 'continue_node_clean', node_id=node_id) def continue_node_deploy(self, context, node_id, topic=None): """Signal to conductor service to start the next deployment action. NOTE(rloo): this is an RPC cast, there will be no response or exception raised by the conductor for this RPC. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.45') return cctxt.cast(context, 'continue_node_deploy', node_id=node_id) def validate_driver_interfaces(self, context, node_id, topic=None): """Validate the `core` and `standardized` interfaces for drivers. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :returns: a dictionary containing the results of each interface validation. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.5') return cctxt.call(context, 'validate_driver_interfaces', node_id=node_id) def destroy_node(self, context, node_id, topic=None): """Delete a node. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: NodeAssociated if the node contains an instance associated with it. 
:raises: InvalidState if the node is in the wrong provision state to perform deletion. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.9') return cctxt.call(context, 'destroy_node', node_id=node_id) def get_console_information(self, context, node_id, topic=None): """Get connection information about the console. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: UnsupportedDriverExtension if the node's driver doesn't support console. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if a required parameter is missing """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.11') return cctxt.call(context, 'get_console_information', node_id=node_id) def set_console_mode(self, context, node_id, enabled, topic=None): """Enable/Disable the console. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :param enabled: Boolean value; whether the console is enabled or disabled. :raises: UnsupportedDriverExtension if the node's driver doesn't support console. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if a required parameter is missing :raises: NoFreeConductorWorker when there is no free worker to start async task. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.11') return cctxt.call(context, 'set_console_mode', node_id=node_id, enabled=enabled) def create_port(self, context, port_obj, topic=None): """Synchronously, have a conductor validate and create a port. Create the port's information in the database and return a port object. The conductor will lock related node and trigger specific driver actions if they are needed. :param context: request context. :param port_obj: a created (but not saved) port object. :param topic: RPC topic. Defaults to self.topic. 
:returns: created port object. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.41') return cctxt.call(context, 'create_port', port_obj=port_obj) def update_port(self, context, port_obj, topic=None): """Synchronously, have a conductor update the port's information. Update the port's information in the database and return a port object. The conductor will lock related node and trigger specific driver actions if they are needed. :param context: request context. :param port_obj: a changed (but not saved) port object. :param topic: RPC topic. Defaults to self.topic. :returns: updated port object, including all fields. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.13') return cctxt.call(context, 'update_port', port_obj=port_obj) def update_portgroup(self, context, portgroup_obj, topic=None): """Synchronously, have a conductor update the portgroup's information. Update the portgroup's information in the database and return a portgroup object. The conductor will lock related node and trigger specific driver actions if they are needed. :param context: request context. :param portgroup_obj: a changed (but not saved) portgroup object. :param topic: RPC topic. Defaults to self.topic. :returns: updated portgroup object, including all fields. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.33') return cctxt.call(context, 'update_portgroup', portgroup_obj=portgroup_obj) def destroy_portgroup(self, context, portgroup, topic=None): """Delete a portgroup. :param context: request context. :param portgroup: portgroup object :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node associated with the portgroup does not exist. 
:raises: PortgroupNotEmpty if portgroup is not empty """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.33') return cctxt.call(context, 'destroy_portgroup', portgroup=portgroup) def get_driver_properties(self, context, driver_name, topic=None): """Get the properties of the driver. :param context: request context. :param driver_name: name of the driver. :param topic: RPC topic. Defaults to self.topic. :returns: a dictionary with : entries. :raises: DriverNotFound. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.16') return cctxt.call(context, 'get_driver_properties', driver_name=driver_name) def set_boot_device(self, context, node_id, device, persistent=False, topic=None): """Set the boot device for a node. Set the boot device to use on next reboot of the node. Be aware that not all drivers support this. :param context: request context. :param node_id: node id or uuid. :param device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Whether to set next-boot, or make the change permanent. Default: False. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid boot device is specified. :raises: MissingParameterValue if missing supplied info. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.17') return cctxt.call(context, 'set_boot_device', node_id=node_id, device=device, persistent=persistent) def get_boot_device(self, context, node_id, topic=None): """Get the current boot device. Returns the current boot device of a node. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. 
:raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: a dictionary containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.17') return cctxt.call(context, 'get_boot_device', node_id=node_id) def inject_nmi(self, context, node_id, topic=None): """Inject NMI for a node. Inject NMI (Non Maskable Interrupt) for a node immediately. Be aware that not all drivers support this. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management or management.inject_nmi. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid boot device is specified. :raises: MissingParameterValue if missing supplied info. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.40') return cctxt.call(context, 'inject_nmi', node_id=node_id) def get_supported_boot_devices(self, context, node_id, topic=None): """Get the list of supported devices. Returns the list of supported boot devices of a node. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: A list with the supported boot devices defined in :mod:`ironic.common.boot_devices`. 
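`get_boot_device` returns the `{boot_device, persistent}` dictionary documented above, where either value may be `None` when the driver cannot tell. A hypothetical consumer that turns that shape into a human-readable string (function name is illustrative):

```python
def describe_boot_device(info):
    """Render get_boot_device's return dict; both keys may be None."""
    device = info.get('boot_device') or 'unknown'
    if info.get('persistent') is None:
        scope = 'unknown persistence'
    elif info['persistent']:
        scope = 'all future boots'
    else:
        scope = 'next boot only'
    return '%s (%s)' % (device, scope)
```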
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.17') return cctxt.call(context, 'get_supported_boot_devices', node_id=node_id) def set_indicator_state(self, context, node_id, component, indicator, state, topic=None): """Set node hardware components indicator to the desired state. :param context: request context. :param node_id: node id or uuid. :param component: The hardware component, one of :mod:`ironic.common.components`. :param indicator: Indicator IDs, as reported by `get_supported_indicators`) :param state: Indicator state, one of mod:`ironic.common.indicator_states`. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid boot device is specified. :raises: MissingParameterValue if missing supplied info. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.50') return cctxt.call(context, 'set_indicator_state', node_id=node_id, component=component, indicator=indicator, state=state) def get_indicator_state(self, context, node_id, component, indicator, topic=None): """Get node hardware component indicator state. :param context: request context. :param node_id: node id or uuid. :param component: The hardware component, one of :mod:`ironic.common.components`. :param indicator: Indicator IDs, as reported by `get_supported_indicators`) :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: Indicator state, one of mod:`ironic.common.indicator_states`. 
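`get_supported_indicators` returns a dict of component name to indicator IDs, as shown in the docstring example above. A small sketch of flattening that structure into `(component, indicator)` pairs, e.g. to drive `get_indicator_state` calls one indicator at a time (the helper name is hypothetical):

```python
def iter_indicators(supported):
    """Flatten {component: [indicator_ids]} into sorted pairs."""
    return [(component, indicator)
            for component, indicators in sorted(supported.items())
            for indicator in indicators]
```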
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.50') return cctxt.call(context, 'get_indicator_state', node_id=node_id, component=component, indicator=indicator) def get_supported_indicators(self, context, node_id, component=None, topic=None): """Get node hardware components and their indicators. :param context: request context. :param node_id: node id or uuid. :param component: The hardware component, one of :mod:`ironic.common.components`. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: UnsupportedDriverExtension if the node's driver doesn't support management. :raises: InvalidParameterValue when the wrong driver info is specified. :raises: MissingParameterValue if missing supplied info. :returns: A dictionary of hardware components (:mod:`ironic.common.components`) as keys with indicator IDs as values. :: { 'chassis': ['enclosure-0'], 'system': ['blade-A'] 'drive': ['ssd0'] } """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.50') return cctxt.call(context, 'get_supported_indicators', node_id=node_id, component=component) def inspect_hardware(self, context, node_id, topic=None): """Signals the conductor service to perform hardware introspection. :param context: request context. :param node_id: node id or uuid. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: HardwareInspectionFailure :raises: NoFreeConductorWorker when there is no free worker to start async task. :raises: UnsupportedDriverExtension if the node's driver doesn't support inspection. :raises: InvalidStateRequested if 'inspect' is not a valid action to do in the current state. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.24') return cctxt.call(context, 'inspect_hardware', node_id=node_id) def destroy_port(self, context, port, topic=None): """Delete a port. :param context: request context. 
:param port: port object :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node associated with the port does not exist. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.25') return cctxt.call(context, 'destroy_port', port=port) def set_target_raid_config(self, context, node_id, target_raid_config, topic=None): """Stores the target RAID configuration on the node. Stores the target RAID configuration on node.target_raid_config :param context: request context. :param node_id: node id or uuid. :param target_raid_config: Dictionary containing the target RAID configuration. It may be an empty dictionary as well. :param topic: RPC topic. Defaults to self.topic. :raises: UnsupportedDriverExtension if the node's driver doesn't support RAID configuration. :raises: InvalidParameterValue, if validation of target raid config fails. :raises: MissingParameterValue, if some required parameters are missing. :raises: NodeLocked if node is locked by another conductor. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.30') return cctxt.call(context, 'set_target_raid_config', node_id=node_id, target_raid_config=target_raid_config) def get_raid_logical_disk_properties(self, context, driver_name, topic=None): """Get the logical disk properties for RAID configuration. Gets the information about logical disk properties which can be specified in the input RAID configuration. :param context: request context. :param driver_name: name of the driver :param topic: RPC topic. Defaults to self.topic. :raises: UnsupportedDriverExtension if the driver doesn't support RAID configuration. :raises: InterfaceNotFoundInEntrypoint if the default interface for a hardware type is invalid. :raises: NoValidDefaultForInterface if no default interface implementation can be found for this driver's RAID interface. 
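`set_target_raid_config` accepts a dictionary that may be empty (to clear the configuration) or carry a `logical_disks` list. The real validation happens in the conductor's RAID interface; a minimal sketch of the obvious parts of that contract, under the assumption that each logical disk needs at least `size_gb` and `raid_level`:

```python
def check_target_raid_config(config):
    """Sanity-check a target_raid_config dict (simplified, illustrative).

    An empty dict is valid and clears the configuration; otherwise
    'logical_disks' must be a non-empty list of disk dicts.
    """
    if not config:
        return []
    disks = config.get('logical_disks')
    if not isinstance(disks, list) or not disks:
        raise ValueError('logical_disks must be a non-empty list')
    for disk in disks:
        if 'size_gb' not in disk or 'raid_level' not in disk:
            raise ValueError('each logical disk needs size_gb and raid_level')
    return disks
```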
:returns: A dictionary containing the properties that can be mentioned for logical disks and a textual description for them. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.30') return cctxt.call(context, 'get_raid_logical_disk_properties', driver_name=driver_name) def do_node_clean(self, context, node_id, clean_steps, topic=None): """Signal to conductor service to perform manual cleaning on a node. :param context: request context. :param node_id: node ID or UUID. :param clean_steps: a list of clean step dictionaries. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue if validation of power driver interface failed. :raises: InvalidStateRequested if cleaning can not be performed. :raises: NodeInMaintenance if node is in maintenance mode. :raises: NodeLocked if node is locked by another conductor. :raises: NoFreeConductorWorker when there is no free worker to start async task. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.32') return cctxt.call(context, 'do_node_clean', node_id=node_id, clean_steps=clean_steps) def heartbeat(self, context, node_id, callback_url, agent_version, agent_token=None, topic=None): """Process a node heartbeat. :param context: request context. :param node_id: node ID or UUID. :param callback_url: URL to reach back to the ramdisk. :param topic: RPC topic. Defaults to self.topic. 
:param agent_version: the version of the agent that is heartbeating """ new_kws = {} version = '1.34' if self.client.can_send_version('1.42'): version = '1.42' new_kws['agent_version'] = agent_version if self.client.can_send_version('1.49'): version = '1.49' new_kws['agent_token'] = agent_token cctxt = self.client.prepare(topic=topic or self.topic, version=version) return cctxt.call(context, 'heartbeat', node_id=node_id, callback_url=callback_url, **new_kws) def object_class_action_versions(self, context, objname, objmethod, object_versions, args, kwargs): """Perform an action on a VersionedObject class. We want any conductor to handle this, so it is intentional that there is no topic argument for this method. :param context: The context within which to perform the action :param objname: The registry name of the object :param objmethod: The name of the action method to call :param object_versions: A dict of {objname: version} mappings :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :raises: NotImplementedError when an operator makes an error during upgrade :returns: The result of the action method, which may (or may not) be an instance of the implementing VersionedObject class. """ if not self.client.can_send_version('1.31'): raise NotImplementedError(_('Incompatible conductor version - ' 'please upgrade ironic-conductor ' 'first')) cctxt = self.client.prepare(topic=self.topic, version='1.31') return cctxt.call(context, 'object_class_action_versions', objname=objname, objmethod=objmethod, object_versions=object_versions, args=args, kwargs=kwargs) def object_action(self, context, objinst, objmethod, args, kwargs): """Perform an action on a VersionedObject instance. We want any conductor to handle this, so it is intentional that there is no topic argument for this method. 
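The `heartbeat` implementation above shows the backward-compatibility idiom used throughout this file: optional keyword arguments are only included when `can_send_version` says the negotiated cap allows them, and the call is pinned to the highest version both sides support. A distilled sketch of that pattern (`can_send` stands in for `self.client.can_send_version`):

```python
def build_heartbeat_call(can_send, agent_version, agent_token=None):
    """Pick the RPC version and kwargs for a heartbeat, per the version cap."""
    version, kwargs = '1.34', {}
    if can_send('1.42'):
        version = '1.42'
        kwargs['agent_version'] = agent_version
    if can_send('1.49'):
        version = '1.49'
        kwargs['agent_token'] = agent_token
    return version, kwargs
```

Against old conductors the extra kwargs are simply never sent, so a new API service keeps working mid-upgrade.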
:param context: The context within which to perform the action :param objinst: The object instance on which to perform the action :param objmethod: The name of the action method to call :param args: The positional arguments to the action method :param kwargs: The keyword arguments to the action method :raises: NotImplementedError when an operator makes an error during upgrade :returns: A tuple with the updates made to the object and the result of the action method """ if not self.client.can_send_version('1.31'): raise NotImplementedError(_('Incompatible conductor version - ' 'please upgrade ironic-conductor ' 'first')) cctxt = self.client.prepare(topic=self.topic, version='1.31') return cctxt.call(context, 'object_action', objinst=objinst, objmethod=objmethod, args=args, kwargs=kwargs) def object_backport_versions(self, context, objinst, object_versions): """Perform a backport of an object instance. The default behavior of the base VersionedObjectSerializer, upon receiving an object with a version newer than what is in the local registry, is to call this method to request a backport of the object. We want any conductor to handle this, so it is intentional that there is no topic argument for this method. :param context: The context within which to perform the backport :param objinst: An instance of a VersionedObject to be backported :param object_versions: A dict of {objname: version} mappings :raises: NotImplementedError when an operator makes an error during upgrade :returns: The downgraded instance of objinst """ if not self.client.can_send_version('1.31'): raise NotImplementedError(_('Incompatible conductor version - ' 'please upgrade ironic-conductor ' 'first')) cctxt = self.client.prepare(topic=self.topic, version='1.31') return cctxt.call(context, 'object_backport_versions', objinst=objinst, object_versions=object_versions) def destroy_volume_connector(self, context, connector, topic=None): """Delete a volume connector. Delete the volume connector. 
The conductor will lock the related node during this operation. :param context: request context :param connector: volume connector object :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor :raises: NodeNotFound if the node associated with the connector does not exist :raises: VolumeConnectorNotFound if the volume connector cannot be found """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.35') return cctxt.call(context, 'destroy_volume_connector', connector=connector) def update_volume_connector(self, context, connector, topic=None): """Update the volume connector's information. Update the volume connector's information in the database and return a volume connector object. The conductor will lock the related node during this operation. :param context: request context :param connector: a changed (but not saved) volume connector object :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue if the volume connector's UUID is being changed :raises: NodeLocked if node is locked by another conductor :raises: NodeNotFound if the node associated with the connector does not exist :raises: VolumeConnectorNotFound if the volume connector cannot be found :raises: VolumeConnectorTypeAndIdAlreadyExists if another connector already exists with the same values for type and connector_id fields :returns: updated volume connector object, including all fields. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.35') return cctxt.call(context, 'update_volume_connector', connector=connector) def destroy_volume_target(self, context, target, topic=None): """Delete a volume target. :param context: request context :param target: volume target object :param topic: RPC topic. Defaults to self.topic. 
:raises: NodeLocked if node is locked by another conductor :raises: NodeNotFound if the node associated with the target does not exist :raises: VolumeTargetNotFound if the volume target cannot be found """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.37') return cctxt.call(context, 'destroy_volume_target', target=target) def update_volume_target(self, context, target, topic=None): """Update the volume target's information. Update the volume target's information in the database and return a volume target object. The conductor will lock the related node during this operation. :param context: request context :param target: a changed (but not saved) volume target object :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue if the volume target's UUID is being changed :raises: NodeLocked if the node is already locked :raises: NodeNotFound if the node associated with the volume target does not exist :raises: VolumeTargetNotFound if the volume target cannot be found :raises: VolumeTargetBootIndexAlreadyExists if a volume target already exists with the same node ID and boot index values :returns: updated volume target object, including all fields """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.37') return cctxt.call(context, 'update_volume_target', target=target) def vif_attach(self, context, node_id, vif_info, topic=None): """Attach VIF to a node :param context: request context. :param node_id: node ID or UUID. :param vif_info: a dictionary representing VIF object. It must have an 'id' key, whose value is a unique identifier for that VIF. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked, if node has an exclusive lock held on it :raises: NetworkError, if an error occurs during attaching the VIF. :raises: InvalidParameterValue, if a parameter that's required for VIF attach is wrong/missing. 
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.38') return cctxt.call(context, 'vif_attach', node_id=node_id, vif_info=vif_info) def vif_detach(self, context, node_id, vif_id, topic=None): """Detach VIF from a node :param context: request context. :param node_id: node ID or UUID. :param vif_id: an ID of a VIF. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked, if node has an exclusive lock held on it :raises: NetworkError, if an error occurs during detaching the VIF. :raises: InvalidParameterValue, if a parameter that's required for VIF detach is wrong/missing. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.38') return cctxt.call(context, 'vif_detach', node_id=node_id, vif_id=vif_id) def vif_list(self, context, node_id, topic=None): """List attached VIFs for a node :param context: request context. :param node_id: node ID or UUID. :param topic: RPC topic. Defaults to self.topic. :returns: List of VIF dictionaries, each dictionary will have an 'id' entry with the ID of the VIF. :raises: NetworkError, if an error occurs during listing the VIFs. :raises: InvalidParameterValue, if a parameter that's required for VIF list is wrong/missing. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.38') return cctxt.call(context, 'vif_list', node_id=node_id) def do_node_rescue(self, context, node_id, rescue_password, topic=None): """Signal to conductor service to perform a rescue. :param context: request context. :param node_id: node ID or UUID. :param rescue_password: A string representing the password to be set inside the rescue environment. :param topic: RPC topic. Defaults to self.topic. :raises: InstanceRescueFailure :raises: NoFreeConductorWorker when there is no free worker to start async task. The node must already be configured and in the appropriate state before this method is called. 
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.43') return cctxt.call(context, 'do_node_rescue', node_id=node_id, rescue_password=rescue_password) def do_node_unrescue(self, context, node_id, topic=None): """Signal to conductor service to perform an unrescue. :param context: request context. :param node_id: node ID or UUID. :param topic: RPC topic. Defaults to self.topic. :raises: InstanceUnrescueFailure :raises: NoFreeConductorWorker when there is no free worker to start async task. The node must already be configured and in the appropriate state before this method is called. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.43') return cctxt.call(context, 'do_node_unrescue', node_id=node_id) def add_node_traits(self, context, node_id, traits, replace=False, topic=None): """Add or replace traits for a node. :param context: request context. :param node_id: node ID or UUID. :param traits: a list of traits to add to the node. :param replace: True to replace all of the node's traits. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidParameterValue if adding the traits would exceed the per-node traits limit. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node does not exist. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.44') return cctxt.call(context, 'add_node_traits', node_id=node_id, traits=traits, replace=replace) def remove_node_traits(self, context, node_id, traits, topic=None): """Remove some or all traits from a node. :param context: request context. :param node_id: node ID or UUID. :param traits: a list of traits to remove from the node, or None. If None, all traits will be removed from the node. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :raises: NodeNotFound if the node does not exist. :raises: NodeTraitNotFound if one of the traits is not found. 
""" cctxt = self.client.prepare(topic=topic or self.topic, version='1.44') return cctxt.call(context, 'remove_node_traits', node_id=node_id, traits=traits) def create_allocation(self, context, allocation, topic=None): """Create an allocation. :param context: request context. :param allocation: an allocation object. :param topic: RPC topic. Defaults to self.topic. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.48') return cctxt.call(context, 'create_allocation', allocation=allocation) def destroy_allocation(self, context, allocation, topic=None): """Delete an allocation. :param context: request context. :param allocation: an allocation object. :param topic: RPC topic. Defaults to self.topic. :raises: InvalidState if the associated node is in the wrong provision state to perform deallocation. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.48') return cctxt.call(context, 'destroy_allocation', allocation=allocation) def get_node_with_token(self, context, node_id, topic=None): """Request the node from the conductor with an agent token :param context: request context. :param node_id: node ID or UUID. :param topic: RPC topic. Defaults to self.topic. :raises: NodeLocked if node is locked by another conductor. :returns: A Node object with agent token. """ cctxt = self.client.prepare(topic=topic or self.topic, version='1.49') return cctxt.call(context, 'get_node_with_token', node_id=node_id) ironic-15.0.0/ironic/conductor/task_manager.py0000664000175000017500000005403413652514273021411 0ustar zuulzuul00000000000000# coding=utf-8 # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A context manager to perform a series of tasks on a set of resources. :class:`TaskManager` is a context manager, created on-demand to allow synchronized access to a node and its resources. The :class:`TaskManager` will, by default, acquire an exclusive lock on a node for the duration that the TaskManager instance exists. You may create a TaskManager instance without locking by passing "shared=True" when creating it, but certain operations on the resources held by such an instance of TaskManager will not be possible. Requiring this exclusive lock guards against parallel operations interfering with each other. A shared lock is useful when performing non-interfering operations, such as validating the driver interfaces. An exclusive lock is stored in the database to coordinate between :class:`ironic.conductor.manager` instances, that are typically deployed on different hosts. :class:`TaskManager` methods, as well as driver methods, may be decorated to determine whether their invocation requires an exclusive lock. The TaskManager instance exposes certain node resources and properties as attributes that you may access: task.context The context passed to TaskManager() task.shared False if Node is locked, True if it is not locked. 
(The 'shared' kwarg arg of TaskManager()) task.node The Node object task.ports Ports belonging to the Node task.portgroups Portgroups belonging to the Node task.volume_connectors Storage connectors belonging to the Node task.volume_targets Storage targets assigned to the Node task.driver The Driver for the Node, or the Driver based on the 'driver_name' kwarg of TaskManager(). Example usage: :: with task_manager.acquire(context, node_id, purpose='power on') as task: task.driver.power.power_on(task.node) If you need to execute task-requiring code in a background thread, the TaskManager instance provides an interface to handle this for you, making sure to release resources when the thread finishes (successfully or if an exception occurs). Common use of this is within the Manager like so: :: with task_manager.acquire(context, node_id, purpose='some work') as task: task.spawn_after(self._spawn_worker, utils.node_power_action, task, new_state) All exceptions that occur in the current GreenThread as part of the spawn handling are re-raised. You can specify a hook to execute custom code when such exceptions occur. For example, the hook is a more elegant solution than wrapping the "with task_manager.acquire()" with a try..exception block. (Note that this hook does not handle exceptions raised in the background thread.): :: def on_error(e): if isinstance(e, Exception): ... 
with task_manager.acquire(context, node_id, purpose='some work') as task: task.set_spawn_error_hook(on_error) task.spawn_after(self._spawn_worker, utils.node_power_action, task, new_state) """ import copy import functools import futurist from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import timeutils import retrying from ironic.common import driver_factory from ironic.common import exception from ironic.common.i18n import _ from ironic.common import states from ironic.conductor import notification_utils as notify from ironic import objects from ironic.objects import fields LOG = logging.getLogger(__name__) CONF = cfg.CONF def require_exclusive_lock(f): """Decorator to require an exclusive lock. Decorated functions must take a :class:`TaskManager` as the first parameter. Decorated class methods should take a :class:`TaskManager` as the first parameter after "self". """ @functools.wraps(f) def wrapper(*args, **kwargs): # NOTE(dtantsur): this code could be written simpler, but then unit # testing decorated functions is pretty hard, as we usually pass a Mock # object instead of TaskManager there. if len(args) > 1: task = args[1] if isinstance(args[1], TaskManager) else args[0] else: task = args[0] if task.shared: raise exception.ExclusiveLockRequired() # NOTE(lintan): This is a workaround to set the context of async tasks, # which should contain an exclusive lock. task.context.ensure_thread_contain_context() return f(*args, **kwargs) return wrapper def acquire(context, *args, **kwargs): """Shortcut for acquiring a lock on a Node. :param context: Request context. :returns: An instance of :class:`TaskManager`. """ # NOTE(lintan): This is a workaround to set the context of periodic tasks. context.ensure_thread_contain_context() return TaskManager(context, *args, **kwargs) class TaskManager(object): """Context manager for tasks. 
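The require_exclusive_lock() decorator defined above guards node-mutating functions against being invoked under a shared lock. A self-contained sketch of the same pattern, with a plain exception and a minimal task class standing in for ironic's:

```python
import functools


class ExclusiveLockRequired(Exception):
    """Stand-in for ironic.common.exception.ExclusiveLockRequired."""


def require_exclusive_lock(f):
    # Reject calls made while only a shared lock is held, as the
    # decorator above does for TaskManager methods.
    @functools.wraps(f)
    def wrapper(task, *args, **kwargs):
        if task.shared:
            raise ExclusiveLockRequired()
        return f(task, *args, **kwargs)
    return wrapper


class Task:
    """Hypothetical minimal task object with only the 'shared' flag."""

    def __init__(self, shared):
        self.shared = shared


@require_exclusive_lock
def power_on(task):
    return 'powered on'
```

Calling `power_on(Task(shared=True))` raises ExclusiveLockRequired, while an exclusive task passes straight through.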
This class wraps the locking, driver loading, and acquisition of related resources (eg, Node and Ports) when beginning a unit of work. """ def __init__(self, context, node_id, shared=False, purpose='unspecified action', retry=True, load_driver=True): """Create a new TaskManager. Acquire a lock on a node. The lock can be either shared or exclusive. Shared locks may be used for read-only or non-disruptive actions only, and must be considerate to what other threads may be doing on the same node at the same time. :param context: request context :param node_id: ID or UUID of node to lock. :param shared: Boolean indicating whether to take a shared or exclusive lock. Default: False. :param purpose: human-readable purpose to put to debug logs. :param retry: whether to retry locking if it fails. Default: True. :param load_driver: whether to load the ``driver`` object. Set this to False if loading the driver is undesired or impossible. :raises: DriverNotFound :raises: InterfaceNotFoundInEntrypoint :raises: NodeNotFound :raises: NodeLocked """ self._spawn_method = None self._on_error_method = None self.context = context self._node = None self.node_id = node_id self.shared = shared self._retry = retry self.fsm = states.machine.copy() self._purpose = purpose self._debug_timer = timeutils.StopWatch() # states and event for notification self._prev_provision_state = None self._prev_target_provision_state = None self._event = None self._saved_node = None try: node = objects.Node.get(context, node_id) LOG.debug("Attempting to get %(type)s lock on node %(node)s (for " "%(purpose)s)", {'type': 'shared' if shared else 'exclusive', 'node': node.uuid, 'purpose': purpose}) if not self.shared: self._lock() else: self._debug_timer.restart() self.node = node self.ports = objects.Port.list_by_node_id(context, self.node.id) self.portgroups = objects.Portgroup.list_by_node_id(context, self.node.id) self.volume_connectors = objects.VolumeConnector.list_by_node_id( context, self.node.id) 
self.volume_targets = objects.VolumeTarget.list_by_node_id( context, self.node.id) if load_driver: self.driver = driver_factory.build_driver_for_task(self) else: self.driver = None except Exception: with excutils.save_and_reraise_exception(): self.release_resources() @property def node(self): return self._node @node.setter def node(self, node): self._node = node if node is not None: self.fsm.initialize(start_state=self.node.provision_state, target_state=self.node.target_provision_state) def _lock(self): self._debug_timer.restart() if self._retry: attempts = CONF.conductor.node_locked_retry_attempts else: attempts = 1 # NodeLocked exceptions can be annoying. Let's try to alleviate # some of that pain by retrying our lock attempts. The retrying # module expects a wait_fixed value in milliseconds. @retrying.retry( retry_on_exception=lambda e: isinstance(e, exception.NodeLocked), stop_max_attempt_number=attempts, wait_fixed=CONF.conductor.node_locked_retry_interval * 1000) def reserve_node(): self.node = objects.Node.reserve(self.context, CONF.host, self.node_id) LOG.debug("Node %(node)s successfully reserved for %(purpose)s " "(took %(time).2f seconds)", {'node': self.node.uuid, 'purpose': self._purpose, 'time': self._debug_timer.elapsed()}) self._debug_timer.restart() reserve_node() def upgrade_lock(self, purpose=None): """Upgrade a shared lock to an exclusive lock. Also reloads node object from the database. If lock is already exclusive only changes the lock purpose when provided with one. 
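_lock() above retries reservation on NodeLocked with a fixed wait via the `retrying` library. The same behaviour can be sketched without that dependency (`reserve` is any hypothetical callable that raises NodeLocked while another conductor holds the node):

```python
import time


class NodeLocked(Exception):
    """Stand-in for ironic.common.exception.NodeLocked."""


def reserve_with_retry(reserve, attempts, interval):
    # Retry only on NodeLocked, sleeping a fixed interval between tries
    # and re-raising once the attempt budget is exhausted.
    for attempt in range(attempts):
        try:
            return reserve()
        except NodeLocked:
            if attempt == attempts - 1:
                raise
            time.sleep(interval)
```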
:param purpose: optionally change the purpose of the lock :raises: NodeLocked if an exclusive lock remains on the node after "node_locked_retry_attempts" """ if purpose is not None: self._purpose = purpose if self.shared: LOG.debug('Upgrading shared lock on node %(uuid)s for %(purpose)s ' 'to an exclusive one (shared lock was held %(time).2f ' 'seconds)', {'uuid': self.node.uuid, 'purpose': self._purpose, 'time': self._debug_timer.elapsed()}) self._lock() self.shared = False def spawn_after(self, _spawn_method, *args, **kwargs): """Call this to spawn a thread to complete the task. The specified method will be called when the TaskManager instance exits. :param _spawn_method: a method that returns a GreenThread object :param args: args passed to the method. :param kwargs: additional kwargs passed to the method. """ self._spawn_method = _spawn_method self._spawn_args = args self._spawn_kwargs = kwargs def set_spawn_error_hook(self, _on_error_method, *args, **kwargs): """Create a hook to handle exceptions when spawning a task. Create a hook that gets called upon an exception being raised from spawning a background thread to do a task. :param _on_error_method: a callable object, its first parameter should accept the Exception object that was raised. :param args: additional args passed to the callable object. :param kwargs: additional kwargs passed to the callable object. """ self._on_error_method = _on_error_method self._on_error_args = args self._on_error_kwargs = kwargs def release_resources(self): """Unlock a node and release resources. If an exclusive lock is held, unlock the node. Reset attributes to make it clear that this instance of TaskManager should no longer be accessed. """ if not self.shared: try: if self.node: objects.Node.release(self.context, CONF.host, self.node.id) except exception.NodeNotFound: # squelch the exception if the node was deleted # within the task's context. 
pass if self.node: LOG.debug("Successfully released %(type)s lock for %(purpose)s " "on node %(node)s (lock was held %(time).2f sec)", {'type': 'shared' if self.shared else 'exclusive', 'purpose': self._purpose, 'node': self.node.uuid, 'time': self._debug_timer.elapsed()}) self.node = None self.driver = None self.ports = None self.portgroups = None self.volume_connectors = None self.volume_targets = None self.fsm = None def _write_exception(self, future): """Set node last_error if exception raised in thread.""" node = self.node # do not rewrite existing error if node and node.last_error is None: method = self._spawn_args[0].__name__ try: exc = future.exception() except futurist.CancelledError: LOG.exception("Execution of %(method)s for node %(node)s " "was canceled.", {'method': method, 'node': node.uuid}) else: if exc is not None: msg = _("Async execution of %(method)s failed with error: " "%(error)s") % {'method': method, 'error': str(exc)} node.last_error = msg try: node.save() except exception.NodeNotFound: pass def _notify_provision_state_change(self): """Emit notification about change of the node provision state.""" if self._event is None: return if self.node is None: # Rare case if resource released before notification task = copy.copy(self) task.fsm = states.machine.copy() task.node = self._saved_node else: task = self node = task.node state = node.provision_state prev_state = self._prev_provision_state new_unstable = state in states.UNSTABLE_STATES prev_unstable = prev_state in states.UNSTABLE_STATES level = fields.NotificationLevel.INFO if self._event in ('fail', 'error'): status = fields.NotificationStatus.ERROR level = fields.NotificationLevel.ERROR elif (prev_unstable, new_unstable) == (False, True): status = fields.NotificationStatus.START elif (prev_unstable, new_unstable) == (True, False): status = fields.NotificationStatus.END else: status = fields.NotificationStatus.SUCCESS notify.emit_provision_set_notification( task, level, status, 
self._prev_provision_state, self._prev_target_provision_state, self._event) # reset saved event, avoiding duplicate notification self._event = None def _thread_release_resources(self, fut): """Thread callback to release resources.""" try: self._write_exception(fut) finally: self.release_resources() def process_event(self, event, callback=None, call_args=None, call_kwargs=None, err_handler=None, target_state=None): """Process the given event for the task's current state. :param event: the name of the event to process :param callback: optional callback to invoke upon event transition :param call_args: optional args to pass to the callback method :param call_kwargs: optional kwargs to pass to the callback method :param err_handler: optional error handler to invoke if the callback fails, eg. because there are no workers available (err_handler should accept arguments node, prev_prov_state, and prev_target_state) :param target_state: if specified, the target provision state for the node. Otherwise, use the target state from the fsm :raises: InvalidState if the event is not allowed by the associated state machine """ # save previous states and event self._prev_provision_state = self.node.provision_state self._prev_target_provision_state = self.node.target_provision_state self._event = event # Advance the state model for the given event. Note that this doesn't # alter the node in any way. This may raise InvalidState, if this event # is not allowed in the current state. 
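_notify_provision_state_change() above derives the notification status from the event name and from whether the previous and new provision states are "unstable". A compact sketch of that mapping, using plain strings in place of ironic's NotificationStatus constants and an illustrative subset of states.UNSTABLE_STATES:

```python
# Illustrative subset; ironic's real list lives in ironic.common.states.
UNSTABLE_STATES = {'deploying', 'cleaning', 'deleting'}


def notification_status(prev_state, new_state, event):
    # fail/error events always map to 'error'; otherwise a transition
    # into an unstable state is a 'start', out of one is an 'end', and
    # anything else is a plain 'success'.
    if event in ('fail', 'error'):
        return 'error'
    prev_unstable = prev_state in UNSTABLE_STATES
    new_unstable = new_state in UNSTABLE_STATES
    if (prev_unstable, new_unstable) == (False, True):
        return 'start'
    if (prev_unstable, new_unstable) == (True, False):
        return 'end'
    return 'success'
```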
self.fsm.process_event(event, target_state=target_state) # stash current states in the error handler if callback is set, # in case we fail to get a worker from the pool if err_handler and callback: self.set_spawn_error_hook(err_handler, self.node, self.node.provision_state, self.node.target_provision_state) self.node.provision_state = self.fsm.current_state # NOTE(lucasagomes): If there's no extra processing # (callback) and we've moved to a stable state, make sure the # target_provision_state is cleared if not callback and self.fsm.is_stable(self.node.provision_state): self.node.target_provision_state = states.NOSTATE else: self.node.target_provision_state = self.fsm.target_state # set up the async worker if callback: # clear the error if we're going to start work in a callback self.node.last_error = None if call_args is None: call_args = () if call_kwargs is None: call_kwargs = {} self.spawn_after(callback, *call_args, **call_kwargs) # publish the state transition by saving the Node self.node.save() log_message = ('Node %(node)s moved to provision state "%(state)s" ' 'from state "%(previous)s"; target provision state is ' '"%(target)s"' % {'node': self.node.uuid, 'state': self.node.provision_state, 'target': self.node.target_provision_state, 'previous': self._prev_provision_state}) if (self.node.provision_state.endswith('failed') or self.node.provision_state == 'error'): LOG.error(log_message) else: LOG.info(log_message) if callback is None: self._notify_provision_state_change() else: # save the node, in case it is released before a notification is # emitted at __exit__(). self._saved_node = self.node def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): if exc_type is None and self._spawn_method is not None: # Spawn a worker to complete the task # The linked callback below will be called whenever: # - background task finished with no errors. # - background task has crashed with exception. 
# - callback was added after the background task has # finished or crashed. While eventlet currently doesn't # schedule the new thread until the current thread blocks # for some reason, this is true. # All of the above are asserted in tests such that we'll # catch if eventlet ever changes this behavior. fut = None try: fut = self._spawn_method(*self._spawn_args, **self._spawn_kwargs) # NOTE(comstud): Trying to use a lambda here causes # the callback to not occur for some reason. This # also makes it easier to test. fut.add_done_callback(self._thread_release_resources) # Don't unlock! The unlock will occur when the # thread finishes. # NOTE(yuriyz): A race condition with process_event() # in callback is possible here if eventlet changes behavior. # E.g., if the execution of the new thread (that handles the # event processing) finishes before we get here, that new # thread may emit the "end" notification before we emit the # following "start" notification. self._notify_provision_state_change() return except Exception as e: with excutils.save_and_reraise_exception(): try: # Execute the on_error hook if set if self._on_error_method: self._on_error_method(e, *self._on_error_args, **self._on_error_kwargs) except Exception: LOG.warning("Task's on_error hook failed to " "call %(method)s on node %(node)s", {'method': self._on_error_method.__name__, 'node': self.node.uuid}) if fut is not None: # This means the add_done_callback() failed for some # reason. Nuke the thread. fut.cancel() self.release_resources() self.release_resources() ironic-15.0.0/ironic/conductor/notification_utils.py0000664000175000017500000001650413652514273022663 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_log import log from oslo_messaging import exceptions as oslo_msg_exc from oslo_versionedobjects import exception as oslo_vo_exc from ironic.common import exception from ironic.common.i18n import _ from ironic.objects import fields from ironic.objects import node as node_objects from ironic.objects import notification LOG = log.getLogger(__name__) CONF = cfg.CONF def _emit_conductor_node_notification(task, notification_method, payload_method, action, level, status, **kwargs): """Helper for emitting a conductor notification about a node. :param task: a TaskManager instance. :param notification_method: Constructor for the notification itself. :param payload_method: Constructor for the notification payload. Node should be first argument of the method. :param action: Action string to go in the EventType. :param level: Notification level. One of `ironic.objects.fields.NotificationLevel.ALL` :param status: Status to go in the EventType. One of `ironic.objects.fields.NotificationStatus.ALL` :param **kwargs: kwargs to use when creating the notification payload. Passed to the payload_method. """ try: # Prepare our exception message just in case exception_values = {"node": task.node.uuid, "action": action, "status": status, "level": level, "notification_method": notification_method.__name__, "payload_method": payload_method.__name__} exception_message = (_("Failed to send baremetal.node." 
"%(action)s.%(status)s notification for node " "%(node)s with level %(level)s, " "notification_method %(notification_method)s, " "payload_method %(payload_method)s, error " "%(error)s")) payload = payload_method(task.node, **kwargs) notification.mask_secrets(payload) notification_method( publisher=notification.NotificationPublisher( service='ironic-conductor', host=CONF.host), event_type=notification.EventType( object='node', action=action, status=status), level=level, payload=payload).emit(task.context) except (exception.NotificationSchemaObjectError, exception.NotificationSchemaKeyError, exception.NotificationPayloadError, oslo_msg_exc.MessageDeliveryFailure, oslo_vo_exc.VersionedObjectsException) as e: exception_values['error'] = e LOG.warning(exception_message, exception_values) except Exception as e: # NOTE(mariojv) For unknown exceptions, also log the traceback. exception_values['error'] = e LOG.exception(exception_message, exception_values) def emit_power_set_notification(task, level, status, to_power): """Helper for conductor sending a set power state notification. :param task: a TaskManager instance. :param level: Notification level. One of `ironic.objects.fields.NotificationLevel.ALL` :param status: Status to go in the EventType. One of `ironic.objects.fields.NotificationStatus.SUCCESS` or ERROR. ERROR indicates that ironic-conductor couldn't retrieve the power state for this node, or that it couldn't set the power state of the node. :param to_power: the power state the conductor is attempting to set on the node. This is used instead of the node's target_power_state attribute since the "baremetal.node.power_set.start" notification is sent early, before target_power_state is set on the node. 
""" _emit_conductor_node_notification( task, node_objects.NodeSetPowerStateNotification, node_objects.NodeSetPowerStatePayload, 'power_set', level, status, to_power=to_power ) def emit_power_state_corrected_notification(task, from_power): """Helper for conductor sending a node power state corrected notification. When ironic detects that the actual power state on a bare metal hardware is different from the power state on an ironic node (DB), the ironic node's power state is corrected to be that of the bare metal hardware. A notification is emitted about this after the database is updated to reflect this correction. :param task: a TaskManager instance. :param from_power: the power state of the node before this change was detected """ _emit_conductor_node_notification( task, node_objects.NodeCorrectedPowerStateNotification, node_objects.NodeCorrectedPowerStatePayload, 'power_state_corrected', fields.NotificationLevel.INFO, fields.NotificationStatus.SUCCESS, from_power=from_power ) def emit_provision_set_notification(task, level, status, prev_state, prev_target, event): """Helper for conductor sending a set provision state notification. :param task: a TaskManager instance. :param level: One of fields.NotificationLevel. :param status: One of fields.NotificationStatus. :param prev_state: Previous provision state. :param prev_target: Previous target provision state. :param event: FSM event that triggered provision state change. """ _emit_conductor_node_notification( task, node_objects.NodeSetProvisionStateNotification, node_objects.NodeSetProvisionStatePayload, 'provision_set', level, status, prev_state=prev_state, prev_target=prev_target, event=event ) def emit_console_notification(task, action, status): """Helper for conductor sending a set console state notification. :param task: a TaskManager instance. :param action: Action string to go in the EventType. Must be either 'console_set' or 'console_restore'. 
:param status: One of `ironic.objects.fields.NotificationStatus.START`, END or ERROR. """ if status == fields.NotificationStatus.ERROR: level = fields.NotificationLevel.ERROR else: level = fields.NotificationLevel.INFO _emit_conductor_node_notification( task, node_objects.NodeConsoleNotification, node_objects.NodePayload, action, level, status, ) ironic-15.0.0/ironic/conductor/utils.py # coding=utf-8 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
import contextlib import crypt import datetime from distutils.version import StrictVersion import secrets import time from openstack.baremetal import configdrive as os_configdrive from oslo_config import cfg from oslo_log import log from oslo_serialization import jsonutils from oslo_service import loopingcall from oslo_utils import excutils from oslo_utils import timeutils from ironic.common import boot_devices from ironic.common import exception from ironic.common import faults from ironic.common.i18n import _ from ironic.common import network from ironic.common import nova from ironic.common import states from ironic.conductor import notification_utils as notify_utils from ironic.conductor import task_manager from ironic.objects import fields LOG = log.getLogger(__name__) CONF = cfg.CONF PASSWORD_HASH_FORMAT = { 'sha256': crypt.METHOD_SHA256, 'sha512': crypt.METHOD_SHA512, } @task_manager.require_exclusive_lock def node_set_boot_device(task, device, persistent=False): """Set the boot device for a node. If the node that the boot device change is being requested for is in ADOPTING state, the boot device will not be set as that change could potentially result in the future running state of an adopted node being modified erroneously. :param task: a TaskManager instance. :param device: Boot device. Values are vendor-specific. :param persistent: Whether to set next-boot, or make the change permanent. Default: False. :raises: InvalidParameterValue if the validation of the ManagementInterface fails. """ task.driver.management.validate(task) if task.node.provision_state != states.ADOPTING: task.driver.management.set_boot_device(task, device=device, persistent=persistent) def node_get_boot_mode(task): """Read currently set boot mode from a node. Reads the boot mode for a node. If boot mode can't be discovered, `None` is returned. :param task: a TaskManager instance. :raises: DriverOperationError or its derivative in case of driver runtime error. 
:raises: UnsupportedDriverExtension if current driver does not have management interface or `get_boot_mode()` method is not supported. :returns: Boot mode. One of :mod:`ironic.common.boot_mode` or `None` if boot mode can't be discovered """ task.driver.management.validate(task) return task.driver.management.get_boot_mode(task) # TODO(ietingof): remove `Sets the boot mode...` from the docstring # once classic drivers are gone @task_manager.require_exclusive_lock def node_set_boot_mode(task, mode): """Set the boot mode for a node. Sets the boot mode for a node if the node's driver interface contains a 'management' interface. If the node that the boot mode change is being requested for is in ADOPTING state, the boot mode will not be set as that change could potentially result in the future running state of an adopted node being modified erroneously. :param task: a TaskManager instance. :param mode: Boot mode. Values are one of :mod:`ironic.common.boot_modes` :raises: InvalidParameterValue if the validation of the ManagementInterface fails. :raises: DriverOperationError or its derivative in case of driver runtime error. :raises: UnsupportedDriverExtension if current driver does not have vendor interface or method is unsupported. """ if task.node.provision_state == states.ADOPTING: return task.driver.management.validate(task) boot_modes = task.driver.management.get_supported_boot_modes(task) if mode not in boot_modes: msg = _("Unsupported boot mode %(mode)s specified for " "node %(node_id)s. Supported boot modes are: " "%(modes)s") % {'mode': mode, 'modes': ', '.join(boot_modes), 'node_id': task.node.uuid} raise exception.InvalidParameterValue(msg) task.driver.management.set_boot_mode(task, mode=mode) def node_wait_for_power_state(task, new_state, timeout=None): """Wait for node to be in new power state. :param task: a TaskManager instance. :param new_state: the desired new power state, one of the power states in :mod:`ironic.common.states`. 
:param timeout: number of seconds to wait before giving up. If not specified, uses the conductor.power_state_change_timeout config value. :raises: PowerStateFailure if timed out """ retry_timeout = (timeout or CONF.conductor.power_state_change_timeout) def _wait(): status = task.driver.power.get_power_state(task) if status == new_state: raise loopingcall.LoopingCallDone(retvalue=status) # NOTE(sambetts): Return False to trigger BackOffLoopingCall to start # backing off. return False try: timer = loopingcall.BackOffLoopingCall(_wait) return timer.start(initial_delay=1, timeout=retry_timeout).wait() except loopingcall.LoopingCallTimeOut: LOG.error('Timed out after %(retry_timeout)s secs waiting for ' '%(state)s on node %(node_id)s.', {'retry_timeout': retry_timeout, 'state': new_state, 'node_id': task.node.uuid}) raise exception.PowerStateFailure(pstate=new_state) def _calculate_target_state(new_state): if new_state in (states.POWER_ON, states.REBOOT, states.SOFT_REBOOT): target_state = states.POWER_ON elif new_state in (states.POWER_OFF, states.SOFT_POWER_OFF): target_state = states.POWER_OFF else: target_state = None return target_state def _can_skip_state_change(task, new_state): """Check if we can ignore the power state change request for the node. Check if we should ignore the requested power state change. This can occur if the requested power state is already the same as our current state. This only works for power on and power off state changes. More complex power state changes, like reboot, are not skipped. :param task: a TaskManager instance containing the node to act on. :param new_state: The requested power state to change to. This can be any power state from ironic.common.states. :returns: True if should ignore the requested power state change. False otherwise """ # We only ignore certain state changes. 
So if the desired new_state is not # one of them, then we can return early and not do an un-needed # get_power_state() call if new_state not in (states.POWER_ON, states.POWER_OFF, states.SOFT_POWER_OFF): return False node = task.node def _not_going_to_change(): # Neither the ironic service nor the hardware has erred. The # node is, for some reason, already in the requested state, # though we don't know why. eg, perhaps the user previously # requested the node POWER_ON, the network delayed those IPMI # packets, and they are trying again -- but the node finally # responds to the first request, and so the second request # gets to this check and stops. # This isn't an error, so we'll clear last_error field # (from previous operation), log a warning, and return. node['last_error'] = None # NOTE(dtantsur): under rare conditions we can get out of sync here node['power_state'] = curr_state node['target_power_state'] = states.NOSTATE node.save() notify_utils.emit_power_set_notification( task, fields.NotificationLevel.INFO, fields.NotificationStatus.END, new_state) LOG.warning("Not going to change node %(node)s power state because " "current state = requested state = '%(state)s'.", {'node': node.uuid, 'state': curr_state}) try: curr_state = task.driver.power.get_power_state(task) except Exception as e: with excutils.save_and_reraise_exception(): node['last_error'] = _( "Failed to change power state to '%(target)s'. 
" "Error: %(error)s") % {'target': new_state, 'error': e} node['target_power_state'] = states.NOSTATE node.save() notify_utils.emit_power_set_notification( task, fields.NotificationLevel.ERROR, fields.NotificationStatus.ERROR, new_state) if curr_state == states.POWER_ON: if new_state == states.POWER_ON: _not_going_to_change() return True elif curr_state == states.POWER_OFF: if new_state in (states.POWER_OFF, states.SOFT_POWER_OFF): _not_going_to_change() return True else: # if curr_state == states.ERROR: # be optimistic and continue action LOG.warning("Driver returns ERROR power state for node %s.", node.uuid) return False @task_manager.require_exclusive_lock def node_power_action(task, new_state, timeout=None): """Change power state or reset for a node. Perform the requested power action if the transition is required. :param task: a TaskManager instance containing the node to act on. :param new_state: Any power state from ironic.common.states. :param timeout: timeout (in seconds) positive integer (> 0) for any power state. ``None`` indicates to use default timeout. :raises: InvalidParameterValue when the wrong state is specified or the wrong driver info is specified. :raises: StorageError when a failure occurs updating the node's storage interface upon setting power on. :raises: other exceptions by the node's power driver if something wrong occurred during the power action. """ notify_utils.emit_power_set_notification( task, fields.NotificationLevel.INFO, fields.NotificationStatus.START, new_state) node = task.node if _can_skip_state_change(task, new_state): return target_state = _calculate_target_state(new_state) # Set the target_power_state and clear any last_error, if we're # starting a new operation. This will expose to other processes # and clients that work is in progress. 
if node['target_power_state'] != target_state: node['target_power_state'] = target_state node['last_error'] = None driver_internal_info = node.driver_internal_info driver_internal_info['last_power_state_change'] = str( timeutils.utcnow().isoformat()) node.driver_internal_info = driver_internal_info node.save() # take power action try: if (target_state == states.POWER_ON and node.provision_state == states.ACTIVE): task.driver.storage.attach_volumes(task) if new_state != states.REBOOT: task.driver.power.set_power_state(task, new_state, timeout=timeout) else: # TODO(TheJulia): We likely ought to consider toggling # volume attachments, although we have no mechanism to # really verify what cinder has connector wise. task.driver.power.reboot(task, timeout=timeout) except Exception as e: with excutils.save_and_reraise_exception(): node['target_power_state'] = states.NOSTATE node['last_error'] = _( "Failed to change power state to '%(target_state)s' " "by '%(new_state)s'. Error: %(error)s") % { 'target_state': target_state, 'new_state': new_state, 'error': e} node.save() notify_utils.emit_power_set_notification( task, fields.NotificationLevel.ERROR, fields.NotificationStatus.ERROR, new_state) else: # success! node['power_state'] = target_state node['target_power_state'] = states.NOSTATE node.save() if node.instance_uuid: nova.power_update( task.context, node.instance_uuid, target_state) notify_utils.emit_power_set_notification( task, fields.NotificationLevel.INFO, fields.NotificationStatus.END, new_state) LOG.info('Successfully set node %(node)s power state to ' '%(target_state)s by %(new_state)s.', {'node': node.uuid, 'target_state': target_state, 'new_state': new_state}) # NOTE(TheJulia): Similarly to power-on, when we power-off # a node, we should detach any volume attachments. 
if (target_state == states.POWER_OFF and node.provision_state == states.ACTIVE): try: task.driver.storage.detach_volumes(task) except exception.StorageError as e: LOG.warning("Volume detachment for node %(node)s " "failed. Error: %(error)s", {'node': node.uuid, 'error': e}) @task_manager.require_exclusive_lock def cleanup_after_timeout(task): """Cleanup deploy task after timeout. :param task: a TaskManager instance. """ msg = (_('Timeout reached while waiting for callback for node %s') % task.node.uuid) deploying_error_handler(task, msg, msg) def provisioning_error_handler(e, node, provision_state, target_provision_state): """Set the node's provisioning states if error occurs. This hook gets called upon an exception being raised when spawning the worker to do some provisioning to a node like deployment, tear down, or cleaning. :param e: the exception object that was raised. :param node: an Ironic node object. :param provision_state: the provision state to be set on the node. :param target_provision_state: the target provision state to be set on the node. """ if isinstance(e, exception.NoFreeConductorWorker): # NOTE(tenbrae): there is no need to clear conductor_affinity # because it isn't updated on a failed deploy node.provision_state = provision_state node.target_provision_state = target_provision_state node.last_error = (_("No free conductor workers available")) node.save() LOG.warning("No free conductor workers available to perform " "an action on node %(node)s, setting node's " "provision_state back to %(prov_state)s and " "target_provision_state to %(tgt_prov_state)s.", {'node': node.uuid, 'prov_state': provision_state, 'tgt_prov_state': target_provision_state}) def cleanup_cleanwait_timeout(task): """Cleanup a cleaning task after timeout. :param task: a TaskManager instance. """ last_error = (_("Timeout reached while cleaning the node. Please " "check if the ramdisk responsible for the cleaning is " "running on the node. 
Failed on step %(step)s.") % {'step': task.node.clean_step}) # NOTE(rloo): this is called from the periodic task for cleanwait timeouts, # via the task manager's process_event(). The node has already been moved # to CLEANFAIL, so the error handler doesn't need to set the fail state. cleaning_error_handler(task, msg=last_error, set_fail_state=False) def cleaning_error_handler(task, msg, tear_down_cleaning=True, set_fail_state=True): """Put a failed node in CLEANFAIL and maintenance.""" node = task.node node.fault = faults.CLEAN_FAILURE node.maintenance = True if tear_down_cleaning: try: task.driver.deploy.tear_down_cleaning(task) except Exception as e: msg2 = ('Failed to tear down cleaning on node %(uuid)s, ' 'reason: %(err)s' % {'err': e, 'uuid': node.uuid}) LOG.exception(msg2) msg = _('%s. Also failed to tear down cleaning.') % msg if node.provision_state in ( states.CLEANING, states.CLEANWAIT, states.CLEANFAIL): # Clear clean step, msg should already include current step node.clean_step = {} info = node.driver_internal_info # Clear any leftover metadata about cleaning info.pop('clean_step_index', None) info.pop('cleaning_reboot', None) info.pop('cleaning_polling', None) info.pop('skip_current_clean_step', None) # We don't need to keep the old agent URL # as it should change upon the next cleaning attempt. info.pop('agent_url', None) node.driver_internal_info = info # For manual cleaning, the target provision state is MANAGEABLE, whereas # for automated cleaning, it is AVAILABLE. 
manual_clean = node.target_provision_state == states.MANAGEABLE node.last_error = msg # NOTE(dtantsur): avoid overwriting existing maintenance_reason if not node.maintenance_reason: node.maintenance_reason = msg node.save() if set_fail_state and node.provision_state != states.CLEANFAIL: target_state = states.MANAGEABLE if manual_clean else None task.process_event('fail', target_state=target_state) def wipe_deploy_internal_info(node): """Remove temporary deployment fields from driver_internal_info.""" info = node.driver_internal_info info.pop('agent_secret_token', None) info.pop('agent_secret_token_pregenerated', None) # Clear any leftover metadata about deployment. info['deploy_steps'] = None info.pop('agent_cached_deploy_steps', None) info.pop('deploy_step_index', None) info.pop('deployment_reboot', None) info.pop('deployment_polling', None) info.pop('skip_current_deploy_step', None) info.pop('steps_validated', None) # Remove agent_url since it will be re-asserted # upon the next deployment attempt. info.pop('agent_url', None) node.driver_internal_info = info def deploying_error_handler(task, logmsg, errmsg=None, traceback=False, clean_up=True): """Put a failed node in DEPLOYFAIL. :param task: the task :param logmsg: message to be logged :param errmsg: message for the user :param traceback: Boolean; True to log a traceback :param clean_up: Boolean; True to clean up """ errmsg = errmsg or logmsg node = task.node LOG.error(logmsg, exc_info=traceback) node.last_error = errmsg node.save() cleanup_err = None if clean_up: try: task.driver.deploy.clean_up(task) except Exception as e: msg = ('Cleanup failed for node %(node)s; reason: %(err)s' % {'node': node.uuid, 'err': e}) LOG.exception(msg) if isinstance(e, exception.IronicException): addl = _('Also failed to clean up due to: %s') % e else: addl = _('An unhandled exception was encountered while ' 'aborting. More information may be found in the log ' 'file.') cleanup_err = '%(err)s. 
%(add)s' % {'err': errmsg, 'add': addl} node.refresh() if node.provision_state in ( states.DEPLOYING, states.DEPLOYWAIT, states.DEPLOYFAIL): # Clear deploy step; we leave the list of deploy steps # in node.driver_internal_info for debugging purposes. node.deploy_step = {} wipe_deploy_internal_info(node) if cleanup_err: node.last_error = cleanup_err node.save() # NOTE(tenbrae): there is no need to clear conductor_affinity task.process_event('fail') @task_manager.require_exclusive_lock def abort_on_conductor_take_over(task): """Set node's state when a task was aborted due to conductor take over. :param task: a TaskManager instance. """ msg = _('Operation was aborted due to conductor take over') # By this time the "fail" even was processed, so we cannot end up in # CLEANING or CLEAN WAIT, only in CLEAN FAIL. if task.node.provision_state == states.CLEANFAIL: cleaning_error_handler(task, msg, set_fail_state=False) else: # For aborted deployment (and potentially other operations), just set # the last_error accordingly. task.node.last_error = msg task.node.save() LOG.warning('Aborted the current operation on node %s due to ' 'conductor take over', task.node.uuid) def rescuing_error_handler(task, msg, set_fail_state=True): """Cleanup rescue task after timeout or failure. :param task: a TaskManager instance. :param msg: a message to set into node's last_error field :param set_fail_state: a boolean flag to indicate if node needs to be transitioned to a failed state. By default node would be transitioned to a failed state. 
""" node = task.node try: node_power_action(task, states.POWER_OFF) task.driver.rescue.clean_up(task) remove_agent_url(node) node.last_error = msg except exception.IronicException as e: node.last_error = (_('Rescue operation was unsuccessful, clean up ' 'failed for node: %(error)s') % {'error': e}) LOG.error(('Rescue operation was unsuccessful, clean up failed for ' 'node %(node)s: %(error)s'), {'node': node.uuid, 'error': e}) except Exception as e: node.last_error = (_('Rescue failed, but an unhandled exception was ' 'encountered while aborting: %(error)s') % {'error': e}) LOG.exception('Rescue failed for node %(node)s, an exception was ' 'encountered while aborting.', {'node': node.uuid}) finally: remove_agent_url(node) node.save() if set_fail_state: try: task.process_event('fail') except exception.InvalidState: node = task.node LOG.error('Internal error. Node %(node)s in provision state ' '"%(state)s" could not transition to a failed state.', {'node': node.uuid, 'state': node.provision_state}) @task_manager.require_exclusive_lock def cleanup_rescuewait_timeout(task): """Cleanup rescue task after timeout. :param task: a TaskManager instance. """ msg = _('Timeout reached while waiting for rescue ramdisk callback ' 'for node') errmsg = msg + ' %(node)s' LOG.error(errmsg, {'node': task.node.uuid}) rescuing_error_handler(task, msg, set_fail_state=False) def _spawn_error_handler(e, node, operation): """Handle error while trying to spawn a process. Handle error while trying to spawn a process to perform an operation on a node. :param e: the exception object that was raised. :param node: an Ironic node object. :param operation: the operation being performed on the node. 
""" if isinstance(e, exception.NoFreeConductorWorker): node.last_error = (_("No free conductor workers available")) node.save() LOG.warning("No free conductor workers available to perform " "%(operation)s on node %(node)s", {'operation': operation, 'node': node.uuid}) def spawn_cleaning_error_handler(e, node): """Handle spawning error for node cleaning.""" _spawn_error_handler(e, node, states.CLEANING) def spawn_deploying_error_handler(e, node): """Handle spawning error for node deploying.""" _spawn_error_handler(e, node, states.DEPLOYING) def spawn_rescue_error_handler(e, node): """Handle spawning error for node rescue.""" if isinstance(e, exception.NoFreeConductorWorker): remove_node_rescue_password(node, save=False) _spawn_error_handler(e, node, states.RESCUE) def power_state_error_handler(e, node, power_state): """Set the node's power states if error occurs. This hook gets called upon an exception being raised when spawning the worker thread to change the power state of a node. :param e: the exception object that was raised. :param node: an Ironic node object. :param power_state: the power state to set on the node. """ # NOTE This error will not emit a power state change notification since # this is related to spawning the worker thread, not the power state change # itself. if isinstance(e, exception.NoFreeConductorWorker): node.power_state = power_state node.target_power_state = states.NOSTATE node.last_error = (_("No free conductor workers available")) node.save() LOG.warning("No free conductor workers available to perform " "an action on node %(node)s, setting node's " "power state back to %(power_state)s.", {'node': node.uuid, 'power_state': power_state}) @task_manager.require_exclusive_lock def validate_port_physnet(task, port_obj): """Validate the consistency of physical networks of ports in a portgroup. Validate the consistency of a port's physical network with other ports in the same portgroup. 
All ports in a portgroup should have the same value (which may be None) for their physical_network field. During creation or update of a port in a portgroup we apply the following validation criteria: - If the portgroup has existing ports with different physical networks, we raise PortgroupPhysnetInconsistent. This shouldn't ever happen. - If the port has a physical network that is inconsistent with other ports in the portgroup, we raise exception.Conflict. If a port's physical network is None, this indicates that ironic's VIF attachment mapping algorithm should operate in a legacy (physical network unaware) mode for this port or portgroup. This allows existing ironic nodes to continue to function after an upgrade to a release including physical network support. :param task: a TaskManager instance :param port_obj: a port object to be validated. :raises: Conflict if the port is a member of a portgroup which is on a different physical network. :raises: PortgroupPhysnetInconsistent if the port's portgroup has ports which are not all assigned the same physical network. """ if 'portgroup_id' not in port_obj or not port_obj.portgroup_id: return delta = port_obj.obj_what_changed() # We can skip this step if the port's portgroup membership or physical # network assignment is not being changed (during creation these will # appear changed). if not (delta & {'portgroup_id', 'physical_network'}): return # Determine the current physical network of the portgroup. pg_physnets = network.get_physnets_by_portgroup_id(task, port_obj.portgroup_id, exclude_port=port_obj) if not pg_physnets: return # Check that the port has the same physical network as any existing # member ports. 
pg_physnet = pg_physnets.pop() port_physnet = (port_obj.physical_network if 'physical_network' in port_obj else None) if port_physnet != pg_physnet: portgroup = network.get_portgroup_by_id(task, port_obj.portgroup_id) msg = _("Port with physical network %(physnet)s cannot become a " "member of port group %(portgroup)s which has ports in " "physical network %(pg_physnet)s.") raise exception.Conflict( msg % {'portgroup': portgroup.uuid, 'physnet': port_physnet, 'pg_physnet': pg_physnet}) def remove_node_rescue_password(node, save=True): """Helper to remove rescue password from a node. Removes rescue password from node. It saves node by default. If node should not be saved, then caller needs to explicitly indicate it. :param node: an Ironic node object. :param save: Boolean; True (default) to save the node; False otherwise. """ instance_info = node.instance_info if 'rescue_password' in instance_info: del instance_info['rescue_password'] if 'hashed_rescue_password' in instance_info: del instance_info['hashed_rescue_password'] node.instance_info = instance_info if save: node.save() def validate_instance_info_traits(node): """Validate traits in instance_info. All traits in instance_info must also exist as node traits. :param node: an Ironic node object. :raises: InvalidParameterValue if the instance traits are badly formatted, or contain traits that are not set on the node. """ def invalid(): err = (_("Error parsing traits from Node %(node)s instance_info " "field. A list of strings is expected.") % {"node": node.uuid}) raise exception.InvalidParameterValue(err) if not node.instance_info.get('traits'): return instance_traits = node.instance_info['traits'] if not isinstance(instance_traits, list): invalid() if not all(isinstance(t, str) for t in instance_traits): invalid() node_traits = node.traits.get_trait_names() missing = set(instance_traits) - set(node_traits) if missing: err = (_("Cannot specify instance traits that are not also set on the " "node. 
Node %(node)s is missing traits %(traits)s") % {"node": node.uuid, "traits": ", ".join(missing)}) raise exception.InvalidParameterValue(err) def notify_conductor_resume_operation(task, operation): """Notify the conductor to resume an operation. :param task: the task :param operation: the operation, a string """ LOG.debug('Sending RPC to conductor to resume %(op)s steps for node ' '%(node)s', {'op': operation, 'node': task.node.uuid}) method = 'continue_node_%s' % operation from ironic.conductor import rpcapi uuid = task.node.uuid rpc = rpcapi.ConductorAPI() topic = rpc.get_topic_for(task.node) # Need to release the lock to let the conductor take it task.release_resources() getattr(rpc, method)(task.context, uuid, topic=topic) def notify_conductor_resume_clean(task): notify_conductor_resume_operation(task, 'clean') def notify_conductor_resume_deploy(task): notify_conductor_resume_operation(task, 'deploy') def skip_automated_cleaning(node): """Checks if node cleaning needs to be skipped for a specific node. :param node: the node to consider """ return not CONF.conductor.automated_clean and not node.automated_clean def power_on_node_if_needed(task): """Powers on node if it is powered off and has a Smart NIC port. :param task: A TaskManager object :returns: the previous power state or None if no changes were made :raises: exception.NetworkError if agent status didn't match the required status after max retry attempts.
""" if not task.driver.network.need_power_on(task): return previous_power_state = task.driver.power.get_power_state(task) if previous_power_state == states.POWER_OFF: node_set_boot_device( task, boot_devices.BIOS, persistent=False) node_power_action(task, states.POWER_ON) # local import is necessary to avoid circular import from ironic.common import neutron host_id = None for port in task.ports: if neutron.is_smartnic_port(port): link_info = port.local_link_connection host_id = link_info['hostname'] break if host_id: LOG.debug('Waiting for host %(host)s agent to be down', {'host': host_id}) client = neutron.get_client(context=task.context) neutron.wait_for_host_agent( client, host_id, target_state='down') return previous_power_state def restore_power_state_if_needed(task, power_state_to_restore): """Change the node's power state if power_state_to_restore is not None. :param task: A TaskManager object :param power_state_to_restore: power state """ if power_state_to_restore: # Sleep is required here in order to give neutron agent # a chance to apply the changes before powering off. # Using twice the polling interval of the agent # "CONF.AGENT.polling_interval" would give the agent # enough time to apply network changes. time.sleep(CONF.agent.neutron_agent_poll_interval * 2) node_power_action(task, power_state_to_restore) @contextlib.contextmanager def power_state_for_network_configuration(task): """Handle the power state for a node reconfiguration. Powers the node on if and only if it has a Smart NIC port. Yields for the actual reconfiguration, then restores the power state. :param task: A TaskManager object. """ previous = power_on_node_if_needed(task) yield task restore_power_state_if_needed(task, previous) def build_configdrive(node, configdrive): """Build a configdrive from provided meta_data, network_data and user_data. If uuid or name are not provided in the meta_data, they're defaulted to the node's uuid and name accordingly. :param node: an Ironic node object.
:param configdrive: A configdrive as a dict with keys ``meta_data``, ``network_data``, ``user_data`` and ``vendor_data`` (all optional). :returns: A gzipped and base64 encoded configdrive as a string. """ meta_data = configdrive.setdefault('meta_data', {}) meta_data.setdefault('uuid', node.uuid) if node.name: meta_data.setdefault('name', node.name) user_data = configdrive.get('user_data') if isinstance(user_data, (dict, list)): user_data = jsonutils.dump_as_bytes(user_data) elif user_data: user_data = user_data.encode('utf-8') LOG.debug('Building a configdrive for node %s', node.uuid) return os_configdrive.build(meta_data, user_data=user_data, network_data=configdrive.get('network_data'), vendor_data=configdrive.get('vendor_data')) def fast_track_able(task): """Checks if the operation can be a streamlined deployment sequence. This is mainly focused on ensuring that we are able to quickly sequence through operations if we already have a ramdisk heartbeating through external means. :param task: Taskmanager object :returns: True if [deploy]fast_track is set to True, no iSCSI boot configuration is present, and no last_error is present for the node indicating that there was a recent failure. """ return (CONF.deploy.fast_track # TODO(TheJulia): Network model aside, we should be able to # fast-track through initial sequence to complete deployment. # This needs to be validated. # TODO(TheJulia): Do we need a secondary guard? To prevent # driving through this we could query the API endpoint of # the agent with a short timeout such as 10 seconds, which # would help verify if the node is online. # TODO(TheJulia): Should we check the provisioning/deployment # networks match config wise? Do we care? #decisionsdecisions and task.driver.storage.should_write_image(task) and task.node.last_error is None) def value_within_timeout(value, timeout): """Checks if the time is within the previous timeout seconds from now. :param value: a string representing date and time or None. 
:param timeout: timeout in seconds. """ # use native datetime objects for conversion and compare # slightly odd because py2 compatability :( last = datetime.datetime.strptime(value or '1970-01-01T00:00:00.000000', "%Y-%m-%dT%H:%M:%S.%f") # If we found nothing, we assume that the time is essentially epoch. time_delta = datetime.timedelta(seconds=timeout) last_valid = timeutils.utcnow() - time_delta return last_valid <= last def is_fast_track(task): """Checks a fast track is available. This method first ensures that the node and conductor configuration is valid to perform a fast track sequence meaning that we already have a ramdisk running through another means like discovery. If not valid, False is returned. The method then checks for the last agent heartbeat, and if it occured within the timeout set by [deploy]fast_track_timeout and the power state for the machine is POWER_ON, then fast track is permitted. :param task: Taskmanager object :returns: True if the last heartbeat that was recorded was within the [deploy]fast_track_timeout setting. """ return (fast_track_able(task) and value_within_timeout( task.node.driver_internal_info.get('agent_last_heartbeat'), CONF.deploy.fast_track_timeout) and task.driver.power.get_power_state(task) == states.POWER_ON) def remove_agent_url(node): """Helper to remove the agent_url record.""" info = node.driver_internal_info info.pop('agent_url', None) node.driver_internal_info = info def _get_node_next_steps(task, step_type, skip_current_step=True): """Get the task's node's next steps. This determines what the next (remaining) steps are, and returns the index into the steps list that corresponds to the next step. The remaining steps are determined as follows: * If no steps have been started yet, all the steps must be executed * If skip_current_step is False, the remaining steps start with the current step. Otherwise, the remaining steps start with the step after the current one. 
All the steps are in node.driver_internal_info['_steps']. node._step is the current step that was just executed (or None, {} if no steps have been executed yet). node.driver_internal_info['_step_index'] is the index index into the steps list (or None, doesn't exist if no steps have been executed yet) and corresponds to node._step. :param task: A TaskManager object :param step_type: The type of steps to process: 'clean' or 'deploy'. :param skip_current_step: True to skip the current step; False to include it. :returns: index of the next step; None if there are none to execute. """ valid_types = set(['clean', 'deploy']) if step_type not in valid_types: # NOTE(rloo): No need to i18n this, since this would be a # developer error; it isn't user-facing. raise exception.Invalid( 'step_type must be one of %(valid)s, not %(step)s' % {'valid': valid_types, 'step': step_type}) node = task.node if not getattr(node, '%s_step' % step_type): # first time through, all steps need to be done. Return the # index of the first step in the list. return 0 ind = node.driver_internal_info.get('%s_step_index' % step_type) if ind is None: return None if skip_current_step: ind += 1 if ind >= len(node.driver_internal_info['%s_steps' % step_type]): # no steps left to do ind = None return ind def get_node_next_clean_steps(task, skip_current_step=True): return _get_node_next_steps(task, 'clean', skip_current_step=skip_current_step) def get_node_next_deploy_steps(task, skip_current_step=True): return _get_node_next_steps(task, 'deploy', skip_current_step=skip_current_step) def add_secret_token(node, pregenerated=False): """Adds a secret token to driver_internal_info for IPA verification. :param node: Node object :param pregenerated: Boolean value, default False, which indicates if the token should be marked as "pregenerated" in order to facilitate virtual media booting where the token is embedded into the configuration. 
""" token = secrets.token_urlsafe() i_info = node.driver_internal_info i_info['agent_secret_token'] = token if pregenerated: i_info['agent_secret_token_pregenerated'] = True node.driver_internal_info = i_info def del_secret_token(node): """Deletes the IPA agent secret token. Removes the agent token secret from the driver_internal_info field from the Node object. :param node: Node object """ i_info = node.driver_internal_info i_info.pop('agent_secret_token', None) node.driver_internal_info = i_info def is_agent_token_present(node): """Determines if an agent token is present upon a node. :param node: Node object :returns: True if an agent_secret_token value is present in a node driver_internal_info field. """ # TODO(TheJulia): we should likely record the time when we add the token # and then compare if it was in the last ?hour? to act as an additional # guard rail, but if we do that we will want to check the last heartbeat # because the heartbeat overrides the age of the token. # We may want to do this elsewhere or nowhere, just a thought for the # future. return node.driver_internal_info.get( 'agent_secret_token', None) is not None def is_agent_token_valid(node, token): """Validates if a supplied token is valid for the node. :param node: Node object :token: A token value to validate against the driver_internal_info field agent_sercret_token. :returns: True if the supplied token matches the token recorded in the supplied node object. """ if token is None: # No token is never valid. return False known_token = node.driver_internal_info.get('agent_secret_token', None) return known_token == token def is_agent_token_supported(agent_version): # NOTE(TheJulia): This is hoped that 6.x supports # agent token capabilities and realistically needs to be updated # once that version of IPA is out there in some shape or form. 
# This allows us to gracefully allow older agent's that were # launched via pre-generated agent_tokens, to still work # and could likely be removed at some point down the road. version = str(agent_version).replace('.dev', 'b', 1) return StrictVersion(version) > StrictVersion('6.1.0') def is_agent_token_pregenerated(node): """Determines if the token was generated for out of band configuration. Ironic supports the ability to provide configuration data to the agent through the a virtual floppy or as part of the virtual media image which is attached to the BMC. This method helps us identify WHEN we did so as we don't need to remove records of the token prior to rebooting the token. This is important as tokens provided through out of band means presist in the virtual media image, are loaded as part of the agent ramdisk, and do not require regeneration of the token upon the initial lookup, ultimately making the overall usage of virtual media and pregenerated tokens far more secure. :param node: Node Object :returns: True if the token was pregenerated as indicated by the node's driver_internal_info field. False in all other cases. """ return node.driver_internal_info.get( 'agent_secret_token_pregenerated', False) def make_salt(): """Generate a random salt with the indicator tag for password type. :returns: a valid salt for use with crypt.crypt """ return crypt.mksalt( method=PASSWORD_HASH_FORMAT[ CONF.conductor.rescue_password_hash_algorithm]) def hash_password(password=''): """Hashes a supplied password. 
:param value: Value to be hashed """ return crypt.crypt(password, make_salt()) ironic-15.0.0/ironic/conductor/__init__.py0000664000175000017500000000000013652514273020474 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/api/0000775000175000017500000000000013652514443015145 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/api/hooks.py0000664000175000017500000001555213652514273016653 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
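The agent-token helpers above (`add_secret_token`, `is_agent_token_valid`) generate a random token with `secrets.token_urlsafe()` and later compare it against the value the ramdisk presents. A minimal stand-alone sketch of that flow — a plain dict stands in for `node.driver_internal_info`, and the constant-time `secrets.compare_digest` is a hardening choice of this sketch, not what the source's plain `==` comparison does:

```python
import secrets


def add_secret_token(info, pregenerated=False):
    """Record a fresh agent token, mirroring the conductor helper."""
    info['agent_secret_token'] = secrets.token_urlsafe()
    if pregenerated:
        # Marked so virtual-media-booted nodes keep their embedded token.
        info['agent_secret_token_pregenerated'] = True


def is_agent_token_valid(info, token):
    """A None token is never valid; otherwise compare with the record."""
    if token is None:
        return False
    known = info.get('agent_secret_token')
    return known is not None and secrets.compare_digest(known, token)


info = {}
add_secret_token(info, pregenerated=True)
assert is_agent_token_valid(info, info['agent_secret_token'])
assert not is_agent_token_valid(info, None)
```

As in the source, a missing or `None` token always fails validation, so a node that never had a token recorded cannot be matched by accident.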
from http import client as http_client import re from oslo_config import cfg from oslo_log import log from pecan import hooks from ironic.common import context from ironic.common import policy from ironic.conductor import rpcapi from ironic.db import api as dbapi LOG = log.getLogger(__name__) CHECKED_DEPRECATED_POLICY_ARGS = False INBOUND_HEADER = 'X-Openstack-Request-Id' GLOBAL_REQ_ID = 'openstack.global_request_id' ID_FORMAT = (r'^req-[a-f0-9]{8}-[a-f0-9]{4}-' r'[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$') def policy_deprecation_check(): global CHECKED_DEPRECATED_POLICY_ARGS if not CHECKED_DEPRECATED_POLICY_ARGS: enforcer = policy.get_enforcer() substitution_dict = { 'user': 'user_id', 'domain_id': 'user_domain_id', 'domain_name': 'user_domain_id', 'tenant': 'project_name', } policy_rules = enforcer.file_rules.values() for rule in policy_rules: str_rule = str(rule) for deprecated, replacement in substitution_dict.items(): if re.search(r'\b%s\b' % deprecated, str_rule): LOG.warning( "Deprecated argument %(deprecated)s is used in policy " "file rule (%(rule)s), please use %(replacement)s " "argument instead. The possibility to use deprecated " "arguments will be removed in the Pike release.", {'deprecated': deprecated, 'replacement': replacement, 'rule': str_rule}) if deprecated == 'domain_name': LOG.warning( "Please note that user_domain_id is an ID of the " "user domain, while the deprecated domain_name is " "its name. 
The policy rule has to be updated " "accordingly.") CHECKED_DEPRECATED_POLICY_ARGS = True class ConfigHook(hooks.PecanHook): """Attach the config object to the request so controllers can get to it.""" def before(self, state): state.request.cfg = cfg.CONF class DBHook(hooks.PecanHook): """Attach the dbapi object to the request so controllers can get to it.""" def before(self, state): state.request.dbapi = dbapi.get_instance() class ContextHook(hooks.PecanHook): """Configures a request context and attaches it to the request.""" def __init__(self, public_api_routes): self.public_api_routes = public_api_routes super(ContextHook, self).__init__() def before(self, state): is_public_api = state.request.environ.get('is_public_api', False) # set the global_request_id if we have an inbound request id gr_id = state.request.headers.get(INBOUND_HEADER, "") if re.match(ID_FORMAT, gr_id): state.request.environ[GLOBAL_REQ_ID] = gr_id ctx = context.RequestContext.from_environ(state.request.environ, is_public_api=is_public_api) # Do not pass any token with context for noauth mode if cfg.CONF.auth_strategy == 'noauth': ctx.auth_token = None creds = ctx.to_policy_values() is_admin = policy.check('is_admin', creds, creds) ctx.is_admin = is_admin policy_deprecation_check() state.request.context = ctx def after(self, state): if state.request.context == {}: # An incorrect url path will not create RequestContext return # NOTE(lintan): RequestContext will generate a request_id if no one # passing outside, so it always contain a request_id. request_id = state.request.context.request_id state.response.headers['Openstack-Request-Id'] = request_id class RPCHook(hooks.PecanHook): """Attach the rpcapi object to the request so controllers can get to it.""" def before(self, state): state.request.rpcapi = rpcapi.ConductorAPI() class NoExceptionTracebackHook(hooks.PecanHook): """Workaround rpc.common: deserialize_remote_exception. 
deserialize_remote_exception builds rpc exception traceback into error message which is then sent to the client. Such behavior is a security concern so this hook is aimed to cut-off traceback from the error message. """ # NOTE(max_lobur): 'after' hook used instead of 'on_error' because # 'on_error' never fired for wsme+pecan pair. wsme @wsexpose decorator # catches and handles all the errors, so 'on_error' dedicated for unhandled # exceptions never fired. def after(self, state): # Omit empty body. Some errors may not have body at this level yet. if not state.response.body: return # Do nothing if there is no error. # Status codes in the range 200 (OK) to 399 (400 = BAD_REQUEST) are not # an error. if (http_client.OK <= state.response.status_int < http_client.BAD_REQUEST): return json_body = state.response.json # Do not remove traceback when traceback config is set if cfg.CONF.debug_tracebacks_in_api: return faultstring = json_body.get('faultstring') traceback_marker = 'Traceback (most recent call last):' if faultstring and traceback_marker in faultstring: # Cut-off traceback. faultstring = faultstring.split(traceback_marker, 1)[0] # Remove trailing newlines and spaces if any. json_body['faultstring'] = faultstring.rstrip() # Replace the whole json. Cannot change original one because it's # generated on the fly. state.response.json = json_body class PublicUrlHook(hooks.PecanHook): """Attach the right public_url to the request. Attach the right public_url to the request so resources can create links even when the API service is behind a proxy or SSL terminator. 
""" def before(self, state): if cfg.CONF.oslo_middleware.enable_proxy_headers_parsing: state.request.public_url = state.request.application_url else: state.request.public_url = (cfg.CONF.api.public_endpoint or state.request.host_url) ironic-15.0.0/ironic/api/expose.py0000664000175000017500000000160113652514273017021 0ustar zuulzuul00000000000000# # Copyright 2015 Rackspace, Inc # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import wsmeext.pecan as wsme_pecan def expose(*args, **kwargs): """Ensure that only JSON, and not XML, is supported.""" if 'rest_content_types' not in kwargs: kwargs['rest_content_types'] = ('json',) return wsme_pecan.wsexpose(*args, **kwargs) ironic-15.0.0/ironic/api/controllers/0000775000175000017500000000000013652514443017513 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/api/controllers/v1/0000775000175000017500000000000013652514443020041 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/api/controllers/v1/deploy_template.py0000664000175000017500000004233713652514273023614 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import collections import datetime from http import client as http_client from ironic_lib import metrics_utils from oslo_log import log from oslo_utils import strutils from oslo_utils import uuidutils import pecan from pecan import rest from webob import exc as webob_exc import wsme from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import notification_utils as notify from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.conductor import steps as conductor_steps import ironic.conf from ironic import objects CONF = ironic.conf.CONF LOG = log.getLogger(__name__) METRICS = metrics_utils.get_metrics_logger(__name__) _DEFAULT_RETURN_FIELDS = ('uuid', 'name') _DEPLOY_INTERFACE_TYPE = atypes.Enum( str, *conductor_steps.DEPLOYING_INTERFACE_PRIORITY) class DeployStepType(atypes.Base, base.AsDictMixin): """A type describing a deployment step.""" interface = atypes.wsattr(_DEPLOY_INTERFACE_TYPE, mandatory=True) step = atypes.wsattr(str, mandatory=True) args = atypes.wsattr({str: types.jsontype}, mandatory=True) priority = atypes.wsattr(atypes.IntegerType(0), mandatory=True) def __init__(self, **kwargs): self.fields = ['interface', 'step', 'args', 'priority'] for field in self.fields: value = kwargs.get(field, atypes.Unset) setattr(self, field, value) def sanitize(self): """Removes sensitive data.""" if self.args != atypes.Unset: self.args = strutils.mask_dict_password(self.args, "******") class DeployTemplate(base.APIBase): """API representation of a deploy template.""" uuid = types.uuid """Unique UUID for this deploy template.""" name = 
atypes.wsattr(str, mandatory=True) """The logical name for this deploy template.""" steps = atypes.wsattr([DeployStepType], mandatory=True) """The deploy steps of this deploy template.""" links = atypes.wsattr([link.Link]) """A list containing a self link and associated deploy template links.""" extra = {str: types.jsontype} """This deploy template's meta data""" def __init__(self, **kwargs): self.fields = [] fields = list(objects.DeployTemplate.fields) for field in fields: # Skip fields we do not expose. if not hasattr(self, field): continue value = kwargs.get(field, atypes.Unset) if field == 'steps' and value != atypes.Unset: value = [DeployStepType(**step) for step in value] self.fields.append(field) setattr(self, field, value) @staticmethod def validate(value): if value is None: return # The name is mandatory, but the 'mandatory' attribute support in # wsattr allows None. if value.name is None: err = _("Deploy template name cannot be None") raise exception.InvalidDeployTemplate(err=err) # The name must also be a valid trait. api_utils.validate_trait( value.name, error_prefix=_("Deploy template name must be a valid trait")) # There must be at least one step. if not value.steps: err = _("No deploy steps specified. A deploy template must have " "at least one deploy step.") raise exception.InvalidDeployTemplate(err=err) # TODO(mgoddard): Determine the consequences of allowing duplicate # steps. # * What if one step has zero priority and another non-zero? # * What if a step that is enabled by default is included in a # template? Do we override the default or add a second invocation? # Check for duplicate steps. Each interface/step combination can be # specified at most once. counter = collections.Counter((step.interface, step.step) for step in value.steps) duplicates = {key for key, count in counter.items() if count > 1} if duplicates: duplicates = {"interface: %s, step: %s" % (interface, step) for interface, step in duplicates} err = _("Duplicate deploy steps. 
A deploy template cannot have " "multiple deploy steps with the same interface and step. " "Duplicates: %s") % "; ".join(duplicates) raise exception.InvalidDeployTemplate(err=err) return value @staticmethod def _convert_with_links(template, url, fields=None): template.links = [ link.Link.make_link('self', url, 'deploy_templates', template.uuid), link.Link.make_link('bookmark', url, 'deploy_templates', template.uuid, bookmark=True) ] return template @classmethod def convert_with_links(cls, rpc_template, fields=None, sanitize=True): """Add links to the deploy template.""" template = DeployTemplate(**rpc_template.as_dict()) if fields is not None: api_utils.check_for_invalid_fields(fields, template.as_dict()) template = cls._convert_with_links(template, api.request.public_url, fields=fields) if sanitize: template.sanitize(fields) return template def sanitize(self, fields): """Removes sensitive and unrequested data. Will only keep the fields specified in the ``fields`` parameter. :param fields: list of fields to preserve, or ``None`` to preserve them all :type fields: list of str """ if self.steps != atypes.Unset: for step in self.steps: step.sanitize() if fields is not None: self.unset_fields_except(fields) @classmethod def sample(cls, expand=True): time = datetime.datetime(2000, 1, 1, 12, 0, 0) template_uuid = '534e73fa-1014-4e58-969a-814cc0cb9d43' template_name = 'CUSTOM_RAID1' template_steps = [{ "interface": "raid", "step": "create_configuration", "args": { "logical_disks": [{ "size_gb": "MAX", "raid_level": "1", "is_root_volume": True }], "delete_configuration": True }, "priority": 10 }] template_extra = {'foo': 'bar'} sample = cls(uuid=template_uuid, name=template_name, steps=template_steps, extra=template_extra, created_at=time, updated_at=time) fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class DeployTemplatePatchType(types.JsonPatchType): _api_base = DeployTemplate class 
DeployTemplateCollection(collection.Collection): """API representation of a collection of deploy templates.""" _type = 'deploy_templates' deploy_templates = [DeployTemplate] """A list containing deploy template objects""" @staticmethod def convert_with_links(templates, limit, fields=None, **kwargs): collection = DeployTemplateCollection() collection.deploy_templates = [ DeployTemplate.convert_with_links(t, fields=fields, sanitize=False) for t in templates] collection.next = collection.get_next(limit, fields=fields, **kwargs) for template in collection.deploy_templates: template.sanitize(fields) return collection @classmethod def sample(cls): sample = cls() template = DeployTemplate.sample(expand=False) sample.deploy_templates = [template] return sample class DeployTemplatesController(rest.RestController): """REST controller for deploy templates.""" invalid_sort_key_list = ['extra', 'steps'] @pecan.expose() def _route(self, args, request=None): if not api_utils.allow_deploy_templates(): msg = _("The API version does not allow deploy templates") if api.request.method == "GET": raise webob_exc.HTTPNotFound(msg) else: raise webob_exc.HTTPMethodNotAllowed(msg) return super(DeployTemplatesController, self)._route(args, request) def _update_changed_fields(self, template, rpc_template): """Update rpc_template based on changed fields in a template.""" for field in objects.DeployTemplate.fields: try: patch_val = getattr(template, field) except AttributeError: # Ignore fields that aren't exposed in the API. continue if patch_val == atypes.Unset: patch_val = None if rpc_template[field] != patch_val: if field == 'steps' and patch_val is not None: # Convert from DeployStepType to dict. 
patch_val = [s.as_dict() for s in patch_val] rpc_template[field] = patch_val @METRICS.timer('DeployTemplatesController.get_all') @expose.expose(DeployTemplateCollection, types.name, int, str, str, types.listtype, types.boolean) def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None, detail=None): """Retrieve a list of deploy templates. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. :param detail: Optional, boolean to indicate whether retrieve a list of deploy templates with detail. """ api_utils.check_policy('baremetal:deploy_template:get') api_utils.check_allowed_fields(fields) api_utils.check_allowed_fields([sort_key]) fields = api_utils.get_request_return_fields(fields, detail, _DEFAULT_RETURN_FIELDS) limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) marker_obj = None if marker: marker_obj = objects.DeployTemplate.get_by_uuid( api.request.context, marker) templates = objects.DeployTemplate.list( api.request.context, limit=limit, marker=marker_obj, sort_key=sort_key, sort_dir=sort_dir) parameters = {'sort_key': sort_key, 'sort_dir': sort_dir} if detail is not None: parameters['detail'] = detail return DeployTemplateCollection.convert_with_links( templates, limit, fields=fields, **parameters) @METRICS.timer('DeployTemplatesController.get_one') @expose.expose(DeployTemplate, 
types.uuid_or_name, types.listtype) def get_one(self, template_ident, fields=None): """Retrieve information about the given deploy template. :param template_ident: UUID or logical name of a deploy template. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ api_utils.check_policy('baremetal:deploy_template:get') api_utils.check_allowed_fields(fields) rpc_template = api_utils.get_rpc_deploy_template_with_suffix( template_ident) return DeployTemplate.convert_with_links(rpc_template, fields=fields) @METRICS.timer('DeployTemplatesController.post') @expose.expose(DeployTemplate, body=DeployTemplate, status_code=http_client.CREATED) def post(self, template): """Create a new deploy template. :param template: a deploy template within the request body. """ api_utils.check_policy('baremetal:deploy_template:create') context = api.request.context tdict = template.as_dict() # NOTE(mgoddard): UUID is mandatory for notifications payload if not tdict.get('uuid'): tdict['uuid'] = uuidutils.generate_uuid() new_template = objects.DeployTemplate(context, **tdict) notify.emit_start_notification(context, new_template, 'create') with notify.handle_error_notification(context, new_template, 'create'): new_template.create() # Set the HTTP Location Header api.response.location = link.build_url('deploy_templates', new_template.uuid) api_template = DeployTemplate.convert_with_links(new_template) notify.emit_end_notification(context, new_template, 'create') return api_template @METRICS.timer('DeployTemplatesController.patch') @wsme.validate(types.uuid, types.boolean, [DeployTemplatePatchType]) @expose.expose(DeployTemplate, types.uuid_or_name, types.boolean, body=[DeployTemplatePatchType]) def patch(self, template_ident, patch=None): """Update an existing deploy template. :param template_ident: UUID or logical name of a deploy template. :param patch: a json PATCH document to apply to this deploy template. 
""" api_utils.check_policy('baremetal:deploy_template:update') context = api.request.context rpc_template = api_utils.get_rpc_deploy_template_with_suffix( template_ident) template_dict = rpc_template.as_dict() template = DeployTemplate( **api_utils.apply_jsonpatch(template_dict, patch)) template.validate(template) self._update_changed_fields(template, rpc_template) # NOTE(mgoddard): There could be issues with concurrent updates of a # template. This is particularly true for the complex 'steps' field, # where operations such as modifying a single step could result in # changes being lost, e.g. two requests concurrently appending a step # to the same template could result in only one of the steps being # added, due to the read/modify/write nature of this patch operation. # This issue should not be present for 'simple' string fields, or # complete replacement of the steps (the only operation supported by # the openstack baremetal CLI). It's likely that this is an issue for # other resources, even those modified in the conductor under a lock. # This is due to the fact that the patch operation is always applied in # the API. Ways to avoid this include passing the patch to the # conductor to apply while holding a lock, or a collision detection # & retry mechansim using e.g. the updated_at field. notify.emit_start_notification(context, rpc_template, 'update') with notify.handle_error_notification(context, rpc_template, 'update'): rpc_template.save() api_template = DeployTemplate.convert_with_links(rpc_template) notify.emit_end_notification(context, rpc_template, 'update') return api_template @METRICS.timer('DeployTemplatesController.delete') @expose.expose(None, types.uuid_or_name, status_code=http_client.NO_CONTENT) def delete(self, template_ident): """Delete a deploy template. :param template_ident: UUID or logical name of a deploy template. 
""" api_utils.check_policy('baremetal:deploy_template:delete') context = api.request.context rpc_template = api_utils.get_rpc_deploy_template_with_suffix( template_ident) notify.emit_start_notification(context, rpc_template, 'delete') with notify.handle_error_notification(context, rpc_template, 'delete'): rpc_template.destroy() notify.emit_end_notification(context, rpc_template, 'delete') ironic-15.0.0/ironic/api/controllers/v1/bios.py0000664000175000017500000001101713652514273021350 0ustar zuulzuul00000000000000# Copyright 2018 Red Hat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from ironic_lib import metrics_utils from pecan import rest from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common import policy from ironic import objects METRICS = metrics_utils.get_metrics_logger(__name__) class BIOSSetting(base.APIBase): """API representation of a BIOS setting.""" name = atypes.wsattr(str) value = atypes.wsattr(str) links = atypes.wsattr([link.Link], readonly=True) def __init__(self, **kwargs): self.fields = [] fields = list(objects.BIOSSetting.fields) for k in fields: if hasattr(self, k): self.fields.append(k) value = kwargs.get(k, atypes.Unset) setattr(self, k, value) @staticmethod def _convert_with_links(bios, node_uuid, url): """Add links to the bios setting.""" name = bios.name bios.links = [link.Link.make_link('self', url, 'nodes', "%s/bios/%s" % (node_uuid, name)), link.Link.make_link('bookmark', url, 'nodes', "%s/bios/%s" % (node_uuid, name), bookmark=True)] return bios @classmethod def convert_with_links(cls, rpc_bios, node_uuid): """Add links to the bios setting.""" bios = BIOSSetting(**rpc_bios.as_dict()) return cls._convert_with_links(bios, node_uuid, api.request.host_url) class BIOSSettingsCollection(base.Base): """API representation of the bios settings for a node.""" bios = [BIOSSetting] """Node bios settings list""" @staticmethod def collection_from_list(node_ident, bios_settings): col = BIOSSettingsCollection() bios_list = [] for bios_setting in bios_settings: bios_list.append(BIOSSetting.convert_with_links(bios_setting, node_ident)) col.bios = bios_list return col class NodeBiosController(rest.RestController): """REST controller for bios.""" def __init__(self, node_ident=None): super(NodeBiosController, self).__init__() self.node_ident = node_ident 
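`BIOSSetting._convert_with_links` below attaches a `self` and a `bookmark` link to each setting. A simplified stand-in for `link.Link.make_link` showing how those hrefs are composed — the dict shape is illustrative, not the exact wire format:

```python
def make_link(rel, url, resource, resource_args, bookmark=False):
    """Build one link dict, roughly how the API composes hrefs."""
    template = '%s/%s' if bookmark else '%s/v1/%s'
    return {'rel': rel,
            'href': template % (url, resource) + '/' + resource_args}


def bios_setting_links(url, node_uuid, name):
    """Self + bookmark links for one BIOS setting of a node."""
    resource_args = '%s/bios/%s' % (node_uuid, name)
    return [make_link('self', url, 'nodes', resource_args),
            make_link('bookmark', url, 'nodes', resource_args,
                      bookmark=True)]


links = bios_setting_links('http://localhost:6385', 'node-uuid',
                           'hyperthreading')
assert links[0]['href'] == (
    'http://localhost:6385/v1/nodes/node-uuid/bios/hyperthreading')
```

The bookmark variant simply omits the versioned `/v1` prefix, which is the convention the real link builder follows for bookmark links.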
@METRICS.timer('NodeBiosController.get_all') @expose.expose(BIOSSettingsCollection) def get_all(self): """List node bios settings.""" cdict = api.request.context.to_policy_values() policy.authorize('baremetal:node:bios:get', cdict, cdict) node = api_utils.get_rpc_node(self.node_ident) settings = objects.BIOSSettingList.get_by_node_id( api.request.context, node.id) return BIOSSettingsCollection.collection_from_list(self.node_ident, settings) @METRICS.timer('NodeBiosController.get_one') @expose.expose({str: BIOSSetting}, types.name) def get_one(self, setting_name): """Retrieve information about the given bios setting. :param setting_name: Logical name of the setting to retrieve. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:node:bios:get', cdict, cdict) node = api_utils.get_rpc_node(self.node_ident) try: setting = objects.BIOSSetting.get(api.request.context, node.id, setting_name) except exception.BIOSSettingNotFound: raise exception.BIOSSettingNotFound(node=node.uuid, name=setting_name) return {setting_name: BIOSSetting.convert_with_links(setting, node.uuid)} ironic-15.0.0/ironic/api/controllers/v1/collection.py0000664000175000017500000000402513652514273022550 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
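`Collection.get_next` above hand-assembles the pagination query string with a string join over kwargs. The same next-link computation can be sketched with `urllib.parse.urlencode`; the helper name and URL here are illustrative, not part of the source:

```python
from urllib.parse import urlencode


def build_next_url(public_url, resource, items, limit,
                   marker_field='uuid', **params):
    """Return the next-page URL, or None when the page is not full."""
    if not items or len(items) < limit:
        return None  # mirrors has_next(): a short page means we are done
    if isinstance(params.get('fields'), list):
        # As in the source, a fields list becomes a comma-separated string.
        params['fields'] = ','.join(params['fields'])
    params.update(limit=limit, marker=items[-1][marker_field])
    return '%s/%s?%s' % (public_url, resource, urlencode(params))


items = [{'uuid': 'a'}, {'uuid': 'b'}]
url = build_next_url('http://localhost:6385', 'deploy_templates',
                     items, 2, sort_dir='asc')
assert url == ('http://localhost:6385/deploy_templates'
               '?sort_dir=asc&limit=2&marker=b')
```

The marker is always taken from the last item of the current page (`self.collection[-1]` in the source), so the next request resumes exactly where this page ended.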
from ironic import api
from ironic.api.controllers import base
from ironic.api.controllers import link
from ironic.api import types as atypes


class Collection(base.Base):

    next = str
    """A link to retrieve the next subset of the collection"""

    @property
    def collection(self):
        return getattr(self, self._type)

    @classmethod
    def get_key_field(cls):
        return 'uuid'

    def has_next(self, limit):
        """Return whether collection has more items."""
        return len(self.collection) and len(self.collection) == limit

    def get_next(self, limit, url=None, **kwargs):
        """Return a link to the next subset of the collection."""
        if not self.has_next(limit):
            return atypes.Unset

        resource_url = url or self._type
        fields = kwargs.pop('fields', None)
        # NOTE(saga): If the fields argument is present in kwargs and not
        # None, it is a list, so convert it into a comma-separated string.
        if fields:
            kwargs['fields'] = ','.join(fields)
        q_args = ''.join(['%s=%s&' % (key, kwargs[key]) for key in kwargs])
        next_args = '?%(args)slimit=%(limit)d&marker=%(marker)s' % {
            'args': q_args, 'limit': limit,
            'marker': getattr(self.collection[-1], self.get_key_field())}

        return link.Link.make_link('next', api.request.public_url,
                                   resource_url, next_args).href
ironic-15.0.0/ironic/api/controllers/v1/node.py0000664000175000017500000031760113652514273021351 0ustar zuulzuul00000000000000
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime from http import client as http_client from ironic_lib import metrics_utils import jsonschema from oslo_log import log from oslo_utils import strutils from oslo_utils import uuidutils import pecan from pecan import rest import wsme from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import allocation from ironic.api.controllers.v1 import bios from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import notification_utils as notify from ironic.api.controllers.v1 import port from ironic.api.controllers.v1 import portgroup from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api.controllers.v1 import versions from ironic.api.controllers.v1 import volume from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.common import policy from ironic.common import states as ir_states from ironic.conductor import steps as conductor_steps import ironic.conf from ironic import objects CONF = ironic.conf.CONF LOG = log.getLogger(__name__) _CLEAN_STEPS_SCHEMA = { "$schema": "http://json-schema.org/schema#", "title": "Clean steps schema", "type": "array", # list of clean steps "items": { "type": "object", # args is optional "required": ["interface", "step"], "properties": { "interface": { "description": "driver interface", "enum": list(conductor_steps.CLEANING_INTERFACE_PRIORITY) # interface value must be one of the valid interfaces }, "step": { "description": "name of clean step", "type": "string", "minLength": 1 }, "args": { "description": "additional args", "type": "object", "properties": {} }, }, # interface, step and args are the only expected keys "additionalProperties": False } } METRICS = metrics_utils.get_metrics_logger(__name__) # Vendor information for node's driver: # key = driver name; # value = 
dictionary of node vendor methods of that driver: # key = method name. # value = dictionary with the metadata of that method. # NOTE(lucasagomes): This is cached for the lifetime of the API # service. If one or more conductor services are restarted with new driver # versions, the API service should be restarted. _VENDOR_METHODS = {} _DEFAULT_RETURN_FIELDS = ('instance_uuid', 'maintenance', 'power_state', 'provision_state', 'uuid', 'name') # States where calling do_provisioning_action makes sense PROVISION_ACTION_STATES = (ir_states.VERBS['manage'], ir_states.VERBS['provide'], ir_states.VERBS['abort'], ir_states.VERBS['adopt']) _NODES_CONTROLLER_RESERVED_WORDS = None ALLOWED_TARGET_POWER_STATES = (ir_states.POWER_ON, ir_states.POWER_OFF, ir_states.REBOOT, ir_states.SOFT_REBOOT, ir_states.SOFT_POWER_OFF) _NODE_DESCRIPTION_MAX_LENGTH = 4096 def get_nodes_controller_reserved_names(): global _NODES_CONTROLLER_RESERVED_WORDS if _NODES_CONTROLLER_RESERVED_WORDS is None: _NODES_CONTROLLER_RESERVED_WORDS = ( api_utils.get_controller_reserved_names(NodesController)) return _NODES_CONTROLLER_RESERVED_WORDS def hide_fields_in_newer_versions(obj): """This method hides fields that were added in newer API versions. Certain node fields were introduced at certain API versions. These fields are only made available when the request's API version matches or exceeds the versions when these fields were introduced. """ for field in api_utils.disallowed_fields(): setattr(obj, field, atypes.Unset) def reject_fields_in_newer_versions(obj): """When creating an object, reject fields that appear in newer versions.""" for field in api_utils.disallowed_fields(): if field == 'conductor_group': # NOTE(jroll) this is special-cased to "" and not Unset, # because it is used in hash ring calculations empty_value = '' elif field == 'name' and obj.name is None: # NOTE(dtantsur): for some reason we allow specifying name=None # explicitly even in old API versions.
continue else: empty_value = atypes.Unset if getattr(obj, field, empty_value) != empty_value: LOG.debug('Field %(field)s is not acceptable in version %(ver)s', {'field': field, 'ver': api.request.version}) raise exception.NotAcceptable() def reject_patch_in_newer_versions(patch): for field in api_utils.disallowed_fields(): value = api_utils.get_patch_values(patch, '/%s' % field) if value: LOG.debug('Field %(field)s is not acceptable in version %(ver)s', {'field': field, 'ver': api.request.version}) raise exception.NotAcceptable() def update_state_in_older_versions(obj): """Change provision state names for API backwards compatibility. :param obj: The object being returned to the API client that is to be updated by this method. """ # if requested version is < 1.2, convert AVAILABLE to the old NOSTATE if (api.request.version.minor < versions.MINOR_2_AVAILABLE_STATE and obj.provision_state == ir_states.AVAILABLE): obj.provision_state = ir_states.NOSTATE # if requested version < 1.39, convert INSPECTWAIT to INSPECTING if (not api_utils.allow_inspect_wait_state() and obj.provision_state == ir_states.INSPECTWAIT): obj.provision_state = ir_states.INSPECTING class BootDeviceController(rest.RestController): _custom_actions = { 'supported': ['GET'], } def _get_boot_device(self, rpc_node, supported=False): """Get the current boot device or a list of supported devices. :param rpc_node: RPC Node object. :param supported: Boolean value. If true return a list of supported boot devices, if false return the current boot device. Default: False. :returns: The current boot device or a list of the supported boot devices. 
""" topic = api.request.rpcapi.get_topic_for(rpc_node) if supported: return api.request.rpcapi.get_supported_boot_devices( api.request.context, rpc_node.uuid, topic) else: return api.request.rpcapi.get_boot_device(api.request.context, rpc_node.uuid, topic) @METRICS.timer('BootDeviceController.put') @expose.expose(None, types.uuid_or_name, str, types.boolean, status_code=http_client.NO_CONTENT) def put(self, node_ident, boot_device, persistent=False): """Set the boot device for a node. Set the boot device to use on next reboot of the node. :param node_ident: the UUID or logical name of a node. :param boot_device: the boot device, one of :mod:`ironic.common.boot_devices`. :param persistent: Boolean value. True if the boot device will persist to all future boots, False if not. Default: False. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:set_boot_device', node_ident) topic = api.request.rpcapi.get_topic_for(rpc_node) api.request.rpcapi.set_boot_device(api.request.context, rpc_node.uuid, boot_device, persistent=persistent, topic=topic) @METRICS.timer('BootDeviceController.get') @expose.expose(str, types.uuid_or_name) def get(self, node_ident): """Get the current boot device for a node. :param node_ident: the UUID or logical name of a node. :returns: a json object containing: :boot_device: the boot device, one of :mod:`ironic.common.boot_devices` or None if it is unknown. :persistent: Whether the boot device will persist to all future boots or not, None if it is unknown. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:get_boot_device', node_ident) return self._get_boot_device(rpc_node) @METRICS.timer('BootDeviceController.supported') @expose.expose(str, types.uuid_or_name) def supported(self, node_ident): """Get a list of the supported boot devices. :param node_ident: the UUID or logical name of a node. :returns: A json object with the list of supported boot devices. 
""" rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:get_boot_device', node_ident) boot_devices = self._get_boot_device(rpc_node, supported=True) return {'supported_boot_devices': boot_devices} class IndicatorAtComponent(object): def __init__(self, **kwargs): name = kwargs.get('name') component = kwargs.get('component') unique_name = kwargs.get('unique_name') if name and component: self.unique_name = name + '@' + component self.name = name self.component = component elif unique_name: try: index = unique_name.index('@') except ValueError: raise exception.InvalidParameterValue( _('Malformed indicator name "%s"') % unique_name) self.component = unique_name[index + 1:] self.name = unique_name[:index] self.unique_name = unique_name else: raise exception.MissingParameterValue( _('Missing indicator name "%s"')) class IndicatorState(base.APIBase): """API representation of indicator state.""" state = atypes.wsattr(str) def __init__(self, **kwargs): self.state = kwargs.get('state') class Indicator(base.APIBase): """API representation of an indicator.""" name = atypes.wsattr(str) component = atypes.wsattr(str) readonly = types.BooleanType() states = atypes.ArrayType(str) links = atypes.wsattr([link.Link], readonly=True) def __init__(self, **kwargs): self.name = kwargs.get('name') self.component = kwargs.get('component') self.readonly = kwargs.get('readonly', True) self.states = kwargs.get('states', []) @staticmethod def _convert_with_links(node_uuid, indicator, url): """Add links to the indicator.""" indicator.links = [ link.Link.make_link( 'self', url, 'nodes', '%s/management/indicators/%s' % ( node_uuid, indicator.name)), link.Link.make_link( 'bookmark', url, 'nodes', '%s/management/indicators/%s' % ( node_uuid, indicator.name), bookmark=True)] return indicator @classmethod def convert_with_links(cls, node_uuid, rpc_component, rpc_name, **rpc_fields): """Add links to the indicator.""" indicator = Indicator( component=rpc_component, name=rpc_name, 
**rpc_fields) return cls._convert_with_links( node_uuid, indicator, pecan.request.host_url) class IndicatorsCollection(atypes.Base): """API representation of the indicators for a node.""" indicators = [Indicator] """Node indicators list""" @staticmethod def collection_from_dict(node_ident, indicators): col = IndicatorsCollection() indicator_list = [] for component, names in indicators.items(): for name, fields in names.items(): indicator_at_component = IndicatorAtComponent( component=component, name=name) indicator = Indicator.convert_with_links( node_ident, component, indicator_at_component.unique_name, **fields) indicator_list.append(indicator) col.indicators = indicator_list return col class IndicatorController(rest.RestController): @METRICS.timer('IndicatorController.put') @expose.expose(None, types.uuid_or_name, str, str, status_code=http_client.NO_CONTENT) def put(self, node_ident, indicator, state): """Set node hardware component indicator to the desired state. :param node_ident: the UUID or logical name of a node. :param indicator: Indicator ID (as reported by `get_supported_indicators`). :param state: Indicator state, one of mod:`ironic.common.indicator_states`. """ cdict = pecan.request.context.to_policy_values() policy.authorize('baremetal:node:set_indicator_state', cdict, cdict) rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) indicator_at_component = IndicatorAtComponent(unique_name=indicator) pecan.request.rpcapi.set_indicator_state( pecan.request.context, rpc_node.uuid, indicator_at_component.component, indicator_at_component.name, state, topic=topic) @METRICS.timer('IndicatorController.get_one') @expose.expose(IndicatorState, types.uuid_or_name, str) def get_one(self, node_ident, indicator): """Get node hardware component indicator and its state. :param node_ident: the UUID or logical name of a node. :param indicator: Indicator ID (as reported by `get_supported_indicators`). 
:returns: a dict with the "state" key and one of mod:`ironic.common.indicator_states` as a value. """ cdict = pecan.request.context.to_policy_values() policy.authorize('baremetal:node:get_indicator_state', cdict, cdict) rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) indicator_at_component = IndicatorAtComponent(unique_name=indicator) state = pecan.request.rpcapi.get_indicator_state( pecan.request.context, rpc_node.uuid, indicator_at_component.component, indicator_at_component.name, topic=topic) return IndicatorState(state=state) @METRICS.timer('IndicatorController.get_all') @expose.expose(IndicatorsCollection, types.uuid_or_name, str, ignore_extra_args=True) def get_all(self, node_ident): """Get node hardware components and their indicators. :param node_ident: the UUID or logical name of a node. :returns: A json object of hardware components (:mod:`ironic.common.components`) as keys with indicator IDs (from `get_supported_indicators`) as values. """ cdict = pecan.request.context.to_policy_values() policy.authorize('baremetal:node:get_indicator_state', cdict, cdict) rpc_node = api_utils.get_rpc_node(node_ident) topic = pecan.request.rpcapi.get_topic_for(rpc_node) indicators = pecan.request.rpcapi.get_supported_indicators( pecan.request.context, rpc_node.uuid, topic=topic) return IndicatorsCollection.collection_from_dict( node_ident, indicators) class InjectNmiController(rest.RestController): @METRICS.timer('InjectNmiController.put') @expose.expose(None, types.uuid_or_name, status_code=http_client.NO_CONTENT) def put(self, node_ident): """Inject NMI for a node. Inject NMI (Non Maskable Interrupt) for a node immediately. :param node_ident: the UUID or logical name of a node. :raises: NotFound if requested version of the API doesn't support inject nmi. :raises: HTTPForbidden if the policy is not authorized. :raises: NodeNotFound if the node is not found. :raises: NodeLocked if the node is locked by another conductor. 
:raises: UnsupportedDriverExtension if the node's driver doesn't support management or management.inject_nmi. :raises: InvalidParameterValue when the wrong driver info is specified or an invalid boot device is specified. :raises: MissingParameterValue if missing supplied info. """ if not api_utils.allow_inject_nmi(): raise exception.NotFound() rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:inject_nmi', node_ident) topic = api.request.rpcapi.get_topic_for(rpc_node) api.request.rpcapi.inject_nmi(api.request.context, rpc_node.uuid, topic=topic) class NodeManagementController(rest.RestController): boot_device = BootDeviceController() """Expose boot_device as a sub-element of management""" inject_nmi = InjectNmiController() """Expose inject_nmi as a sub-element of management""" indicators = IndicatorController() """Expose indicators as a sub-element of management""" class ConsoleInfo(base.Base): """API representation of the console information for a node.""" console_enabled = types.boolean """The console state: if the console is enabled or not.""" console_info = {str: types.jsontype} """The console information. It typically includes the url to access the console and the type of the application that hosts the console.""" @classmethod def sample(cls): console = {'type': 'shellinabox', 'url': 'http://:4201'} return cls(console_enabled=True, console_info=console) class NodeConsoleController(rest.RestController): @METRICS.timer('NodeConsoleController.get') @expose.expose(ConsoleInfo, types.uuid_or_name) def get(self, node_ident): """Get connection information about the console. :param node_ident: UUID or logical name of a node. 
""" rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:get_console', node_ident) topic = api.request.rpcapi.get_topic_for(rpc_node) try: console = api.request.rpcapi.get_console_information( api.request.context, rpc_node.uuid, topic) console_state = True except exception.NodeConsoleNotEnabled: console = None console_state = False return ConsoleInfo(console_enabled=console_state, console_info=console) @METRICS.timer('NodeConsoleController.put') @expose.expose(None, types.uuid_or_name, types.boolean, status_code=http_client.ACCEPTED) def put(self, node_ident, enabled): """Start and stop the node console. :param node_ident: UUID or logical name of a node. :param enabled: Boolean value; whether to enable or disable the console. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:set_console_state', node_ident) topic = api.request.rpcapi.get_topic_for(rpc_node) api.request.rpcapi.set_console_mode(api.request.context, rpc_node.uuid, enabled, topic) # Set the HTTP Location Header url_args = '/'.join([node_ident, 'states', 'console']) api.response.location = link.build_url('nodes', url_args) class NodeStates(base.APIBase): """API representation of the states of a node.""" console_enabled = types.boolean """Indicates whether the console access is enabled or disabled on the node.""" power_state = str """Represent the current (not transition) power state of the node""" provision_state = str """Represent the current (not transition) provision state of the node""" provision_updated_at = datetime.datetime """The UTC date and time of the last provision state change""" target_power_state = str """The user modified desired power state of the node.""" target_provision_state = str """The user modified desired provision state of the node.""" last_error = str """Any error from the most recent (last) asynchronous transaction that started but failed to finish.""" raid_config = atypes.wsattr({str: types.jsontype}, readonly=True) """Represents the 
RAID configuration that the node is configured with.""" target_raid_config = atypes.wsattr({str: types.jsontype}, readonly=True) """The desired RAID configuration, to be used the next time the node is configured.""" @staticmethod def convert(rpc_node): attr_list = ['console_enabled', 'last_error', 'power_state', 'provision_state', 'target_power_state', 'target_provision_state', 'provision_updated_at'] if api_utils.allow_raid_config(): attr_list.extend(['raid_config', 'target_raid_config']) states = NodeStates() for attr in attr_list: setattr(states, attr, getattr(rpc_node, attr)) update_state_in_older_versions(states) return states @classmethod def sample(cls): sample = cls(target_power_state=ir_states.POWER_ON, target_provision_state=ir_states.ACTIVE, last_error=None, console_enabled=False, provision_updated_at=None, power_state=ir_states.POWER_ON, provision_state=None, raid_config=None, target_raid_config=None) return sample class NodeStatesController(rest.RestController): _custom_actions = { 'power': ['PUT'], 'provision': ['PUT'], 'raid': ['PUT'], } console = NodeConsoleController() """Expose console as a sub-element of states""" @METRICS.timer('NodeStatesController.get') @expose.expose(NodeStates, types.uuid_or_name) def get(self, node_ident): """List the states of the node. :param node_ident: the UUID or logical_name of a node. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:get_states', node_ident) # NOTE(lucasagomes): All these state values come from the # DB. Ironic relies on a periodic task that verifies the current # power states of the nodes and updates the DB accordingly. return NodeStates.convert(rpc_node) @METRICS.timer('NodeStatesController.raid') @expose.expose(None, types.uuid_or_name, body=types.jsontype) def raid(self, node_ident, target_raid_config): """Set the target raid config of the node. :param node_ident: the UUID or logical name of a node. :param target_raid_config: Desired target RAID configuration of the node.
It may be an empty dictionary as well. :raises: UnsupportedDriverExtension, if the node's driver doesn't support RAID configuration. :raises: InvalidParameterValue, if validation of target raid config fails. :raises: NotAcceptable, if requested version of the API is less than 1.12. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:set_raid_state', node_ident) if not api_utils.allow_raid_config(): raise exception.NotAcceptable() topic = api.request.rpcapi.get_topic_for(rpc_node) try: api.request.rpcapi.set_target_raid_config( api.request.context, rpc_node.uuid, target_raid_config, topic=topic) except exception.UnsupportedDriverExtension as e: # Change error code as 404 seems appropriate because RAID is a # standard interface and all drivers might not have it. e.code = http_client.NOT_FOUND raise @METRICS.timer('NodeStatesController.power') @expose.expose(None, types.uuid_or_name, str, atypes.IntegerType(minimum=1), status_code=http_client.ACCEPTED) def power(self, node_ident, target, timeout=None): """Set the power state of the node. :param node_ident: the UUID or logical name of a node. :param target: The desired power state of the node. :param timeout: timeout (in seconds) positive integer (> 0) for any power state. ``None`` indicates to use default timeout. :raises: ClientSideError (HTTP 409) if a power operation is already in progress. :raises: InvalidStateRequested (HTTP 400) if the requested target state is not valid or if the node is in CLEANING state. :raises: NotAcceptable (HTTP 406) for soft reboot, soft power off or timeout parameter, if requested version of the API is less than 1.27. :raises: Invalid (HTTP 400) if timeout value is less than 1. 
""" rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:set_power_state', node_ident) # TODO(lucasagomes): Test if it's able to transition to the # target state from the current one topic = api.request.rpcapi.get_topic_for(rpc_node) if ((target in [ir_states.SOFT_REBOOT, ir_states.SOFT_POWER_OFF] or timeout) and not api_utils.allow_soft_power_off()): raise exception.NotAcceptable() # FIXME(naohirot): This check is workaround because # atypes.IntegerType(minimum=1) is not effective if timeout is not None and timeout < 1: raise exception.Invalid( _("timeout has to be positive integer")) if target not in ALLOWED_TARGET_POWER_STATES: raise exception.InvalidStateRequested( action=target, node=node_ident, state=rpc_node.power_state) # Don't change power state for nodes being cleaned elif rpc_node.provision_state in (ir_states.CLEANWAIT, ir_states.CLEANING): raise exception.InvalidStateRequested( action=target, node=node_ident, state=rpc_node.provision_state) api.request.rpcapi.change_node_power_state(api.request.context, rpc_node.uuid, target, timeout=timeout, topic=topic) # Set the HTTP Location Header url_args = '/'.join([node_ident, 'states']) api.response.location = link.build_url('nodes', url_args) def _do_provision_action(self, rpc_node, target, configdrive=None, clean_steps=None, rescue_password=None): topic = api.request.rpcapi.get_topic_for(rpc_node) # Note that there is a race condition. The node state(s) could change # by the time the RPC call is made and the TaskManager manager gets a # lock. 
if target in (ir_states.ACTIVE, ir_states.REBUILD): rebuild = (target == ir_states.REBUILD) api.request.rpcapi.do_node_deploy(context=api.request.context, node_id=rpc_node.uuid, rebuild=rebuild, configdrive=configdrive, topic=topic) elif (target == ir_states.VERBS['unrescue']): api.request.rpcapi.do_node_unrescue( api.request.context, rpc_node.uuid, topic) elif (target == ir_states.VERBS['rescue']): if not (rescue_password and rescue_password.strip()): msg = (_('A non-empty "rescue_password" is required when ' 'setting target provision state to %s') % ir_states.VERBS['rescue']) raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) api.request.rpcapi.do_node_rescue( api.request.context, rpc_node.uuid, rescue_password, topic) elif target == ir_states.DELETED: api.request.rpcapi.do_node_tear_down( api.request.context, rpc_node.uuid, topic) elif target == ir_states.VERBS['inspect']: api.request.rpcapi.inspect_hardware( api.request.context, rpc_node.uuid, topic=topic) elif target == ir_states.VERBS['clean']: if not clean_steps: msg = (_('"clean_steps" is required when setting target ' 'provision state to %s') % ir_states.VERBS['clean']) raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) _check_clean_steps(clean_steps) api.request.rpcapi.do_node_clean( api.request.context, rpc_node.uuid, clean_steps, topic) elif target in PROVISION_ACTION_STATES: api.request.rpcapi.do_provisioning_action( api.request.context, rpc_node.uuid, target, topic) else: msg = (_('The requested action "%(action)s" could not be ' 'understood.') % {'action': target}) raise exception.InvalidStateRequested(message=msg) @METRICS.timer('NodeStatesController.provision') @expose.expose(None, types.uuid_or_name, str, types.jsontype, types.jsontype, str, status_code=http_client.ACCEPTED) def provision(self, node_ident, target, configdrive=None, clean_steps=None, rescue_password=None): """Asynchronously trigger the provisioning of the node.
This will set the target provision state of the node, and a background task will begin which actually applies the state change. This call will return a 202 (Accepted) indicating the request was accepted and is in progress; the client should continue to GET the status of this node to observe the status of the requested action. :param node_ident: UUID or logical name of a node. :param target: The desired provision state of the node or verb. :param configdrive: Optional. A gzipped and base64 encoded configdrive or a dict to build a configdrive from. Only valid when setting provision state to "active" or "rebuild". :param clean_steps: An ordered list of cleaning steps that will be performed on the node. A cleaning step is a dictionary with required keys 'interface' and 'step', and optional key 'args'. If specified, the value for 'args' is a keyword variable argument dictionary that is passed to the cleaning step method.:: { 'interface': <driver_interface>, 'step': <name_of_clean_step>, 'args': {<arg1>: <value1>, ..., <argn>: <valuen>} } For example (this isn't a real example, this cleaning step doesn't exist):: { 'interface': 'deploy', 'step': 'upgrade_firmware', 'args': {'force': True} } This is required (and only valid) when target is "clean". :param rescue_password: A string representing the password to be set inside the rescue environment. This is required (and only valid), when target is "rescue". :raises: NodeLocked (HTTP 409) if the node is currently locked. :raises: ClientSideError (HTTP 409) if the node is already being provisioned. :raises: InvalidParameterValue (HTTP 400), if validation of clean_steps or power driver interface fails. :raises: InvalidStateRequested (HTTP 400) if the requested transition is not possible from the current state. :raises: NodeInMaintenance (HTTP 400), if operation cannot be performed because the node is in maintenance mode. :raises: NoFreeConductorWorker (HTTP 503) if no workers are available.
:raises: NotAcceptable (HTTP 406) if the API version specified does not allow the requested state transition. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:set_provision_state', node_ident) api_utils.check_allow_management_verbs(target) if (target in (ir_states.ACTIVE, ir_states.REBUILD) and rpc_node.maintenance): raise exception.NodeInMaintenance(op=_('provisioning'), node=rpc_node.uuid) m = ir_states.machine.copy() m.initialize(rpc_node.provision_state) if not m.is_actionable_event(ir_states.VERBS.get(target, target)): # Normally, we let the task manager recognize and deal with # NodeLocked exceptions. However, that isn't done until the RPC # calls below. # In order to maintain backward compatibility with our API HTTP # response codes, we have this check here to deal with cases where # a node is already being operated on (DEPLOYING or such) and we # want to continue returning 409. Without it, we'd return 400. if rpc_node.reservation: raise exception.NodeLocked(node=rpc_node.uuid, host=rpc_node.reservation) raise exception.InvalidStateRequested( action=target, node=rpc_node.uuid, state=rpc_node.provision_state) api_utils.check_allow_configdrive(target, configdrive) if clean_steps and target != ir_states.VERBS['clean']: msg = (_('"clean_steps" is only valid when setting target ' 'provision state to %s') % ir_states.VERBS['clean']) raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) if (rescue_password is not None and target != ir_states.VERBS['rescue']): msg = (_('"rescue_password" is only valid when setting target ' 'provision state to %s') % ir_states.VERBS['rescue']) raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) if (rpc_node.provision_state == ir_states.INSPECTWAIT and target == ir_states.VERBS['abort']): if not api_utils.allow_inspect_abort(): raise exception.NotAcceptable() self._do_provision_action(rpc_node, target, configdrive, clean_steps, rescue_password) # Set the HTTP Location
Header url_args = '/'.join([node_ident, 'states']) api.response.location = link.build_url('nodes', url_args) def _check_clean_steps(clean_steps): """Ensure all necessary keys are present and correct in clean steps. Check that the user-specified clean steps are in the expected format and include the required information. :param clean_steps: a list of clean steps. For more details, see the clean_steps parameter of :func:`NodeStatesController.provision`. :raises: InvalidParameterValue if validation of clean steps fails. """ try: jsonschema.validate(clean_steps, _CLEAN_STEPS_SCHEMA) except jsonschema.ValidationError as exc: raise exception.InvalidParameterValue(_('Invalid clean_steps: %s') % exc) class Traits(base.APIBase): """API representation of the traits for a node.""" traits = atypes.ArrayType(str) """node traits""" @classmethod def sample(cls): traits = ["CUSTOM_TRAIT1", "CUSTOM_TRAIT2"] return cls(traits=traits) def _get_chassis_uuid(node): """Return the UUID of a node's chassis, or None. :param node: a Node object. :returns: the UUID of the node's chassis, or None if the node has no chassis set. """ if not node.chassis_id: return chassis = objects.Chassis.get_by_id(api.request.context, node.chassis_id) return chassis.uuid def _make_trait_list(context, node_id, traits): """Return a TraitList object for the specified node and traits. The Trait objects will not be created in the database. :param context: a request context. :param node_id: the ID of a node. :param traits: a list of trait strings to add to the TraitList. :returns: a TraitList object. 
""" trait_objs = [objects.Trait(context, node_id=node_id, trait=t) for t in traits] return objects.TraitList(context, objects=trait_objs) class NodeTraitsController(rest.RestController): def __init__(self, node_ident): super(NodeTraitsController, self).__init__() self.node_ident = node_ident @METRICS.timer('NodeTraitsController.get_all') @expose.expose(Traits) def get_all(self): """List node traits.""" node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:traits:list', self.node_ident) traits = objects.TraitList.get_by_node_id(api.request.context, node.id) return Traits(traits=traits.get_trait_names()) @METRICS.timer('NodeTraitsController.put') @expose.expose(None, str, atypes.ArrayType(str), status_code=http_client.NO_CONTENT) def put(self, trait=None, traits=None): """Add a trait to a node. :param trait: String value; trait to add to a node, or None. Mutually exclusive with 'traits'. If not None, adds this trait to the node. :param traits: List of Strings; traits to set for a node, or None. Mutually exclusive with 'trait'. If not None, replaces the node's traits with this list. """ context = api.request.context node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:traits:set', self.node_ident) if (trait and traits is not None) or not (trait or traits is not None): msg = _("A single node trait may be added via PUT " "/v1/nodes//traits/ with no body, " "or all node traits may be replaced via PUT " "/v1/nodes//traits with the list of " "traits specified in the request body.") raise exception.Invalid(msg) if trait: if api.request.body and api.request.json_body: # Ensure PUT nodes/uuid1/traits/trait1 with a non-empty body # fails. 
msg = _("No body should be provided when adding a trait") raise exception.Invalid(msg) traits = [trait] replace = False new_traits = {t.trait for t in node.traits} | {trait} else: replace = True new_traits = set(traits) for trait in traits: api_utils.validate_trait(trait) # Update the node's traits to reflect the desired state. node.traits = _make_trait_list(context, node.id, sorted(new_traits)) node.obj_reset_changes() chassis_uuid = _get_chassis_uuid(node) notify.emit_start_notification(context, node, 'update', chassis_uuid=chassis_uuid) with notify.handle_error_notification(context, node, 'update', chassis_uuid=chassis_uuid): topic = api.request.rpcapi.get_topic_for(node) api.request.rpcapi.add_node_traits( context, node.id, traits, replace=replace, topic=topic) notify.emit_end_notification(context, node, 'update', chassis_uuid=chassis_uuid) if not replace: # For single traits, set the HTTP Location Header. url_args = '/'.join((self.node_ident, 'traits', trait)) api.response.location = link.build_url('nodes', url_args) @METRICS.timer('NodeTraitsController.delete') @expose.expose(None, str, status_code=http_client.NO_CONTENT) def delete(self, trait=None): """Remove one or all traits from a node. :param trait: String value; trait to remove from a node, or None. If None, all traits are removed. """ context = api.request.context node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:traits:delete', self.node_ident) if trait: traits = [trait] new_traits = {t.trait for t in node.traits} - {trait} else: traits = None new_traits = set() # Update the node's traits to reflect the desired state. 
node.traits = _make_trait_list(context, node.id, sorted(new_traits)) node.obj_reset_changes() chassis_uuid = _get_chassis_uuid(node) notify.emit_start_notification(context, node, 'update', chassis_uuid=chassis_uuid) with notify.handle_error_notification(context, node, 'update', chassis_uuid=chassis_uuid): topic = api.request.rpcapi.get_topic_for(node) try: api.request.rpcapi.remove_node_traits( context, node.id, traits, topic=topic) except exception.NodeTraitNotFound: # NOTE(hshiina): Internal node ID should not be exposed. raise exception.NodeTraitNotFound(node_id=node.uuid, trait=trait) notify.emit_end_notification(context, node, 'update', chassis_uuid=chassis_uuid) class Node(base.APIBase): """API representation of a bare metal node. This class enforces type checking and value constraints, and converts between the internal object model and the API representation of a node. """ _chassis_uuid = None def _get_chassis_uuid(self): return self._chassis_uuid def _set_chassis_uuid(self, value): if value in (atypes.Unset, None): self._chassis_uuid = value elif self._chassis_uuid != value: try: chassis = objects.Chassis.get(api.request.context, value) self._chassis_uuid = chassis.uuid # NOTE(lucasagomes): Create the chassis_id attribute on-the-fly # to satisfy the api -> rpc object # conversion. 
self.chassis_id = chassis.id except exception.ChassisNotFound as e: # Change the error code because 404 (NotFound) is an inappropriate # response for a POST request to create a Node e.code = http_client.BAD_REQUEST raise uuid = types.uuid """Unique UUID for this node""" instance_uuid = types.uuid """The UUID of the instance in nova-compute""" name = atypes.wsattr(str) """The logical name for this node""" power_state = atypes.wsattr(str, readonly=True) """Represents the current (not transitional) power state of the node""" target_power_state = atypes.wsattr(str, readonly=True) """The user-modified desired power state of the node.""" last_error = atypes.wsattr(str, readonly=True) """Any error from the most recent (last) asynchronous transaction that started but failed to finish.""" provision_state = atypes.wsattr(str, readonly=True) """Represents the current (not transitional) provision state of the node""" reservation = atypes.wsattr(str, readonly=True) """The hostname of the conductor that holds an exclusive lock on the node.""" provision_updated_at = datetime.datetime """The UTC date and time of the last provision state change""" inspection_finished_at = datetime.datetime """The UTC date and time when the last hardware inspection finished successfully.""" inspection_started_at = datetime.datetime """The UTC date and time when the hardware inspection was started""" maintenance = types.boolean """Indicates whether the node is in maintenance mode.""" maintenance_reason = atypes.wsattr(str, readonly=True) """Indicates the reason for putting a node in maintenance mode.""" fault = atypes.wsattr(str, readonly=True) """Indicates the active fault of a node.""" target_provision_state = atypes.wsattr(str, readonly=True) """The user-modified desired provision state of the node.""" console_enabled = types.boolean """Indicates whether the console access is enabled or disabled on the node.""" instance_info = {str: types.jsontype} """This node's instance info.""" driver = atypes.wsattr(str, 
mandatory=True) """The driver responsible for controlling the node""" driver_info = {str: types.jsontype} """This node's driver configuration""" driver_internal_info = atypes.wsattr({str: types.jsontype}, readonly=True) """This driver's internal configuration""" clean_step = atypes.wsattr({str: types.jsontype}, readonly=True) """The current clean step""" deploy_step = atypes.wsattr({str: types.jsontype}, readonly=True) """The current deploy step""" raid_config = atypes.wsattr({str: types.jsontype}, readonly=True) """Represents the current RAID configuration of the node """ target_raid_config = atypes.wsattr({str: types.jsontype}, readonly=True) """The user modified RAID configuration of the node """ extra = {str: types.jsontype} """This node's meta data""" resource_class = atypes.wsattr(atypes.StringType(max_length=80)) """The resource class for the node, useful for classifying or grouping nodes. Used, for example, to classify nodes in Nova's placement engine.""" # NOTE: properties should use a class to enforce required properties # current list: arch, cpus, disk, ram, image properties = {str: types.jsontype} """The physical characteristics of this node""" chassis_uuid = atypes.wsproperty(types.uuid, _get_chassis_uuid, _set_chassis_uuid) """The UUID of the chassis this node belongs""" links = atypes.wsattr([link.Link], readonly=True) """A list containing a self link and associated node links""" ports = atypes.wsattr([link.Link], readonly=True) """Links to the collection of ports on this node""" portgroups = atypes.wsattr([link.Link], readonly=True) """Links to the collection of portgroups on this node""" volume = atypes.wsattr([link.Link], readonly=True) """Links to endpoint for retrieving volume resources on this node""" states = atypes.wsattr([link.Link], readonly=True) """Links to endpoint for retrieving and setting node states""" boot_interface = atypes.wsattr(str) """The boot interface to be used for this node""" console_interface = atypes.wsattr(str) """The 
console interface to be used for this node""" deploy_interface = atypes.wsattr(str) """The deploy interface to be used for this node""" inspect_interface = atypes.wsattr(str) """The inspect interface to be used for this node""" management_interface = atypes.wsattr(str) """The management interface to be used for this node""" network_interface = atypes.wsattr(str) """The network interface to be used for this node""" power_interface = atypes.wsattr(str) """The power interface to be used for this node""" raid_interface = atypes.wsattr(str) """The raid interface to be used for this node""" rescue_interface = atypes.wsattr(str) """The rescue interface to be used for this node""" storage_interface = atypes.wsattr(str) """The storage interface to be used for this node""" vendor_interface = atypes.wsattr(str) """The vendor interface to be used for this node""" traits = atypes.ArrayType(str) """The traits associated with this node""" bios_interface = atypes.wsattr(str) """The bios interface to be used for this node""" conductor_group = atypes.wsattr(str) """The conductor group to manage this node""" automated_clean = types.boolean """Indicates whether the node will perform automated clean or not.""" protected = types.boolean """Indicates whether the node is protected from undeploying/rebuilding.""" protected_reason = atypes.wsattr(str) """Indicates reason for protecting the node.""" conductor = atypes.wsattr(str, readonly=True) """Represent the conductor currently serving the node""" owner = atypes.wsattr(str) """Field for storage of physical node owner""" lessee = atypes.wsattr(str) """Field for storage of physical node lessee""" description = atypes.wsattr(str) """Field for node description""" allocation_uuid = atypes.wsattr(types.uuid, readonly=True) """The UUID of the allocation this node belongs""" retired = types.boolean """Indicates whether the node is marked for retirement.""" retired_reason = atypes.wsattr(str) """Indicates the reason for a node's retirement.""" # 
NOTE(tenbrae): "conductor_affinity" shouldn't be presented on the # API because it's an internal value. Don't add it here. def __init__(self, **kwargs): self.fields = [] fields = list(objects.Node.fields) # NOTE(lucasagomes): chassis_uuid is not part of objects.Node.fields # because it's an API-only attribute. fields.append('chassis_uuid') # NOTE(kaifeng) conductor is not part of objects.Node.fields too. fields.append('conductor') for k in fields: # Add fields we expose. if hasattr(self, k): self.fields.append(k) # TODO(jroll) is there a less hacky way to do this? if k == 'traits' and kwargs.get('traits') is not None: value = [t['trait'] for t in kwargs['traits']['objects']] # NOTE(jroll) this is special-cased to "" and not Unset, # because it is used in hash ring calculations elif (k == 'conductor_group' and (k not in kwargs or kwargs[k] is atypes.Unset)): value = '' else: value = kwargs.get(k, atypes.Unset) setattr(self, k, value) # NOTE(lucasagomes): chassis_id is an attribute created on-the-fly # by _set_chassis_uuid(), it needs to be present in the fields so # that as_dict() will contain chassis_id field when converting it # before saving it in the database. 
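The per-field population in `__init__` above (trait flattening, the empty-string default for `conductor_group`) can be exercised in isolation. A minimal sketch; `UNSET` is a stand-in sentinel for `atypes.Unset`, and the helper name is hypothetical:

```python
UNSET = object()  # stand-in for atypes.Unset

def populate(fields, kwargs):
    """Mimic Node.__init__'s per-field value selection."""
    out = {}
    for k in fields:
        if k == 'traits' and kwargs.get('traits') is not None:
            # a serialized TraitList arrives as {'objects': [{'trait': ...}]};
            # keep only the trait names
            out[k] = [t['trait'] for t in kwargs['traits']['objects']]
        elif k == 'conductor_group' and (k not in kwargs
                                         or kwargs[k] is UNSET):
            # special-cased to "" (not UNSET) because the value feeds
            # hash ring calculations
            out[k] = ''
        else:
            out[k] = kwargs.get(k, UNSET)
    return out
```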
self.fields.append('chassis_id') if 'chassis_uuid' not in kwargs: setattr(self, 'chassis_uuid', kwargs.get('chassis_id', atypes.Unset)) @staticmethod def _convert_with_links(node, url, fields=None, show_states_links=True, show_portgroups=True, show_volume=True): if fields is None: node.ports = [link.Link.make_link('self', url, 'nodes', node.uuid + "/ports"), link.Link.make_link('bookmark', url, 'nodes', node.uuid + "/ports", bookmark=True) ] if show_states_links: node.states = [link.Link.make_link('self', url, 'nodes', node.uuid + "/states"), link.Link.make_link('bookmark', url, 'nodes', node.uuid + "/states", bookmark=True)] if show_portgroups: node.portgroups = [ link.Link.make_link('self', url, 'nodes', node.uuid + "/portgroups"), link.Link.make_link('bookmark', url, 'nodes', node.uuid + "/portgroups", bookmark=True)] if show_volume: node.volume = [ link.Link.make_link('self', url, 'nodes', node.uuid + "/volume"), link.Link.make_link('bookmark', url, 'nodes', node.uuid + "/volume", bookmark=True)] node.links = [link.Link.make_link('self', url, 'nodes', node.uuid), link.Link.make_link('bookmark', url, 'nodes', node.uuid, bookmark=True) ] return node @classmethod def convert_with_links(cls, rpc_node, fields=None, sanitize=True): node = Node(**rpc_node.as_dict()) if (api_utils.allow_expose_conductors() and (fields is None or 'conductor' in fields)): # NOTE(kaifeng) It is possible a node gets orphaned in certain # circumstances, set conductor to None in such case. 
try: host = api.request.rpcapi.get_conductor_for(rpc_node) node.conductor = host except (exception.NoValidHost, exception.TemporaryFailure): LOG.debug('Currently there is no conductor servicing node ' '%(node)s.', {'node': rpc_node.uuid}) node.conductor = None if (api_utils.allow_allocations() and (fields is None or 'allocation_uuid' in fields)): node.allocation_uuid = None if rpc_node.allocation_id: try: allocation = objects.Allocation.get_by_id( api.request.context, rpc_node.allocation_id) node.allocation_uuid = allocation.uuid except exception.AllocationNotFound: pass if fields is not None: api_utils.check_for_invalid_fields( fields, set(node.as_dict()) | {'allocation_uuid'}) show_states_links = ( api_utils.allow_links_node_states_and_driver_properties()) show_portgroups = api_utils.allow_portgroups_subcontrollers() show_volume = api_utils.allow_volume() node = cls._convert_with_links(node, api.request.public_url, fields=fields, show_states_links=show_states_links, show_portgroups=show_portgroups, show_volume=show_volume) if not sanitize: return node node.sanitize(fields) return node def sanitize(self, fields): """Removes sensitive and unrequested data. Will only keep the fields specified in the ``fields`` parameter. :param fields: list of fields to preserve, or ``None`` to preserve them all :type fields: list of str """ cdict = api.request.context.to_policy_values() # NOTE(tenbrae): the 'show_password' policy setting name exists for # legacy purposes and can not be changed. Changing it will # cause upgrade problems for any operators who have # customized the value of this field show_driver_secrets = policy.check("show_password", cdict, cdict) show_instance_secrets = policy.check("show_instance_secrets", cdict, cdict) if not show_driver_secrets and self.driver_info != atypes.Unset: self.driver_info = strutils.mask_dict_password( self.driver_info, "******") # NOTE(derekh): mask ssh keys for the ssh power driver. 
# As this driver is deprecated masking here (opposed to strutils) # is simpler, and easier to backport. This can be removed along # with support for the ssh power driver. if self.driver_info.get('ssh_key_contents'): self.driver_info['ssh_key_contents'] = "******" if not show_instance_secrets and self.instance_info != atypes.Unset: self.instance_info = strutils.mask_dict_password( self.instance_info, "******") # NOTE(tenbrae): agent driver may store a swift temp_url on the # instance_info, which shouldn't be exposed to non-admin users. # Now that ironic supports additional policies, we need to hide # it here, based on this policy. # Related to bug #1613903 if self.instance_info.get('image_url'): self.instance_info['image_url'] = "******" if self.driver_internal_info.get('agent_secret_token'): self.driver_internal_info['agent_secret_token'] = "******" update_state_in_older_versions(self) hide_fields_in_newer_versions(self) if fields is not None: self.unset_fields_except(fields) # NOTE(lucasagomes): The numeric ID should not be exposed to # the user, it's internal only. 
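The sanitization above relies on oslo.utils' `strutils.mask_dict_password` plus ad-hoc masking of individual keys; a minimal stand-in showing the combined effect (the key set here is an illustrative subset, not oslo's actual list):

```python
SECRET_KEYS = {'password', 'ssh_key_contents', 'image_url',
               'agent_secret_token'}  # illustrative subset only

def mask_secrets(info, mask='******'):
    """Return a copy of a driver/instance info dict with secrets masked."""
    return {k: (mask if k in SECRET_KEYS else v) for k, v in info.items()}
```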
self.chassis_id = atypes.Unset show_states_links = ( api_utils.allow_links_node_states_and_driver_properties()) show_portgroups = api_utils.allow_portgroups_subcontrollers() show_volume = api_utils.allow_volume() if not show_volume: self.volume = atypes.Unset if not show_portgroups: self.portgroups = atypes.Unset if not show_states_links: self.states = atypes.Unset @classmethod def sample(cls, expand=True): time = datetime.datetime(2000, 1, 1, 12, 0, 0) node_uuid = '1be26c0b-03f2-4d2e-ae87-c02d7f33c123' instance_uuid = 'dcf1fbc5-93fc-4596-9395-b80572f6267b' name = 'database16-dc02' sample = cls(uuid=node_uuid, instance_uuid=instance_uuid, name=name, power_state=ir_states.POWER_ON, target_power_state=ir_states.NOSTATE, last_error=None, provision_state=ir_states.ACTIVE, target_provision_state=ir_states.NOSTATE, reservation=None, driver='fake', driver_info={}, driver_internal_info={}, extra={}, properties={ 'memory_mb': '1024', 'local_gb': '10', 'cpus': '1'}, updated_at=time, created_at=time, provision_updated_at=time, instance_info={}, maintenance=False, maintenance_reason=None, fault=None, inspection_finished_at=None, inspection_started_at=time, console_enabled=False, clean_step={}, deploy_step={}, raid_config=None, target_raid_config=None, network_interface='flat', resource_class='baremetal-gold', boot_interface=None, console_interface=None, deploy_interface=None, inspect_interface=None, management_interface=None, power_interface=None, raid_interface=None, vendor_interface=None, storage_interface=None, traits=[], rescue_interface=None, bios_interface=None, conductor_group="", automated_clean=None, protected=False, protected_reason=None, owner=None, allocation_uuid='982ddb5b-bce5-4d23-8fb8-7f710f648cd5', retired=False, retired_reason=None, lessee=None) # NOTE(matty_dubs): The chassis_uuid getter() is based on the # _chassis_uuid variable: sample._chassis_uuid = 'edcad704-b2da-41d5-96d9-afd580ecfa12' fields = None if expand else _DEFAULT_RETURN_FIELDS return 
cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class NodePatchType(types.JsonPatchType): _api_base = Node @staticmethod def internal_attrs(): defaults = types.JsonPatchType.internal_attrs() # TODO(lucasagomes): Include maintenance once the endpoint # v1/nodes/<node ident>/maintenance does more things than updating the DB. return defaults + ['/console_enabled', '/last_error', '/power_state', '/provision_state', '/reservation', '/target_power_state', '/target_provision_state', '/provision_updated_at', '/maintenance_reason', '/driver_internal_info', '/inspection_finished_at', '/inspection_started_at', '/clean_step', '/deploy_step', '/raid_config', '/target_raid_config', '/fault', '/conductor', '/allocation_uuid'] class NodeCollection(collection.Collection): """API representation of a collection of nodes.""" nodes = [Node] """A list containing node objects""" def __init__(self, **kwargs): self._type = 'nodes' @staticmethod def convert_with_links(nodes, limit, url=None, fields=None, **kwargs): collection = NodeCollection() collection.nodes = [Node.convert_with_links(n, fields=fields, sanitize=False) for n in nodes] collection.next = collection.get_next(limit, url=url, fields=fields, **kwargs) for node in collection.nodes: node.sanitize(fields) return collection @classmethod def sample(cls): sample = cls() node = Node.sample(expand=False) sample.nodes = [node] return sample class NodeVendorPassthruController(rest.RestController): """REST controller for VendorPassthru. This controller allows vendors to expose custom functionality in the Ironic API. Ironic merely relays the message from here to the appropriate driver; no introspection is made into the message body. """ _custom_actions = { 'methods': ['GET'] } @METRICS.timer('NodeVendorPassthruController.methods') @expose.expose(str, types.uuid_or_name) def methods(self, node_ident): """Retrieve information about vendor methods of the given node. :param node_ident: UUID or logical name of a node. 
:returns: dictionary with <method name>:<method metadata> entries. :raises: NodeNotFound if the node is not found. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:vendor_passthru', node_ident) # Raise an exception if node is not found if rpc_node.driver not in _VENDOR_METHODS: topic = api.request.rpcapi.get_topic_for(rpc_node) ret = api.request.rpcapi.get_node_vendor_passthru_methods( api.request.context, rpc_node.uuid, topic=topic) _VENDOR_METHODS[rpc_node.driver] = ret return _VENDOR_METHODS[rpc_node.driver] @METRICS.timer('NodeVendorPassthruController._default') @expose.expose(str, types.uuid_or_name, str, body=str) def _default(self, node_ident, method, data=None): """Call a vendor extension. :param node_ident: UUID or logical name of a node. :param method: name of the method in the vendor driver. :param data: body of data to supply to the specified method. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:vendor_passthru', node_ident) # Raise an exception if node is not found topic = api.request.rpcapi.get_topic_for(rpc_node) return api_utils.vendor_passthru(rpc_node.uuid, method, topic, data=data) class NodeMaintenanceController(rest.RestController): def _set_maintenance(self, rpc_node, maintenance_mode, reason=None): context = api.request.context rpc_node.maintenance = maintenance_mode rpc_node.maintenance_reason = reason notify.emit_start_notification(context, rpc_node, 'maintenance_set') with notify.handle_error_notification(context, rpc_node, 'maintenance_set'): try: topic = api.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: e.code = http_client.BAD_REQUEST raise new_node = api.request.rpcapi.update_node(context, rpc_node, topic=topic) notify.emit_end_notification(context, new_node, 'maintenance_set') @METRICS.timer('NodeMaintenanceController.put') @expose.expose(None, types.uuid_or_name, str, status_code=http_client.ACCEPTED) def put(self, node_ident, reason=None): """Put the node in maintenance mode. 
:param node_ident: the UUID or logical_name of a node. :param reason: Optional, the reason why it's in maintenance. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:set_maintenance', node_ident) self._set_maintenance(rpc_node, True, reason=reason) @METRICS.timer('NodeMaintenanceController.delete') @expose.expose(None, types.uuid_or_name, status_code=http_client.ACCEPTED) def delete(self, node_ident): """Remove the node from maintenance mode. :param node_ident: the UUID or logical name of a node. """ rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:clear_maintenance', node_ident) self._set_maintenance(rpc_node, False) # NOTE(vsaienko) We don't support pagination with VIFs, so we don't use # collection.Collection here. class VifCollection(base.Base): """API representation of a collection of VIFs. """ vifs = [types.viftype] """A list containing VIFs objects""" @staticmethod def collection_from_list(vifs): col = VifCollection() col.vifs = [types.VifType.frombasetype(vif) for vif in vifs] return col class NodeVIFController(rest.RestController): def __init__(self, node_ident): self.node_ident = node_ident def _get_node_and_topic(self, policy_name): rpc_node = api_utils.check_node_policy_and_retrieve( policy_name, self.node_ident) try: return rpc_node, api.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: e.code = http_client.BAD_REQUEST raise @METRICS.timer('NodeVIFController.get_all') @expose.expose(VifCollection) def get_all(self): """Get a list of attached VIFs""" rpc_node, topic = self._get_node_and_topic('baremetal:node:vif:list') vifs = api.request.rpcapi.vif_list(api.request.context, rpc_node.uuid, topic=topic) return VifCollection.collection_from_list(vifs) @METRICS.timer('NodeVIFController.post') @expose.expose(None, body=types.viftype, status_code=http_client.NO_CONTENT) def post(self, vif): """Attach a VIF to this node :param vif: a dictionary of information about a VIF. 
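A hedged client-side sketch of the VIF attach endpoint handled below. The base URL and token handling are assumptions for illustration; the body shape (a dict with an 'id' key) is what `post()` requires, and microversion 1.28 is assumed as the first to expose the vifs subcontroller:

```python
import json
import urllib.request

def build_vif_attach_request(base_url, node_ident, vif_id, token):
    """Build the POST /v1/nodes/<node>/vifs request the controller accepts."""
    return urllib.request.Request(
        '%s/v1/nodes/%s/vifs' % (base_url, node_ident),
        data=json.dumps({'id': vif_id}).encode(),
        headers={'Content-Type': 'application/json',
                 'X-Auth-Token': token,
                 'X-OpenStack-Ironic-API-Version': '1.28'},
        method='POST')
```

On success the controller returns 204 No Content; detaching is a DELETE to the same path suffixed with the VIF ID.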
It must have an 'id' key, whose value is a unique identifier for that VIF. """ rpc_node, topic = self._get_node_and_topic('baremetal:node:vif:attach') api.request.rpcapi.vif_attach(api.request.context, rpc_node.uuid, vif_info=vif, topic=topic) @METRICS.timer('NodeVIFController.delete') @expose.expose(None, types.uuid_or_name, status_code=http_client.NO_CONTENT) def delete(self, vif_id): """Detach a VIF from this node :param vif_id: The ID of a VIF to detach """ rpc_node, topic = self._get_node_and_topic('baremetal:node:vif:detach') api.request.rpcapi.vif_detach(api.request.context, rpc_node.uuid, vif_id=vif_id, topic=topic) class NodesController(rest.RestController): """REST controller for Nodes.""" # NOTE(lucasagomes): For future reference. If we happen # to need to add another sub-controller in this class let's # try to make it a parameter instead of an endpoint due # https://bugs.launchpad.net/ironic/+bug/1572651, e.g, instead of # v1/nodes/(ident)/detail we could have v1/nodes/(ident)?detail=True states = NodeStatesController() """Expose the state controller action as a sub-element of nodes""" vendor_passthru = NodeVendorPassthruController() """A resource used for vendors to expose a custom functionality in the API""" management = NodeManagementController() """Expose management as a sub-element of nodes""" maintenance = NodeMaintenanceController() """Expose maintenance as a sub-element of nodes""" from_chassis = False """A flag to indicate if the requests to this controller are coming from the top-level resource Chassis""" _custom_actions = { 'detail': ['GET'], 'validate': ['GET'], } invalid_sort_key_list = ['properties', 'driver_info', 'extra', 'instance_info', 'driver_internal_info', 'clean_step', 'deploy_step', 'raid_config', 'target_raid_config', 'traits'] _subcontroller_map = { 'ports': port.PortsController, 'portgroups': portgroup.PortgroupsController, 'vifs': NodeVIFController, 'volume': volume.VolumeController, 'traits': NodeTraitsController, 'bios': 
bios.NodeBiosController, 'allocation': allocation.NodeAllocationController, } @pecan.expose() def _lookup(self, ident, *remainder): try: ident = types.uuid_or_name.validate(ident) except exception.InvalidUuidOrName as e: pecan.abort(http_client.BAD_REQUEST, e.args[0]) if not remainder: return if ((remainder[0] == 'portgroups' and not api_utils.allow_portgroups_subcontrollers()) or (remainder[0] == 'vifs' and not api_utils.allow_vifs_subcontroller()) or (remainder[0] == 'bios' and not api_utils.allow_bios_interface()) or (remainder[0] == 'allocation' and not api_utils.allow_allocations())): pecan.abort(http_client.NOT_FOUND) if remainder[0] == 'traits' and not api_utils.allow_traits(): # NOTE(mgoddard): Returning here will ensure we exhibit the # behaviour of previous releases for microversions without this # endpoint. return subcontroller = self._subcontroller_map.get(remainder[0]) if subcontroller: return subcontroller(node_ident=ident), remainder[1:] def _filter_by_conductor(self, nodes, conductor): filtered_nodes = [] for n in nodes: try: host = api.request.rpcapi.get_conductor_for(n) if host == conductor: filtered_nodes.append(n) except (exception.NoValidHost, exception.TemporaryFailure): # NOTE(kaifeng) Node gets orphaned in case some conductor # offline or all conductors are offline. 
pass return filtered_nodes def _get_nodes_collection(self, chassis_uuid, instance_uuid, associated, maintenance, retired, provision_state, marker, limit, sort_key, sort_dir, driver=None, resource_class=None, resource_url=None, fields=None, fault=None, conductor_group=None, detail=None, conductor=None, owner=None, lessee=None, project=None, description_contains=None): if self.from_chassis and not chassis_uuid: raise exception.MissingParameterValue( _("Chassis id not specified.")) limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) marker_obj = None if marker: marker_obj = objects.Node.get_by_uuid(api.request.context, marker) # The query parameters for the 'next' URL parameters = {} if instance_uuid: # NOTE(rloo) if instance_uuid is specified, the other query # parameters are ignored. Since there can be at most one node that # has this instance_uuid, we do not want to generate a 'next' link. nodes = self._get_nodes_by_instance(instance_uuid) # NOTE(rloo) if limit==1 and len(nodes)==1 (see # Collection.has_next()), a 'next' link will # be generated, which we don't want. 
limit = 0 else: possible_filters = { 'maintenance': maintenance, 'chassis_uuid': chassis_uuid, 'associated': associated, 'provision_state': provision_state, 'driver': driver, 'resource_class': resource_class, 'fault': fault, 'conductor_group': conductor_group, 'owner': owner, 'lessee': lessee, 'project': project, 'description_contains': description_contains, 'retired': retired, } filters = {} for key, value in possible_filters.items(): if value is not None: filters[key] = value nodes = objects.Node.list(api.request.context, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir, filters=filters) # Special filtering on results based on conductor field if conductor: nodes = self._filter_by_conductor(nodes, conductor) parameters = {'sort_key': sort_key, 'sort_dir': sort_dir} if associated: parameters['associated'] = associated if maintenance: parameters['maintenance'] = maintenance if retired: parameters['retired'] = retired if detail is not None: parameters['detail'] = detail return NodeCollection.convert_with_links(nodes, limit, url=resource_url, fields=fields, **parameters) def _get_nodes_by_instance(self, instance_uuid): """Retrieve a node by its instance uuid. It returns a list with the node, or an empty list if no node is found. """ try: node = objects.Node.get_by_instance_uuid(api.request.context, instance_uuid) return [node] except exception.InstanceNotFound: return [] def _check_names_acceptable(self, names, error_msg): """Checks all node 'name's are acceptable, it does not return a value. This function will raise an exception for unacceptable names. :param names: list of node names to check :param error_msg: error message in case of exception.ClientSideError, should contain %(name)s placeholder. 
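The filter assembly above (collect candidate filters, keep only those the caller actually supplied) reduces to a one-line comprehension; a sketch with a hypothetical helper name:

```python
def build_filters(**candidates):
    """Drop unset (None) filters; False and '' are real values and are kept,
    matching the controller's `if value is not None` check."""
    return {k: v for k, v in candidates.items() if v is not None}
```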
:raises: exception.NotAcceptable :raises: exception.ClientSideError """ if not api_utils.allow_node_logical_names(): raise exception.NotAcceptable() reserved_names = get_nodes_controller_reserved_names() for name in names: if not api_utils.is_valid_node_name(name): raise exception.ClientSideError( error_msg % {'name': name}, status_code=http_client.BAD_REQUEST) if name in reserved_names: raise exception.ClientSideError( 'The word "%(name)s" is reserved and can not be used as a ' 'node name. Reserved words are: %(reserved)s.' % {'name': name, 'reserved': ', '.join(reserved_names)}, status_code=http_client.BAD_REQUEST) def _update_changed_fields(self, node, rpc_node): """Update rpc_node based on changed fields in a node. """ # NOTE(mgoddard): Traits cannot be updated via a node PATCH. fields = set(objects.Node.fields) - {'traits'} for field in fields: try: patch_val = getattr(node, field) except AttributeError: # Ignore fields that aren't exposed in the API, except # chassis_id. chassis_id would have been set (instead of # chassis_uuid) if the node belongs to a chassis. This # AttributeError is raised for chassis_id only if # 1. the node doesn't belong to a chassis or # 2. the node belonged to a chassis but is now being removed # from the chassis. if (field == "chassis_id" and rpc_node[field] is not None): if not api_utils.allow_remove_chassis_uuid(): raise exception.NotAcceptable() rpc_node[field] = None continue if patch_val == atypes.Unset: patch_val = None # conductor_group is case-insensitive, and we use it to calculate # the conductor to send an update to. Lowercase it here instead # of just before saving so we calculate correctly. if field == 'conductor_group': patch_val = patch_val.lower() if rpc_node[field] != patch_val: rpc_node[field] = patch_val def _check_driver_changed_and_console_enabled(self, rpc_node, node_ident): """Checks whether the driver is being changed while the console is enabled on a node. 
If so, it is necessary to prevent the update, because the new driver will not be able to stop a console started by the previous one. :param rpc_node: RPC Node object to be verified. :param node_ident: the UUID or logical name of a node. :raises: exception.ClientSideError """ delta = rpc_node.obj_what_changed() if 'driver' in delta and rpc_node.console_enabled: raise exception.ClientSideError( _("Node %s can not update the driver while the console is " "enabled. Please stop the console first.") % node_ident, status_code=http_client.CONFLICT) @METRICS.timer('NodesController.get_all') @expose.expose(NodeCollection, types.uuid, types.uuid, types.boolean, types.boolean, types.boolean, str, types.uuid, int, str, str, str, types.listtype, str, str, str, types.boolean, str, str, str, str, str) def get_all(self, chassis_uuid=None, instance_uuid=None, associated=None, maintenance=None, retired=None, provision_state=None, marker=None, limit=None, sort_key='id', sort_dir='asc', driver=None, fields=None, resource_class=None, fault=None, conductor_group=None, detail=None, conductor=None, owner=None, description_contains=None, lessee=None, project=None): """Retrieve a list of nodes. :param chassis_uuid: Optional UUID of a chassis, to get only nodes for that chassis. :param instance_uuid: Optional UUID of an instance, to find the node associated with that instance. :param associated: Optional boolean indicating whether to return a list of associated or unassociated nodes. May be combined with other parameters. :param maintenance: Optional boolean value that indicates whether to get nodes in maintenance mode ("True"), or not in maintenance mode ("False"). :param retired: Optional boolean value that indicates whether to get retired nodes. :param provision_state: Optional string value to get only nodes in that provision state. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. 
This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param driver: Optional string value to get only nodes using that driver. :param resource_class: Optional string value to get only nodes with that resource_class. :param conductor_group: Optional string value to get only nodes with that conductor_group. :param conductor: Optional string value to get only nodes managed by that conductor. :param owner: Optional string value that sets the owner whose nodes are to be returned. :param lessee: Optional string value that sets the lessee whose nodes are to be returned. :param project: Optional string value that sets the project - lessee or owner - whose nodes are to be returned. :param fields: Optional, a list with a specified set of fields of the resource to be returned. :param fault: Optional string value to get only nodes with that fault. :param description_contains: Optional string value to get only nodes whose description field contains the given value.
""" project = api_utils.check_list_policy('node', project) api_utils.check_allow_specify_fields(fields) api_utils.check_allowed_fields(fields) api_utils.check_allowed_fields([sort_key]) api_utils.check_for_invalid_state_and_allow_filter(provision_state) api_utils.check_allow_specify_driver(driver) api_utils.check_allow_specify_resource_class(resource_class) api_utils.check_allow_filter_by_fault(fault) api_utils.check_allow_filter_by_conductor_group(conductor_group) api_utils.check_allow_filter_by_conductor(conductor) api_utils.check_allow_filter_by_owner(owner) api_utils.check_allow_filter_by_lessee(lessee) fields = api_utils.get_request_return_fields(fields, detail, _DEFAULT_RETURN_FIELDS) extra_args = {'description_contains': description_contains} return self._get_nodes_collection(chassis_uuid, instance_uuid, associated, maintenance, retired, provision_state, marker, limit, sort_key, sort_dir, driver=driver, resource_class=resource_class, fields=fields, fault=fault, conductor_group=conductor_group, detail=detail, conductor=conductor, owner=owner, lessee=lessee, project=project, **extra_args) @METRICS.timer('NodesController.detail') @expose.expose(NodeCollection, types.uuid, types.uuid, types.boolean, types.boolean, types.boolean, str, types.uuid, int, str, str, str, str, str, str, str, str, str, str, str) def detail(self, chassis_uuid=None, instance_uuid=None, associated=None, maintenance=None, retired=None, provision_state=None, marker=None, limit=None, sort_key='id', sort_dir='asc', driver=None, resource_class=None, fault=None, conductor_group=None, conductor=None, owner=None, description_contains=None, lessee=None, project=None): """Retrieve a list of nodes with detail. :param chassis_uuid: Optional UUID of a chassis, to get only nodes for that chassis. :param instance_uuid: Optional UUID of an instance, to find the node associated with that instance. :param associated: Optional boolean whether to return a list of associated or unassociated nodes. 
May be combined with other parameters. :param maintenance: Optional boolean value that indicates whether to get nodes in maintenance mode ("True"), or not in maintenance mode ("False"). :param retired: Optional boolean value that indicates whether to get nodes which are retired. :param provision_state: Optional string value to get only nodes in that provision state. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param driver: Optional string value to get only nodes using that driver. :param resource_class: Optional string value to get only nodes with that resource_class. :param fault: Optional string value to get only nodes with that fault. :param conductor_group: Optional string value to get only nodes with that conductor_group. :param owner: Optional string value that sets the owner whose nodes are to be returned. :param lessee: Optional string value that sets the lessee whose nodes are to be returned. :param project: Optional string value that sets the project - lessee or owner - whose nodes are to be returned. :param description_contains: Optional string value to get only nodes whose description field contains the given value.
""" project = api_utils.check_list_policy('node', project) api_utils.check_for_invalid_state_and_allow_filter(provision_state) api_utils.check_allow_specify_driver(driver) api_utils.check_allow_specify_resource_class(resource_class) api_utils.check_allow_filter_by_fault(fault) api_utils.check_allow_filter_by_conductor_group(conductor_group) api_utils.check_allow_filter_by_owner(owner) api_utils.check_allow_filter_by_lessee(lessee) api_utils.check_allowed_fields([sort_key]) # /detail should only work against collections parent = api.request.path.split('/')[:-1][-1] if parent != "nodes": raise exception.HTTPNotFound() api_utils.check_allow_filter_by_conductor(conductor) resource_url = '/'.join(['nodes', 'detail']) extra_args = {'description_contains': description_contains} return self._get_nodes_collection(chassis_uuid, instance_uuid, associated, maintenance, retired, provision_state, marker, limit, sort_key, sort_dir, driver=driver, resource_class=resource_class, resource_url=resource_url, fault=fault, conductor_group=conductor_group, conductor=conductor, owner=owner, lessee=lessee, project=project, **extra_args) @METRICS.timer('NodesController.validate') @expose.expose(str, types.uuid_or_name, types.uuid) def validate(self, node=None, node_uuid=None): """Validate the driver interfaces, using the node's UUID or name. Note that the 'node_uuid' interface is deprecated in favour of the 'node' interface :param node: UUID or name of a node. :param node_uuid: UUID of a node. """ if node is not None: # We're invoking this interface using positional notation, or # explicitly using 'node'. Try and determine which one. 
if (not api_utils.allow_node_logical_names() and not uuidutils.is_uuid_like(node)): raise exception.NotAcceptable() rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:validate', node_uuid or node) topic = api.request.rpcapi.get_topic_for(rpc_node) return api.request.rpcapi.validate_driver_interfaces( api.request.context, rpc_node.uuid, topic) @METRICS.timer('NodesController.get_one') @expose.expose(Node, types.uuid_or_name, types.listtype) def get_one(self, node_ident, fields=None): """Retrieve information about the given node. :param node_ident: UUID or logical name of a node. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ if self.from_chassis: raise exception.OperationNotPermitted() rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:get', node_ident, with_suffix=True) api_utils.check_allow_specify_fields(fields) api_utils.check_allowed_fields(fields) return Node.convert_with_links(rpc_node, fields=fields) @METRICS.timer('NodesController.post') @expose.expose(Node, body=Node, status_code=http_client.CREATED) def post(self, node): """Create a new node. :param node: a node within the request body. """ if self.from_chassis: raise exception.OperationNotPermitted() context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:node:create', cdict, cdict) if node.conductor is not atypes.Unset: msg = _("Cannot specify conductor on node creation.") raise exception.Invalid(msg) reject_fields_in_newer_versions(node) if node.traits is not atypes.Unset: msg = _("Cannot specify node traits on node creation. Traits must " "be set via the node traits API.") raise exception.Invalid(msg) if (node.protected is not atypes.Unset or node.protected_reason is not atypes.Unset): msg = _("Cannot specify protected or protected_reason on node " "creation. 
These fields can only be set for active nodes") raise exception.Invalid(msg) if (node.description is not atypes.Unset and len(node.description) > _NODE_DESCRIPTION_MAX_LENGTH): msg = _("Cannot create node with description exceeding %s " "characters") % _NODE_DESCRIPTION_MAX_LENGTH raise exception.Invalid(msg) if node.allocation_uuid is not atypes.Unset: msg = _("Allocation UUID cannot be specified, use allocations API") raise exception.Invalid(msg) # NOTE(tenbrae): get_topic_for checks if node.driver is in the hash # ring and raises NoValidHost if it is not. # We need to ensure that node has a UUID before it can # be mapped onto the hash ring. if not node.uuid: node.uuid = uuidutils.generate_uuid() try: topic = api.request.rpcapi.get_topic_for(node) except exception.NoValidHost as e: # NOTE(tenbrae): convert from 404 to 400 because client can see # list of available drivers and shouldn't request # one that doesn't exist. e.code = http_client.BAD_REQUEST raise if node.name != atypes.Unset and node.name is not None: error_msg = _("Cannot create node with invalid name '%(name)s'") self._check_names_acceptable([node.name], error_msg) node.provision_state = api_utils.initial_node_provision_state() if not node.resource_class: node.resource_class = CONF.default_resource_class new_node = objects.Node(context, **node.as_dict()) notify.emit_start_notification(context, new_node, 'create', chassis_uuid=node.chassis_uuid) with notify.handle_error_notification(context, new_node, 'create', chassis_uuid=node.chassis_uuid): new_node = api.request.rpcapi.create_node(context, new_node, topic) # Set the HTTP Location Header api.response.location = link.build_url('nodes', new_node.uuid) api_node = Node.convert_with_links(new_node) notify.emit_end_notification(context, new_node, 'create', chassis_uuid=api_node.chassis_uuid) return api_node def _validate_patch(self, patch, reset_interfaces): if self.from_chassis: raise exception.OperationNotPermitted() 
reject_patch_in_newer_versions(patch) traits = api_utils.get_patch_values(patch, '/traits') if traits: msg = _("Cannot update node traits via node patch. Node traits " "should be updated via the node traits API.") raise exception.Invalid(msg) driver = api_utils.get_patch_values(patch, '/driver') if reset_interfaces and not driver: msg = _("The reset_interfaces parameter can only be used when " "changing the node's driver.") raise exception.Invalid(msg) description = api_utils.get_patch_values(patch, '/description') if description and len(description[0]) > _NODE_DESCRIPTION_MAX_LENGTH: msg = _("Cannot update node with description exceeding %s " "characters") % _NODE_DESCRIPTION_MAX_LENGTH raise exception.Invalid(msg) def _authorize_patch_and_get_node(self, node_ident, patch): # deal with attribute-specific policy rules policy_checks = [] generic_update = False for p in patch: if p['path'].startswith('/instance_info'): policy_checks.append('baremetal:node:update_instance_info') elif p['path'].startswith('/extra'): policy_checks.append('baremetal:node:update_extra') else: generic_update = True # always do at least one check if generic_update or not policy_checks: policy_checks.append('baremetal:node:update') return api_utils.check_multiple_node_policies_and_retrieve( policy_checks, node_ident, with_suffix=True) @METRICS.timer('NodesController.patch') @wsme.validate(types.uuid, types.boolean, [NodePatchType]) @expose.expose(Node, types.uuid_or_name, types.boolean, body=[NodePatchType]) def patch(self, node_ident, reset_interfaces=None, patch=None): """Update an existing node. :param node_ident: UUID or logical name of a node. :param reset_interfaces: whether to reset hardware interfaces to their defaults. Only valid when updating the driver field. :param patch: a json PATCH document to apply to this node. 
""" if (reset_interfaces is not None and not api_utils.allow_reset_interfaces()): raise exception.NotAcceptable() self._validate_patch(patch, reset_interfaces) context = api.request.context rpc_node = self._authorize_patch_and_get_node(node_ident, patch) remove_inst_uuid_patch = [{'op': 'remove', 'path': '/instance_uuid'}] if rpc_node.maintenance and patch == remove_inst_uuid_patch: LOG.debug('Removing instance uuid %(instance)s from node %(node)s', {'instance': rpc_node.instance_uuid, 'node': rpc_node.uuid}) # Check if node is transitioning state, although nodes in some states # can be updated. elif (rpc_node.target_provision_state and rpc_node.provision_state not in ir_states.UPDATE_ALLOWED_STATES): msg = _("Node %s can not be updated while a state transition " "is in progress.") raise exception.ClientSideError( msg % node_ident, status_code=http_client.CONFLICT) elif (rpc_node.provision_state == ir_states.INSPECTING and api_utils.allow_inspect_wait_state()): msg = _('Cannot update node "%(node)s" while it is in state ' '"%(state)s".') % {'node': rpc_node.uuid, 'state': ir_states.INSPECTING} raise exception.ClientSideError(msg, status_code=http_client.CONFLICT) elif api_utils.get_patch_values(patch, '/owner'): # check if updating a provisioned node's owner is allowed if rpc_node.provision_state == ir_states.ACTIVE: try: api_utils.check_owner_policy( 'node', 'baremetal:node:update_owner_provisioned', rpc_node['owner'], rpc_node['lessee']) except exception.HTTPForbidden: msg = _('Cannot update owner of node "%(node)s" while it ' 'is in state "%(state)s".') % { 'node': rpc_node.uuid, 'state': ir_states.ACTIVE} raise exception.ClientSideError( msg, status_code=http_client.CONFLICT) # check if node has an associated allocation with an owner if rpc_node.allocation_id: try: allocation = objects.Allocation.get_by_id( context, rpc_node.allocation_id) if allocation.owner is not None: msg = _('Cannot update owner of node "%(node)s" while ' 'it is allocated to an allocation 
with an ' 'owner.') % {'node': rpc_node.uuid} raise exception.ClientSideError( msg, status_code=http_client.CONFLICT) except exception.AllocationNotFound: pass names = api_utils.get_patch_values(patch, '/name') if len(names): error_msg = (_("Node %s: Cannot change name to invalid name ") % node_ident) error_msg += "'%(name)s'" self._check_names_acceptable(names, error_msg) node_dict = rpc_node.as_dict() # NOTE(lucasagomes): # 1) Remove chassis_id because it's an internal value and # not present in the API object # 2) Add chassis_uuid node_dict['chassis_uuid'] = node_dict.pop('chassis_id', None) node = Node(**api_utils.apply_jsonpatch(node_dict, patch)) self._update_changed_fields(node, rpc_node) # NOTE(tenbrae): we calculate the rpc topic here in case node.driver # has changed, so that the update is sent to the # new conductor, not the old one which may fail to # load the new driver. try: topic = api.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: # NOTE(tenbrae): convert from 404 to 400 because client can see # list of available drivers and shouldn't request # one that doesn't exist. e.code = http_client.BAD_REQUEST raise self._check_driver_changed_and_console_enabled(rpc_node, node_ident) notify.emit_start_notification(context, rpc_node, 'update', chassis_uuid=node.chassis_uuid) with notify.handle_error_notification(context, rpc_node, 'update', chassis_uuid=node.chassis_uuid): new_node = api.request.rpcapi.update_node(context, rpc_node, topic, reset_interfaces) api_node = Node.convert_with_links(new_node) notify.emit_end_notification(context, new_node, 'update', chassis_uuid=api_node.chassis_uuid) return api_node @METRICS.timer('NodesController.delete') @expose.expose(None, types.uuid_or_name, status_code=http_client.NO_CONTENT) def delete(self, node_ident): """Delete a node. :param node_ident: UUID or logical name of a node.
""" if self.from_chassis: raise exception.OperationNotPermitted() context = api.request.context rpc_node = api_utils.check_node_policy_and_retrieve( 'baremetal:node:delete', node_ident, with_suffix=True) chassis_uuid = _get_chassis_uuid(rpc_node) notify.emit_start_notification(context, rpc_node, 'delete', chassis_uuid=chassis_uuid) with notify.handle_error_notification(context, rpc_node, 'delete', chassis_uuid=chassis_uuid): try: topic = api.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: e.code = http_client.BAD_REQUEST raise api.request.rpcapi.destroy_node(context, rpc_node.uuid, topic) notify.emit_end_notification(context, rpc_node, 'delete', chassis_uuid=chassis_uuid) ironic-15.0.0/ironic/api/controllers/v1/driver.py0000664000175000017500000004150313652514273021712 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
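The nodes controller's delete() above wraps the RPC call in Ironic's start/error/end notification pattern: emit a `.start` event, run the destructive call inside an error-notification context manager, then emit `.end` only on success. A minimal self-contained sketch of that pattern, with hypothetical emit/handle helpers standing in for `ironic.api.controllers.v1.notification_utils`:

```python
from contextlib import contextmanager

# Collected events; a hypothetical stand-in for Ironic's notification bus.
EVENTS = []

def emit_start_notification(node, action):
    EVENTS.append(('%s.start' % action, node))

def emit_end_notification(node, action):
    EVENTS.append(('%s.end' % action, node))

@contextmanager
def handle_error_notification(node, action):
    """Emit an <action>.error event if the wrapped block raises."""
    try:
        yield
    except Exception:
        EVENTS.append(('%s.error' % action, node))
        raise  # the original exception still propagates to the caller

def delete_node(node, destroy):
    """Mirror of the controller flow: start, guarded RPC call, end."""
    emit_start_notification(node, 'delete')
    with handle_error_notification(node, 'delete'):
        destroy(node)  # stand-in for rpcapi.destroy_node()
    emit_end_notification(node, 'delete')
```

On success the consumer sees `delete.start` then `delete.end`; if `destroy` raises, it sees `delete.start` then `delete.error`, and the exception still reaches the API layer to be translated into an HTTP error.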
from http import client as http_client from ironic_lib import metrics_utils from pecan import rest from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.common import policy from ironic.drivers import base as driver_base METRICS = metrics_utils.get_metrics_logger(__name__) # Property information for drivers: # key = driver name; # value = dictionary of properties of that driver: # key = property name. # value = description of the property. # NOTE(rloo). This is cached for the lifetime of the API service. If one or # more conductor services are restarted with new driver versions, the API # service should be restarted. _DRIVER_PROPERTIES = {} # Vendor information for drivers: # key = driver name; # value = dictionary of vendor methods of that driver: # key = method name. # value = dictionary with the metadata of that method. # NOTE(lucasagomes). This is cached for the lifetime of the API # service. If one or more conductor services are restarted with new driver # versions, the API service should be restarted. _VENDOR_METHODS = {} # RAID (logical disk) configuration information for drivers: # key = driver name; # value = dictionary of RAID configuration information of that driver: # key = property name. # value = description of the property # NOTE(rloo). This is cached for the lifetime of the API service. If one or # more conductor services are restarted with new driver versions, the API # service should be restarted. _RAID_PROPERTIES = {} def hide_fields_in_newer_versions(obj): """This method hides fields that were added in newer API versions. Certain fields were introduced at certain API versions. 
These fields are only made available when the request's API version matches or exceeds the versions when these fields were introduced. """ if not api_utils.allow_storage_interface(): obj.default_storage_interface = atypes.Unset obj.enabled_storage_interfaces = atypes.Unset if not api_utils.allow_rescue_interface(): obj.default_rescue_interface = atypes.Unset obj.enabled_rescue_interfaces = atypes.Unset if not api_utils.allow_bios_interface(): obj.default_bios_interface = atypes.Unset obj.enabled_bios_interfaces = atypes.Unset class Driver(base.Base): """API representation of a driver.""" name = str """The name of the driver""" hosts = [str] """A list of active conductors that support this driver""" type = str """Whether the driver is classic or dynamic (hardware type)""" links = atypes.wsattr([link.Link], readonly=True) """A list containing self and bookmark links""" properties = atypes.wsattr([link.Link], readonly=True) """A list containing links to driver properties""" """Default interface for a hardware type""" default_bios_interface = str default_boot_interface = str default_console_interface = str default_deploy_interface = str default_inspect_interface = str default_management_interface = str default_network_interface = str default_power_interface = str default_raid_interface = str default_rescue_interface = str default_storage_interface = str default_vendor_interface = str """A list of enabled interfaces for a hardware type""" enabled_bios_interfaces = [str] enabled_boot_interfaces = [str] enabled_console_interfaces = [str] enabled_deploy_interfaces = [str] enabled_inspect_interfaces = [str] enabled_management_interfaces = [str] enabled_network_interfaces = [str] enabled_power_interfaces = [str] enabled_raid_interfaces = [str] enabled_rescue_interfaces = [str] enabled_storage_interfaces = [str] enabled_vendor_interfaces = [str] @staticmethod def convert_with_links(name, hosts, detail=False, interface_info=None): """Convert driver/hardware type info to an 
API-serializable object. :param name: name of a hardware type. :param hosts: list of conductor hostnames driver is active on. :param detail: boolean, whether to include detailed info, such as the 'type' field and default/enabled interfaces fields. :param interface_info: optional list of dicts of hardware interface info. :returns: API-serializable driver object. """ driver = Driver() driver.name = name driver.hosts = hosts driver.links = [ link.Link.make_link('self', api.request.public_url, 'drivers', name), link.Link.make_link('bookmark', api.request.public_url, 'drivers', name, bookmark=True) ] if api_utils.allow_links_node_states_and_driver_properties(): driver.properties = [ link.Link.make_link('self', api.request.public_url, 'drivers', name + "/properties"), link.Link.make_link('bookmark', api.request.public_url, 'drivers', name + "/properties", bookmark=True) ] if api_utils.allow_dynamic_drivers(): # NOTE(dtantsur): only dynamic drivers (based on hardware types) # are supported starting with the Rocky release. 
driver.type = 'dynamic' if detail: if interface_info is None: # TODO(jroll) objectify this interface_info = (api.request.dbapi .list_hardware_type_interfaces([name])) for iface_type in driver_base.ALL_INTERFACES: default = None enabled = set() for iface in interface_info: if iface['interface_type'] == iface_type: iface_name = iface['interface_name'] enabled.add(iface_name) # NOTE(jroll) this assumes the default is the same # on all conductors if iface['default']: default = iface_name default_key = 'default_%s_interface' % iface_type enabled_key = 'enabled_%s_interfaces' % iface_type setattr(driver, default_key, default) setattr(driver, enabled_key, list(enabled)) hide_fields_in_newer_versions(driver) return driver @classmethod def sample(cls): attrs = { 'name': 'sample-driver', 'hosts': ['fake-host'], 'type': 'classic', } for iface_type in driver_base.ALL_INTERFACES: attrs['default_%s_interface' % iface_type] = None attrs['enabled_%s_interfaces' % iface_type] = None sample = cls(**attrs) return sample class DriverList(base.Base): """API representation of a list of drivers.""" drivers = [Driver] """A list containing drivers objects""" @staticmethod def convert_with_links(hardware_types, detail=False): """Convert drivers and hardware types to an API-serializable object. :param hardware_types: dict mapping hardware type names to conductor hostnames. :param detail: boolean, whether to include detailed info, such as the 'type' field and default/enabled interfaces fields. :returns: an API-serializable driver collection object. """ collection = DriverList() collection.drivers = [] # NOTE(jroll) we return hardware types in all API versions, # but restrict type/default/enabled fields to 1.30. # This is checked in Driver.convert_with_links(), however also # checking here can save us a DB query. 
if api_utils.allow_dynamic_drivers() and detail: iface_info = api.request.dbapi.list_hardware_type_interfaces( list(hardware_types)) else: iface_info = [] for htname in hardware_types: interface_info = [i for i in iface_info if i['hardware_type'] == htname] collection.drivers.append( Driver.convert_with_links(htname, list(hardware_types[htname]), detail=detail, interface_info=interface_info)) return collection @classmethod def sample(cls): sample = cls() sample.drivers = [Driver.sample()] return sample class DriverPassthruController(rest.RestController): """REST controller for driver passthru. This controller allows vendors to expose cross-node functionality in the Ironic API. Ironic will merely relay the message from here to the specified driver; no introspection will be made in the message body. """ _custom_actions = { 'methods': ['GET'] } @METRICS.timer('DriverPassthruController.methods') @expose.expose(str, str) def methods(self, driver_name): """Retrieve information about vendor methods of the given driver. :param driver_name: name of the driver. :returns: dictionary with <method name>:<method metadata> entries. :raises: DriverNotFound if the driver name is invalid or the driver cannot be loaded. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:driver:vendor_passthru', cdict, cdict) if driver_name not in _VENDOR_METHODS: topic = api.request.rpcapi.get_topic_for_driver(driver_name) ret = api.request.rpcapi.get_driver_vendor_passthru_methods( api.request.context, driver_name, topic=topic) _VENDOR_METHODS[driver_name] = ret return _VENDOR_METHODS[driver_name] @METRICS.timer('DriverPassthruController._default') @expose.expose(str, str, str, body=str) def _default(self, driver_name, method, data=None): """Call a driver API extension. :param driver_name: name of the driver to call. :param method: name of the method, to be passed to the vendor implementation. :param data: body of data to supply to the specified method.
""" cdict = api.request.context.to_policy_values() policy.authorize('baremetal:driver:vendor_passthru', cdict, cdict) topic = api.request.rpcapi.get_topic_for_driver(driver_name) return api_utils.vendor_passthru(driver_name, method, topic, data=data, driver_passthru=True) class DriverRaidController(rest.RestController): _custom_actions = { 'logical_disk_properties': ['GET'] } @METRICS.timer('DriverRaidController.logical_disk_properties') @expose.expose(types.jsontype, str) def logical_disk_properties(self, driver_name): """Returns the logical disk properties for the driver. :param driver_name: Name of the driver. :returns: A dictionary containing the properties that can be mentioned for logical disks and a textual description for them. :raises: UnsupportedDriverExtension if the driver doesn't support RAID configuration. :raises: NotAcceptable, if requested version of the API is less than 1.12. :raises: DriverNotFound, if driver is not loaded on any of the conductors. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:driver:get_raid_logical_disk_properties', cdict, cdict) if not api_utils.allow_raid_config(): raise exception.NotAcceptable() if driver_name not in _RAID_PROPERTIES: topic = api.request.rpcapi.get_topic_for_driver(driver_name) try: info = api.request.rpcapi.get_raid_logical_disk_properties( api.request.context, driver_name, topic=topic) except exception.UnsupportedDriverExtension as e: # Change error code as 404 seems appropriate because RAID is a # standard interface and all drivers might not have it. 
e.code = http_client.NOT_FOUND raise _RAID_PROPERTIES[driver_name] = info return _RAID_PROPERTIES[driver_name] class DriversController(rest.RestController): """REST controller for Drivers.""" vendor_passthru = DriverPassthruController() raid = DriverRaidController() """Expose RAID as a sub-element of drivers""" _custom_actions = { 'properties': ['GET'], } @METRICS.timer('DriversController.get_all') @expose.expose(DriverList, str, types.boolean) def get_all(self, type=None, detail=None): """Retrieve a list of drivers.""" # FIXME(tenbrae): formatting of the auto-generated REST API docs # will break from a single-line doc string. # This is a result of a bug in sphinxcontrib-pecanwsme # https://github.com/dreamhost/sphinxcontrib-pecanwsme/issues/8 cdict = api.request.context.to_policy_values() policy.authorize('baremetal:driver:get', cdict, cdict) api_utils.check_allow_driver_detail(detail) api_utils.check_allow_filter_driver_type(type) if type not in (None, 'classic', 'dynamic'): raise exception.Invalid(_( '"type" filter must be one of "classic" or "dynamic", ' 'if specified.')) if type is None or type == 'dynamic': hw_type_dict = api.request.dbapi.get_active_hardware_type_dict() else: # NOTE(dtantsur): we don't support classic drivers starting with # the Rocky release. hw_type_dict = {} return DriverList.convert_with_links(hw_type_dict, detail=detail) @METRICS.timer('DriversController.get_one') @expose.expose(Driver, str) def get_one(self, driver_name): """Retrieve a single driver.""" # NOTE(russell_h): There is no way to make this more efficient than # retrieving a list of drivers using the current sqlalchemy schema, but # this path must be exposed for Pecan to route any paths we might # choose to expose below it. 
cdict = api.request.context.to_policy_values() policy.authorize('baremetal:driver:get', cdict, cdict) hw_type_dict = api.request.dbapi.get_active_hardware_type_dict() for name, hosts in hw_type_dict.items(): if name == driver_name: return Driver.convert_with_links(name, list(hosts), detail=True) raise exception.DriverNotFound(driver_name=driver_name) @METRICS.timer('DriversController.properties') @expose.expose(str, str) def properties(self, driver_name): """Retrieve property information of the given driver. :param driver_name: name of the driver. :returns: dictionary with <property name>:<property description> entries. :raises: DriverNotFound (HTTP 404) if the driver name is invalid or the driver cannot be loaded. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:driver:get_properties', cdict, cdict) if driver_name not in _DRIVER_PROPERTIES: topic = api.request.rpcapi.get_topic_for_driver(driver_name) properties = api.request.rpcapi.get_driver_properties( api.request.context, driver_name, topic=topic) _DRIVER_PROPERTIES[driver_name] = properties return _DRIVER_PROPERTIES[driver_name] ironic-15.0.0/ironic/api/controllers/v1/volume_connector.py0000664000175000017500000005170213652514273024002 0ustar zuulzuul00000000000000# Copyright (c) 2017 Hitachi, Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
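The properties() handler above populates `_DRIVER_PROPERTIES` on first use and then serves every later request from the module-level dict; that is why the NOTE(rloo) comments say the API service must be restarted when conductors come up with new driver versions. A minimal sketch of this cache-for-process-lifetime pattern, where `fetch_properties` is a hypothetical stand-in for the `rpcapi.get_driver_properties()` call:

```python
# Module-level cache: populated once per driver, never invalidated.
_DRIVER_PROPERTIES = {}

CALLS = []  # records each expensive lookup, to show caching at work

def fetch_properties(driver_name):
    """Hypothetical stand-in for the RPC round-trip to a conductor."""
    CALLS.append(driver_name)
    return {'deploy_kernel': 'UUID of the kernel used for deployment'}

def get_properties(driver_name):
    # Only the first call per driver hits the conductor; every later call
    # is answered from the dict for the lifetime of the process.
    if driver_name not in _DRIVER_PROPERTIES:
        _DRIVER_PROPERTIES[driver_name] = fetch_properties(driver_name)
    return _DRIVER_PROPERTIES[driver_name]
```

The trade-off is staleness: if a conductor restarts with a different driver version, the cached entry survives until the API process itself restarts, which is exactly the caveat the module comments spell out.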
import datetime
from http import client as http_client

from ironic_lib import metrics_utils
from oslo_utils import uuidutils
from pecan import rest
import wsme

from ironic import api
from ironic.api.controllers import base
from ironic.api.controllers import link
from ironic.api.controllers.v1 import collection
from ironic.api.controllers.v1 import notification_utils as notify
from ironic.api.controllers.v1 import types
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.api import types as atypes
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import policy
from ironic import objects

METRICS = metrics_utils.get_metrics_logger(__name__)

_DEFAULT_RETURN_FIELDS = ('uuid', 'node_uuid', 'type', 'connector_id')


class VolumeConnector(base.APIBase):
    """API representation of a volume connector.

    This class enforces type checking and value constraints, and converts
    between the internal object model and the API representation of a
    volume connector.
    """

    _node_uuid = None

    def _get_node_uuid(self):
        return self._node_uuid

    def _set_node_identifiers(self, value):
        """Set both UUID and ID of a node for VolumeConnector object

        :param value: UUID, ID of a node, or atypes.Unset
        """
        if value == atypes.Unset:
            self._node_uuid = atypes.Unset
        elif value and self._node_uuid != value:
            try:
                node = objects.Node.get(api.request.context, value)
                self._node_uuid = node.uuid
                # NOTE(smoriya): Create the node_id attribute on-the-fly
                #                to satisfy the api -> rpc object conversion.
                self.node_id = node.id
            except exception.NodeNotFound as e:
                # Change error code because 404 (NotFound) is inappropriate
                # response for a POST request to create a VolumeConnector
                e.code = http_client.BAD_REQUEST  # BadRequest
                raise

    uuid = types.uuid
    """Unique UUID for this volume connector"""

    type = atypes.wsattr(str, mandatory=True)
    """The type of volume connector"""

    connector_id = atypes.wsattr(str, mandatory=True)
    """The connector_id for this volume connector"""

    extra = {str: types.jsontype}
    """The metadata for this volume connector"""

    node_uuid = atypes.wsproperty(types.uuid, _get_node_uuid,
                                  _set_node_identifiers, mandatory=True)
    """The UUID of the node this volume connector belongs to"""

    links = atypes.wsattr([link.Link], readonly=True)
    """A list containing a self link and associated volume connector links"""

    def __init__(self, **kwargs):
        self.fields = []
        fields = list(objects.VolumeConnector.fields)
        for field in fields:
            # Skip fields we do not expose.
            if not hasattr(self, field):
                continue
            self.fields.append(field)
            setattr(self, field, kwargs.get(field, atypes.Unset))

        # NOTE(smoriya): node_id is an attribute created on-the-fly
        # by _set_node_uuid(), it needs to be present in the fields so
        # that as_dict() will contain node_id field when converting it
        # before saving it in the database.
        self.fields.append('node_id')
        # NOTE(smoriya): node_uuid is not part of
        # objects.VolumeConnector.fields because it's an API-only attribute
        self.fields.append('node_uuid')
        # NOTE(jtaryma): Additionally to node_uuid, node_id is handled as a
        # secondary identifier in case RPC volume connector object dictionary
        # was passed to the constructor.
        self.node_uuid = kwargs.get('node_uuid') or kwargs.get('node_id',
                                                               atypes.Unset)

    @staticmethod
    def _convert_with_links(connector, url, fields=None):
        # NOTE: fields is accepted (and currently unused) so that sample()
        # can pass it the same way other v1 resources do.
        connector.links = [link.Link.make_link('self', url,
                                               'volume/connectors',
                                               connector.uuid),
                           link.Link.make_link('bookmark', url,
                                               'volume/connectors',
                                               connector.uuid,
                                               bookmark=True)
                           ]
        return connector

    @classmethod
    def convert_with_links(cls, rpc_connector, fields=None, sanitize=True):
        connector = VolumeConnector(**rpc_connector.as_dict())

        if fields is not None:
            api_utils.check_for_invalid_fields(fields, connector.as_dict())

        connector = cls._convert_with_links(connector,
                                            api.request.public_url)
        if not sanitize:
            return connector

        connector.sanitize(fields)

        return connector

    def sanitize(self, fields=None):
        """Removes sensitive and unrequested data.

        Will only keep the fields specified in the ``fields`` parameter.

        :param fields:
            list of fields to preserve, or ``None`` to preserve them all
        :type fields: list of str
        """
        if fields is not None:
            self.unset_fields_except(fields)

        # never expose the node_id attribute
        self.node_id = atypes.Unset

    @classmethod
    def sample(cls, expand=True):
        time = datetime.datetime(2000, 1, 1, 12, 0, 0)
        sample = cls(uuid='86cfd480-0842-4abb-8386-e46149beb82f',
                     type='iqn',
                     connector_id='iqn.2010-10.org.openstack:51332b70524',
                     extra={'foo': 'bar'},
                     created_at=time,
                     updated_at=time)
        sample._node_uuid = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae'
        fields = None if expand else _DEFAULT_RETURN_FIELDS
        return cls._convert_with_links(sample, 'http://localhost:6385',
                                       fields=fields)


class VolumeConnectorPatchType(types.JsonPatchType):

    _api_base = VolumeConnector


class VolumeConnectorCollection(collection.Collection):
    """API representation of a collection of volume connectors."""

    connectors = [VolumeConnector]
    """A list containing volume connector objects"""

    def __init__(self, **kwargs):
        self._type = 'connectors'

    @staticmethod
    def convert_with_links(rpc_connectors, limit, url=None, fields=None,
                           detail=None, **kwargs):
        collection = VolumeConnectorCollection()
        collection.connectors = [
            VolumeConnector.convert_with_links(p, fields=fields,
                                               sanitize=False)
            for p in rpc_connectors]
        if detail:
            kwargs['detail'] = detail
        collection.next = collection.get_next(limit, url=url, fields=fields,
                                              **kwargs)
        for connector in collection.connectors:
            connector.sanitize(fields)
        return collection

    @classmethod
    def sample(cls):
        sample = cls()
        sample.connectors = [VolumeConnector.sample(expand=False)]
        return sample


class VolumeConnectorsController(rest.RestController):
    """REST controller for VolumeConnectors."""

    invalid_sort_key_list = ['extra']

    def __init__(self, node_ident=None):
        super(VolumeConnectorsController, self).__init__()
        self.parent_node_ident = node_ident

    def _get_volume_connectors_collection(self, node_ident, marker, limit,
                                          sort_key, sort_dir,
                                          resource_url=None,
                                          fields=None, detail=None):
        limit = api_utils.validate_limit(limit)
        sort_dir = api_utils.validate_sort_dir(sort_dir)

        marker_obj = None
        if marker:
            marker_obj = objects.VolumeConnector.get_by_uuid(
                api.request.context, marker)

        if sort_key in self.invalid_sort_key_list:
            raise exception.InvalidParameterValue(
                _("The sort_key value %(key)s is an invalid field for "
                  "sorting") % {'key': sort_key})

        node_ident = self.parent_node_ident or node_ident

        if node_ident:
            # FIXME(comstud): Since all we need is the node ID, we can
            #                 make this more efficient by only querying
            #                 for that column. This will get cleaned up
            #                 as we move to the object interface.
            node = api_utils.get_rpc_node(node_ident)
            connectors = objects.VolumeConnector.list_by_node_id(
                api.request.context, node.id, limit, marker_obj,
                sort_key=sort_key, sort_dir=sort_dir)
        else:
            connectors = objects.VolumeConnector.list(api.request.context,
                                                      limit, marker_obj,
                                                      sort_key=sort_key,
                                                      sort_dir=sort_dir)
        return VolumeConnectorCollection.convert_with_links(
            connectors, limit, url=resource_url, fields=fields,
            sort_key=sort_key, sort_dir=sort_dir, detail=detail)

    @METRICS.timer('VolumeConnectorsController.get_all')
    @expose.expose(VolumeConnectorCollection, types.uuid_or_name,
                   types.uuid, int, str, str, types.listtype,
                   types.boolean)
    def get_all(self, node=None, marker=None, limit=None, sort_key='id',
                sort_dir='asc', fields=None, detail=None):
        """Retrieve a list of volume connectors.

        :param node: UUID or name of a node, to get only volume connectors
                     for that node.
        :param marker: pagination marker for large data sets.
        :param limit: maximum number of resources to return in a single
                      result. This value cannot be larger than the value of
                      max_limit in the [api] section of the ironic
                      configuration, or only max_limit resources will be
                      returned.
        :param sort_key: column to sort results by. Default: id.
        :param sort_dir: direction to sort. "asc" or "desc". Default: "asc".
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        :param detail: Optional, whether to retrieve with detail.
        :returns: a list of volume connectors, or an empty list if no volume
                  connector is found.
        :raises: InvalidParameterValue if sort_key does not exist
        :raises: InvalidParameterValue if sort key is invalid for sorting.
        :raises: InvalidParameterValue if both fields and detail are
                 specified.
        """
        cdict = api.request.context.to_policy_values()
        policy.authorize('baremetal:volume:get', cdict, cdict)

        if fields is None and not detail:
            fields = _DEFAULT_RETURN_FIELDS

        if fields and detail:
            raise exception.InvalidParameterValue(
                _("Can't fetch a subset of fields with 'detail' set"))

        resource_url = 'volume/connectors'
        return self._get_volume_connectors_collection(
            node, marker, limit, sort_key, sort_dir,
            resource_url=resource_url, fields=fields, detail=detail)

    @METRICS.timer('VolumeConnectorsController.get_one')
    @expose.expose(VolumeConnector, types.uuid, types.listtype)
    def get_one(self, connector_uuid, fields=None):
        """Retrieve information about the given volume connector.

        :param connector_uuid: UUID of a volume connector.
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        :returns: API-serializable volume connector object.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: VolumeConnectorNotFound if no volume connector exists with
                 the specified UUID.
        """
        cdict = api.request.context.to_policy_values()
        policy.authorize('baremetal:volume:get', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        rpc_connector = objects.VolumeConnector.get_by_uuid(
            api.request.context, connector_uuid)
        return VolumeConnector.convert_with_links(rpc_connector,
                                                  fields=fields)

    @METRICS.timer('VolumeConnectorsController.post')
    @expose.expose(VolumeConnector, body=VolumeConnector,
                   status_code=http_client.CREATED)
    def post(self, connector):
        """Create a new volume connector.

        :param connector: a volume connector within the request body.
        :returns: API-serializable volume connector object.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: VolumeConnectorTypeAndIdAlreadyExists if a volume connector
                 already exists with the same type and connector_id
        :raises: VolumeConnectorAlreadyExists if a volume connector with the
                 same UUID already exists
        """
        context = api.request.context
        cdict = context.to_policy_values()
        policy.authorize('baremetal:volume:create', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        connector_dict = connector.as_dict()
        # NOTE(hshiina): UUID is mandatory for notification payload
        if not connector_dict.get('uuid'):
            connector_dict['uuid'] = uuidutils.generate_uuid()

        new_connector = objects.VolumeConnector(context, **connector_dict)

        notify.emit_start_notification(context, new_connector, 'create',
                                       node_uuid=connector.node_uuid)
        with notify.handle_error_notification(context, new_connector,
                                              'create',
                                              node_uuid=connector.node_uuid):
            new_connector.create()
        notify.emit_end_notification(context, new_connector, 'create',
                                     node_uuid=connector.node_uuid)
        # Set the HTTP Location Header
        api.response.location = link.build_url('volume/connectors',
                                               new_connector.uuid)
        return VolumeConnector.convert_with_links(new_connector)

    @METRICS.timer('VolumeConnectorsController.patch')
    @wsme.validate(types.uuid, [VolumeConnectorPatchType])
    @expose.expose(VolumeConnector, types.uuid,
                   body=[VolumeConnectorPatchType])
    def patch(self, connector_uuid, patch):
        """Update an existing volume connector.

        :param connector_uuid: UUID of a volume connector.
        :param patch: a json PATCH document to apply to this volume
                      connector.
        :returns: API-serializable volume connector object.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: PatchError if a given patch can not be applied.
        :raises: VolumeConnectorNotFound if no volume connector exists with
                 the specified UUID.
        :raises: InvalidParameterValue if the volume connector's UUID is
                 being changed
        :raises: NodeLocked if node is locked by another conductor
        :raises: NodeNotFound if the node associated with the connector does
                 not exist
        :raises: VolumeConnectorTypeAndIdAlreadyExists if another connector
                 already exists with the same values for type and
                 connector_id fields
        :raises: InvalidUUID if invalid node UUID is passed in the patch.
        :raises: InvalidStateRequested If a node associated with the
                 volume connector is not powered off.
        """
        context = api.request.context
        cdict = context.to_policy_values()
        policy.authorize('baremetal:volume:update', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        values = api_utils.get_patch_values(patch, '/node_uuid')
        for value in values:
            if not uuidutils.is_uuid_like(value):
                message = _("Expected a UUID for node_uuid, but received "
                            "%(uuid)s.") % {'uuid': str(value)}
                raise exception.InvalidUUID(message=message)

        rpc_connector = objects.VolumeConnector.get_by_uuid(context,
                                                            connector_uuid)
        connector_dict = rpc_connector.as_dict()
        # NOTE(smoriya):
        # 1) Remove node_id because it's an internal value and
        #    not present in the API object
        # 2) Add node_uuid
        connector_dict['node_uuid'] = connector_dict.pop('node_id', None)
        connector = VolumeConnector(
            **api_utils.apply_jsonpatch(connector_dict, patch))

        # Update only the fields that have changed.
        for field in objects.VolumeConnector.fields:
            try:
                patch_val = getattr(connector, field)
            except AttributeError:
                # Ignore fields that aren't exposed in the API
                continue
            if patch_val == atypes.Unset:
                patch_val = None
            if rpc_connector[field] != patch_val:
                rpc_connector[field] = patch_val

        rpc_node = objects.Node.get_by_id(context, rpc_connector.node_id)
        notify.emit_start_notification(context, rpc_connector, 'update',
                                       node_uuid=rpc_node.uuid)
        with notify.handle_error_notification(context, rpc_connector,
                                              'update',
                                              node_uuid=rpc_node.uuid):
            topic = api.request.rpcapi.get_topic_for(rpc_node)
            new_connector = api.request.rpcapi.update_volume_connector(
                context, rpc_connector, topic)

        api_connector = VolumeConnector.convert_with_links(new_connector)
        notify.emit_end_notification(context, new_connector, 'update',
                                     node_uuid=rpc_node.uuid)
        return api_connector

    @METRICS.timer('VolumeConnectorsController.delete')
    @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT)
    def delete(self, connector_uuid):
        """Delete a volume connector.

        :param connector_uuid: UUID of a volume connector.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: NodeLocked if node is locked by another conductor
        :raises: NodeNotFound if the node associated with the connector does
                 not exist
        :raises: VolumeConnectorNotFound if the volume connector cannot be
                 found
        :raises: InvalidStateRequested If a node associated with the
                 volume connector is not powered off.
        """
        context = api.request.context
        cdict = context.to_policy_values()
        policy.authorize('baremetal:volume:delete', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        rpc_connector = objects.VolumeConnector.get_by_uuid(context,
                                                            connector_uuid)
        rpc_node = objects.Node.get_by_id(context, rpc_connector.node_id)
        notify.emit_start_notification(context, rpc_connector, 'delete',
                                       node_uuid=rpc_node.uuid)
        with notify.handle_error_notification(context, rpc_connector,
                                              'delete',
                                              node_uuid=rpc_node.uuid):
            topic = api.request.rpcapi.get_topic_for(rpc_node)
            api.request.rpcapi.destroy_volume_connector(context,
                                                        rpc_connector, topic)
        notify.emit_end_notification(context, rpc_connector, 'delete',
                                     node_uuid=rpc_node.uuid)

ironic-15.0.0/ironic/api/controllers/v1/volume_target.py

# Copyright (c) 2017 Hitachi, Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
from http import client as http_client

from ironic_lib import metrics_utils
from oslo_utils import uuidutils
from pecan import rest
import wsme

from ironic import api
from ironic.api.controllers import base
from ironic.api.controllers import link
from ironic.api.controllers.v1 import collection
from ironic.api.controllers.v1 import notification_utils as notify
from ironic.api.controllers.v1 import types
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.api import types as atypes
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import policy
from ironic import objects

METRICS = metrics_utils.get_metrics_logger(__name__)

_DEFAULT_RETURN_FIELDS = ('uuid', 'node_uuid', 'volume_type', 'boot_index',
                          'volume_id')


class VolumeTarget(base.APIBase):
    """API representation of a volume target.

    This class enforces type checking and value constraints, and converts
    between the internal object model and the API representation of a
    volume target.
    """

    _node_uuid = None

    def _get_node_uuid(self):
        return self._node_uuid

    def _set_node_identifiers(self, value):
        """Set both UUID and ID of a node for VolumeTarget object

        :param value: UUID, ID of a node, or atypes.Unset
        """
        if value == atypes.Unset:
            self._node_uuid = atypes.Unset
        elif value and self._node_uuid != value:
            try:
                node = objects.Node.get(api.request.context, value)
                self._node_uuid = node.uuid
                # NOTE(smoriya): Create the node_id attribute on-the-fly
                #                to satisfy the api -> rpc object conversion.
                self.node_id = node.id
            except exception.NodeNotFound as e:
                # Change error code because 404 (NotFound) is inappropriate
                # response for a POST request to create a VolumeTarget
                e.code = http_client.BAD_REQUEST  # BadRequest
                raise

    uuid = types.uuid
    """Unique UUID for this volume target"""

    volume_type = atypes.wsattr(str, mandatory=True)
    """The volume_type of volume target"""

    properties = {str: types.jsontype}
    """The properties for this volume target"""

    boot_index = atypes.wsattr(int, mandatory=True)
    """The boot_index of volume target"""

    volume_id = atypes.wsattr(str, mandatory=True)
    """The volume_id for this volume target"""

    extra = {str: types.jsontype}
    """The metadata for this volume target"""

    node_uuid = atypes.wsproperty(types.uuid, _get_node_uuid,
                                  _set_node_identifiers, mandatory=True)
    """The UUID of the node this volume target belongs to"""

    links = atypes.wsattr([link.Link], readonly=True)
    """A list containing a self link and associated volume target links"""

    def __init__(self, **kwargs):
        self.fields = []
        fields = list(objects.VolumeTarget.fields)
        for field in fields:
            # Skip fields we do not expose.
            if not hasattr(self, field):
                continue
            self.fields.append(field)
            setattr(self, field, kwargs.get(field, atypes.Unset))

        # NOTE(smoriya): node_id is an attribute created on-the-fly
        # by _set_node_uuid(), it needs to be present in the fields so
        # that as_dict() will contain node_id field when converting it
        # before saving it in the database.
        self.fields.append('node_id')
        # NOTE(smoriya): node_uuid is not part of
        # objects.VolumeTarget.fields because it's an API-only attribute
        self.fields.append('node_uuid')
        # NOTE(jtaryma): Additionally to node_uuid, node_id is handled as a
        # secondary identifier in case RPC volume target object dictionary
        # was passed to the constructor.
        self.node_uuid = kwargs.get('node_uuid') or kwargs.get('node_id',
                                                               atypes.Unset)

    @staticmethod
    def _convert_with_links(target, url, fields=None):
        # NOTE: fields is accepted (and currently unused) so that sample()
        # can pass it the same way other v1 resources do.
        target.links = [link.Link.make_link('self', url,
                                            'volume/targets',
                                            target.uuid),
                        link.Link.make_link('bookmark', url,
                                            'volume/targets',
                                            target.uuid,
                                            bookmark=True)
                        ]
        return target

    @classmethod
    def convert_with_links(cls, rpc_target, fields=None, sanitize=True):
        target = VolumeTarget(**rpc_target.as_dict())

        if fields is not None:
            api_utils.check_for_invalid_fields(fields, target.as_dict())

        target = cls._convert_with_links(target, api.request.public_url)
        if not sanitize:
            return target

        target.sanitize(fields)

        return target

    def sanitize(self, fields=None):
        """Removes sensitive and unrequested data.

        Will only keep the fields specified in the ``fields`` parameter.

        :param fields:
            list of fields to preserve, or ``None`` to preserve them all
        :type fields: list of str
        """
        if fields is not None:
            self.unset_fields_except(fields)

        # never expose the node_id attribute
        self.node_id = atypes.Unset

    @classmethod
    def sample(cls, expand=True):
        time = datetime.datetime(2000, 1, 1, 12, 0, 0)
        properties = {"auth_method": "CHAP",
                      "auth_username": "XXX",
                      "auth_password": "XXX",
                      "target_iqn": "iqn.2010-10.com.example:vol-X",
                      "target_portal": "192.168.0.123:3260",
                      "volume_id": "a2f3ff15-b3ea-4656-ab90-acbaa1a07607",
                      "target_lun": 0,
                      "access_mode": "rw"}
        sample = cls(uuid='667808d4-622f-4629-b629-07753a19e633',
                     volume_type='iscsi',
                     boot_index=0,
                     volume_id='a2f3ff15-b3ea-4656-ab90-acbaa1a07607',
                     properties=properties,
                     extra={'foo': 'bar'},
                     created_at=time,
                     updated_at=time)
        sample._node_uuid = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae'
        fields = None if expand else _DEFAULT_RETURN_FIELDS
        return cls._convert_with_links(sample, 'http://localhost:6385',
                                       fields=fields)


class VolumeTargetPatchType(types.JsonPatchType):

    _api_base = VolumeTarget


class VolumeTargetCollection(collection.Collection):
    """API representation of a collection of volume targets."""

    targets = [VolumeTarget]
    """A list containing volume target objects"""

    def __init__(self, **kwargs):
        self._type = 'targets'

    @staticmethod
    def convert_with_links(rpc_targets, limit, url=None, fields=None,
                           detail=None, **kwargs):
        collection = VolumeTargetCollection()
        collection.targets = [
            VolumeTarget.convert_with_links(p, fields=fields,
                                            sanitize=False)
            for p in rpc_targets]
        if detail:
            kwargs['detail'] = detail
        collection.next = collection.get_next(limit, url=url, fields=fields,
                                              **kwargs)
        for target in collection.targets:
            target.sanitize(fields)
        return collection

    @classmethod
    def sample(cls):
        sample = cls()
        sample.targets = [VolumeTarget.sample(expand=False)]
        return sample


class VolumeTargetsController(rest.RestController):
    """REST controller for VolumeTargets."""

    invalid_sort_key_list = ['extra', 'properties']

    def __init__(self, node_ident=None):
        super(VolumeTargetsController, self).__init__()
        self.parent_node_ident = node_ident

    def _get_volume_targets_collection(self, node_ident, marker, limit,
                                       sort_key, sort_dir,
                                       resource_url=None,
                                       fields=None, detail=None):
        limit = api_utils.validate_limit(limit)
        sort_dir = api_utils.validate_sort_dir(sort_dir)

        marker_obj = None
        if marker:
            marker_obj = objects.VolumeTarget.get_by_uuid(
                api.request.context, marker)

        if sort_key in self.invalid_sort_key_list:
            raise exception.InvalidParameterValue(
                _("The sort_key value %(key)s is an invalid field for "
                  "sorting") % {'key': sort_key})

        node_ident = self.parent_node_ident or node_ident

        if node_ident:
            # FIXME(comstud): Since all we need is the node ID, we can
            #                 make this more efficient by only querying
            #                 for that column. This will get cleaned up
            #                 as we move to the object interface.
            node = api_utils.get_rpc_node(node_ident)
            targets = objects.VolumeTarget.list_by_node_id(
                api.request.context, node.id, limit, marker_obj,
                sort_key=sort_key, sort_dir=sort_dir)
        else:
            targets = objects.VolumeTarget.list(api.request.context,
                                                limit, marker_obj,
                                                sort_key=sort_key,
                                                sort_dir=sort_dir)
        return VolumeTargetCollection.convert_with_links(
            targets, limit, url=resource_url, fields=fields,
            sort_key=sort_key, sort_dir=sort_dir, detail=detail)

    @METRICS.timer('VolumeTargetsController.get_all')
    @expose.expose(VolumeTargetCollection, types.uuid_or_name,
                   types.uuid, int, str, str, types.listtype,
                   types.boolean)
    def get_all(self, node=None, marker=None, limit=None, sort_key='id',
                sort_dir='asc', fields=None, detail=None):
        """Retrieve a list of volume targets.

        :param node: UUID or name of a node, to get only volume targets
                     for that node.
        :param marker: pagination marker for large data sets.
        :param limit: maximum number of resources to return in a single
                      result. This value cannot be larger than the value of
                      max_limit in the [api] section of the ironic
                      configuration, or only max_limit resources will be
                      returned.
        :param sort_key: column to sort results by. Default: id.
        :param sort_dir: direction to sort. "asc" or "desc". Default: "asc".
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        :param detail: Optional, whether to retrieve with detail.
        :returns: a list of volume targets, or an empty list if no volume
                  target is found.
        :raises: InvalidParameterValue if sort_key does not exist
        :raises: InvalidParameterValue if sort key is invalid for sorting.
        :raises: InvalidParameterValue if both fields and detail are
                 specified.
        """
        cdict = api.request.context.to_policy_values()
        policy.authorize('baremetal:volume:get', cdict, cdict)

        if fields is None and not detail:
            fields = _DEFAULT_RETURN_FIELDS

        if fields and detail:
            raise exception.InvalidParameterValue(
                _("Can't fetch a subset of fields with 'detail' set"))

        resource_url = 'volume/targets'
        return self._get_volume_targets_collection(
            node, marker, limit, sort_key, sort_dir,
            resource_url=resource_url, fields=fields, detail=detail)

    @METRICS.timer('VolumeTargetsController.get_one')
    @expose.expose(VolumeTarget, types.uuid, types.listtype)
    def get_one(self, target_uuid, fields=None):
        """Retrieve information about the given volume target.

        :param target_uuid: UUID of a volume target.
        :param fields: Optional, a list with a specified set of fields
                       of the resource to be returned.
        :returns: API-serializable volume target object.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: VolumeTargetNotFound if no volume target with this UUID
                 exists
        """
        cdict = api.request.context.to_policy_values()
        policy.authorize('baremetal:volume:get', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        rpc_target = objects.VolumeTarget.get_by_uuid(
            api.request.context, target_uuid)
        return VolumeTarget.convert_with_links(rpc_target, fields=fields)

    @METRICS.timer('VolumeTargetsController.post')
    @expose.expose(VolumeTarget, body=VolumeTarget,
                   status_code=http_client.CREATED)
    def post(self, target):
        """Create a new volume target.

        :param target: a volume target within the request body.
        :returns: API-serializable volume target object.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: VolumeTargetBootIndexAlreadyExists if a volume target
                 already exists with the same node ID and boot index
        :raises: VolumeTargetAlreadyExists if a volume target with the same
                 UUID exists
        """
        context = api.request.context
        cdict = context.to_policy_values()
        policy.authorize('baremetal:volume:create', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        target_dict = target.as_dict()
        # NOTE(hshiina): UUID is mandatory for notification payload
        if not target_dict.get('uuid'):
            target_dict['uuid'] = uuidutils.generate_uuid()

        new_target = objects.VolumeTarget(context, **target_dict)

        notify.emit_start_notification(context, new_target, 'create',
                                       node_uuid=target.node_uuid)
        with notify.handle_error_notification(context, new_target,
                                              'create',
                                              node_uuid=target.node_uuid):
            new_target.create()
        notify.emit_end_notification(context, new_target, 'create',
                                     node_uuid=target.node_uuid)
        # Set the HTTP Location Header
        api.response.location = link.build_url('volume/targets',
                                               new_target.uuid)
        return VolumeTarget.convert_with_links(new_target)

    @METRICS.timer('VolumeTargetsController.patch')
    @wsme.validate(types.uuid, [VolumeTargetPatchType])
    @expose.expose(VolumeTarget, types.uuid,
                   body=[VolumeTargetPatchType])
    def patch(self, target_uuid, patch):
        """Update an existing volume target.

        :param target_uuid: UUID of a volume target.
        :param patch: a json PATCH document to apply to this volume target.
        :returns: API-serializable volume target object.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: PatchError if a given patch can not be applied.
        :raises: InvalidParameterValue if the volume target's UUID is being
                 changed
        :raises: NodeLocked if the node is already locked
        :raises: NodeNotFound if the node associated with the volume target
                 does not exist
        :raises: VolumeTargetNotFound if the volume target cannot be found
        :raises: VolumeTargetBootIndexAlreadyExists if a volume target
                 already exists with the same node ID and boot index values
        :raises: InvalidUUID if invalid node UUID is passed in the patch.
        :raises: InvalidStateRequested If a node associated with the
                 volume target is not powered off.
        """
        context = api.request.context
        cdict = context.to_policy_values()
        policy.authorize('baremetal:volume:update', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        values = api_utils.get_patch_values(patch, '/node_uuid')
        for value in values:
            if not uuidutils.is_uuid_like(value):
                message = _("Expected a UUID for node_uuid, but received "
                            "%(uuid)s.") % {'uuid': str(value)}
                raise exception.InvalidUUID(message=message)

        rpc_target = objects.VolumeTarget.get_by_uuid(context, target_uuid)
        target_dict = rpc_target.as_dict()
        # NOTE(smoriya):
        # 1) Remove node_id because it's an internal value and
        #    not present in the API object
        # 2) Add node_uuid
        target_dict['node_uuid'] = target_dict.pop('node_id', None)
        target = VolumeTarget(
            **api_utils.apply_jsonpatch(target_dict, patch))

        # Update only the fields that have changed.
        for field in objects.VolumeTarget.fields:
            try:
                patch_val = getattr(target, field)
            except AttributeError:
                # Ignore fields that aren't exposed in the API
                continue
            if patch_val == atypes.Unset:
                patch_val = None
            if rpc_target[field] != patch_val:
                rpc_target[field] = patch_val

        rpc_node = objects.Node.get_by_id(context, rpc_target.node_id)
        notify.emit_start_notification(context, rpc_target, 'update',
                                       node_uuid=rpc_node.uuid)
        with notify.handle_error_notification(context, rpc_target,
                                              'update',
                                              node_uuid=rpc_node.uuid):
            topic = api.request.rpcapi.get_topic_for(rpc_node)
            new_target = api.request.rpcapi.update_volume_target(
                context, rpc_target, topic)

        api_target = VolumeTarget.convert_with_links(new_target)
        notify.emit_end_notification(context, new_target, 'update',
                                     node_uuid=rpc_node.uuid)
        return api_target

    @METRICS.timer('VolumeTargetsController.delete')
    @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT)
    def delete(self, target_uuid):
        """Delete a volume target.

        :param target_uuid: UUID of a volume target.
        :raises: OperationNotPermitted if accessed with specifying a parent
                 node.
        :raises: NodeLocked if node is locked by another conductor
        :raises: NodeNotFound if the node associated with the target does
                 not exist
        :raises: VolumeTargetNotFound if the volume target cannot be found
        :raises: InvalidStateRequested If a node associated with the
                 volume target is not powered off.
        """
        context = api.request.context
        cdict = context.to_policy_values()
        policy.authorize('baremetal:volume:delete', cdict, cdict)

        if self.parent_node_ident:
            raise exception.OperationNotPermitted()

        rpc_target = objects.VolumeTarget.get_by_uuid(context, target_uuid)
        rpc_node = objects.Node.get_by_id(context, rpc_target.node_id)
        notify.emit_start_notification(context, rpc_target, 'delete',
                                       node_uuid=rpc_node.uuid)
        with notify.handle_error_notification(context, rpc_target,
                                              'delete',
                                              node_uuid=rpc_node.uuid):
            topic = api.request.rpcapi.get_topic_for(rpc_node)
            api.request.rpcapi.destroy_volume_target(context,
                                                     rpc_target, topic)
        notify.emit_end_notification(context, rpc_target, 'delete',
                                     node_uuid=rpc_node.uuid)

ironic-15.0.0/ironic/api/controllers/v1/portgroup.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
from http import client as http_client

from ironic_lib import metrics_utils
from oslo_utils import uuidutils
import pecan
import wsme

from ironic import api
from ironic.api.controllers import base
from ironic.api.controllers import link
from ironic.api.controllers.v1 import collection
from ironic.api.controllers.v1 import notification_utils as notify
from ironic.api.controllers.v1 import port
from ironic.api.controllers.v1 import types
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.api import types as atypes
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import policy
from ironic.common import states as ir_states
from ironic import objects

METRICS = metrics_utils.get_metrics_logger(__name__)

_DEFAULT_RETURN_FIELDS = ('uuid', 'address', 'name')


class Portgroup(base.APIBase):
    """API representation of a portgroup.

    This class enforces type checking and value constraints, and converts
    between the internal object model and the API representation of a
    portgroup.
    """

    _node_uuid = None

    def _get_node_uuid(self):
        return self._node_uuid

    def _set_node_uuid(self, value):
        if value and self._node_uuid != value:
            if not api_utils.allow_portgroups():
                self._node_uuid = atypes.Unset
                return
            try:
                node = objects.Node.get(api.request.context, value)
                self._node_uuid = node.uuid
                # NOTE: Create the node_id attribute on-the-fly
                #       to satisfy the api -> rpc object
                #       conversion.
                self.node_id = node.id
            except exception.NodeNotFound as e:
                # Change error code because 404 (NotFound) is inappropriate
                # response for a POST request to create a Portgroup
                e.code = http_client.BAD_REQUEST
                raise e
        elif value == atypes.Unset:
            self._node_uuid = atypes.Unset

    uuid = types.uuid
    """Unique UUID for this portgroup"""

    address = atypes.wsattr(types.macaddress)
    """MAC Address for this portgroup"""

    extra = {str: types.jsontype}
    """This portgroup's meta data"""

    internal_info = atypes.wsattr({str: types.jsontype}, readonly=True)
    """This portgroup's internal info"""

    node_uuid = atypes.wsproperty(types.uuid, _get_node_uuid,
                                  _set_node_uuid, mandatory=True)
    """The UUID of the node this portgroup belongs to"""

    name = atypes.wsattr(str)
    """The logical name for this portgroup"""

    links = atypes.wsattr([link.Link], readonly=True)
    """A list containing a self link and associated portgroup links"""

    standalone_ports_supported = types.boolean
    """Indicates whether ports of this portgroup may be used as
       single NIC ports"""

    mode = atypes.wsattr(str)
    """The mode for this portgroup. See linux bonding documentation
       for details:
       https://www.kernel.org/doc/Documentation/networking/bonding.txt"""

    properties = {str: types.jsontype}
    """This portgroup's properties"""

    ports = atypes.wsattr([link.Link], readonly=True)
    """Links to the collection of ports of this portgroup"""

    def __init__(self, **kwargs):
        self.fields = []
        fields = list(objects.Portgroup.fields)
        # NOTE: node_uuid is not part of objects.Portgroup.fields
        #       because it's an API-only attribute
        fields.append('node_uuid')
        for field in fields:
            # Skip fields we do not expose.
            if not hasattr(self, field):
                continue
            self.fields.append(field)
            setattr(self, field, kwargs.get(field, atypes.Unset))

        # NOTE: node_id is an attribute created on-the-fly
        # by _set_node_uuid(), it needs to be present in the fields so
        # that as_dict() will contain node_id field when converting it
        # before saving it in the database.
        self.fields.append('node_id')
        setattr(self, 'node_uuid', kwargs.get('node_id', atypes.Unset))

    @staticmethod
    def _convert_with_links(portgroup, url, fields=None):
        """Add links to the portgroup."""
        if fields is None:
            portgroup.ports = [
                link.Link.make_link('self', url, 'portgroups',
                                    portgroup.uuid + "/ports"),
                link.Link.make_link('bookmark', url, 'portgroups',
                                    portgroup.uuid + "/ports",
                                    bookmark=True)
            ]

        # never expose the node_id attribute
        portgroup.node_id = atypes.Unset

        portgroup.links = [link.Link.make_link('self', url,
                                               'portgroups',
                                               portgroup.uuid),
                           link.Link.make_link('bookmark', url,
                                               'portgroups',
                                               portgroup.uuid,
                                               bookmark=True)
                           ]
        return portgroup

    @classmethod
    def convert_with_links(cls, rpc_portgroup, fields=None, sanitize=True):
        """Add links to the portgroup."""
        portgroup = Portgroup(**rpc_portgroup.as_dict())

        if fields is not None:
            api_utils.check_for_invalid_fields(fields, portgroup.as_dict())

        portgroup = cls._convert_with_links(portgroup, api.request.host_url,
                                            fields=fields)
        if not sanitize:
            return portgroup

        portgroup.sanitize(fields)

        return portgroup

    def sanitize(self, fields=None):
        """Removes sensitive and unrequested data.

        Will only keep the fields specified in the ``fields`` parameter.
:param fields: list of fields to preserve, or ``None`` to preserve them all :type fields: list of str """ if fields is not None: self.unset_fields_except(fields) # never expose the node_id attribute self.node_id = atypes.Unset @classmethod def sample(cls, expand=True): """Return a sample of the portgroup.""" sample = cls(uuid='a594544a-2daf-420c-8775-17a8c3e0852f', address='fe:54:00:77:07:d9', name='node1-portgroup-01', extra={'foo': 'bar'}, internal_info={'baz': 'boo'}, standalone_ports_supported=True, mode='active-backup', properties={}, created_at=datetime.datetime(2000, 1, 1, 12, 0, 0), updated_at=datetime.datetime(2000, 1, 1, 12, 0, 0)) # NOTE(lucasagomes): node_uuid getter() method look at the # _node_uuid variable sample._node_uuid = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae' fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class PortgroupPatchType(types.JsonPatchType): _api_base = Portgroup _extra_non_removable_attrs = {'/mode'} @staticmethod def internal_attrs(): defaults = types.JsonPatchType.internal_attrs() return defaults + ['/internal_info'] class PortgroupCollection(collection.Collection): """API representation of a collection of portgroups.""" portgroups = [Portgroup] """A list containing portgroup objects""" def __init__(self, **kwargs): self._type = 'portgroups' @staticmethod def convert_with_links(rpc_portgroups, limit, url=None, fields=None, **kwargs): collection = PortgroupCollection() collection.portgroups = [Portgroup.convert_with_links(p, fields=fields, sanitize=False) for p in rpc_portgroups] collection.next = collection.get_next(limit, url=url, fields=fields, **kwargs) for item in collection.portgroups: item.sanitize(fields=fields) return collection @classmethod def sample(cls): """Return a sample of the portgroup.""" sample = cls() sample.portgroups = [Portgroup.sample(expand=False)] return sample class PortgroupsController(pecan.rest.RestController): """REST 
controller for portgroups.""" _custom_actions = { 'detail': ['GET'], } invalid_sort_key_list = ['extra', 'internal_info', 'properties'] _subcontroller_map = { 'ports': port.PortsController, } @pecan.expose() def _lookup(self, ident, *remainder): if not api_utils.allow_portgroups(): pecan.abort(http_client.NOT_FOUND) try: ident = types.uuid_or_name.validate(ident) except exception.InvalidUuidOrName as e: pecan.abort(http_client.BAD_REQUEST, e.args[0]) if not remainder: return subcontroller = self._subcontroller_map.get(remainder[0]) if subcontroller: if api_utils.allow_portgroups_subcontrollers(): return subcontroller( portgroup_ident=ident, node_ident=self.parent_node_ident), remainder[1:] pecan.abort(http_client.NOT_FOUND) def __init__(self, node_ident=None): super(PortgroupsController, self).__init__() self.parent_node_ident = node_ident def _get_portgroups_collection(self, node_ident, address, marker, limit, sort_key, sort_dir, resource_url=None, fields=None, detail=None): """Return portgroups collection. :param node_ident: UUID or name of a node. :param address: MAC address of a portgroup. :param marker: Pagination marker for large data sets. :param limit: Maximum number of resources to return in a single result. :param sort_key: Column to sort results by. Default: id. :param sort_dir: Direction to sort. "asc" or "desc". Default: asc. :param resource_url: Optional, URL to the portgroup resource. :param fields: Optional, a list with a specified set of fields of the resource to be returned. 
""" limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) marker_obj = None if marker: marker_obj = objects.Portgroup.get_by_uuid(api.request.context, marker) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) node_ident = self.parent_node_ident or node_ident if node_ident: # FIXME: Since all we need is the node ID, we can # make this more efficient by only querying # for that column. This will get cleaned up # as we move to the object interface. node = api_utils.get_rpc_node(node_ident) portgroups = objects.Portgroup.list_by_node_id( api.request.context, node.id, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir) elif address: portgroups = self._get_portgroups_by_address(address) else: portgroups = objects.Portgroup.list(api.request.context, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir) parameters = {} if detail is not None: parameters['detail'] = detail return PortgroupCollection.convert_with_links(portgroups, limit, url=resource_url, fields=fields, sort_key=sort_key, sort_dir=sort_dir, **parameters) def _get_portgroups_by_address(self, address): """Retrieve a portgroup by its address. :param address: MAC address of a portgroup, to get the portgroup which has this MAC address. :returns: a list with the portgroup, or an empty list if no portgroup is found. """ try: portgroup = objects.Portgroup.get_by_address(api.request.context, address) return [portgroup] except exception.PortgroupNotFound: return [] @METRICS.timer('PortgroupsController.get_all') @expose.expose(PortgroupCollection, types.uuid_or_name, types.macaddress, types.uuid, int, str, str, types.listtype, types.boolean) def get_all(self, node=None, address=None, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None, detail=None): """Retrieve a list of portgroups. 
:param node: UUID or name of a node, to get only portgroups for that node. :param address: MAC address of a portgroup, to get the portgroup which has this MAC address. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ if not api_utils.allow_portgroups(): raise exception.NotFound() cdict = api.request.context.to_policy_values() policy.authorize('baremetal:portgroup:get', cdict, cdict) api_utils.check_allowed_portgroup_fields(fields) api_utils.check_allowed_portgroup_fields([sort_key]) fields = api_utils.get_request_return_fields(fields, detail, _DEFAULT_RETURN_FIELDS) return self._get_portgroups_collection(node, address, marker, limit, sort_key, sort_dir, fields=fields, detail=detail) @METRICS.timer('PortgroupsController.detail') @expose.expose(PortgroupCollection, types.uuid_or_name, types.macaddress, types.uuid, int, str, str) def detail(self, node=None, address=None, marker=None, limit=None, sort_key='id', sort_dir='asc'): """Retrieve a list of portgroups with detail. :param node: UUID or name of a node, to get only portgroups for that node. :param address: MAC address of a portgroup, to get the portgroup which has this MAC address. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". 
Default: asc. """ if not api_utils.allow_portgroups(): raise exception.NotFound() cdict = api.request.context.to_policy_values() policy.authorize('baremetal:portgroup:get', cdict, cdict) api_utils.check_allowed_portgroup_fields([sort_key]) # NOTE: /detail should only work against collections parent = api.request.path.split('/')[:-1][-1] if parent != "portgroups": raise exception.HTTPNotFound() resource_url = '/'.join(['portgroups', 'detail']) return self._get_portgroups_collection( node, address, marker, limit, sort_key, sort_dir, resource_url=resource_url) @METRICS.timer('PortgroupsController.get_one') @expose.expose(Portgroup, types.uuid_or_name, types.listtype) def get_one(self, portgroup_ident, fields=None): """Retrieve information about the given portgroup. :param portgroup_ident: UUID or logical name of a portgroup. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ if not api_utils.allow_portgroups(): raise exception.NotFound() cdict = api.request.context.to_policy_values() policy.authorize('baremetal:portgroup:get', cdict, cdict) if self.parent_node_ident: raise exception.OperationNotPermitted() api_utils.check_allowed_portgroup_fields(fields) rpc_portgroup = api_utils.get_rpc_portgroup_with_suffix( portgroup_ident) return Portgroup.convert_with_links(rpc_portgroup, fields=fields) @METRICS.timer('PortgroupsController.post') @expose.expose(Portgroup, body=Portgroup, status_code=http_client.CREATED) def post(self, portgroup): """Create a new portgroup. :param portgroup: a portgroup within the request body. 
""" if not api_utils.allow_portgroups(): raise exception.NotFound() context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:portgroup:create', cdict, cdict) if self.parent_node_ident: raise exception.OperationNotPermitted() if (not api_utils.allow_portgroup_mode_properties() and (portgroup.mode is not atypes.Unset or portgroup.properties is not atypes.Unset)): raise exception.NotAcceptable() if (portgroup.name and not api_utils.is_valid_logical_name(portgroup.name)): error_msg = _("Cannot create portgroup with invalid name " "'%(name)s'") % {'name': portgroup.name} raise exception.ClientSideError( error_msg, status_code=http_client.BAD_REQUEST) pg_dict = portgroup.as_dict() api_utils.handle_post_port_like_extra_vif(pg_dict) # NOTE(yuriyz): UUID is mandatory for notifications payload if not pg_dict.get('uuid'): pg_dict['uuid'] = uuidutils.generate_uuid() new_portgroup = objects.Portgroup(context, **pg_dict) notify.emit_start_notification(context, new_portgroup, 'create', node_uuid=portgroup.node_uuid) with notify.handle_error_notification(context, new_portgroup, 'create', node_uuid=portgroup.node_uuid): new_portgroup.create() notify.emit_end_notification(context, new_portgroup, 'create', node_uuid=portgroup.node_uuid) # Set the HTTP Location Header api.response.location = link.build_url('portgroups', new_portgroup.uuid) return Portgroup.convert_with_links(new_portgroup) @METRICS.timer('PortgroupsController.patch') @wsme.validate(types.uuid_or_name, [PortgroupPatchType]) @expose.expose(Portgroup, types.uuid_or_name, body=[PortgroupPatchType]) def patch(self, portgroup_ident, patch): """Update an existing portgroup. :param portgroup_ident: UUID or logical name of a portgroup. :param patch: a json PATCH document to apply to this portgroup. 
""" if not api_utils.allow_portgroups(): raise exception.NotFound() context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:portgroup:update', cdict, cdict) if self.parent_node_ident: raise exception.OperationNotPermitted() if (not api_utils.allow_portgroup_mode_properties() and (api_utils.is_path_updated(patch, '/mode') or api_utils.is_path_updated(patch, '/properties'))): raise exception.NotAcceptable() rpc_portgroup = api_utils.get_rpc_portgroup_with_suffix( portgroup_ident) names = api_utils.get_patch_values(patch, '/name') for name in names: if (name and not api_utils.is_valid_logical_name(name)): error_msg = _("Portgroup %(portgroup)s: Cannot change name to" " invalid name '%(name)s'") % {'portgroup': portgroup_ident, 'name': name} raise exception.ClientSideError( error_msg, status_code=http_client.BAD_REQUEST) portgroup_dict = rpc_portgroup.as_dict() # NOTE: # 1) Remove node_id because it's an internal value and # not present in the API object # 2) Add node_uuid portgroup_dict['node_uuid'] = portgroup_dict.pop('node_id', None) portgroup = Portgroup(**api_utils.apply_jsonpatch(portgroup_dict, patch)) api_utils.handle_patch_port_like_extra_vif(rpc_portgroup, portgroup, patch) # Update only the fields that have changed for field in objects.Portgroup.fields: try: patch_val = getattr(portgroup, field) except AttributeError: # Ignore fields that aren't exposed in the API continue if patch_val == atypes.Unset: patch_val = None if rpc_portgroup[field] != patch_val: rpc_portgroup[field] = patch_val rpc_node = objects.Node.get_by_id(context, rpc_portgroup.node_id) if (rpc_node.provision_state == ir_states.INSPECTING and api_utils.allow_inspect_wait_state()): msg = _('Cannot update portgroup "%(portgroup)s" on node ' '"%(node)s" while it is in state "%(state)s".') % { 'portgroup': rpc_portgroup.uuid, 'node': rpc_node.uuid, 'state': ir_states.INSPECTING} raise exception.ClientSideError(msg, status_code=http_client.CONFLICT) 
notify.emit_start_notification(context, rpc_portgroup, 'update', node_uuid=rpc_node.uuid) with notify.handle_error_notification(context, rpc_portgroup, 'update', node_uuid=rpc_node.uuid): topic = api.request.rpcapi.get_topic_for(rpc_node) new_portgroup = api.request.rpcapi.update_portgroup( context, rpc_portgroup, topic) api_portgroup = Portgroup.convert_with_links(new_portgroup) notify.emit_end_notification(context, new_portgroup, 'update', node_uuid=api_portgroup.node_uuid) return api_portgroup @METRICS.timer('PortgroupsController.delete') @expose.expose(None, types.uuid_or_name, status_code=http_client.NO_CONTENT) def delete(self, portgroup_ident): """Delete a portgroup. :param portgroup_ident: UUID or logical name of a portgroup. """ if not api_utils.allow_portgroups(): raise exception.NotFound() context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:portgroup:delete', cdict, cdict) if self.parent_node_ident: raise exception.OperationNotPermitted() rpc_portgroup = api_utils.get_rpc_portgroup_with_suffix( portgroup_ident) rpc_node = objects.Node.get_by_id(api.request.context, rpc_portgroup.node_id) notify.emit_start_notification(context, rpc_portgroup, 'delete', node_uuid=rpc_node.uuid) with notify.handle_error_notification(context, rpc_portgroup, 'delete', node_uuid=rpc_node.uuid): topic = api.request.rpcapi.get_topic_for(rpc_node) api.request.rpcapi.destroy_portgroup(context, rpc_portgroup, topic) notify.emit_end_notification(context, rpc_portgroup, 'delete', node_uuid=rpc_node.uuid) ironic-15.0.0/ironic/api/controllers/v1/types.py0000664000175000017500000003757413652514273021600 0ustar zuulzuul00000000000000# coding: utf-8 # # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import inspect import json from oslo_log import log from oslo_utils import strutils from oslo_utils import uuidutils from ironic.api.controllers import base from ironic.api.controllers.v1 import utils as v1_utils from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.common import utils LOG = log.getLogger(__name__) class MacAddressType(atypes.UserType): """A simple MAC address type.""" basetype = str name = 'macaddress' @staticmethod def validate(value): return utils.validate_and_normalize_mac(value) @staticmethod def frombasetype(value): if value is None: return None return MacAddressType.validate(value) class UuidOrNameType(atypes.UserType): """A simple UUID or logical name type.""" basetype = str name = 'uuid_or_name' @staticmethod def validate(value): if not (uuidutils.is_uuid_like(value) or v1_utils.is_valid_logical_name(value)): raise exception.InvalidUuidOrName(name=value) return value @staticmethod def frombasetype(value): if value is None: return None return UuidOrNameType.validate(value) class NameType(atypes.UserType): """A simple logical name type.""" basetype = str name = 'name' @staticmethod def validate(value): if not v1_utils.is_valid_logical_name(value): raise exception.InvalidName(name=value) return value @staticmethod def frombasetype(value): if value is None: return None return NameType.validate(value) class UuidType(atypes.UserType): """A simple UUID type.""" basetype = str name = 'uuid' @staticmethod def validate(value): if not uuidutils.is_uuid_like(value): raise 
exception.InvalidUUID(uuid=value)
        return value

    @staticmethod
    def frombasetype(value):
        if value is None:
            return None
        return UuidType.validate(value)


class BooleanType(atypes.UserType):
    """A simple boolean type."""

    basetype = str
    name = 'boolean'

    @staticmethod
    def validate(value):
        try:
            return strutils.bool_from_string(value, strict=True)
        except ValueError as e:
            # raise Invalid to return 400 (BadRequest) in the API
            raise exception.Invalid(str(e))

    @staticmethod
    def frombasetype(value):
        if value is None:
            return None
        return BooleanType.validate(value)


class JsonType(atypes.UserType):
    """A simple JSON type."""

    basetype = str
    name = 'json'

    def __str__(self):
        # These are the json serializable native types
        return ' | '.join(map(str, (str, int, float, BooleanType,
                                    list, dict, None)))

    @staticmethod
    def validate(value):
        try:
            json.dumps(value)
        except TypeError:
            raise exception.Invalid(_('%s is not JSON serializable') % value)
        else:
            return value

    @staticmethod
    def frombasetype(value):
        return JsonType.validate(value)


class ListType(atypes.UserType):
    """A simple list type."""

    basetype = str
    name = 'list'

    @staticmethod
    def validate(value):
        """Validate and convert the input to a ListType.
:param value: A comma separated string of values :returns: A list of unique values (lower-cased), maintaining the same order """ items = [] for v in str(value).split(','): v_norm = v.strip().lower() if v_norm and v_norm not in items: items.append(v_norm) return items @staticmethod def frombasetype(value): if value is None: return None return ListType.validate(value) macaddress = MacAddressType() uuid_or_name = UuidOrNameType() name = NameType() uuid = UuidType() boolean = BooleanType() listtype = ListType() # Can't call it 'json' because that's the name of the stdlib module jsontype = JsonType() class JsonPatchType(base.Base): """A complex type that represents a single json-patch operation.""" path = atypes.wsattr(atypes.StringType(pattern='^(/[\\w-]+)+$'), mandatory=True) op = atypes.wsattr(atypes.Enum(str, 'add', 'replace', 'remove'), mandatory=True) value = atypes.wsattr(jsontype, default=atypes.Unset) # The class of the objects being patched. Override this in subclasses. # Should probably be a subclass of ironic.api.controllers.base.APIBase. _api_base = None # Attributes that are not required for construction, but which may not be # removed if set. Override in subclasses if needed. _extra_non_removable_attrs = set() # Set of non-removable attributes, calculated lazily. _non_removable_attrs = None @staticmethod def internal_attrs(): """Returns a list of internal attributes. Internal attributes can't be added, replaced or removed. This method may be overwritten by derived class. """ return ['/created_at', '/id', '/links', '/updated_at', '/uuid'] @classmethod def non_removable_attrs(cls): """Returns a set of names of attributes that may not be removed. Attributes whose 'mandatory' property is True are automatically added to this set. To add additional attributes to the set, override the field _extra_non_removable_attrs in subclasses, with a set of the form {'/foo', '/bar'}. 
""" if cls._non_removable_attrs is None: cls._non_removable_attrs = cls._extra_non_removable_attrs.copy() if cls._api_base: fields = inspect.getmembers(cls._api_base, lambda a: not inspect.isroutine(a)) for name, field in fields: if getattr(field, 'mandatory', False): cls._non_removable_attrs.add('/%s' % name) return cls._non_removable_attrs @staticmethod def validate(patch): _path = '/' + patch.path.split('/')[1] if _path in patch.internal_attrs(): msg = _("'%s' is an internal attribute and can not be updated") raise exception.ClientSideError(msg % patch.path) if patch.path in patch.non_removable_attrs() and patch.op == 'remove': msg = _("'%s' is a mandatory attribute and can not be removed") raise exception.ClientSideError(msg % patch.path) if patch.op != 'remove': if patch.value is atypes.Unset: msg = _("'add' and 'replace' operations need a value") raise exception.ClientSideError(msg) ret = {'path': patch.path, 'op': patch.op} if patch.value is not atypes.Unset: ret['value'] = patch.value return ret class LocalLinkConnectionType(atypes.UserType): """A type describing local link connection.""" basetype = atypes.DictType name = 'locallinkconnection' local_link_mandatory_fields = {'port_id', 'switch_id'} smart_nic_mandatory_fields = {'port_id', 'hostname'} mandatory_fields_list = [local_link_mandatory_fields, smart_nic_mandatory_fields] optional_fields = {'switch_info', 'network_type'} valid_fields = set.union(optional_fields, *mandatory_fields_list) valid_network_types = {'managed', 'unmanaged'} @staticmethod def validate(value): """Validate and convert the input to a LocalLinkConnectionType. :param value: A dictionary of values to validate, switch_id is a MAC address or an OpenFlow based datapath_id, switch_info is an optional field. Required Smart NIC fields are port_id and hostname. 
For example:: { 'switch_id': mac_or_datapath_id(), 'port_id': 'Ethernet3/1', 'switch_info': 'switch1' } Or for Smart NIC:: { 'port_id': 'rep0-0', 'hostname': 'host1-bf' } :returns: A dictionary. :raises: Invalid if some of the keys in the dictionary being validated are unknown, invalid, or some required ones are missing. """ atypes.DictType(str, str).validate(value) keys = set(value) # This is to workaround an issue when an API object is initialized from # RPC object, in which dictionary fields that are set to None become # empty dictionaries if not keys: return value invalid = keys - LocalLinkConnectionType.valid_fields if invalid: raise exception.Invalid(_('%s are invalid keys') % (invalid)) # If network_type is 'unmanaged', this is a network with no switch # management. i.e local_link_connection details are not required. if 'network_type' in keys: if (value['network_type'] not in LocalLinkConnectionType.valid_network_types): msg = _( 'Invalid network_type %(type)s, valid network_types are ' '%(valid_network_types)s.') % { 'type': value['network_type'], 'valid_network_types': LocalLinkConnectionType.valid_network_types} raise exception.Invalid(msg) if (value['network_type'] == 'unmanaged' and not (keys - {'network_type'})): # Only valid network_type 'unmanaged' is set, no for further # validation required. return value # Check any mandatory fields sets are present for mandatory_set in LocalLinkConnectionType.mandatory_fields_list: if mandatory_set <= keys: break else: msg = _('Missing mandatory keys. Required keys are ' '%(required_fields)s. Or in case of Smart NIC ' '%(smart_nic_required_fields)s. ' 'Submitted keys are %(keys)s .') % { 'required_fields': LocalLinkConnectionType.local_link_mandatory_fields, 'smart_nic_required_fields': LocalLinkConnectionType.smart_nic_mandatory_fields, 'keys': keys} raise exception.Invalid(msg) # Check switch_id is either a valid mac address or # OpenFlow datapath_id and normalize it. 
try: value['switch_id'] = utils.validate_and_normalize_mac( value['switch_id']) except exception.InvalidMAC: try: value['switch_id'] = utils.validate_and_normalize_datapath_id( value['switch_id']) except exception.InvalidDatapathID: raise exception.InvalidSwitchID(switch_id=value['switch_id']) except KeyError: # In Smart NIC case 'switch_id' is optional. pass return value @staticmethod def frombasetype(value): if value is None: return None return LocalLinkConnectionType.validate(value) @staticmethod def validate_for_smart_nic(value): """Validates Smart NIC field are present 'port_id' and 'hostname' :param value: local link information of type Dictionary. :return: True if both fields 'port_id' and 'hostname' are present in 'value', False otherwise. """ atypes.DictType(str, str).validate(value) keys = set(value) if LocalLinkConnectionType.smart_nic_mandatory_fields <= keys: return True return False locallinkconnectiontype = LocalLinkConnectionType() class VifType(JsonType): basetype = str name = 'viftype' mandatory_fields = {'id'} @staticmethod def validate(value): super(VifType, VifType).validate(value) keys = set(value) # Check all mandatory fields are present missing = VifType.mandatory_fields - keys if missing: msg = _('Missing mandatory keys: %s') % ', '.join(list(missing)) raise exception.Invalid(msg) UuidOrNameType.validate(value['id']) return value @staticmethod def frombasetype(value): if value is None: return None return VifType.validate(value) viftype = VifType() class EventType(atypes.UserType): """A simple Event type.""" basetype = atypes.DictType name = 'event' def _validate_network_port_event(value): """Validate network port event fields. 
:param value: A event dict :returns: value :raises: Invalid if network port event not in proper format """ validators = { 'port_id': UuidType.validate, 'mac_address': MacAddressType.validate, 'status': str, 'device_id': UuidType.validate, 'binding:host_id': UuidType.validate, 'binding:vnic_type': str } keys = set(value) net_keys = set(validators) net_mandatory_fields = {'port_id', 'mac_address', 'status'} # Check all keys are valid for network port event invalid = keys.difference(EventType.mandatory_fields.union(net_keys)) if invalid: raise exception.Invalid(_('%s are invalid keys') % ', '.join(invalid)) # Check all mandatory fields for network port event is present missing = net_mandatory_fields.difference(keys) if missing: raise exception.Invalid(_('Missing mandatory keys: %s') % ', '.join(missing)) # Check all values are of expected type for key in net_keys: if key in value: try: validators[key](value[key]) except Exception as e: msg = (_('Event validation failure for %(key)s. ' '%(message)s') % {'key': key, 'message': e}) raise exception.Invalid(msg) return value mandatory_fields = {'event'} event_validators = { 'network.bind_port': _validate_network_port_event, 'network.unbind_port': _validate_network_port_event, 'network.delete_port': _validate_network_port_event, } valid_events = set(event_validators) @staticmethod def validate(value): """Validate the input :param value: A event dict :returns: value :raises: Invalid if event not in proper format """ atypes.DictType(str, str).validate(value) keys = set(value) # Check all mandatory fields are present missing = EventType.mandatory_fields.difference(keys) if missing: raise exception.Invalid(_('Missing mandatory keys: %s') % ', '.join(missing)) # Check event is a supported event if value['event'] not in EventType.valid_events: raise exception.Invalid( _('%(event)s is not one of valid events: %(valid_events)s.') % {'event': value['event'], 'valid_events': ', '.join(EventType.valid_events)}) return 
EventType.event_validators[value['event']](value) eventtype = EventType() ironic-15.0.0/ironic/api/controllers/v1/volume.py0000664000175000017500000000677613652514273021743 0ustar zuulzuul00000000000000# Copyright (c) 2017 Hitachi, Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from http import client as http_client import pecan from pecan import rest from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import utils as api_utils from ironic.api.controllers.v1 import volume_connector from ironic.api.controllers.v1 import volume_target from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common import policy class Volume(base.APIBase): """API representation of a volume root. This class exists as a root class for the volume connectors and volume targets controllers. 
""" links = atypes.wsattr([link.Link], readonly=True) """A list containing a self link and associated volume links""" connectors = atypes.wsattr([link.Link], readonly=True) """Links to the volume connectors resource""" targets = atypes.wsattr([link.Link], readonly=True) """Links to the volume targets resource""" @staticmethod def convert(node_ident=None): url = api.request.public_url volume = Volume() if node_ident: resource = 'nodes' args = '%s/volume/' % node_ident else: resource = 'volume' args = '' volume.links = [ link.Link.make_link('self', url, resource, args), link.Link.make_link('bookmark', url, resource, args, bookmark=True)] volume.connectors = [ link.Link.make_link('self', url, resource, args + 'connectors'), link.Link.make_link('bookmark', url, resource, args + 'connectors', bookmark=True)] volume.targets = [ link.Link.make_link('self', url, resource, args + 'targets'), link.Link.make_link('bookmark', url, resource, args + 'targets', bookmark=True)] return volume class VolumeController(rest.RestController): """REST controller for volume root""" _subcontroller_map = { 'connectors': volume_connector.VolumeConnectorsController, 'targets': volume_target.VolumeTargetsController } def __init__(self, node_ident=None): super(VolumeController, self).__init__() self.parent_node_ident = node_ident @expose.expose(Volume) def get(self): if not api_utils.allow_volume(): raise exception.NotFound() cdict = api.request.context.to_policy_values() policy.authorize('baremetal:volume:get', cdict, cdict) return Volume.convert(self.parent_node_ident) @pecan.expose() def _lookup(self, subres, *remainder): if not api_utils.allow_volume(): pecan.abort(http_client.NOT_FOUND) subcontroller = self._subcontroller_map.get(subres) if subcontroller: return subcontroller(node_ident=self.parent_node_ident), remainder ironic-15.0.0/ironic/api/controllers/v1/port.py0000664000175000017500000010225313652514273021403 0ustar zuulzuul00000000000000# Copyright 2013 UnitedStack Inc. 
# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from http import client as http_client from ironic_lib import metrics_utils from oslo_log import log from oslo_utils import uuidutils from pecan import rest import wsme from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import notification_utils as notify from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.common import policy from ironic.common import states as ir_states from ironic import objects METRICS = metrics_utils.get_metrics_logger(__name__) LOG = log.getLogger(__name__) _DEFAULT_RETURN_FIELDS = ('uuid', 'address') def hide_fields_in_newer_versions(obj): # if requested version is < 1.18, hide internal_info field if not api_utils.allow_port_internal_info(): obj.internal_info = atypes.Unset # if requested version is < 1.19, hide local_link_connection and # pxe_enabled fields if not api_utils.allow_port_advanced_net_fields(): obj.pxe_enabled = atypes.Unset obj.local_link_connection = atypes.Unset # if requested version is < 1.24, hide portgroup_uuid field if not api_utils.allow_portgroups_subcontrollers(): obj.portgroup_uuid = atypes.Unset # if requested version is 
< 1.34, hide physical_network field. if not api_utils.allow_port_physical_network(): obj.physical_network = atypes.Unset # if requested version is < 1.53, hide is_smartnic field. if not api_utils.allow_port_is_smartnic(): obj.is_smartnic = atypes.Unset class Port(base.APIBase): """API representation of a port. This class enforces type checking and value constraints, and converts between the internal object model and the API representation of a port. """ _node_uuid = None _portgroup_uuid = None def _get_node_uuid(self): return self._node_uuid def _set_node_uuid(self, value): if value and self._node_uuid != value: try: # FIXME(comstud): One should only allow UUID here, but # there seems to be a bug in that tests are passing an # ID. See bug #1301046 for more details. node = objects.Node.get(api.request.context, value) self._node_uuid = node.uuid # NOTE(lucasagomes): Create the node_id attribute on-the-fly # to satisfy the api -> rpc object # conversion. self.node_id = node.id except exception.NodeNotFound as e: # Change error code because 404 (NotFound) is inappropriate # response for a POST request to create a Port e.code = http_client.BAD_REQUEST # BadRequest raise elif value == atypes.Unset: self._node_uuid = atypes.Unset def _get_portgroup_uuid(self): return self._portgroup_uuid def _set_portgroup_uuid(self, value): if value and self._portgroup_uuid != value: if not api_utils.allow_portgroups_subcontrollers(): self._portgroup_uuid = atypes.Unset return try: portgroup = objects.Portgroup.get(api.request.context, value) if portgroup.node_id != self.node_id: raise exception.BadRequest(_('Port can not be added to a ' 'portgroup belonging to a ' 'different node.')) self._portgroup_uuid = portgroup.uuid # NOTE(lucasagomes): Create the portgroup_id attribute # on-the-fly to satisfy the api -> # rpc object conversion. 
self.portgroup_id = portgroup.id except exception.PortgroupNotFound as e: # Change error code because 404 (NotFound) is inappropriate # response for a POST request to create a Port e.code = http_client.BAD_REQUEST # BadRequest raise e elif value == atypes.Unset: self._portgroup_uuid = atypes.Unset elif value is None and api_utils.allow_portgroups_subcontrollers(): # This is to output portgroup_uuid field if API version allows this self._portgroup_uuid = None uuid = types.uuid """Unique UUID for this port""" address = atypes.wsattr(types.macaddress, mandatory=True) """MAC Address for this port""" extra = {str: types.jsontype} """This port's meta data""" internal_info = atypes.wsattr({str: types.jsontype}, readonly=True) """This port's internal information maintained by ironic""" node_uuid = atypes.wsproperty(types.uuid, _get_node_uuid, _set_node_uuid, mandatory=True) """The UUID of the node this port belongs to""" portgroup_uuid = atypes.wsproperty(types.uuid, _get_portgroup_uuid, _set_portgroup_uuid, mandatory=False) """The UUID of the portgroup this port belongs to""" pxe_enabled = types.boolean """Indicates whether pxe is enabled or disabled on the node.""" local_link_connection = types.locallinkconnectiontype """The port binding profile for the port""" physical_network = atypes.StringType(max_length=64) """The name of the physical network to which this port is connected.""" links = atypes.wsattr([link.Link], readonly=True) """A list containing a self link and associated port links""" is_smartnic = types.boolean """Indicates whether this port is a Smart NIC port.""" def __init__(self, **kwargs): self.fields = [] fields = list(objects.Port.fields) # NOTE(lucasagomes): node_uuid is not part of objects.Port.fields # because it's an API-only attribute fields.append('node_uuid') # NOTE: portgroup_uuid is not part of objects.Port.fields # because it's an API-only attribute fields.append('portgroup_uuid') for field in fields: # Add fields we expose. 
if hasattr(self, field): self.fields.append(field) setattr(self, field, kwargs.get(field, atypes.Unset)) # NOTE(lucasagomes): node_id is an attribute created on-the-fly # by _set_node_uuid(), it needs to be present in the fields so # that as_dict() will contain node_id field when converting it # before saving it in the database. self.fields.append('node_id') setattr(self, 'node_uuid', kwargs.get('node_id', atypes.Unset)) # NOTE: portgroup_id is an attribute created on-the-fly # by _set_portgroup_uuid(), it needs to be present in the fields so # that as_dict() will contain portgroup_id field when converting it # before saving it in the database. self.fields.append('portgroup_id') setattr(self, 'portgroup_uuid', kwargs.get('portgroup_id', atypes.Unset)) @classmethod def convert_with_links(cls, rpc_port, fields=None, sanitize=True): port = Port(**rpc_port.as_dict()) port._validate_fields(fields) url = api.request.public_url port.links = [link.Link.make_link('self', url, 'ports', port.uuid), link.Link.make_link('bookmark', url, 'ports', port.uuid, bookmark=True) ] if not sanitize: return port port.sanitize(fields=fields) return port def _validate_fields(self, fields=None): if fields is not None: api_utils.check_for_invalid_fields(fields, self.as_dict()) def sanitize(self, fields=None): """Removes sensitive and unrequested data. Will only keep the fields specified in the ``fields`` parameter. 
:param fields: list of fields to preserve, or ``None`` to preserve them all :type fields: list of str """ hide_fields_in_newer_versions(self) if fields is not None: self.unset_fields_except(fields) # never expose the node_id attribute self.node_id = atypes.Unset # never expose the portgroup_id attribute self.portgroup_id = atypes.Unset @classmethod def sample(cls, expand=True): time = datetime.datetime(2000, 1, 1, 12, 0, 0) sample = cls(uuid='27e3153e-d5bf-4b7e-b517-fb518e17f34c', address='fe:54:00:77:07:d9', extra={'foo': 'bar'}, internal_info={}, created_at=time, updated_at=time, pxe_enabled=True, local_link_connection={ 'switch_info': 'host', 'port_id': 'Gig0/1', 'switch_id': 'aa:bb:cc:dd:ee:ff'}, physical_network='physnet1', is_smartnic=False) # NOTE(lucasagomes): node_uuid getter() method look at the # _node_uuid variable sample._node_uuid = '7ae81bb3-dec3-4289-8d6c-da80bd8001ae' sample._portgroup_uuid = '037d9a52-af89-4560-b5a3-a33283295ba2' fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class PortPatchType(types.JsonPatchType): _api_base = Port @staticmethod def internal_attrs(): defaults = types.JsonPatchType.internal_attrs() return defaults + ['/internal_info'] class PortCollection(collection.Collection): """API representation of a collection of ports.""" ports = [Port] """A list containing ports objects""" def __init__(self, **kwargs): self._type = 'ports' @staticmethod def convert_with_links(rpc_ports, limit, url=None, fields=None, **kwargs): collection = PortCollection() collection.ports = [] for rpc_port in rpc_ports: try: port = Port.convert_with_links(rpc_port, fields=fields, sanitize=False) except exception.NodeNotFound: # NOTE(dtantsur): node was deleted after we fetched the port # list, meaning that the port was also deleted. Skip it. 
LOG.debug('Skipping port %s as its node was deleted', rpc_port.uuid) continue except exception.PortgroupNotFound: # NOTE(dtantsur): port group was deleted after we fetched the # port list, it may mean that the port was deleted too, but # we don't know it. Pretend that the port group was removed. LOG.debug('Removing port group UUID from port %s as the port ' 'group was deleted', rpc_port.uuid) rpc_port.portgroup_id = None port = Port.convert_with_links(rpc_port, fields=fields, sanitize=False) collection.ports.append(port) collection.next = collection.get_next(limit, url=url, fields=fields, **kwargs) for item in collection.ports: item.sanitize(fields=fields) return collection @classmethod def sample(cls): sample = cls() sample.ports = [Port.sample(expand=False)] return sample class PortsController(rest.RestController): """REST controller for Ports.""" _custom_actions = { 'detail': ['GET'], } invalid_sort_key_list = ['extra', 'internal_info', 'local_link_connection'] advanced_net_fields = ['pxe_enabled', 'local_link_connection'] def __init__(self, node_ident=None, portgroup_ident=None): super(PortsController, self).__init__() self.parent_node_ident = node_ident self.parent_portgroup_ident = portgroup_ident def _get_ports_collection(self, node_ident, address, portgroup_ident, marker, limit, sort_key, sort_dir, resource_url=None, fields=None, detail=None, owner=None): limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) marker_obj = None if marker: marker_obj = objects.Port.get_by_uuid(api.request.context, marker) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) node_ident = self.parent_node_ident or node_ident portgroup_ident = self.parent_portgroup_ident or portgroup_ident if node_ident and portgroup_ident: raise exception.OperationNotPermitted() if portgroup_ident: # FIXME: Since all we need is the portgroup ID, 
we can # make this more efficient by only querying # for that column. This will get cleaned up # as we move to the object interface. portgroup = api_utils.get_rpc_portgroup(portgroup_ident) ports = objects.Port.list_by_portgroup_id(api.request.context, portgroup.id, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir, owner=owner) elif node_ident: # FIXME(comstud): Since all we need is the node ID, we can # make this more efficient by only querying # for that column. This will get cleaned up # as we move to the object interface. node = api_utils.get_rpc_node(node_ident) ports = objects.Port.list_by_node_id(api.request.context, node.id, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir, owner=owner) elif address: ports = self._get_ports_by_address(address, owner=owner) else: ports = objects.Port.list(api.request.context, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir, owner=owner) parameters = {} if detail is not None: parameters['detail'] = detail return PortCollection.convert_with_links(ports, limit, url=resource_url, fields=fields, sort_key=sort_key, sort_dir=sort_dir, **parameters) def _get_ports_by_address(self, address, owner=None): """Retrieve a port by its address. :param address: MAC address of a port, to get the port which has this MAC address. :returns: a list with the port, or an empty list if no port is found. """ try: port = objects.Port.get_by_address(api.request.context, address, owner=owner) return [port] except exception.PortNotFound: return [] def _check_allowed_port_fields(self, fields): """Check if fetching a particular field of a port is allowed. Check if the required version is being requested for fields that are only allowed to be fetched in a particular API version. 
:param fields: list or set of fields to check :raises: NotAcceptable if a field is not allowed """ if fields is None: return if (not api_utils.allow_port_advanced_net_fields() and set(fields).intersection(self.advanced_net_fields)): raise exception.NotAcceptable() if ('portgroup_uuid' in fields and not api_utils.allow_portgroups_subcontrollers()): raise exception.NotAcceptable() if ('physical_network' in fields and not api_utils.allow_port_physical_network()): raise exception.NotAcceptable() if ('is_smartnic' in fields and not api_utils.allow_port_is_smartnic()): raise exception.NotAcceptable() if ('local_link_connection/network_type' in fields and not api_utils.allow_local_link_connection_network_type()): raise exception.NotAcceptable() if (isinstance(fields, dict) and fields.get('local_link_connection') is not None): if (not api_utils.allow_local_link_connection_network_type() and 'network_type' in fields['local_link_connection']): raise exception.NotAcceptable() @METRICS.timer('PortsController.get_all') @expose.expose(PortCollection, types.uuid_or_name, types.uuid, types.macaddress, types.uuid, int, str, str, types.listtype, types.uuid_or_name, types.boolean) def get_all(self, node=None, node_uuid=None, address=None, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None, portgroup=None, detail=None): """Retrieve a list of ports. Note that the 'node_uuid' interface is deprecated in favour of the 'node' interface :param node: UUID or name of a node, to get only ports for that node. :param node_uuid: UUID of a node, to get only ports for that node. :param address: MAC address of a port, to get the port which has this MAC address. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. 
Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. :param portgroup: UUID or name of a portgroup, to get only ports for that portgroup. :raises: NotAcceptable, HTTPNotFound """ owner = api_utils.check_port_list_policy() api_utils.check_allow_specify_fields(fields) self._check_allowed_port_fields(fields) self._check_allowed_port_fields([sort_key]) if portgroup and not api_utils.allow_portgroups_subcontrollers(): raise exception.NotAcceptable() fields = api_utils.get_request_return_fields(fields, detail, _DEFAULT_RETURN_FIELDS) if not node_uuid and node: # We're invoking this interface using positional notation, or # explicitly using 'node'. Try and determine which one. # Make sure only one interface, node or node_uuid is used if (not api_utils.allow_node_logical_names() and not uuidutils.is_uuid_like(node)): raise exception.NotAcceptable() return self._get_ports_collection(node_uuid or node, address, portgroup, marker, limit, sort_key, sort_dir, fields=fields, detail=detail, owner=owner) @METRICS.timer('PortsController.detail') @expose.expose(PortCollection, types.uuid_or_name, types.uuid, types.macaddress, types.uuid, int, str, str, types.uuid_or_name) def detail(self, node=None, node_uuid=None, address=None, marker=None, limit=None, sort_key='id', sort_dir='asc', portgroup=None): """Retrieve a list of ports with detail. Note that the 'node_uuid' interface is deprecated in favour of the 'node' interface :param node: UUID or name of a node, to get only ports for that node. :param node_uuid: UUID of a node, to get only ports for that node. :param address: MAC address of a port, to get the port which has this MAC address. :param portgroup: UUID or name of a portgroup, to get only ports for that portgroup. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. 
This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :raises: NotAcceptable, HTTPNotFound """ owner = api_utils.check_port_list_policy() self._check_allowed_port_fields([sort_key]) if portgroup and not api_utils.allow_portgroups_subcontrollers(): raise exception.NotAcceptable() if not node_uuid and node: # We're invoking this interface using positional notation, or # explicitly using 'node'. Try and determine which one. # Make sure only one interface, node or node_uuid is used if (not api_utils.allow_node_logical_names() and not uuidutils.is_uuid_like(node)): raise exception.NotAcceptable() # NOTE(lucasagomes): /detail should only work against collections parent = api.request.path.split('/')[:-1][-1] if parent != "ports": raise exception.HTTPNotFound() resource_url = '/'.join(['ports', 'detail']) return self._get_ports_collection(node_uuid or node, address, portgroup, marker, limit, sort_key, sort_dir, resource_url, owner=owner) @METRICS.timer('PortsController.get_one') @expose.expose(Port, types.uuid, types.listtype) def get_one(self, port_uuid, fields=None): """Retrieve information about the given port. :param port_uuid: UUID of a port. :param fields: Optional, a list with a specified set of fields of the resource to be returned. :raises: NotAcceptable, HTTPNotFound """ if self.parent_node_ident or self.parent_portgroup_ident: raise exception.OperationNotPermitted() rpc_port, rpc_node = api_utils.check_port_policy_and_retrieve( 'baremetal:port:get', port_uuid) api_utils.check_allow_specify_fields(fields) self._check_allowed_port_fields(fields) return Port.convert_with_links(rpc_port, fields=fields) @METRICS.timer('PortsController.post') @expose.expose(Port, body=Port, status_code=http_client.CREATED) def post(self, port): """Create a new port. 
:param port: a port within the request body. :raises: NotAcceptable, HTTPNotFound, Conflict """ if self.parent_node_ident or self.parent_portgroup_ident: raise exception.OperationNotPermitted() context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:port:create', cdict, cdict) pdict = port.as_dict() self._check_allowed_port_fields(pdict) if (port.is_smartnic and not types.locallinkconnectiontype .validate_for_smart_nic(port.local_link_connection)): raise exception.Invalid( "Smart NIC port must have port_id " "and hostname in local_link_connection") create_remotely = api.request.rpcapi.can_send_create_port() if (not create_remotely and pdict.get('portgroup_uuid')): # NOTE(mgoddard): In RPC API v1.41, port creation was moved to the # conductor service to facilitate validation of the physical # network field of ports in portgroups. During a rolling upgrade, # the RPCAPI will reject the create_port method, so we need to # create the port locally. If the port is a member of a portgroup, # we are unable to perform the validation and must reject the # request. raise exception.NotAcceptable() vif = api_utils.handle_post_port_like_extra_vif(pdict) if (pdict.get('portgroup_uuid') and (pdict.get('pxe_enabled') or vif)): rpc_pg = objects.Portgroup.get_by_uuid(context, pdict['portgroup_uuid']) if not rpc_pg.standalone_ports_supported: msg = _("Port group %s doesn't support standalone ports. 
" "This port cannot be created as a member of that " "port group because either 'extra/vif_port_id' " "was specified or 'pxe_enabled' was set to True.") raise exception.Conflict( msg % pdict['portgroup_uuid']) # NOTE(yuriyz): UUID is mandatory for notifications payload if not pdict.get('uuid'): pdict['uuid'] = uuidutils.generate_uuid() rpc_port = objects.Port(context, **pdict) rpc_node = objects.Node.get_by_id(context, rpc_port.node_id) notify_extra = {'node_uuid': port.node_uuid, 'portgroup_uuid': port.portgroup_uuid} notify.emit_start_notification(context, rpc_port, 'create', **notify_extra) with notify.handle_error_notification(context, rpc_port, 'create', **notify_extra): # NOTE(mgoddard): In RPC API v1.41, port creation was moved to the # conductor service to facilitate validation of the physical # network field of ports in portgroups. During a rolling upgrade, # the RPCAPI will reject the create_port method, so we need to # create the port locally. if create_remotely: topic = api.request.rpcapi.get_topic_for(rpc_node) new_port = api.request.rpcapi.create_port(context, rpc_port, topic) else: rpc_port.create() new_port = rpc_port notify.emit_end_notification(context, new_port, 'create', **notify_extra) # Set the HTTP Location Header api.response.location = link.build_url('ports', new_port.uuid) return Port.convert_with_links(new_port) @METRICS.timer('PortsController.patch') @wsme.validate(types.uuid, [PortPatchType]) @expose.expose(Port, types.uuid, body=[PortPatchType]) def patch(self, port_uuid, patch): """Update an existing port. :param port_uuid: UUID of a port. :param patch: a json PATCH document to apply to this port. 
:raises: NotAcceptable, HTTPNotFound """ if self.parent_node_ident or self.parent_portgroup_ident: raise exception.OperationNotPermitted() rpc_port, rpc_node = api_utils.check_port_policy_and_retrieve( 'baremetal:port:update', port_uuid) context = api.request.context fields_to_check = set() for field in (self.advanced_net_fields + ['portgroup_uuid', 'physical_network', 'is_smartnic', 'local_link_connection/network_type']): field_path = '/%s' % field if (api_utils.get_patch_values(patch, field_path) or api_utils.is_path_removed(patch, field_path)): fields_to_check.add(field) self._check_allowed_port_fields(fields_to_check) port_dict = rpc_port.as_dict() # NOTE(lucasagomes): # 1) Remove node_id because it's an internal value and # not present in the API object # 2) Add node_uuid port_dict['node_uuid'] = port_dict.pop('node_id', None) # NOTE(vsaienko): # 1) Remove portgroup_id because it's an internal value and # not present in the API object # 2) Add portgroup_uuid port_dict['portgroup_uuid'] = port_dict.pop('portgroup_id', None) port = Port(**api_utils.apply_jsonpatch(port_dict, patch)) api_utils.handle_patch_port_like_extra_vif(rpc_port, port, patch) if api_utils.is_path_removed(patch, '/portgroup_uuid'): rpc_port.portgroup_id = None # Update only the fields that have changed for field in objects.Port.fields: try: patch_val = getattr(port, field) except AttributeError: # Ignore fields that aren't exposed in the API continue if patch_val == atypes.Unset: patch_val = None if rpc_port[field] != patch_val: rpc_port[field] = patch_val if (rpc_node.provision_state == ir_states.INSPECTING and api_utils.allow_inspect_wait_state()): msg = _('Cannot update port "%(port)s" on "%(node)s" while it is ' 'in state "%(state)s".') % {'port': rpc_port.uuid, 'node': rpc_node.uuid, 'state': ir_states.INSPECTING} raise exception.ClientSideError(msg, status_code=http_client.CONFLICT) notify_extra = {'node_uuid': rpc_node.uuid, 'portgroup_uuid': port.portgroup_uuid} 
notify.emit_start_notification(context, rpc_port, 'update', **notify_extra) with notify.handle_error_notification(context, rpc_port, 'update', **notify_extra): topic = api.request.rpcapi.get_topic_for(rpc_node) new_port = api.request.rpcapi.update_port(context, rpc_port, topic) api_port = Port.convert_with_links(new_port) notify.emit_end_notification(context, new_port, 'update', **notify_extra) return api_port @METRICS.timer('PortsController.delete') @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT) def delete(self, port_uuid): """Delete a port. :param port_uuid: UUID of a port. :raises: OperationNotPermitted, HTTPNotFound """ if self.parent_node_ident or self.parent_portgroup_ident: raise exception.OperationNotPermitted() rpc_port, rpc_node = api_utils.check_port_policy_and_retrieve( 'baremetal:port:delete', port_uuid) context = api.request.context portgroup_uuid = None if rpc_port.portgroup_id: portgroup = objects.Portgroup.get_by_id(context, rpc_port.portgroup_id) portgroup_uuid = portgroup.uuid notify_extra = {'node_uuid': rpc_node.uuid, 'portgroup_uuid': portgroup_uuid} notify.emit_start_notification(context, rpc_port, 'delete', **notify_extra) with notify.handle_error_notification(context, rpc_port, 'delete', **notify_extra): topic = api.request.rpcapi.get_topic_for(rpc_node) api.request.rpcapi.destroy_port(context, rpc_port, topic) notify.emit_end_notification(context, rpc_port, 'delete', **notify_extra) ironic-15.0.0/ironic/api/controllers/v1/state.py0000664000175000017500000000200113652514273021525 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironic.api.controllers import base
from ironic.api.controllers import link


class State(base.APIBase):
    current = str
    """The current state"""

    target = str
    """The user modified desired state"""

    available = [str]
    """A list of available states it is able to transition to"""

    links = [link.Link]
    """A list containing a self link and associated state links"""
ironic-15.0.0/ironic/api/controllers/v1/ramdisk.py0000664000175000017500000002235513652514273022055 0ustar zuulzuul00000000000000# Copyright 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
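The `PortsController.patch` flow above applies a JSON PATCH document to a dict copy of the record and then writes back only the fields that actually changed. A minimal stand-alone sketch of that pattern follows; `apply_jsonpatch` here is a deliberately simplified stand-in for ironic's `api_utils.apply_jsonpatch` (it handles only flat `/field` paths and, unlike RFC 6902, tolerates removing a missing member), and the sample values are illustrative only.

```python
def apply_jsonpatch(doc, patch):
    """Apply add/replace/remove ops with single-level paths to a dict copy."""
    result = dict(doc)
    for op in patch:
        field = op['path'].lstrip('/')
        if op['op'] in ('add', 'replace'):
            result[field] = op['value']
        elif op['op'] == 'remove':
            # Lenient: a real JSON Patch implementation errors on a
            # missing member; this sketch simply ignores it.
            result.pop(field, None)
        else:
            raise ValueError('unsupported op: %s' % op['op'])
    return result


def changed_fields(stored, patched):
    """Yield (field, value) pairs whose value differs from the stored record."""
    for field, value in patched.items():
        if stored.get(field) != value:
            yield field, value


# Illustrative port record and patch document.
stored = {'uuid': '27e3153e-d5bf-4b7e-b517-fb518e17f34c',
          'address': 'aa:bb:cc:dd:ee:01',
          'pxe_enabled': True}
patch = [{'op': 'replace', 'path': '/pxe_enabled', 'value': False}]

# Mirrors the controller's "update only the fields that have changed" loop:
# only the modified field ends up in the set written back to the RPC object.
updates = dict(changed_fields(stored, apply_jsonpatch(stored, patch)))
```

The two-step shape (patch a copy, then diff against the original) is what lets the controller skip RPC-object writes for untouched fields.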
from http import client as http_client

from oslo_config import cfg
from oslo_log import log
from pecan import rest

from ironic import api
from ironic.api.controllers import base
from ironic.api.controllers.v1 import node as node_ctl
from ironic.api.controllers.v1 import types
from ironic.api.controllers.v1 import utils as api_utils
from ironic.api import expose
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import policy
from ironic.common import states
from ironic.common import utils
from ironic import objects

CONF = cfg.CONF
LOG = log.getLogger(__name__)

_LOOKUP_RETURN_FIELDS = ('uuid', 'properties', 'instance_info',
                         'driver_internal_info')


def config(token):
    return {
        'metrics': {
            'backend': CONF.metrics.agent_backend,
            'prepend_host': CONF.metrics.agent_prepend_host,
            'prepend_uuid': CONF.metrics.agent_prepend_uuid,
            'prepend_host_reverse': CONF.metrics.agent_prepend_host_reverse,
            'global_prefix': CONF.metrics.agent_global_prefix
        },
        'metrics_statsd': {
            'statsd_host': CONF.metrics_statsd.agent_statsd_host,
            'statsd_port': CONF.metrics_statsd.agent_statsd_port
        },
        'heartbeat_timeout': CONF.api.ramdisk_heartbeat_timeout,
        'agent_token': token,
        # Not an API-version-based indicator; passed as configuration
        # because its significance indicates support should also be present.
'agent_token_required': CONF.require_agent_token, } class LookupResult(base.APIBase): """API representation of the node lookup result.""" node = node_ctl.Node """The short node representation.""" config = {str: types.jsontype} """The configuration to pass to the ramdisk.""" @classmethod def sample(cls): return cls(node=node_ctl.Node.sample(), config={'heartbeat_timeout': 600}) @classmethod def convert_with_links(cls, node): token = node.driver_internal_info.get('agent_secret_token') node = node_ctl.Node.convert_with_links(node, _LOOKUP_RETURN_FIELDS) return cls(node=node, config=config(token)) class LookupController(rest.RestController): """Controller handling node lookup for a deploy ramdisk.""" @property def lookup_allowed_states(self): if CONF.deploy.fast_track: return states.FASTTRACK_LOOKUP_ALLOWED_STATES return states.LOOKUP_ALLOWED_STATES @expose.expose(LookupResult, types.listtype, types.uuid) def get_all(self, addresses=None, node_uuid=None): """Look up a node by its MAC addresses and optionally UUID. If the "restrict_lookup" option is set to True (the default), limit the search to nodes in certain transient states (e.g. deploy wait). :param addresses: list of MAC addresses for a node. :param node_uuid: UUID of a node. :raises: NotFound if requested API version does not allow this endpoint. :raises: NotFound if suitable node was not found or node's provision state is not allowed for the lookup. :raises: IncompleteLookup if neither node UUID nor any valid MAC address was provided. 
""" if not api_utils.allow_ramdisk_endpoints(): raise exception.NotFound() cdict = api.request.context.to_policy_values() policy.authorize('baremetal:driver:ipa_lookup', cdict, cdict) # Validate the list of MAC addresses if addresses is None: addresses = [] valid_addresses = [] invalid_addresses = [] for addr in addresses: try: mac = utils.validate_and_normalize_mac(addr) valid_addresses.append(mac) except exception.InvalidMAC: invalid_addresses.append(addr) if invalid_addresses: node_log = ('' if not node_uuid else '(Node UUID: %s)' % node_uuid) LOG.warning('The following MAC addresses "%(addrs)s" are ' 'invalid and will be ignored by the lookup ' 'request %(node)s', {'addrs': ', '.join(invalid_addresses), 'node': node_log}) if not valid_addresses and not node_uuid: raise exception.IncompleteLookup() try: if node_uuid: node = objects.Node.get_by_uuid( api.request.context, node_uuid) else: node = objects.Node.get_by_port_addresses( api.request.context, valid_addresses) except exception.NotFound: # NOTE(dtantsur): we are reraising the same exception to make sure # we don't disclose the difference between nodes that are not found # at all and nodes in a wrong state by different error messages. 
raise exception.NotFound() if (CONF.api.restrict_lookup and node.provision_state not in self.lookup_allowed_states): raise exception.NotFound() if api_utils.allow_agent_token() or CONF.require_agent_token: try: topic = api.request.rpcapi.get_topic_for(node) except exception.NoValidHost as e: e.code = http_client.BAD_REQUEST raise found_node = api.request.rpcapi.get_node_with_token( api.request.context, node.uuid, topic=topic) else: found_node = node return LookupResult.convert_with_links(found_node) class HeartbeatController(rest.RestController): """Controller handling heartbeats from deploy ramdisk.""" @expose.expose(None, types.uuid_or_name, str, str, str, status_code=http_client.ACCEPTED) def post(self, node_ident, callback_url, agent_version=None, agent_token=None): """Process a heartbeat from the deploy ramdisk. :param node_ident: the UUID or logical name of a node. :param callback_url: the URL to reach back to the ramdisk. :param agent_version: The version of the agent that is heartbeating. ``None`` indicates that the agent that is heartbeating is a version before sending agent_version was introduced so agent v3.0.0 (the last release before sending agent_version was introduced) will be assumed. :raises: NodeNotFound if node with provided UUID or name was not found. :raises: InvalidUuidOrName if node_ident is not valid name or UUID. :raises: NoValidHost if RPC topic for node could not be retrieved. :raises: NotFound if requested API version does not allow this endpoint. 
""" if not api_utils.allow_ramdisk_endpoints(): raise exception.NotFound() if agent_version and not api_utils.allow_agent_version_in_heartbeat(): raise exception.InvalidParameterValue( _('Field "agent_version" not recognised')) cdict = api.request.context.to_policy_values() policy.authorize('baremetal:node:ipa_heartbeat', cdict, cdict) rpc_node = api_utils.get_rpc_node_with_suffix(node_ident) dii = rpc_node['driver_internal_info'] agent_url = dii.get('agent_url') # If we have an agent_url on file, and we get something different # we should fail because this is unexpected behavior of the agent. if agent_url is not None and agent_url != callback_url: LOG.error('Received heartbeat for node %(node)s with ' 'callback URL %(url)s. This is not expected, ' 'and the heartbeat will not be processed.', {'node': rpc_node.uuid, 'url': callback_url}) raise exception.Invalid( _('Detected change in ramdisk provided ' '"callback_url"')) # NOTE(TheJulia): If tokens are required, lets go ahead and fail the # heartbeat very early on. token_required = CONF.require_agent_token if token_required and agent_token is None: LOG.error('Agent heartbeat received for node %(node)s ' 'without an agent token.', {'node': node_ident}) raise exception.InvalidParameterValue( _('Agent token is required for heartbeat processing.')) try: topic = api.request.rpcapi.get_topic_for(rpc_node) except exception.NoValidHost as e: e.code = http_client.BAD_REQUEST raise api.request.rpcapi.heartbeat( api.request.context, rpc_node.uuid, callback_url, agent_version, agent_token, topic=topic) ironic-15.0.0/ironic/api/controllers/v1/chassis.py0000664000175000017500000003444513652514273022063 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from http import client as http_client from ironic_lib import metrics_utils from oslo_utils import uuidutils from pecan import rest import wsme from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import node from ironic.api.controllers.v1 import notification_utils as notify from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.common import policy from ironic import objects METRICS = metrics_utils.get_metrics_logger(__name__) _DEFAULT_RETURN_FIELDS = ('uuid', 'description') class Chassis(base.APIBase): """API representation of a chassis. This class enforces type checking and value constraints, and converts between the internal object model and the API representation of a chassis. """ uuid = types.uuid """The UUID of the chassis""" description = atypes.StringType(max_length=255) """The description of the chassis""" extra = {str: types.jsontype} """The metadata of the chassis""" links = atypes.wsattr([link.Link], readonly=True) """A list containing a self link and associated chassis links""" nodes = atypes.wsattr([link.Link], readonly=True) """Links to the collection of nodes contained in this chassis""" def __init__(self, **kwargs): self.fields = [] for field in objects.Chassis.fields: # Skip fields we do not expose. 
if not hasattr(self, field): continue self.fields.append(field) setattr(self, field, kwargs.get(field, atypes.Unset)) @staticmethod def _convert_with_links(chassis, url, fields=None): if fields is None: chassis.nodes = [link.Link.make_link('self', url, 'chassis', chassis.uuid + "/nodes"), link.Link.make_link('bookmark', url, 'chassis', chassis.uuid + "/nodes", bookmark=True) ] chassis.links = [link.Link.make_link('self', url, 'chassis', chassis.uuid), link.Link.make_link('bookmark', url, 'chassis', chassis.uuid, bookmark=True) ] return chassis @classmethod def convert_with_links(cls, rpc_chassis, fields=None, sanitize=True): chassis = Chassis(**rpc_chassis.as_dict()) if fields is not None: api_utils.check_for_invalid_fields(fields, chassis.as_dict()) chassis = cls._convert_with_links(chassis, api.request.public_url, fields) if not sanitize: return chassis chassis.sanitize(fields) return chassis def sanitize(self, fields=None): """Removes sensitive and unrequested data. Will only keep the fields specified in the ``fields`` parameter. 
:param fields: list of fields to preserve, or ``None`` to preserve them all :type fields: list of str """ if fields is not None: self.unset_fields_except(fields) @classmethod def sample(cls, expand=True): time = datetime.datetime(2000, 1, 1, 12, 0, 0) sample = cls(uuid='eaaca217-e7d8-47b4-bb41-3f99f20eed89', extra={}, description='Sample chassis', created_at=time, updated_at=time) fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class ChassisPatchType(types.JsonPatchType): _api_base = Chassis class ChassisCollection(collection.Collection): """API representation of a collection of chassis.""" chassis = [Chassis] """A list containing chassis objects""" def __init__(self, **kwargs): self._type = 'chassis' @staticmethod def convert_with_links(chassis, limit, url=None, fields=None, **kwargs): collection = ChassisCollection() collection.chassis = [Chassis.convert_with_links(ch, fields=fields, sanitize=False) for ch in chassis] url = url or None collection.next = collection.get_next(limit, url=url, fields=fields, **kwargs) for item in collection.chassis: item.sanitize(fields) return collection @classmethod def sample(cls): # FIXME(jroll) hack for docs build, bug #1560508 if not hasattr(objects, 'Chassis'): objects.register_all() sample = cls() sample.chassis = [Chassis.sample(expand=False)] return sample class ChassisController(rest.RestController): """REST controller for Chassis.""" nodes = node.NodesController() """Expose nodes as a sub-element of chassis""" # Set the flag to indicate that the requests to this resource are # coming from a top-level resource nodes.from_chassis = True _custom_actions = { 'detail': ['GET'], } invalid_sort_key_list = ['extra'] def _get_chassis_collection(self, marker, limit, sort_key, sort_dir, resource_url=None, fields=None, detail=None): limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) marker_obj = None if marker: 
marker_obj = objects.Chassis.get_by_uuid(api.request.context, marker) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for sorting") % {'key': sort_key}) chassis = objects.Chassis.list(api.request.context, limit, marker_obj, sort_key=sort_key, sort_dir=sort_dir) parameters = {} if detail is not None: parameters['detail'] = detail return ChassisCollection.convert_with_links(chassis, limit, url=resource_url, fields=fields, sort_key=sort_key, sort_dir=sort_dir, **parameters) @METRICS.timer('ChassisController.get_all') @expose.expose(ChassisCollection, types.uuid, int, str, str, types.listtype, types.boolean) def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None, detail=None): """Retrieve a list of chassis. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:chassis:get', cdict, cdict) api_utils.check_allow_specify_fields(fields) fields = api_utils.get_request_return_fields(fields, detail, _DEFAULT_RETURN_FIELDS) return self._get_chassis_collection(marker, limit, sort_key, sort_dir, fields=fields, detail=detail) @METRICS.timer('ChassisController.detail') @expose.expose(ChassisCollection, types.uuid, int, str, str) def detail(self, marker=None, limit=None, sort_key='id', sort_dir='asc'): """Retrieve a list of chassis with detail. :param marker: pagination marker for large data sets. 
:param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:chassis:get', cdict, cdict) # /detail should only work against collections parent = api.request.path.split('/')[:-1][-1] if parent != "chassis": raise exception.HTTPNotFound() resource_url = '/'.join(['chassis', 'detail']) return self._get_chassis_collection(marker, limit, sort_key, sort_dir, resource_url) @METRICS.timer('ChassisController.get_one') @expose.expose(Chassis, types.uuid, types.listtype) def get_one(self, chassis_uuid, fields=None): """Retrieve information about the given chassis. :param chassis_uuid: UUID of a chassis. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:chassis:get', cdict, cdict) api_utils.check_allow_specify_fields(fields) rpc_chassis = objects.Chassis.get_by_uuid(api.request.context, chassis_uuid) return Chassis.convert_with_links(rpc_chassis, fields=fields) @METRICS.timer('ChassisController.post') @expose.expose(Chassis, body=Chassis, status_code=http_client.CREATED) def post(self, chassis): """Create a new chassis. :param chassis: a chassis within the request body. 
""" context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:chassis:create', cdict, cdict) # NOTE(yuriyz): UUID is mandatory for notifications payload if not chassis.uuid: chassis.uuid = uuidutils.generate_uuid() new_chassis = objects.Chassis(context, **chassis.as_dict()) notify.emit_start_notification(context, new_chassis, 'create') with notify.handle_error_notification(context, new_chassis, 'create'): new_chassis.create() notify.emit_end_notification(context, new_chassis, 'create') # Set the HTTP Location Header api.response.location = link.build_url('chassis', new_chassis.uuid) return Chassis.convert_with_links(new_chassis) @METRICS.timer('ChassisController.patch') @wsme.validate(types.uuid, [ChassisPatchType]) @expose.expose(Chassis, types.uuid, body=[ChassisPatchType]) def patch(self, chassis_uuid, patch): """Update an existing chassis. :param chassis_uuid: UUID of a chassis. :param patch: a json PATCH document to apply to this chassis. """ context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:chassis:update', cdict, cdict) rpc_chassis = objects.Chassis.get_by_uuid(context, chassis_uuid) chassis = Chassis( **api_utils.apply_jsonpatch(rpc_chassis.as_dict(), patch)) # Update only the fields that have changed for field in objects.Chassis.fields: try: patch_val = getattr(chassis, field) except AttributeError: # Ignore fields that aren't exposed in the API continue if patch_val == atypes.Unset: patch_val = None if rpc_chassis[field] != patch_val: rpc_chassis[field] = patch_val notify.emit_start_notification(context, rpc_chassis, 'update') with notify.handle_error_notification(context, rpc_chassis, 'update'): rpc_chassis.save() notify.emit_end_notification(context, rpc_chassis, 'update') return Chassis.convert_with_links(rpc_chassis) @METRICS.timer('ChassisController.delete') @expose.expose(None, types.uuid, status_code=http_client.NO_CONTENT) def delete(self, chassis_uuid): """Delete a 
chassis. :param chassis_uuid: UUID of a chassis. """ context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:chassis:delete', cdict, cdict) rpc_chassis = objects.Chassis.get_by_uuid(context, chassis_uuid) notify.emit_start_notification(context, rpc_chassis, 'delete') with notify.handle_error_notification(context, rpc_chassis, 'delete'): rpc_chassis.destroy() notify.emit_end_notification(context, rpc_chassis, 'delete') ironic-15.0.0/ironic/api/controllers/v1/conductor.py0000664000175000017500000002276513652514273022430 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
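The chassis ``patch`` handler above applies an RFC 6902 JSON patch (through ``api_utils.apply_jsonpatch``) to the chassis dict before diffing fields against the RPC object. A minimal standalone sketch of those patch semantics — note that ``apply_json_patch`` here is a hypothetical helper covering only ``add``/``replace``/``remove`` on dict paths, not the real ironic utility, which delegates to the python-jsonpatch library:

```python
import copy


def apply_json_patch(doc, patch):
    """Hypothetical helper sketching the RFC 6902 semantics used by the
    chassis PATCH handler; only add/replace/remove on dict paths are
    covered (the real code handles the full spec via python-jsonpatch)."""
    result = copy.deepcopy(doc)  # never mutate the caller's document
    for op in patch:
        parts = op['path'].lstrip('/').split('/')
        target = result
        for key in parts[:-1]:        # walk down to the parent container
            target = target.setdefault(key, {})
        if op['op'] in ('add', 'replace'):
            target[parts[-1]] = op['value']
        elif op['op'] == 'remove':
            del target[parts[-1]]
        else:
            raise ValueError('unsupported op: %s' % op['op'])
    return result


chassis = {'description': 'Sample chassis', 'extra': {}}
patch = [{'op': 'replace', 'path': '/description', 'value': 'rack-12'},
         {'op': 'add', 'path': '/extra/owner', 'value': 'ops'}]
updated = apply_json_patch(chassis, patch)
print(updated['description'])  # rack-12
print(chassis['extra'])        # {} -- the original document is untouched
```

The deep copy mirrors the controller's behavior of building a fresh ``Chassis`` object from the patched dict and then writing back only the fields that actually changed.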
import datetime from ironic_lib import metrics_utils from oslo_log import log from oslo_utils import timeutils from pecan import rest from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.common import policy import ironic.conf from ironic import objects CONF = ironic.conf.CONF LOG = log.getLogger(__name__) METRICS = metrics_utils.get_metrics_logger(__name__) _DEFAULT_RETURN_FIELDS = ('hostname', 'conductor_group', 'alive') class Conductor(base.APIBase): """API representation of a bare metal conductor.""" hostname = atypes.wsattr(str) """The hostname for this conductor""" conductor_group = atypes.wsattr(str) """The conductor group this conductor belongs to""" alive = types.boolean """Indicates whether this conductor is considered alive""" drivers = atypes.wsattr([str]) """The drivers enabled on this conductor""" links = atypes.wsattr([link.Link]) """A list containing a self link and associated conductor links""" def __init__(self, **kwargs): self.fields = [] fields = list(objects.Conductor.fields) # NOTE(kaifeng): alive is not part of objects.Conductor.fields # because it's an API-only attribute. fields.append('alive') for field in fields: # Skip fields we do not expose. 
if not hasattr(self, field): continue self.fields.append(field) setattr(self, field, kwargs.get(field, atypes.Unset)) @staticmethod def _convert_with_links(conductor, url, fields=None): conductor.links = [link.Link.make_link('self', url, 'conductors', conductor.hostname), link.Link.make_link('bookmark', url, 'conductors', conductor.hostname, bookmark=True)] return conductor @classmethod def convert_with_links(cls, rpc_conductor, fields=None): conductor = Conductor(**rpc_conductor.as_dict()) conductor.alive = not timeutils.is_older_than( conductor.updated_at, CONF.conductor.heartbeat_timeout) if fields is not None: api_utils.check_for_invalid_fields(fields, conductor.as_dict()) conductor = cls._convert_with_links(conductor, api.request.public_url, fields=fields) conductor.sanitize(fields) return conductor def sanitize(self, fields): """Removes sensitive and unrequested data. Will only keep the fields specified in the ``fields`` parameter. :param fields: list of fields to preserve, or ``None`` to preserve them all :type fields: list of str """ if fields is not None: self.unset_fields_except(fields) @classmethod def sample(cls, expand=True): time = datetime.datetime(2000, 1, 1, 12, 0, 0) sample = cls(hostname='computer01', conductor_group='', alive=True, drivers=['ipmi'], created_at=time, updated_at=time) fields = None if expand else _DEFAULT_RETURN_FIELDS return cls._convert_with_links(sample, 'http://localhost:6385', fields=fields) class ConductorCollection(collection.Collection): """API representation of a collection of conductors.""" conductors = [Conductor] """A list containing conductor objects""" def __init__(self, **kwargs): self._type = 'conductors' # NOTE(kaifeng) Override because conductors use hostname instead of uuid. 
@classmethod def get_key_field(cls): return 'hostname' @staticmethod def convert_with_links(conductors, limit, url=None, fields=None, **kwargs): collection = ConductorCollection() collection.conductors = [Conductor.convert_with_links(c, fields=fields) for c in conductors] collection.next = collection.get_next(limit, url=url, fields=fields, **kwargs) for conductor in collection.conductors: conductor.sanitize(fields) return collection @classmethod def sample(cls): sample = cls() conductor = Conductor.sample(expand=False) sample.conductors = [conductor] return sample class ConductorsController(rest.RestController): """REST controller for conductors.""" invalid_sort_key_list = ['alive', 'drivers'] def _get_conductors_collection(self, marker, limit, sort_key, sort_dir, resource_url=None, fields=None, detail=None): limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) marker_obj = None if marker: marker_obj = objects.Conductor.get_by_hostname( api.request.context, marker, online=None) conductors = objects.Conductor.list(api.request.context, limit=limit, marker=marker_obj, sort_key=sort_key, sort_dir=sort_dir) parameters = {'sort_key': sort_key, 'sort_dir': sort_dir} if detail is not None: parameters['detail'] = detail return ConductorCollection.convert_with_links(conductors, limit, url=resource_url, fields=fields, **parameters) @METRICS.timer('ConductorsController.get_all') @expose.expose(ConductorCollection, types.name, int, str, str, types.listtype, types.boolean) def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None, detail=None): """Retrieve a list of conductors. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. 
This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. :param detail: Optional, boolean to indicate whether retrieve a list of conductors with detail. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:conductor:get', cdict, cdict) if not api_utils.allow_expose_conductors(): raise exception.NotFound() api_utils.check_allow_specify_fields(fields) api_utils.check_allowed_fields(fields) api_utils.check_allowed_fields([sort_key]) fields = api_utils.get_request_return_fields(fields, detail, _DEFAULT_RETURN_FIELDS) return self._get_conductors_collection(marker, limit, sort_key, sort_dir, fields=fields, detail=detail) @METRICS.timer('ConductorsController.get_one') @expose.expose(Conductor, types.name, types.listtype) def get_one(self, hostname, fields=None): """Retrieve information about the given conductor. :param hostname: hostname of a conductor. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ cdict = api.request.context.to_policy_values() policy.authorize('baremetal:conductor:get', cdict, cdict) if not api_utils.allow_expose_conductors(): raise exception.NotFound() api_utils.check_allow_specify_fields(fields) api_utils.check_allowed_fields(fields) conductor = objects.Conductor.get_by_hostname(api.request.context, hostname, online=None) return Conductor.convert_with_links(conductor, fields=fields) ironic-15.0.0/ironic/api/controllers/v1/notification_utils.py0000664000175000017500000001652113652514273024327 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from oslo_config import cfg from oslo_log import log from oslo_messaging import exceptions as oslo_msg_exc from oslo_utils import excutils from oslo_versionedobjects import exception as oslo_vo_exc from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.objects import allocation as allocation_objects from ironic.objects import chassis as chassis_objects from ironic.objects import deploy_template as deploy_template_objects from ironic.objects import fields from ironic.objects import node as node_objects from ironic.objects import notification from ironic.objects import port as port_objects from ironic.objects import portgroup as portgroup_objects from ironic.objects import volume_connector as volume_connector_objects from ironic.objects import volume_target as volume_target_objects LOG = log.getLogger(__name__) CONF = cfg.CONF CRUD_NOTIFY_OBJ = { 'allocation': (allocation_objects.AllocationCRUDNotification, allocation_objects.AllocationCRUDPayload), 'chassis': (chassis_objects.ChassisCRUDNotification, chassis_objects.ChassisCRUDPayload), 'deploytemplate': (deploy_template_objects.DeployTemplateCRUDNotification, deploy_template_objects.DeployTemplateCRUDPayload), 'node': (node_objects.NodeCRUDNotification, node_objects.NodeCRUDPayload), 'port': (port_objects.PortCRUDNotification, port_objects.PortCRUDPayload), 'portgroup': (portgroup_objects.PortgroupCRUDNotification, portgroup_objects.PortgroupCRUDPayload), 'volumeconnector': (volume_connector_objects.VolumeConnectorCRUDNotification, 
volume_connector_objects.VolumeConnectorCRUDPayload), 'volumetarget': (volume_target_objects.VolumeTargetCRUDNotification, volume_target_objects.VolumeTargetCRUDPayload), } def _emit_api_notification(context, obj, action, level, status, **kwargs): """Helper for emitting API notifications. :param context: request context. :param obj: resource rpc object. :param action: Action string to go in the EventType. :param level: Notification level. One of `ironic.objects.fields.NotificationLevel.ALL` :param status: Status to go in the EventType. One of `ironic.objects.fields.NotificationStatus.ALL` :param kwargs: kwargs to use when creating the notification payload. """ resource = obj.__class__.__name__.lower() # value atypes.Unset can be passed from API representation of resource extra_args = {k: (v if v != atypes.Unset else None) for k, v in kwargs.items()} try: try: if action == 'maintenance_set': notification_method = node_objects.NodeMaintenanceNotification payload_method = node_objects.NodePayload elif resource not in CRUD_NOTIFY_OBJ: notification_name = payload_name = _("is not defined") raise KeyError(_("Unsupported resource: %s") % resource) else: notification_method, payload_method = CRUD_NOTIFY_OBJ[resource] notification_name = notification_method.__name__ payload_name = payload_method.__name__ finally: # Prepare our exception message just in case exception_values = {"resource": resource, "uuid": obj.uuid, "action": action, "status": status, "level": level, "notification_method": notification_name, "payload_method": payload_name} exception_message = (_("Failed to send baremetal.%(resource)s." 
"%(action)s.%(status)s notification for " "%(resource)s %(uuid)s with level " "%(level)s, notification method " "%(notification_method)s, payload method " "%(payload_method)s, error %(error)s")) payload = payload_method(obj, **extra_args) if resource == 'node': notification.mask_secrets(payload) notification_method( publisher=notification.NotificationPublisher( service='ironic-api', host=CONF.host), event_type=notification.EventType( object=resource, action=action, status=status), level=level, payload=payload).emit(context) except (exception.NotificationSchemaObjectError, exception.NotificationSchemaKeyError, exception.NotificationPayloadError, oslo_msg_exc.MessageDeliveryFailure, oslo_vo_exc.VersionedObjectsException) as e: exception_values['error'] = e LOG.warning(exception_message, exception_values) except Exception as e: exception_values['error'] = e LOG.exception(exception_message, exception_values) def emit_start_notification(context, obj, action, **kwargs): """Helper for emitting API 'start' notifications. :param context: request context. :param obj: resource rpc object. :param action: Action string to go in the EventType. :param kwargs: kwargs to use when creating the notification payload. """ _emit_api_notification(context, obj, action, fields.NotificationLevel.INFO, fields.NotificationStatus.START, **kwargs) @contextlib.contextmanager def handle_error_notification(context, obj, action, **kwargs): """Context manager to handle any error notifications. :param context: request context. :param obj: resource rpc object. :param action: Action string to go in the EventType. :param kwargs: kwargs to use when creating the notification payload. """ try: yield except Exception: with excutils.save_and_reraise_exception(): _emit_api_notification(context, obj, action, fields.NotificationLevel.ERROR, fields.NotificationStatus.ERROR, **kwargs) def emit_end_notification(context, obj, action, **kwargs): """Helper for emitting API 'end' notifications. 
:param context: request context. :param obj: resource rpc object. :param action: Action string to go in the EventType. :param kwargs: kwargs to use when creating the notification payload. """ _emit_api_notification(context, obj, action, fields.NotificationLevel.INFO, fields.NotificationStatus.END, **kwargs) ironic-15.0.0/ironic/api/controllers/v1/versions.py0000664000175000017500000001663513652514273022277 0ustar zuulzuul00000000000000# Copyright (c) 2015 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from ironic.common import release_mappings CONF = cfg.CONF # This is the version 1 API BASE_VERSION = 1 # Here goes a short log of changes in every version. # Refer to doc/source/contributor/webapi-version-history.rst for a detailed # explanation of what each version contains. 
# # v1.0: corresponds to Juno API, not supported since Kilo # v1.1: API at the point in time when versioning support was added, # covers the following commits from Kilo cycle: # 827db7fe: Add Node.maintenance_reason # 68eed82b: Add API endpoint to set/unset the node maintenance mode # bc973889: Add sync and async support for passthru methods # e03f443b: Vendor endpoints to support different HTTP methods # e69e5309: Make vendor methods discoverable via the Ironic API # edf532db: Add logic to store the config drive passed by Nova # v1.2: Renamed NOSTATE ("None") to AVAILABLE ("available") # v1.3: Add node.driver_internal_info # v1.4: Add MANAGEABLE state # v1.5: Add logical node names # v1.6: Add INSPECT* states # v1.7: Add node.clean_step # v1.8: Add ability to return a subset of resource fields # v1.9: Add ability to filter nodes by provision state # v1.10: Logical node names support RFC 3986 unreserved characters # v1.11: Nodes appear in ENROLL state by default # v1.12: Add support for RAID # v1.13: Add 'abort' verb to CLEANWAIT # v1.14: Make the following endpoints discoverable via API: # 1. '/v1/nodes//states' # 2. '/v1/drivers//properties' # v1.15: Add ability to do manual cleaning of nodes # v1.16: Add ability to filter nodes by driver. # v1.17: Add 'adopt' verb for ADOPTING active nodes. # v1.18: Add port.internal_info. # v1.19: Add port.local_link_connection and port.pxe_enabled. # v1.20: Add node.network_interface # v1.21: Add node.resource_class # v1.22: Ramdisk lookup and heartbeat endpoints. # v1.23: Add portgroup support. # v1.24: Add subcontrollers: node.portgroup, portgroup.ports. # Add port.portgroup_uuid field. # v1.25: Add possibility to unset chassis_uuid from node. # v1.26: Add portgroup.mode and portgroup.properties. # v1.27: Add soft reboot, soft power off and timeout. # v1.28: Add vifs subcontroller to node # v1.29: Add inject nmi. # v1.30: Add dynamic driver interactions. # v1.31: Add dynamic interfaces fields to node. 
# v1.32: Add volume support. # v1.33: Add node storage interface # v1.34: Add physical network field to port. # v1.35: Add ability to provide configdrive when rebuilding node. # v1.36: Add Ironic Python Agent version support. # v1.37: Add node traits. # v1.38: Add rescue and unrescue provision states # v1.39: Add inspect wait provision state. # v1.40: Add bios.properties. # Add bios_interface to the node object. # v1.41: Add inspection abort support. # v1.42: Expose fault field to node. # v1.43: Add detail=True flag to all API endpoints # v1.44: Add node deploy_step field # v1.45: reset_interfaces parameter to node's PATCH # v1.46: Add conductor_group to the node object. # v1.47: Add automated_clean to the node object. # v1.48: Add protected to the node object. # v1.49: Add conductor to the node object and /v1/conductors. # v1.50: Add owner to the node object. # v1.51: Add description to the node object. # v1.52: Add allocation API. # v1.53: Add support for Smart NIC port # v1.54: Add events support. # v1.55: Add deploy templates API. # v1.56: Add support for building configdrives. # v1.57: Add support for updating an existing allocation. # v1.58: Add support for backfilling allocations. # v1.59: Add support for vendor data in configdrives. # v1.60: Add owner to the allocation object. # v1.61: Add retired and retired_reason to the node object. # v1.62: Add agent_token support for agent communication. # v1.63: Add support for indicators # v1.64: Add network_type to port.local_link_connection # v1.65: Add lessee to the node object. 
MINOR_0_JUNO = 0 MINOR_1_INITIAL_VERSION = 1 MINOR_2_AVAILABLE_STATE = 2 MINOR_3_DRIVER_INTERNAL_INFO = 3 MINOR_4_MANAGEABLE_STATE = 4 MINOR_5_NODE_NAME = 5 MINOR_6_INSPECT_STATE = 6 MINOR_7_NODE_CLEAN = 7 MINOR_8_FETCHING_SUBSET_OF_FIELDS = 8 MINOR_9_PROVISION_STATE_FILTER = 9 MINOR_10_UNRESTRICTED_NODE_NAME = 10 MINOR_11_ENROLL_STATE = 11 MINOR_12_RAID_CONFIG = 12 MINOR_13_ABORT_VERB = 13 MINOR_14_LINKS_NODESTATES_DRIVERPROPERTIES = 14 MINOR_15_MANUAL_CLEAN = 15 MINOR_16_DRIVER_FILTER = 16 MINOR_17_ADOPT_VERB = 17 MINOR_18_PORT_INTERNAL_INFO = 18 MINOR_19_PORT_ADVANCED_NET_FIELDS = 19 MINOR_20_NETWORK_INTERFACE = 20 MINOR_21_RESOURCE_CLASS = 21 MINOR_22_LOOKUP_HEARTBEAT = 22 MINOR_23_PORTGROUPS = 23 MINOR_24_PORTGROUPS_SUBCONTROLLERS = 24 MINOR_25_UNSET_CHASSIS_UUID = 25 MINOR_26_PORTGROUP_MODE_PROPERTIES = 26 MINOR_27_SOFT_POWER_OFF = 27 MINOR_28_VIFS_SUBCONTROLLER = 28 MINOR_29_INJECT_NMI = 29 MINOR_30_DYNAMIC_DRIVERS = 30 MINOR_31_DYNAMIC_INTERFACES = 31 MINOR_32_VOLUME = 32 MINOR_33_STORAGE_INTERFACE = 33 MINOR_34_PORT_PHYSICAL_NETWORK = 34 MINOR_35_REBUILD_CONFIG_DRIVE = 35 MINOR_36_AGENT_VERSION_HEARTBEAT = 36 MINOR_37_NODE_TRAITS = 37 MINOR_38_RESCUE_INTERFACE = 38 MINOR_39_INSPECT_WAIT = 39 MINOR_40_BIOS_INTERFACE = 40 MINOR_41_INSPECTION_ABORT = 41 MINOR_42_FAULT = 42 MINOR_43_ENABLE_DETAIL_QUERY = 43 MINOR_44_NODE_DEPLOY_STEP = 44 MINOR_45_RESET_INTERFACES = 45 MINOR_46_NODE_CONDUCTOR_GROUP = 46 MINOR_47_NODE_AUTOMATED_CLEAN = 47 MINOR_48_NODE_PROTECTED = 48 MINOR_49_CONDUCTORS = 49 MINOR_50_NODE_OWNER = 50 MINOR_51_NODE_DESCRIPTION = 51 MINOR_52_ALLOCATION = 52 MINOR_53_PORT_SMARTNIC = 53 MINOR_54_EVENTS = 54 MINOR_55_DEPLOY_TEMPLATES = 55 MINOR_56_BUILD_CONFIGDRIVE = 56 MINOR_57_ALLOCATION_UPDATE = 57 MINOR_58_ALLOCATION_BACKFILL = 58 MINOR_59_CONFIGDRIVE_VENDOR_DATA = 59 MINOR_60_ALLOCATION_OWNER = 60 MINOR_61_NODE_RETIRED = 61 MINOR_62_AGENT_TOKEN = 62 MINOR_63_INDICATORS = 63 MINOR_64_LOCAL_LINK_CONNECTION_NETWORK_TYPE = 64 MINOR_65_NODE_LESSEE = 
65 # When adding another version, update: # - MINOR_MAX_VERSION # - doc/source/contributor/webapi-version-history.rst with a detailed # explanation of what changed in the new version # - common/release_mappings.py, RELEASE_MAPPING['master']['api'] MINOR_MAX_VERSION = MINOR_65_NODE_LESSEE # String representations of the minimum and maximum versions _MIN_VERSION_STRING = '{}.{}'.format(BASE_VERSION, MINOR_1_INITIAL_VERSION) _MAX_VERSION_STRING = '{}.{}'.format(BASE_VERSION, MINOR_MAX_VERSION) def min_version_string(): """Returns the minimum supported API version (as a string).""" return _MIN_VERSION_STRING def max_version_string(): """Returns the maximum supported API version (as a string). If the service is pinned, the maximum API version is the pinned version. Otherwise, it is the maximum supported API version. """ release_ver = release_mappings.RELEASE_MAPPING.get( CONF.pin_release_version) if release_ver: return release_ver['api'] else: return _MAX_VERSION_STRING ironic-15.0.0/ironic/api/controllers/v1/utils.py # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
from http import client as http_client import inspect import re import jsonpatch import jsonschema from jsonschema import exceptions as json_schema_exc import os_traits from oslo_config import cfg from oslo_utils import uuidutils from pecan import rest from webob import static import wsme from ironic import api from ironic.api.controllers.v1 import versions from ironic.api import types as atypes from ironic.common import exception from ironic.common import faults from ironic.common.i18n import _ from ironic.common import policy from ironic.common import states from ironic.common import utils from ironic import objects CONF = cfg.CONF _JSONPATCH_EXCEPTIONS = (jsonpatch.JsonPatchConflict, jsonpatch.JsonPatchException, jsonpatch.JsonPointerException, KeyError, IndexError) # Minimum API version to use for certain verbs MIN_VERB_VERSIONS = { # v1.4 added the MANAGEABLE state and two verbs to move nodes into # and out of that state. Reject requests to do this in older versions states.VERBS['manage']: versions.MINOR_4_MANAGEABLE_STATE, states.VERBS['provide']: versions.MINOR_4_MANAGEABLE_STATE, states.VERBS['inspect']: versions.MINOR_6_INSPECT_STATE, states.VERBS['abort']: versions.MINOR_13_ABORT_VERB, states.VERBS['clean']: versions.MINOR_15_MANUAL_CLEAN, states.VERBS['adopt']: versions.MINOR_17_ADOPT_VERB, states.VERBS['rescue']: versions.MINOR_38_RESCUE_INTERFACE, states.VERBS['unrescue']: versions.MINOR_38_RESCUE_INTERFACE, } V31_FIELDS = [ 'boot_interface', 'console_interface', 'deploy_interface', 'inspect_interface', 'management_interface', 'power_interface', 'raid_interface', 'vendor_interface', ] STANDARD_TRAITS = os_traits.get_traits() CUSTOM_TRAIT_REGEX = re.compile("^%s[A-Z0-9_]+$" % os_traits.CUSTOM_NAMESPACE) def validate_limit(limit): if limit is None: return CONF.api.max_limit if limit <= 0: raise exception.ClientSideError(_("Limit must be positive")) return min(CONF.api.max_limit, limit) def validate_sort_dir(sort_dir): if sort_dir not in ['asc', 'desc']: 
raise exception.ClientSideError(_("Invalid sort direction: %s. " "Acceptable values are " "'asc' or 'desc'") % sort_dir) return sort_dir def validate_trait(trait, error_prefix=_('Invalid trait')): error = exception.ClientSideError( _('%(error_prefix)s. A valid trait must be no longer than 255 ' 'characters. Standard traits are defined in the os_traits library. ' 'A custom trait must start with the prefix CUSTOM_ and use ' 'the following characters: A-Z, 0-9 and _') % {'error_prefix': error_prefix}) if not isinstance(trait, str): raise error if len(trait) > 255 or len(trait) < 1: raise error if trait in STANDARD_TRAITS: return if CUSTOM_TRAIT_REGEX.match(trait) is None: raise error def apply_jsonpatch(doc, patch): """Apply a JSON patch, one operation at a time. If the patch fails to apply, this allows us to determine which operation failed, making the error message a little less cryptic. :param doc: The JSON document to patch. :param patch: The JSON patch to apply. :returns: The result of the patch operation. :raises: PatchError if the patch fails to apply. :raises: exception.ClientSideError if the patch adds a new root attribute. """ # Prevent removal of root attributes. for p in patch: if p['op'] == 'add' and p['path'].count('/') == 1: if p['path'].lstrip('/') not in doc: msg = _('Adding a new attribute (%s) to the root of ' 'the resource is not allowed') raise exception.ClientSideError(msg % p['path']) # Apply operations one at a time, to improve error reporting. for patch_op in patch: try: doc = jsonpatch.apply_patch(doc, jsonpatch.JsonPatch([patch_op])) except _JSONPATCH_EXCEPTIONS as e: raise exception.PatchError(patch=patch_op, reason=e) return doc def get_patch_values(patch, path): """Get the patch values corresponding to the specified path. 
If there are multiple values specified for the same path, for example :: [{'op': 'add', 'path': '/name', 'value': 'abc'}, {'op': 'add', 'path': '/name', 'value': 'bca'}] return all of them in a list (preserving order). :param patch: HTTP PATCH request body. :param path: the path to get the patch values for. :returns: list of values for the specified path in the patch. """ return [p['value'] for p in patch if p['path'] == path and p['op'] != 'remove'] def is_path_removed(patch, path): """Returns whether the patch includes removal of the path (or subpath of). :param patch: HTTP PATCH request body. :param path: the path to check. :returns: True if path or subpath being removed, False otherwise. """ path = path.rstrip('/') for p in patch: if ((p['path'] == path or p['path'].startswith(path + '/')) and p['op'] == 'remove'): return True return False def is_path_updated(patch, path): """Returns whether the patch includes operation on path (or its subpath). :param patch: HTTP PATCH request body. :param path: the path to check. :returns: True if path or subpath being patched, False otherwise. """ path = path.rstrip('/') for p in patch: if p['path'] == path or p['path'].startswith(path + '/'): return True return False def allow_node_logical_names(): # v1.5 added logical name aliases return api.request.version.minor >= versions.MINOR_5_NODE_NAME def _get_with_suffix(get_func, ident, exc_class): """Helper to get a resource taking into account API .json suffix.""" try: return get_func(ident) except exc_class: if not api.request.environ['HAS_JSON_SUFFIX']: raise # NOTE(dtantsur): the .json suffix is stripped to maintain compatibility # with the guess_content_type_from_ext feature. Try adding it back # if the resource was not found without it. return get_func(ident + '.json') def get_rpc_node(node_ident): """Get the RPC node from the node uuid or logical name. :param node_ident: the UUID or logical name of a node. :returns: The RPC Node. :raises: InvalidUuidOrName if the name or uuid provided is not valid.
:raises: NodeNotFound if the node is not found. """ # Check to see if the node_ident is a valid UUID. If it is, treat it # as a UUID. if uuidutils.is_uuid_like(node_ident): return objects.Node.get_by_uuid(api.request.context, node_ident) # We can refer to nodes by their name, if the client supports it if allow_node_logical_names(): if is_valid_logical_name(node_ident): return objects.Node.get_by_name(api.request.context, node_ident) raise exception.InvalidUuidOrName(name=node_ident) # Ensure we raise the same exception as we did for the Juno release raise exception.NodeNotFound(node=node_ident) def get_rpc_node_with_suffix(node_ident): """Get the RPC node from the node uuid or logical name. If HAS_JSON_SUFFIX flag is set in the pecan environment, try also looking for node_ident with '.json' suffix. Otherwise identical to get_rpc_node. :param node_ident: the UUID or logical name of a node. :returns: The RPC Node. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: NodeNotFound if the node is not found. """ return _get_with_suffix(get_rpc_node, node_ident, exception.NodeNotFound) def get_rpc_portgroup(portgroup_ident): """Get the RPC portgroup from the portgroup UUID or logical name. :param portgroup_ident: the UUID or logical name of a portgroup. :returns: The RPC portgroup. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: PortgroupNotFound if the portgroup is not found. """ # Check to see if the portgroup_ident is a valid UUID. If it is, treat it # as a UUID. 
if uuidutils.is_uuid_like(portgroup_ident): return objects.Portgroup.get_by_uuid(api.request.context, portgroup_ident) # We can refer to portgroups by their name if utils.is_valid_logical_name(portgroup_ident): return objects.Portgroup.get_by_name(api.request.context, portgroup_ident) raise exception.InvalidUuidOrName(name=portgroup_ident) def get_rpc_portgroup_with_suffix(portgroup_ident): """Get the RPC portgroup from the portgroup UUID or logical name. If HAS_JSON_SUFFIX flag is set in the pecan environment, try also looking for portgroup_ident with '.json' suffix. Otherwise identical to get_rpc_portgroup. :param portgroup_ident: the UUID or logical name of a portgroup. :returns: The RPC portgroup. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: PortgroupNotFound if the portgroup is not found. """ return _get_with_suffix(get_rpc_portgroup, portgroup_ident, exception.PortgroupNotFound) def get_rpc_allocation(allocation_ident): """Get the RPC allocation from the allocation UUID or logical name. :param allocation_ident: the UUID or logical name of an allocation. :returns: The RPC allocation. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: AllocationNotFound if the allocation is not found. """ # Check to see if the allocation_ident is a valid UUID. If it is, treat it # as a UUID. if uuidutils.is_uuid_like(allocation_ident): return objects.Allocation.get_by_uuid(api.request.context, allocation_ident) # We can refer to allocations by their name if utils.is_valid_logical_name(allocation_ident): return objects.Allocation.get_by_name(api.request.context, allocation_ident) raise exception.InvalidUuidOrName(name=allocation_ident) def get_rpc_allocation_with_suffix(allocation_ident): """Get the RPC allocation from the allocation UUID or logical name. If HAS_JSON_SUFFIX flag is set in the pecan environment, try also looking for allocation_ident with '.json' suffix. Otherwise identical to get_rpc_allocation. 
:param allocation_ident: the UUID or logical name of an allocation. :returns: The RPC allocation. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: AllocationNotFound if the allocation is not found. """ return _get_with_suffix(get_rpc_allocation, allocation_ident, exception.AllocationNotFound) def get_rpc_deploy_template(template_ident): """Get the RPC deploy template from the UUID or logical name. :param template_ident: the UUID or logical name of a deploy template. :returns: The RPC deploy template. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: DeployTemplateNotFound if the deploy template is not found. """ # Check to see if the template_ident is a valid UUID. If it is, treat it # as a UUID. if uuidutils.is_uuid_like(template_ident): return objects.DeployTemplate.get_by_uuid(api.request.context, template_ident) # We can refer to templates by their name if utils.is_valid_logical_name(template_ident): return objects.DeployTemplate.get_by_name(api.request.context, template_ident) raise exception.InvalidUuidOrName(name=template_ident) def get_rpc_deploy_template_with_suffix(template_ident): """Get the RPC deploy template from the UUID or logical name. If HAS_JSON_SUFFIX flag is set in the pecan environment, try also looking for template_ident with '.json' suffix. Otherwise identical to get_rpc_deploy_template. :param template_ident: the UUID or logical name of a deploy template. :returns: The RPC deploy template. :raises: InvalidUuidOrName if the name or uuid provided is not valid. :raises: DeployTemplateNotFound if the deploy template is not found. """ return _get_with_suffix(get_rpc_deploy_template, template_ident, exception.DeployTemplateNotFound) def is_valid_node_name(name): """Determine if the provided name is a valid node name. Check to see that the provided node name is valid, and isn't a UUID. :param name: the node name to check. :returns: True if the name is valid, False otherwise. 
""" return is_valid_logical_name(name) and not uuidutils.is_uuid_like(name) def is_valid_logical_name(name): """Determine if the provided name is a valid hostname.""" if api.request.version.minor < versions.MINOR_10_UNRESTRICTED_NODE_NAME: return utils.is_hostname_safe(name) else: return utils.is_valid_logical_name(name) def vendor_passthru(ident, method, topic, data=None, driver_passthru=False): """Call a vendor passthru API extension. Call the vendor passthru API extension and process the method response to set the right return code for methods that are asynchronous or synchronous; Attach the return value to the response object if it's being served statically. :param ident: The resource identification. For node's vendor passthru this is the node's UUID, for driver's vendor passthru this is the driver's name. :param method: The vendor method name. :param topic: The RPC topic. :param data: The data passed to the vendor method. Defaults to None. :param driver_passthru: Boolean value. Whether this is a node or driver vendor passthru. Defaults to False. :returns: A WSME response object to be returned by the API. 
""" if not method: raise exception.ClientSideError(_("Method not specified")) if data is None: data = {} http_method = api.request.method.upper() params = (api.request.context, ident, method, http_method, data, topic) if driver_passthru: response = api.request.rpcapi.driver_vendor_passthru(*params) else: response = api.request.rpcapi.vendor_passthru(*params) status_code = http_client.ACCEPTED if response['async'] else http_client.OK return_value = response['return'] response_params = {'status_code': status_code} # Attach the return value to the response object if response.get('attach'): if isinstance(return_value, str): # If unicode, convert to bytes return_value = return_value.encode('utf-8') file_ = atypes.File(content=return_value) api.response.app_iter = static.FileIter(file_.file) # Since we've attached the return value to the response # object the response body should now be empty. return_value = None response_params['return_type'] = None return wsme.api.Response(return_value, **response_params) def check_for_invalid_fields(fields, object_fields): """Check for requested non-existent fields. Check if the user requested non-existent fields. :param fields: A list of fields requested by the user :object_fields: A list of fields supported by the object. :raises: InvalidParameterValue if invalid fields were requested. """ invalid_fields = set(fields) - set(object_fields) if invalid_fields: raise exception.InvalidParameterValue( _('Field(s) "%s" are not valid') % ', '.join(invalid_fields)) def check_allow_specify_fields(fields): """Check if fetching a subset of the resource attributes is allowed. Version 1.8 of the API allows fetching a subset of the resource attributes, this method checks if the required version is being requested. 
""" if (fields is not None and api.request.version.minor < versions.MINOR_8_FETCHING_SUBSET_OF_FIELDS): raise exception.NotAcceptable() VERSIONED_FIELDS = { 'driver_internal_info': versions.MINOR_3_DRIVER_INTERNAL_INFO, 'name': versions.MINOR_5_NODE_NAME, 'inspection_finished_at': versions.MINOR_6_INSPECT_STATE, 'inspection_started_at': versions.MINOR_6_INSPECT_STATE, 'clean_step': versions.MINOR_7_NODE_CLEAN, 'raid_config': versions.MINOR_12_RAID_CONFIG, 'target_raid_config': versions.MINOR_12_RAID_CONFIG, 'network_interface': versions.MINOR_20_NETWORK_INTERFACE, 'resource_class': versions.MINOR_21_RESOURCE_CLASS, 'storage_interface': versions.MINOR_33_STORAGE_INTERFACE, 'traits': versions.MINOR_37_NODE_TRAITS, 'rescue_interface': versions.MINOR_38_RESCUE_INTERFACE, 'bios_interface': versions.MINOR_40_BIOS_INTERFACE, 'fault': versions.MINOR_42_FAULT, 'deploy_step': versions.MINOR_44_NODE_DEPLOY_STEP, 'conductor_group': versions.MINOR_46_NODE_CONDUCTOR_GROUP, 'automated_clean': versions.MINOR_47_NODE_AUTOMATED_CLEAN, 'protected': versions.MINOR_48_NODE_PROTECTED, 'protected_reason': versions.MINOR_48_NODE_PROTECTED, 'conductor': versions.MINOR_49_CONDUCTORS, 'owner': versions.MINOR_50_NODE_OWNER, 'description': versions.MINOR_51_NODE_DESCRIPTION, 'allocation_uuid': versions.MINOR_52_ALLOCATION, 'events': versions.MINOR_54_EVENTS, 'retired': versions.MINOR_61_NODE_RETIRED, 'retired_reason': versions.MINOR_61_NODE_RETIRED, 'lessee': versions.MINOR_65_NODE_LESSEE, } for field in V31_FIELDS: VERSIONED_FIELDS[field] = versions.MINOR_31_DYNAMIC_INTERFACES def allow_field(field): """Check if a field is allowed in the current version.""" return api.request.version.minor >= VERSIONED_FIELDS[field] def disallowed_fields(): """Generator of fields not allowed in the current request.""" for field in VERSIONED_FIELDS: if not allow_field(field): yield field def check_allowed_fields(fields): """Check if fetching a particular field is allowed. 
This method checks if the required version is being requested for fields that are only allowed to be fetched in a particular API version. """ if fields is None: return for field in disallowed_fields(): if field in fields: raise exception.NotAcceptable() def check_allowed_portgroup_fields(fields): """Check if fetching a particular field of a portgroup is allowed. This method checks if the required version is being requested for fields that are only allowed to be fetched in a particular API version. """ if fields is None: return if (('mode' in fields or 'properties' in fields) and not allow_portgroup_mode_properties()): raise exception.NotAcceptable() def check_allow_management_verbs(verb): min_version = MIN_VERB_VERSIONS.get(verb) if min_version is not None and api.request.version.minor < min_version: raise exception.NotAcceptable() def check_for_invalid_state_and_allow_filter(provision_state): """Check if filtering nodes by provision state is allowed. Version 1.9 of the API allows filtering nodes by provision state. """ if provision_state is not None: if (api.request.version.minor < versions.MINOR_9_PROVISION_STATE_FILTER): raise exception.NotAcceptable() valid_states = states.machine.states if provision_state not in valid_states: raise exception.InvalidParameterValue( _('Provision state "%s" is not valid') % provision_state) def check_allow_specify_driver(driver): """Check if filtering nodes by driver is allowed. Version 1.16 of the API allows filtering nodes by driver. """ if (driver is not None and api.request.version.minor < versions.MINOR_16_DRIVER_FILTER): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_16_DRIVER_FILTER}) def check_allow_specify_resource_class(resource_class): """Check if filtering nodes by resource_class is allowed. Version 1.21 of the API allows filtering nodes by resource_class.
""" if (resource_class is not None and api.request.version.minor < versions.MINOR_21_RESOURCE_CLASS): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_21_RESOURCE_CLASS}) def check_allow_filter_driver_type(driver_type): """Check if filtering drivers by classic/dynamic is allowed. Version 1.30 of the API allows this. """ if driver_type is not None and not allow_dynamic_drivers(): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_30_DYNAMIC_DRIVERS}) def check_allow_driver_detail(detail): """Check if getting detailed driver info is allowed. Version 1.30 of the API allows this. """ if detail is not None and not allow_dynamic_drivers(): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_30_DYNAMIC_DRIVERS}) _CONFIG_DRIVE_SCHEMA = { 'anyOf': [ { 'type': 'object', 'properties': { 'meta_data': {'type': 'object'}, 'network_data': {'type': 'object'}, 'user_data': { 'type': ['object', 'array', 'string', 'null'] }, 'vendor_data': {'type': 'object'}, }, 'additionalProperties': False }, { 'type': ['string', 'null'] } ] } def check_allow_configdrive(target, configdrive=None): if not configdrive: return allowed_targets = [states.ACTIVE] if allow_node_rebuild_with_configdrive(): allowed_targets.append(states.REBUILD) if target not in allowed_targets: msg = (_('Adding a config drive is only supported when setting ' 'provision state to %s') % ', '.join(allowed_targets)) raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) try: jsonschema.validate(configdrive, _CONFIG_DRIVE_SCHEMA) except json_schema_exc.ValidationError as e: msg = _('Invalid configdrive format: %s') % e raise 
exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) if isinstance(configdrive, dict): if not allow_build_configdrive(): msg = _('Providing a JSON object for configdrive is only supported' ' starting with API version %(base)s.%(opr)s') % { 'base': versions.BASE_VERSION, 'opr': versions.MINOR_56_BUILD_CONFIGDRIVE} raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) if ('vendor_data' in configdrive and not allow_configdrive_vendor_data()): msg = _('Providing vendor_data in configdrive is only supported' ' starting with API version %(base)s.%(opr)s') % { 'base': versions.BASE_VERSION, 'opr': versions.MINOR_59_CONFIGDRIVE_VENDOR_DATA} raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) def check_allow_filter_by_fault(fault): """Check if filtering nodes by fault is allowed. Version 1.42 of the API allows filtering nodes by fault. """ if (fault is not None and api.request.version.minor < versions.MINOR_42_FAULT): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_42_FAULT}) if fault is not None and fault not in faults.VALID_FAULTS: msg = (_('Unrecognized fault "%(fault)s" is specified, allowed faults ' 'are %(valid_faults)s') % {'fault': fault, 'valid_faults': faults.VALID_FAULTS}) raise exception.ClientSideError( msg, status_code=http_client.BAD_REQUEST) def check_allow_filter_by_conductor_group(conductor_group): """Check if filtering nodes by conductor_group is allowed. Version 1.46 of the API allows filtering nodes by conductor_group. """ if (conductor_group is not None and api.request.version.minor < versions.MINOR_46_NODE_CONDUCTOR_GROUP): raise exception.NotAcceptable(_( "Request not acceptable. 
The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_46_NODE_CONDUCTOR_GROUP}) def check_allow_filter_by_owner(owner): """Check if filtering nodes by owner is allowed. Version 1.50 of the API allows filtering nodes by owner. """ if (owner is not None and api.request.version.minor < versions.MINOR_50_NODE_OWNER): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_50_NODE_OWNER}) def check_allow_filter_by_lessee(lessee): """Check if filtering nodes by lessee is allowed. Version 1.62 of the API allows filtering nodes by lessee. """ if (lessee is not None and api.request.version.minor < versions.MINOR_65_NODE_LESSEE): raise exception.NotAcceptable(_( "Request not acceptable. The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_65_NODE_LESSEE}) def initial_node_provision_state(): """Return node state to use by default when creating new nodes. Previously the default state for new nodes was AVAILABLE. Starting with API 1.11 it is ENROLL. """ return (states.AVAILABLE if api.request.version.minor < versions.MINOR_11_ENROLL_STATE else states.ENROLL) def allow_raid_config(): """Check if RAID configuration is allowed for the node. Version 1.12 of the API allows RAID configuration for the node. """ return api.request.version.minor >= versions.MINOR_12_RAID_CONFIG def allow_soft_power_off(): """Check if Soft Power Off is allowed for the node. Version 1.27 of the API allows Soft Power Off, including Soft Reboot, for the node. """ return api.request.version.minor >= versions.MINOR_27_SOFT_POWER_OFF def allow_inject_nmi(): """Check if Inject NMI is allowed for the node. Version 1.29 of the API allows Inject NMI for the node. 
""" return api.request.version.minor >= versions.MINOR_29_INJECT_NMI def allow_links_node_states_and_driver_properties(): """Check if links are displayable. Version 1.14 of the API allows the display of links to node states and driver properties. """ return (api.request.version.minor >= versions.MINOR_14_LINKS_NODESTATES_DRIVERPROPERTIES) def allow_port_internal_info(): """Check if accessing internal_info is allowed for the port. Version 1.18 of the API exposes internal_info readonly field for the port. """ return (api.request.version.minor >= versions.MINOR_18_PORT_INTERNAL_INFO) def allow_port_advanced_net_fields(): """Check if we should return local_link_connection and pxe_enabled fields. Version 1.19 of the API added support for these new fields in port object. """ return (api.request.version.minor >= versions.MINOR_19_PORT_ADVANCED_NET_FIELDS) def allow_ramdisk_endpoints(): """Check if heartbeat and lookup endpoints are allowed. Version 1.22 of the API introduced them. """ return api.request.version.minor >= versions.MINOR_22_LOOKUP_HEARTBEAT def allow_portgroups(): """Check if we should support portgroup operations. Version 1.23 of the API added support for PortGroups. """ return (api.request.version.minor >= versions.MINOR_23_PORTGROUPS) def allow_portgroups_subcontrollers(): """Check if portgroups can be used as subcontrollers. Version 1.24 of the API added support for Portgroups as subcontrollers """ return (api.request.version.minor >= versions.MINOR_24_PORTGROUPS_SUBCONTROLLERS) def allow_remove_chassis_uuid(): """Check if chassis_uuid can be removed from node. Version 1.25 of the API added support for chassis_uuid removal """ return (api.request.version.minor >= versions.MINOR_25_UNSET_CHASSIS_UUID) def allow_portgroup_mode_properties(): """Check if mode and properties can be added to/queried from a portgroup. Version 1.26 of the API added mode and properties fields to portgroup object. 
""" return (api.request.version.minor >= versions.MINOR_26_PORTGROUP_MODE_PROPERTIES) def allow_vifs_subcontroller(): """Check if node/vifs can be used. Version 1.28 of the API added support for VIFs to be attached to Nodes. """ return (api.request.version.minor >= versions.MINOR_28_VIFS_SUBCONTROLLER) def allow_dynamic_drivers(): """Check if dynamic driver API calls are allowed. Version 1.30 of the API added support for all of the driver composition related calls in the /v1/drivers API. """ return (api.request.version.minor >= versions.MINOR_30_DYNAMIC_DRIVERS) def allow_dynamic_interfaces(): """Check if dynamic interface fields are allowed. Version 1.31 of the API added support for viewing and setting the fields in ``V31_FIELDS`` on the node object. """ return (api.request.version.minor >= versions.MINOR_31_DYNAMIC_INTERFACES) def allow_volume(): """Check if volume connectors and targets are allowed. Version 1.32 of the API added support for volume connectors and targets """ return api.request.version.minor >= versions.MINOR_32_VOLUME def allow_storage_interface(): """Check if we should support storage_interface node and driver fields. Version 1.33 of the API added support for storage interfaces. """ return (api.request.version.minor >= versions.MINOR_33_STORAGE_INTERFACE) def allow_port_physical_network(): """Check if port physical network field is allowed. Version 1.34 of the API added the physical network field to the port object. We also check whether the target version of the Port object supports the physical_network field as this may not be the case during a rolling upgrade. """ return ((api.request.version.minor >= versions.MINOR_34_PORT_PHYSICAL_NETWORK) and objects.Port.supports_physical_network()) def allow_node_rebuild_with_configdrive(): """Check if we should support node rebuild with configdrive. Version 1.35 of the API added support for node rebuild with configdrive. 
""" return (api.request.version.minor >= versions.MINOR_35_REBUILD_CONFIG_DRIVE) def allow_agent_version_in_heartbeat(): """Check if agent version is allowed to be passed into heartbeat. Version 1.36 of the API added the ability for agents to pass their version information to Ironic on heartbeat. """ return (api.request.version.minor >= versions.MINOR_36_AGENT_VERSION_HEARTBEAT) def allow_rescue_interface(): """Check if we should support rescue and unrescue operations and interface. Version 1.38 of the API added support for rescue and unrescue. """ return api.request.version.minor >= versions.MINOR_38_RESCUE_INTERFACE def allow_bios_interface(): """Check if we should support bios interface and endpoints. Version 1.40 of the API added support for bios interface. """ return api.request.version.minor >= versions.MINOR_40_BIOS_INTERFACE def get_controller_reserved_names(cls): """Get reserved names for a given controller. Inspect the controller class and return the reserved names within it. Reserved names are names that can not be used as an identifier for a resource because the names are either being used as a custom action or is the name of a nested controller inside the given class. :param cls: The controller class to be inspected. """ reserved_names = [ name for name, member in inspect.getmembers(cls) if isinstance(member, rest.RestController)] if hasattr(cls, '_custom_actions'): reserved_names += list(cls._custom_actions) return reserved_names def allow_traits(): """Check if traits are allowed for the node. Version 1.37 of the API allows traits for the node. """ return api.request.version.minor >= versions.MINOR_37_NODE_TRAITS def allow_inspect_wait_state(): """Check if inspect wait is allowed for the node. Version 1.39 of the API adds 'inspect wait' state to substitute 'inspecting' state during asynchronous hardware inspection. """ return api.request.version.minor >= versions.MINOR_39_INSPECT_WAIT def allow_inspect_abort(): """Check if inspection abort is allowed. 
Version 1.41 of the API added support for inspection abort """ return api.request.version.minor >= versions.MINOR_41_INSPECTION_ABORT def handle_post_port_like_extra_vif(p_dict): """Handle a Post request that sets .extra['vif_port_id']. This handles attach of VIFs via specifying the VIF port ID in a port or port group's extra['vif_port_id'] field. :param p_dict: a dictionary with field names/values for the port or port group :return: VIF or None """ vif = p_dict.get('extra', {}).get('vif_port_id') if vif: # TODO(rloo): in Stein cycle: if API version >= 1.28, remove # warning and support for extra[]; else (< 1.28) # still support it; continue copying to internal_info # (see bug 1722850). i.e., change the 7 lines of code # below to something like: # if not api_utils.allow_vifs_subcontroller(): # internal_info = {'tenant_vif_port_id': vif} # pg_dict['internal_info'] = internal_info if allow_vifs_subcontroller(): utils.warn_about_deprecated_extra_vif_port_id() # NOTE(rloo): this value should really be in .internal_info[..] # which is what would happen if they had used the # POST /v1/nodes//vifs API. internal_info = {'tenant_vif_port_id': vif} p_dict['internal_info'] = internal_info return vif def handle_patch_port_like_extra_vif(rpc_object, api_object, patch): """Handle a Patch request that modifies .extra['vif_port_id']. This handles attach/detach of VIFs via the VIF port ID in a port or port group's extra['vif_port_id'] field. :param rpc_object: a Port or Portgroup RPC object :param api_object: the corresponding Port or Portgroup API object :param patch: the JSON patch in the API request """ vif_list = get_patch_values(patch, '/extra/vif_port_id') vif = None if vif_list: # if specified more than once, use the last value vif = vif_list[-1] # TODO(rloo): in Stein cycle: if API version >= 1.28, remove this # warning and don't copy to internal_info; else (<1.28) still # support it; continue copying to internal_info (see bug 1722850). 
# i.e., change the 8 lines of code below to something like: # if not allow_vifs_subcontroller(): # int_info = rpc_object.internal_info.get('tenant_vif_port_id') # if (not int_info or # int_info == rpc_object.extra.get('vif_port_id')): # api_object.internal_info['tenant_vif_port_id'] = vif if allow_vifs_subcontroller(): utils.warn_about_deprecated_extra_vif_port_id() # NOTE(rloo): if the user isn't also using the REST API # 'POST nodes//vifs', we are safe to copy the # .extra[] value to the .internal_info location int_info = rpc_object.internal_info.get('tenant_vif_port_id') if (not int_info or int_info == rpc_object.extra.get('vif_port_id')): api_object.internal_info['tenant_vif_port_id'] = vif elif is_path_removed(patch, '/extra/vif_port_id'): # TODO(rloo): in Stein cycle: if API version >= 1.28, remove this # warning and don't remove from internal_info; else (<1.28) still # support it; remove from internal_info (see bug 1722850). # i.e., change the 8 lines of code below to something like: # if not allow_vifs_subcontroller(): # int_info = rpc_object.internal_info.get('tenant_vif...') # if (int_info and int_info==rpc_object.extra.get('vif_port_id')): # api_object.internal_info['tenant_vif_port_id'] = None if allow_vifs_subcontroller(): utils.warn_about_deprecated_extra_vif_port_id() # NOTE(rloo): if the user isn't also using the REST API # 'POST nodes//vifs', we are safe to remove the # .extra[] value from the .internal_info location int_info = rpc_object.internal_info.get('tenant_vif_port_id') if (int_info and int_info == rpc_object.extra.get('vif_port_id')): api_object.internal_info.pop('tenant_vif_port_id') def allow_detail_query(): """Check if passing a detail=True query string is allowed. Version 1.43 allows a user to pass the detail query string to list the resource with all the fields. 
""" return api.request.version.minor >= versions.MINOR_43_ENABLE_DETAIL_QUERY def allow_reset_interfaces(): """Check if passing a reset_interfaces query string is allowed.""" return api.request.version.minor >= versions.MINOR_45_RESET_INTERFACES def get_request_return_fields(fields, detail, default_fields): """Calculate fields to return from an API request The fields query and detail=True query can not be passed into a request at the same time. To use the detail query we need to be on a version of the API greater than 1.43. This function raises an InvalidParameterValue exception if either of these conditions are not met. If these checks pass then this function will return either the fields passed in or the default fields provided. :param fields: The fields query passed into the API request. :param detail: The detail query passed into the API request. :param default_fields: The default fields to return if fields=None and detail=None. :raises: InvalidParameterValue if there is an invalid combination of query strings or API version. :returns: 'fields' passed in value or 'default_fields' """ if detail is not None and not allow_detail_query(): raise exception.InvalidParameterValue( "Invalid query parameter ?detail=%s received." % detail) if fields is not None and detail: raise exception.InvalidParameterValue( "Can not specify ?detail=True and fields in the same request.") if fields is None and not detail: return default_fields return fields def allow_expose_conductors(): """Check if accessing conductor endpoints is allowed. Version 1.49 of the API exposed conductor endpoints and conductor field for the node. """ return api.request.version.minor >= versions.MINOR_49_CONDUCTORS def check_allow_filter_by_conductor(conductor): """Check if filtering nodes by conductor is allowed. Version 1.49 of the API allows filtering nodes by conductor. """ if conductor is not None and not allow_expose_conductors(): raise exception.NotAcceptable(_( "Request not acceptable. 
The minimal required API version " "should be %(base)s.%(opr)s") % {'base': versions.BASE_VERSION, 'opr': versions.MINOR_49_CONDUCTORS}) def allow_allocations(): """Check if accessing allocation endpoints is allowed. Version 1.52 of the API exposed allocation endpoints and allocation_uuid field for the node. """ return api.request.version.minor >= versions.MINOR_52_ALLOCATION def allow_port_is_smartnic(): """Check if port is_smartnic field is allowed. Version 1.53 of the API added is_smartnic field to the port object. """ return ((api.request.version.minor >= versions.MINOR_53_PORT_SMARTNIC) and objects.Port.supports_is_smartnic()) def allow_expose_events(): """Check if accessing events endpoint is allowed. Version 1.54 of the API added the events endpoint. """ return api.request.version.minor >= versions.MINOR_54_EVENTS def allow_deploy_templates(): """Check if accessing deploy template endpoints is allowed. Version 1.55 of the API exposed deploy template endpoints. """ return api.request.version.minor >= versions.MINOR_55_DEPLOY_TEMPLATES def check_policy(policy_name): """Check if the specified policy is authorised for this request. :policy_name: Name of the policy to check. :raises: HTTPForbidden if the policy forbids access. """ cdict = api.request.context.to_policy_values() policy.authorize(policy_name, cdict, cdict) def check_owner_policy(object_type, policy_name, owner, lessee=None): """Check if the policy authorizes this request on an object. :param: object_type: type of object being checked :param: policy_name: Name of the policy to check. :param: owner: the owner :param: lessee: the lessee :raises: HTTPForbidden if the policy forbids access. 
""" cdict = api.request.context.to_policy_values() target_dict = dict(cdict) target_dict[object_type + '.owner'] = owner if lessee: target_dict[object_type + '.lessee'] = lessee policy.authorize(policy_name, target_dict, cdict) def check_node_policy_and_retrieve(policy_name, node_ident, with_suffix=False): """Check if the specified policy authorizes this request on a node. :param: policy_name: Name of the policy to check. :param: node_ident: the UUID or logical name of a node. :param: with_suffix: whether the RPC node should include the suffix :raises: HTTPForbidden if the policy forbids access. :raises: NodeNotFound if the node is not found. :return: RPC node identified by node_ident """ try: if with_suffix: rpc_node = get_rpc_node_with_suffix(node_ident) else: rpc_node = get_rpc_node(node_ident) except exception.NodeNotFound: # don't expose non-existence of node unless requester # has generic access to policy cdict = api.request.context.to_policy_values() policy.authorize(policy_name, cdict, cdict) raise check_owner_policy('node', policy_name, rpc_node['owner'], rpc_node['lessee']) return rpc_node def check_allocation_policy_and_retrieve(policy_name, allocation_ident): """Check if the specified policy authorizes request on allocation. :param: policy_name: Name of the policy to check. :param: allocation_ident: the UUID or logical name of a node. :raises: HTTPForbidden if the policy forbids access. :raises: AllocationNotFound if the node is not found. 
:return: RPC node identified by node_ident """ try: rpc_allocation = get_rpc_allocation_with_suffix( allocation_ident) except exception.AllocationNotFound: # don't expose non-existence unless requester # has generic access to policy cdict = api.request.context.to_policy_values() policy.authorize(policy_name, cdict, cdict) raise check_owner_policy('allocation', policy_name, rpc_allocation['owner']) return rpc_allocation def check_multiple_node_policies_and_retrieve(policy_names, node_ident, with_suffix=False): """Check if the specified policies authorize this request on a node. :param: policy_names: List of policy names to check. :param: node_ident: the UUID or logical name of a node. :param: with_suffix: whether the RPC node should include the suffix :raises: HTTPForbidden if the policy forbids access. :raises: NodeNotFound if the node is not found. :return: RPC node identified by node_ident """ rpc_node = None for policy_name in policy_names: if rpc_node is None: rpc_node = check_node_policy_and_retrieve(policy_names[0], node_ident, with_suffix) else: check_owner_policy('node', policy_name, rpc_node['owner'], rpc_node['lessee']) return rpc_node def check_list_policy(object_type, owner=None): """Check if the list policy authorizes this request on an object. :param: object_type: type of object being checked :param: owner: owner filter for list query, if any :raises: HTTPForbidden if the policy forbids access. :return: owner that should be used for list query, if needed """ cdict = api.request.context.to_policy_values() try: policy.authorize('baremetal:%s:list_all' % object_type, cdict, cdict) except exception.HTTPForbidden: project_owner = cdict.get('project_id') if (not project_owner or (owner and owner != project_owner)): raise policy.authorize('baremetal:%s:list' % object_type, cdict, cdict) return project_owner return owner def check_port_policy_and_retrieve(policy_name, port_uuid): """Check if the specified policy authorizes this request on a port. 
:param: policy_name: Name of the policy to check. :param: port_uuid: the UUID of a port. :raises: HTTPForbidden if the policy forbids access. :raises: PortNotFound if the port is not found. :return: RPC port identified by port_uuid and associated node """ context = api.request.context cdict = context.to_policy_values() try: rpc_port = objects.Port.get_by_uuid(context, port_uuid) except exception.PortNotFound: # don't expose non-existence of port unless requester # has generic access to policy policy.authorize(policy_name, cdict, cdict) raise rpc_node = objects.Node.get_by_id(context, rpc_port.node_id) target_dict = dict(cdict) target_dict['node.owner'] = rpc_node['owner'] target_dict['node.lessee'] = rpc_node['lessee'] policy.authorize(policy_name, target_dict, cdict) return rpc_port, rpc_node def check_port_list_policy(): """Check if the specified policy authorizes this request on a port. :raises: HTTPForbidden if the policy forbids access. :return: owner that should be used for list query, if needed """ cdict = api.request.context.to_policy_values() try: policy.authorize('baremetal:port:list_all', cdict, cdict) except exception.HTTPForbidden: owner = cdict.get('project_id') if not owner: raise policy.authorize('baremetal:port:list', cdict, cdict) return owner def allow_build_configdrive(): """Check if building configdrive is allowed. Version 1.56 of the API added support for building configdrive. """ return api.request.version.minor >= versions.MINOR_56_BUILD_CONFIGDRIVE def allow_configdrive_vendor_data(): """Check if configdrive can contain a vendor_data key. Version 1.59 of the API added support for configdrive vendor_data. """ return (api.request.version.minor >= versions.MINOR_59_CONFIGDRIVE_VENDOR_DATA) def allow_allocation_update(): """Check if updating an existing allocation is allowed or not. Version 1.57 of the API added support for updating an allocation.
""" return api.request.version.minor >= versions.MINOR_57_ALLOCATION_UPDATE def allow_allocation_backfill(): """Check if backfilling allocations is allowed. Version 1.58 of the API added support for backfilling allocations. """ return api.request.version.minor >= versions.MINOR_58_ALLOCATION_BACKFILL def allow_allocation_owner(): """Check if allocation owner field is allowed. Version 1.60 of the API added the owner field to the allocation object. """ return api.request.version.minor >= versions.MINOR_60_ALLOCATION_OWNER def allow_agent_token(): """Check if agent token is available.""" return api.request.version.minor >= versions.MINOR_62_AGENT_TOKEN def allow_local_link_connection_network_type(): """Check if network_type is allowed in ports link_local_connection""" return (api.request.version.minor >= versions.MINOR_64_LOCAL_LINK_CONNECTION_NETWORK_TYPE) ironic-15.0.0/ironic/api/controllers/v1/event.py0000664000175000017500000000360013652514273021534 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from http import client as http_client from ironic_lib import metrics_utils from oslo_log import log import pecan from ironic import api from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.common import exception from ironic.common import policy METRICS = metrics_utils.get_metrics_logger(__name__) LOG = log.getLogger(__name__) class EvtCollection(collection.Collection): """API representation of a collection of events.""" events = [types.eventtype] """A list containing event dict objects""" class EventsController(pecan.rest.RestController): """REST controller for Events.""" @pecan.expose() def _lookup(self): if not api_utils.allow_expose_events(): pecan.abort(http_client.NOT_FOUND) @METRICS.timer('EventsController.post') @expose.expose(None, body=EvtCollection, status_code=http_client.NO_CONTENT) def post(self, evts): if not api_utils.allow_expose_events(): raise exception.NotFound() cdict = api.request.context.to_policy_values() policy.authorize('baremetal:events:post', cdict, cdict) for e in evts.events: LOG.debug("Received external event: %s", e) ironic-15.0.0/ironic/api/controllers/v1/__init__.py # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
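The EventsController above gates the whole events subtree on microversion 1.54: `_lookup` aborts routing with 404, and `post` raises NotFound before doing any work, so older clients cannot even detect that the endpoint exists. A minimal standalone model of that gating (illustrative only; `post_events` and `request_minor` are hypothetical names, not Ironic APIs):

```python
# Illustrative sketch -- not Ironic code. Models how an endpoint added in
# a later microversion is hidden from clients that negotiated an older
# version: the request fails with "not found" rather than "forbidden".

MINOR_54_EVENTS = 54  # the events endpoint appeared in API version 1.54


class NotFound(Exception):
    """Stand-in for ironic.common.exception.NotFound (HTTP 404)."""


def post_events(request_minor, events):
    """Accept a batch of events, mimicking EventsController.post()."""
    if request_minor < MINOR_54_EVENTS:
        # Hide the endpoint entirely from older microversions.
        raise NotFound()
    # The real controller only policy-checks and logs each event at DEBUG.
    return len(events)


print(post_events(54, [{'event': 'network.bind_port'}]))  # -> 1
try:
    post_events(53, [{'event': 'network.bind_port'}])
except NotFound:
    print('hidden: 404')  # -> hidden: 404
```

Answering 404 instead of 400/403 for too-old versions is deliberate: it makes version-gated resources indistinguishable from nonexistent ones, matching how `_lookup` behaves for GET routing.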
""" Version 1 of the Ironic API Specification can be found at doc/source/webapi/v1.rst """ import pecan from pecan import rest from webob import exc from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import allocation from ironic.api.controllers.v1 import chassis from ironic.api.controllers.v1 import conductor from ironic.api.controllers.v1 import deploy_template from ironic.api.controllers.v1 import driver from ironic.api.controllers.v1 import event from ironic.api.controllers.v1 import node from ironic.api.controllers.v1 import port from ironic.api.controllers.v1 import portgroup from ironic.api.controllers.v1 import ramdisk from ironic.api.controllers.v1 import utils from ironic.api.controllers.v1 import versions from ironic.api.controllers.v1 import volume from ironic.api.controllers import version from ironic.api import expose from ironic.common.i18n import _ BASE_VERSION = versions.BASE_VERSION def min_version(): return base.Version( {base.Version.string: versions.min_version_string()}, versions.min_version_string(), versions.max_version_string()) def max_version(): return base.Version( {base.Version.string: versions.max_version_string()}, versions.min_version_string(), versions.max_version_string()) class MediaType(base.Base): """A media type representation.""" base = str type = str def __init__(self, base, type): self.base = base self.type = type class V1(base.Base): """The representation of the version 1 of the API.""" id = str """The ID of the version, also acts as the release number""" media_types = [MediaType] """An array of supported media types for this version""" links = [link.Link] """Links that point to a specific URL for this version and documentation""" chassis = [link.Link] """Links to the chassis resource""" nodes = [link.Link] """Links to the nodes resource""" ports = [link.Link] """Links to the ports resource""" portgroups = [link.Link] """Links to the 
portgroups resource""" drivers = [link.Link] """Links to the drivers resource""" volume = [link.Link] """Links to the volume resource""" lookup = [link.Link] """Links to the lookup resource""" heartbeat = [link.Link] """Links to the heartbeat resource""" conductors = [link.Link] """Links to the conductors resource""" allocations = [link.Link] """Links to the allocations resource""" deploy_templates = [link.Link] """Links to the deploy_templates resource""" version = version.Version """Version discovery information.""" events = [link.Link] """Links to the events resource""" @staticmethod def convert(): v1 = V1() v1.id = "v1" v1.links = [link.Link.make_link('self', api.request.public_url, 'v1', '', bookmark=True), link.Link.make_link('describedby', 'https://docs.openstack.org', '/ironic/latest/contributor/', 'webapi.html', bookmark=True, type='text/html') ] v1.media_types = [MediaType('application/json', 'application/vnd.openstack.ironic.v1+json')] v1.chassis = [link.Link.make_link('self', api.request.public_url, 'chassis', ''), link.Link.make_link('bookmark', api.request.public_url, 'chassis', '', bookmark=True) ] v1.nodes = [link.Link.make_link('self', api.request.public_url, 'nodes', ''), link.Link.make_link('bookmark', api.request.public_url, 'nodes', '', bookmark=True) ] v1.ports = [link.Link.make_link('self', api.request.public_url, 'ports', ''), link.Link.make_link('bookmark', api.request.public_url, 'ports', '', bookmark=True) ] if utils.allow_portgroups(): v1.portgroups = [ link.Link.make_link('self', api.request.public_url, 'portgroups', ''), link.Link.make_link('bookmark', api.request.public_url, 'portgroups', '', bookmark=True) ] v1.drivers = [link.Link.make_link('self', api.request.public_url, 'drivers', ''), link.Link.make_link('bookmark', api.request.public_url, 'drivers', '', bookmark=True) ] if utils.allow_volume(): v1.volume = [ link.Link.make_link('self', api.request.public_url, 'volume', ''), link.Link.make_link('bookmark', api.request.public_url, 
'volume', '', bookmark=True) ] if utils.allow_ramdisk_endpoints(): v1.lookup = [link.Link.make_link('self', api.request.public_url, 'lookup', ''), link.Link.make_link('bookmark', api.request.public_url, 'lookup', '', bookmark=True) ] v1.heartbeat = [link.Link.make_link('self', api.request.public_url, 'heartbeat', ''), link.Link.make_link('bookmark', api.request.public_url, 'heartbeat', '', bookmark=True) ] if utils.allow_expose_conductors(): v1.conductors = [link.Link.make_link('self', api.request.public_url, 'conductors', ''), link.Link.make_link('bookmark', api.request.public_url, 'conductors', '', bookmark=True) ] if utils.allow_allocations(): v1.allocations = [link.Link.make_link('self', api.request.public_url, 'allocations', ''), link.Link.make_link('bookmark', api.request.public_url, 'allocations', '', bookmark=True) ] if utils.allow_expose_events(): v1.events = [link.Link.make_link('self', api.request.public_url, 'events', ''), link.Link.make_link('bookmark', api.request.public_url, 'events', '', bookmark=True) ] if utils.allow_deploy_templates(): v1.deploy_templates = [ link.Link.make_link('self', api.request.public_url, 'deploy_templates', ''), link.Link.make_link('bookmark', api.request.public_url, 'deploy_templates', '', bookmark=True) ] v1.version = version.default_version() return v1 class Controller(rest.RestController): """Version 1 API controller root.""" nodes = node.NodesController() ports = port.PortsController() portgroups = portgroup.PortgroupsController() chassis = chassis.ChassisController() drivers = driver.DriversController() volume = volume.VolumeController() lookup = ramdisk.LookupController() heartbeat = ramdisk.HeartbeatController() conductors = conductor.ConductorsController() allocations = allocation.AllocationsController() events = event.EventsController() deploy_templates = deploy_template.DeployTemplatesController() @expose.expose(V1) def get(self): # NOTE: The reason why convert() it's being called for every # request is because 
we need to get the host url from # the request object to make the links. return V1.convert() def _check_version(self, version, headers=None): if headers is None: headers = {} # ensure that major version in the URL matches the header if version.major != BASE_VERSION: raise exc.HTTPNotAcceptable(_( "Mutually exclusive versions requested. Version %(ver)s " "requested but not supported by this service. The supported " "version range is: [%(min)s, %(max)s].") % {'ver': version, 'min': versions.min_version_string(), 'max': versions.max_version_string()}, headers=headers) # ensure the minor version is within the supported range if version < min_version() or version > max_version(): raise exc.HTTPNotAcceptable(_( "Version %(ver)s was requested but the minor version is not " "supported by this service. The supported version range is: " "[%(min)s, %(max)s].") % {'ver': version, 'min': versions.min_version_string(), 'max': versions.max_version_string()}, headers=headers) @pecan.expose() def _route(self, args, request=None): v = base.Version(api.request.headers, versions.min_version_string(), versions.max_version_string()) # Always set the min and max headers api.response.headers[base.Version.min_string] = ( versions.min_version_string()) api.response.headers[base.Version.max_string] = ( versions.max_version_string()) # assert that requested version is supported self._check_version(v, api.response.headers) api.response.headers[base.Version.string] = str(v) api.request.version = v return super(Controller, self)._route(args, request) __all__ = ('Controller',) ironic-15.0.0/ironic/api/controllers/v1/allocation.py # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from http import client as http_client from ironic_lib import metrics_utils from oslo_utils import uuidutils import pecan from webob import exc as webob_exc import wsme from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link from ironic.api.controllers.v1 import collection from ironic.api.controllers.v1 import notification_utils as notify from ironic.api.controllers.v1 import types from ironic.api.controllers.v1 import utils as api_utils from ironic.api import expose from ironic.api import types as atypes from ironic.common import exception from ironic.common.i18n import _ from ironic.common import policy from ironic.common import states as ir_states from ironic import objects METRICS = metrics_utils.get_metrics_logger(__name__) def hide_fields_in_newer_versions(obj): # if requested version is < 1.60, hide owner field if not api_utils.allow_allocation_owner(): obj.owner = atypes.Unset class Allocation(base.APIBase): """API representation of an allocation. This class enforces type checking and value constraints, and converts between the internal object model and the API representation of an allocation.
""" uuid = types.uuid """Unique UUID for this allocation""" extra = {str: types.jsontype} """This allocation's meta data""" node_uuid = atypes.wsattr(types.uuid, readonly=True) """The UUID of the node this allocation belongs to""" node = atypes.wsattr(str) """The node to backfill the allocation for (POST only)""" name = atypes.wsattr(str) """The logical name for this allocation""" links = atypes.wsattr([link.Link], readonly=True) """A list containing a self link and associated allocation links""" state = atypes.wsattr(str, readonly=True) """The current state of the allocation""" last_error = atypes.wsattr(str, readonly=True) """Last error that happened to this allocation""" resource_class = atypes.wsattr(atypes.StringType(max_length=80)) """Requested resource class for this allocation""" owner = atypes.wsattr(str) """Owner of allocation""" # NOTE(dtantsur): candidate_nodes is a list of UUIDs on the database level, # but the API level also accept names, converting them on fly. candidate_nodes = atypes.wsattr([str]) """Candidate nodes for this allocation""" traits = atypes.wsattr([str]) """Requested traits for the allocation""" def __init__(self, **kwargs): self.fields = [] fields = list(objects.Allocation.fields) # NOTE: node_uuid is not part of objects.Allocation.fields # because it's an API-only attribute fields.append('node_uuid') for field in fields: # Skip fields we do not expose. if not hasattr(self, field): continue self.fields.append(field) setattr(self, field, kwargs.get(field, atypes.Unset)) @staticmethod def _convert_with_links(allocation, url): """Add links to the allocation.""" # This field is only used in POST, never return it. 
allocation.node = atypes.Unset allocation.links = [ link.Link.make_link('self', url, 'allocations', allocation.uuid), link.Link.make_link('bookmark', url, 'allocations', allocation.uuid, bookmark=True) ] return allocation @classmethod def convert_with_links(cls, rpc_allocation, fields=None, sanitize=True): """Add links to the allocation.""" allocation = Allocation(**rpc_allocation.as_dict()) if rpc_allocation.node_id: try: allocation.node_uuid = objects.Node.get_by_id( api.request.context, rpc_allocation.node_id).uuid except exception.NodeNotFound: allocation.node_uuid = None else: allocation.node_uuid = None if fields is not None: api_utils.check_for_invalid_fields(fields, allocation.fields) # Make the default values consistent between POST and GET API if allocation.candidate_nodes is None: allocation.candidate_nodes = [] if allocation.traits is None: allocation.traits = [] allocation = cls._convert_with_links(allocation, api.request.host_url) if not sanitize: return allocation allocation.sanitize(fields) return allocation def sanitize(self, fields=None): """Removes sensitive and unrequested data. Will only keep the fields specified in the ``fields`` parameter. 
:param fields: list of fields to preserve, or ``None`` to preserve them all :type fields: list of str """ hide_fields_in_newer_versions(self) if fields is not None: self.unset_fields_except(fields) @classmethod def sample(cls): """Return a sample of the allocation.""" sample = cls(uuid='a594544a-2daf-420c-8775-17a8c3e0852f', node_uuid='7ae81bb3-dec3-4289-8d6c-da80bd8001ae', name='node1-allocation-01', state=ir_states.ALLOCATING, last_error=None, resource_class='baremetal', traits=['CUSTOM_GPU'], candidate_nodes=[], extra={'foo': 'bar'}, created_at=datetime.datetime(2000, 1, 1, 12, 0, 0), updated_at=datetime.datetime(2000, 1, 1, 12, 0, 0), owner=None) return cls._convert_with_links(sample, 'http://localhost:6385') class AllocationCollection(collection.Collection): """API representation of a collection of allocations.""" allocations = [Allocation] """A list containing allocation objects""" def __init__(self, **kwargs): self._type = 'allocations' @staticmethod def convert_with_links(rpc_allocations, limit, url=None, fields=None, **kwargs): collection = AllocationCollection() collection.allocations = [ Allocation.convert_with_links(p, fields=fields, sanitize=False) for p in rpc_allocations ] collection.next = collection.get_next(limit, url=url, fields=fields, **kwargs) for item in collection.allocations: item.sanitize(fields=fields) return collection @classmethod def sample(cls): """Return a sample of the allocation.""" sample = cls() sample.allocations = [Allocation.sample()] return sample class AllocationPatchType(types.JsonPatchType): _api_base = Allocation class AllocationsController(pecan.rest.RestController): """REST controller for allocations.""" invalid_sort_key_list = ['extra', 'candidate_nodes', 'traits'] @pecan.expose() def _route(self, args, request=None): if not api_utils.allow_allocations(): msg = _("The API version does not allow allocations") if api.request.method == "GET": raise webob_exc.HTTPNotFound(msg) else: raise 
webob_exc.HTTPMethodNotAllowed(msg) return super(AllocationsController, self)._route(args, request) def _get_allocations_collection(self, node_ident=None, resource_class=None, state=None, owner=None, marker=None, limit=None, sort_key='id', sort_dir='asc', resource_url=None, fields=None): """Return allocations collection. :param node_ident: UUID or name of a node. :param marker: Pagination marker for large data sets. :param limit: Maximum number of resources to return in a single result. :param sort_key: Column to sort results by. Default: id. :param sort_dir: Direction to sort. "asc" or "desc". Default: asc. :param resource_url: Optional, URL to the allocation resource. :param fields: Optional, a list with a specified set of fields of the resource to be returned. :param owner: project_id of owner to filter by """ limit = api_utils.validate_limit(limit) sort_dir = api_utils.validate_sort_dir(sort_dir) if sort_key in self.invalid_sort_key_list: raise exception.InvalidParameterValue( _("The sort_key value %(key)s is an invalid field for " "sorting") % {'key': sort_key}) marker_obj = None if marker: marker_obj = objects.Allocation.get_by_uuid(api.request.context, marker) if node_ident: try: node_uuid = api_utils.get_rpc_node(node_ident).uuid except exception.NodeNotFound as exc: exc.code = http_client.BAD_REQUEST raise else: node_uuid = None possible_filters = { 'node_uuid': node_uuid, 'resource_class': resource_class, 'state': state, 'owner': owner } filters = {} for key, value in possible_filters.items(): if value is not None: filters[key] = value allocations = objects.Allocation.list(api.request.context, limit=limit, marker=marker_obj, sort_key=sort_key, sort_dir=sort_dir, filters=filters) return AllocationCollection.convert_with_links(allocations, limit, url=resource_url, fields=fields, sort_key=sort_key, sort_dir=sort_dir) def _check_allowed_allocation_fields(self, fields): """Check if fetching a particular field of an allocation is allowed. 
Check if the required version is being requested for fields that are only allowed to be fetched in a particular API version. :param fields: list or set of fields to check :raises: NotAcceptable if a field is not allowed """ if fields is None: return if 'owner' in fields and not api_utils.allow_allocation_owner(): raise exception.NotAcceptable() @METRICS.timer('AllocationsController.get_all') @expose.expose(AllocationCollection, types.uuid_or_name, str, str, types.uuid, int, str, str, types.listtype, str) def get_all(self, node=None, resource_class=None, state=None, marker=None, limit=None, sort_key='id', sort_dir='asc', fields=None, owner=None): """Retrieve a list of allocations. :param node: UUID or name of a node, to get only allocations for that node. :param resource_class: Filter by requested resource class. :param state: Filter by allocation state. :param marker: pagination marker for large data sets. :param limit: maximum number of resources to return in a single result. This value cannot be larger than the value of max_limit in the [api] section of the ironic configuration, or only max_limit resources will be returned. :param sort_key: column to sort results by. Default: id. :param sort_dir: direction to sort. "asc" or "desc". Default: asc. :param fields: Optional, a list with a specified set of fields of the resource to be returned. :param owner: Filter by owner. """ owner = api_utils.check_list_policy('allocation', owner) self._check_allowed_allocation_fields(fields) if owner is not None and not api_utils.allow_allocation_owner(): raise exception.NotAcceptable() return self._get_allocations_collection(node, resource_class, state, owner, marker, limit, sort_key, sort_dir, fields=fields) @METRICS.timer('AllocationsController.get_one') @expose.expose(Allocation, types.uuid_or_name, types.listtype) def get_one(self, allocation_ident, fields=None): """Retrieve information about the given allocation. 
:param allocation_ident: UUID or logical name of an allocation. :param fields: Optional, a list with a specified set of fields of the resource to be returned. """ rpc_allocation = api_utils.check_allocation_policy_and_retrieve( 'baremetal:allocation:get', allocation_ident) self._check_allowed_allocation_fields(fields) return Allocation.convert_with_links(rpc_allocation, fields=fields) def _authorize_create_allocation(self, allocation): cdict = api.request.context.to_policy_values() try: policy.authorize('baremetal:allocation:create', cdict, cdict) self._check_allowed_allocation_fields(allocation.as_dict()) except exception.HTTPForbidden: owner = cdict.get('project_id') if not owner or (allocation.owner and owner != allocation.owner): raise policy.authorize('baremetal:allocation:create_restricted', cdict, cdict) self._check_allowed_allocation_fields(allocation.as_dict()) allocation.owner = owner return allocation @METRICS.timer('AllocationsController.post') @expose.expose(Allocation, body=Allocation, status_code=http_client.CREATED) def post(self, allocation): """Create a new allocation. :param allocation: an allocation within the request body. 
""" context = api.request.context allocation = self._authorize_create_allocation(allocation) if (allocation.name and not api_utils.is_valid_logical_name(allocation.name)): msg = _("Cannot create allocation with invalid name " "'%(name)s'") % {'name': allocation.name} raise exception.Invalid(msg) if allocation.traits: for trait in allocation.traits: api_utils.validate_trait(trait) node = None if allocation.node is not atypes.Unset: if api_utils.allow_allocation_backfill(): try: node = api_utils.get_rpc_node(allocation.node) except exception.NodeNotFound as exc: exc.code = http_client.BAD_REQUEST raise else: msg = _("Cannot set node when creating an allocation " "in this API version") raise exception.Invalid(msg) if not allocation.resource_class: if node: allocation.resource_class = node.resource_class else: msg = _("The resource_class field is mandatory when not " "backfilling") raise exception.Invalid(msg) if allocation.candidate_nodes: # Convert nodes from names to UUIDs and check their validity try: converted = api.request.dbapi.check_node_list( allocation.candidate_nodes) except exception.NodeNotFound as exc: exc.code = http_client.BAD_REQUEST raise else: # Make sure we keep the ordering of candidate nodes. 
allocation.candidate_nodes = [ converted[ident] for ident in allocation.candidate_nodes] all_dict = allocation.as_dict() # NOTE(yuriyz): UUID is mandatory for notifications payload if not all_dict.get('uuid'): if node and node.instance_uuid: # When backfilling without UUID requested, assume that the # target instance_uuid is the desired UUID all_dict['uuid'] = node.instance_uuid else: all_dict['uuid'] = uuidutils.generate_uuid() new_allocation = objects.Allocation(context, **all_dict) if node: new_allocation.node_id = node.id topic = api.request.rpcapi.get_topic_for(node) else: topic = api.request.rpcapi.get_random_topic() notify.emit_start_notification(context, new_allocation, 'create') with notify.handle_error_notification(context, new_allocation, 'create'): new_allocation = api.request.rpcapi.create_allocation( context, new_allocation, topic) notify.emit_end_notification(context, new_allocation, 'create') # Set the HTTP Location Header api.response.location = link.build_url('allocations', new_allocation.uuid) return Allocation.convert_with_links(new_allocation) def _validate_patch(self, patch): allowed_fields = ['name', 'extra'] fields = set() for p in patch: path = p['path'].split('/')[1] if path not in allowed_fields: msg = _("Cannot update %s in an allocation. Only 'name' and " "'extra' are allowed to be updated.") raise exception.Invalid(msg % p['path']) fields.add(path) self._check_allowed_allocation_fields(fields) @METRICS.timer('AllocationsController.patch') @wsme.validate(types.uuid, [AllocationPatchType]) @expose.expose(Allocation, types.uuid_or_name, body=[AllocationPatchType]) def patch(self, allocation_ident, patch): """Update an existing allocation. :param allocation_ident: UUID or logical name of an allocation. :param patch: a json PATCH document to apply to this allocation. 
""" if not api_utils.allow_allocation_update(): raise webob_exc.HTTPMethodNotAllowed(_( "The API version does not allow updating allocations")) context = api.request.context rpc_allocation = api_utils.check_allocation_policy_and_retrieve( 'baremetal:allocation:update', allocation_ident) self._validate_patch(patch) names = api_utils.get_patch_values(patch, '/name') for name in names: if name and not api_utils.is_valid_logical_name(name): msg = _("Cannot update allocation with invalid name " "'%(name)s'") % {'name': name} raise exception.Invalid(msg) allocation_dict = rpc_allocation.as_dict() allocation = Allocation(**api_utils.apply_jsonpatch(allocation_dict, patch)) # Update only the fields that have changed for field in objects.Allocation.fields: try: patch_val = getattr(allocation, field) except AttributeError: # Ignore fields that aren't exposed in the API continue if patch_val == atypes.Unset: patch_val = None if rpc_allocation[field] != patch_val: rpc_allocation[field] = patch_val notify.emit_start_notification(context, rpc_allocation, 'update') with notify.handle_error_notification(context, rpc_allocation, 'update'): rpc_allocation.save() notify.emit_end_notification(context, rpc_allocation, 'update') return Allocation.convert_with_links(rpc_allocation) @METRICS.timer('AllocationsController.delete') @expose.expose(None, types.uuid_or_name, status_code=http_client.NO_CONTENT) def delete(self, allocation_ident): """Delete an allocation. :param allocation_ident: UUID or logical name of an allocation. 
""" context = api.request.context rpc_allocation = api_utils.check_allocation_policy_and_retrieve( 'baremetal:allocation:delete', allocation_ident) if rpc_allocation.node_id: node_uuid = objects.Node.get_by_id(api.request.context, rpc_allocation.node_id).uuid else: node_uuid = None notify.emit_start_notification(context, rpc_allocation, 'delete', node_uuid=node_uuid) with notify.handle_error_notification(context, rpc_allocation, 'delete', node_uuid=node_uuid): topic = api.request.rpcapi.get_random_topic() api.request.rpcapi.destroy_allocation(context, rpc_allocation, topic) notify.emit_end_notification(context, rpc_allocation, 'delete', node_uuid=node_uuid) class NodeAllocationController(pecan.rest.RestController): """REST controller for allocations.""" invalid_sort_key_list = ['extra', 'candidate_nodes', 'traits'] @pecan.expose() def _route(self, args, request=None): if not api_utils.allow_allocations(): raise webob_exc.HTTPNotFound(_( "The API version does not allow allocations")) return super(NodeAllocationController, self)._route(args, request) def __init__(self, node_ident): super(NodeAllocationController, self).__init__() self.parent_node_ident = node_ident self.inner = AllocationsController() @METRICS.timer('NodeAllocationController.get_all') @expose.expose(Allocation, types.listtype) def get_all(self, fields=None): cdict = api.request.context.to_policy_values() policy.authorize('baremetal:allocation:get', cdict, cdict) result = self.inner._get_allocations_collection(self.parent_node_ident, fields=fields) try: return result.allocations[0] except IndexError: raise exception.AllocationNotFound( _("Allocation for node %s was not found") % self.parent_node_ident) @METRICS.timer('NodeAllocationController.delete') @expose.expose(None, status_code=http_client.NO_CONTENT) def delete(self): context = api.request.context cdict = context.to_policy_values() policy.authorize('baremetal:allocation:delete', cdict, cdict) rpc_node = 
api_utils.get_rpc_node_with_suffix(self.parent_node_ident) allocations = objects.Allocation.list( api.request.context, filters={'node_uuid': rpc_node.uuid}) try: rpc_allocation = allocations[0] except IndexError: raise exception.AllocationNotFound( _("Allocation for node %s was not found") % self.parent_node_ident) notify.emit_start_notification(context, rpc_allocation, 'delete', node_uuid=rpc_node.uuid) with notify.handle_error_notification(context, rpc_allocation, 'delete', node_uuid=rpc_node.uuid): topic = api.request.rpcapi.get_random_topic() api.request.rpcapi.destroy_allocation(context, rpc_allocation, topic) notify.emit_end_notification(context, rpc_allocation, 'delete', node_uuid=rpc_node.uuid) ironic-15.0.0/ironic/api/controllers/root.py0000664000175000017500000000456613652514273021064 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
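The `_validate_patch` helper in `AllocationsController` above accepts only top-level `name` and `extra` updates from a JSON PATCH document. A standalone sketch of that rule (hypothetical `validate_patch` helper; a plain `ValueError` stands in for ironic's `exception.Invalid`):

```python
# Hypothetical standalone version of the PATCH validation rule from
# AllocationsController._validate_patch above: only top-level "name"
# and "extra" may change; the checked field is the first path segment
# of each JSON-PATCH operation.
ALLOWED_FIELDS = ('name', 'extra')

def validate_patch(patch):
    fields = set()
    for p in patch:
        path = p['path'].split('/')[1]
        if path not in ALLOWED_FIELDS:
            raise ValueError(
                "Cannot update %s in an allocation. Only 'name' and "
                "'extra' are allowed to be updated." % p['path'])
        fields.add(path)
    return fields
```

Note that a nested path such as `/extra/note` still validates its first segment (`extra`), matching the `split('/')[1]` logic in the controller.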
import pecan from pecan import rest from ironic.api.controllers import base from ironic.api.controllers import v1 from ironic.api.controllers import version from ironic.api import expose class Root(base.Base): name = str """The name of the API""" description = str """Some information about this API""" versions = [version.Version] """Links to all the versions available in this API""" default_version = version.Version """A link to the default version of the API""" @staticmethod def convert(): root = Root() root.name = "OpenStack Ironic API" root.description = ("Ironic is an OpenStack project which aims to " "provision baremetal machines.") root.default_version = version.default_version() root.versions = [root.default_version] return root class RootController(rest.RestController): _versions = [version.ID_VERSION1] """All supported API versions""" _default_version = version.ID_VERSION1 """The default API version""" v1 = v1.Controller() @expose.expose(Root) def get(self): # NOTE: The reason why convert() it's being called for every # request is because we need to get the host url from # the request object to make the links. return Root.convert() @pecan.expose() def _route(self, args, request=None): """Overrides the default routing behavior. It redirects the request to the default version of the ironic API if the version number is not specified in the url. """ if args[0] and args[0] not in self._versions: args = [self._default_version] + args return super(RootController, self)._route(args, request) ironic-15.0.0/ironic/api/controllers/version.py0000664000175000017500000000400313652514273021550 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic import api from ironic.api.controllers import base from ironic.api.controllers import link ID_VERSION1 = 'v1' class Version(base.Base): """An API version representation. This class represents an API version, including the minimum and maximum minor versions that are supported within the major version. """ id = str """The ID of the (major) version, also acts as the release number""" links = [link.Link] """A Link that point to a specific version of the API""" status = str """Status of the version. One of: * CURRENT - the latest version of API, * SUPPORTED - supported, but not latest, version of API, * DEPRECATED - supported, but deprecated, version of API. """ version = str """The current, maximum supported (major.minor) version of API.""" min_version = str """Minimum supported (major.minor) version of API.""" def __init__(self, id, min_version, version, status='CURRENT'): self.id = id self.links = [link.Link.make_link('self', api.request.public_url, self.id, '', bookmark=True)] self.status = status self.version = version self.min_version = min_version def default_version(): # NOTE(dtantsur): avoid circular imports from ironic.api.controllers.v1 import versions return Version(ID_VERSION1, versions.min_version_string(), versions.max_version_string()) ironic-15.0.0/ironic/api/controllers/base.py0000664000175000017500000001057413652514273021007 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import functools from webob import exc from ironic.api import types as atypes from ironic.common.i18n import _ class AsDictMixin(object): """Mixin class adding an as_dict() method.""" def as_dict(self): """Render this object as a dict of its fields.""" def _attr_as_pod(attr): """Return an attribute as a Plain Old Data (POD) type.""" if isinstance(attr, list): return [_attr_as_pod(item) for item in attr] # Recursively evaluate objects that support as_dict(). try: return attr.as_dict() except AttributeError: return attr return dict((k, _attr_as_pod(getattr(self, k))) for k in self.fields if hasattr(self, k) and getattr(self, k) != atypes.Unset) class Base(AsDictMixin): """Base type for complex types""" def __init__(self, **kw): for key, value in kw.items(): if hasattr(self, key): setattr(self, key, value) def unset_fields_except(self, except_list=None): """Unset fields so they don't appear in the message body. :param except_list: A list of fields that won't be touched. 
""" if except_list is None: except_list = [] for k in self.as_dict(): if k not in except_list: setattr(self, k, atypes.Unset) class APIBase(Base): created_at = atypes.wsattr(datetime.datetime, readonly=True) """The time in UTC at which the object is created""" updated_at = atypes.wsattr(datetime.datetime, readonly=True) """The time in UTC at which the object is updated""" @functools.total_ordering class Version(object): """API Version object.""" string = 'X-OpenStack-Ironic-API-Version' """HTTP Header string carrying the requested version""" min_string = 'X-OpenStack-Ironic-API-Minimum-Version' """HTTP response header""" max_string = 'X-OpenStack-Ironic-API-Maximum-Version' """HTTP response header""" def __init__(self, headers, default_version, latest_version): """Create an API Version object from the supplied headers. :param headers: webob headers :param default_version: version to use if not specified in headers :param latest_version: version to use if latest is requested :raises: webob.HTTPNotAcceptable """ (self.major, self.minor) = Version.parse_headers( headers, default_version, latest_version) def __repr__(self): return '%s.%s' % (self.major, self.minor) @staticmethod def parse_headers(headers, default_version, latest_version): """Determine the API version requested based on the headers supplied. 
:param headers: webob headers :param default_version: version to use if not specified in headers :param latest_version: version to use if latest is requested :returns: a tuple of (major, minor) version numbers :raises: webob.HTTPNotAcceptable """ version_str = headers.get(Version.string, default_version) if version_str.lower() == 'latest': parse_str = latest_version else: parse_str = version_str try: version = tuple(int(i) for i in parse_str.split('.')) except ValueError: version = () if len(version) != 2: raise exc.HTTPNotAcceptable(_( "Invalid value for %s header") % Version.string) return version def __gt__(self, other): return (self.major, self.minor) > (other.major, other.minor) def __eq__(self, other): return (self.major, self.minor) == (other.major, other.minor) def __ne__(self, other): return not self.__eq__(other) ironic-15.0.0/ironic/api/controllers/link.py0000664000175000017500000000374013652514273021027 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironic import api from ironic.api.controllers import base from ironic.api import types as atypes def build_url(resource, resource_args, bookmark=False, base_url=None): if base_url is None: base_url = api.request.public_url template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s' # FIXME(lucasagomes): I'm getting a 404 when doing a GET on # a nested resource that the URL ends with a '/'. 
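The header-parsing rules implemented by `Version.parse_headers` in base.py above (missing header falls back to the default, `latest` maps to the newest supported version, anything not `major.minor` is rejected) can be sketched standalone. The default and latest values below are illustrative assumptions, and a `ValueError` stands in for webob's `HTTPNotAcceptable`:

```python
# Sketch of Version.parse_headers from base.py above; the header name
# comes from the class, but the default/latest version strings here are
# assumed values for illustration only.
HEADER = 'X-OpenStack-Ironic-API-Version'

def parse_version(headers, default='1.1', latest='1.65'):
    version_str = headers.get(HEADER, default)
    if version_str.lower() == 'latest':
        version_str = latest
    try:
        version = tuple(int(i) for i in version_str.split('.'))
    except ValueError:
        version = ()
    if len(version) != 2:
        # the real middleware raises webob.exc.HTTPNotAcceptable here
        raise ValueError('Invalid value for %s header' % HEADER)
    return version
```

The two-element tuple result is what makes the `functools.total_ordering` comparisons on `(major, minor)` work.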
# https://groups.google.com/forum/#!topic/pecan-dev/QfSeviLg5qs template += '%(args)s' if resource_args.startswith('?') else '/%(args)s' return template % {'url': base_url, 'res': resource, 'args': resource_args} class Link(base.Base): """A link representation.""" href = str """The url of a link.""" rel = str """The name of a link.""" type = str """Indicates the type of document/link.""" @staticmethod def make_link(rel_name, url, resource, resource_args, bookmark=False, type=atypes.Unset): href = build_url(resource, resource_args, bookmark=bookmark, base_url=url) return Link(href=href, rel=rel_name, type=type) @classmethod def sample(cls): sample = cls(href="http://localhost:6385/chassis/" "eaaca217-e7d8-47b4-bb41-3f99f20eed89", rel="bookmark") return sample ironic-15.0.0/ironic/api/controllers/__init__.py0000664000175000017500000000000013652514273021613 0ustar zuulzuul00000000000000ironic-15.0.0/ironic/api/wsgi.py0000664000175000017500000000211213652514273016465 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI script for Ironic API, installed by pbr.""" import sys from oslo_config import cfg from oslo_log import log from ironic.api import app from ironic.common import i18n from ironic.common import service CONF = cfg.CONF LOG = log.getLogger(__name__) # NOTE(dtantsur): WSGI containers may need to override the passed argv. 
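The URL templating in `link.build_url` above has two branches: bookmark links omit the `/v1` prefix, and query-string arguments attach without a separating slash. A self-contained sketch with an assumed base URL (the real function reads it from `api.request.public_url`):

```python
# Sketch of link.build_url above; the base_url default is an assumed
# value standing in for api.request.public_url.
def build_url(resource, resource_args, bookmark=False,
              base_url='http://localhost:6385'):
    template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s'
    # query strings attach directly; path arguments get a slash
    template += '%(args)s' if resource_args.startswith('?') else '/%(args)s'
    return template % {'url': base_url, 'res': resource,
                       'args': resource_args}
```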
def initialize_wsgi_app(argv=sys.argv): i18n.install('ironic') service.prepare_service(argv) LOG.debug("Configuration:") CONF.log_opt_values(LOG, log.DEBUG) return app.VersionSelectorApplication() ironic-15.0.0/ironic/api/types.py0000664000175000017500000000213213652514273016662 0ustar zuulzuul00000000000000# coding: utf-8 # # Copyright 2020 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from wsme.types import ArrayType # noqa from wsme.types import Base # noqa from wsme.types import DictType # noqa from wsme.types import Enum # noqa from wsme.types import File # noqa from wsme.types import IntegerType # noqa from wsme.types import StringType # noqa from wsme.types import text # noqa from wsme.types import Unset # noqa from wsme.types import UserType # noqa from wsme.types import wsattr # noqa from wsme.types import wsproperty # noqa ironic-15.0.0/ironic/api/app.py0000664000175000017500000001170313652514273016302 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # Copyright © 2012 New Dream Network, LLC (DreamHost) # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import keystonemiddleware.audit as audit_middleware from oslo_config import cfg import oslo_middleware.cors as cors_middleware from oslo_middleware import healthcheck from oslo_middleware import http_proxy_to_wsgi import osprofiler.web as osprofiler_web import pecan from ironic.api import config from ironic.api.controllers import base from ironic.api import hooks from ironic.api import middleware from ironic.api.middleware import auth_token from ironic.api.middleware import json_ext from ironic.common import exception from ironic.conf import CONF class IronicCORS(cors_middleware.CORS): """Ironic-specific CORS class We're adding the Ironic-specific version headers to the list of simple headers in order that a request bearing those headers might be accepted by the Ironic REST API. 
""" simple_headers = cors_middleware.CORS.simple_headers + [ 'X-Auth-Token', base.Version.max_string, base.Version.min_string, base.Version.string ] def get_pecan_config(): # Set up the pecan configuration filename = config.__file__.replace('.pyc', '.py') return pecan.configuration.conf_from_file(filename) def setup_app(pecan_config=None, extra_hooks=None): app_hooks = [hooks.ConfigHook(), hooks.DBHook(), hooks.ContextHook(pecan_config.app.acl_public_routes), hooks.RPCHook(), hooks.NoExceptionTracebackHook(), hooks.PublicUrlHook()] if extra_hooks: app_hooks.extend(extra_hooks) if not pecan_config: pecan_config = get_pecan_config() pecan.configuration.set_config(dict(pecan_config), overwrite=True) app = pecan.make_app( pecan_config.app.root, debug=CONF.pecan_debug, static_root=pecan_config.app.static_root if CONF.pecan_debug else None, force_canonical=getattr(pecan_config.app, 'force_canonical', True), hooks=app_hooks, wrap_app=middleware.ParsableErrorMiddleware, # NOTE(dtantsur): enabling this causes weird issues with nodes named # as if they had a known mime extension, e.g. "mynode.1". We do # simulate the same behaviour for .json extensions for backward # compatibility through JsonExtensionMiddleware. 
guess_content_type_from_ext=False, ) if CONF.audit.enabled: try: app = audit_middleware.AuditMiddleware( app, audit_map_file=CONF.audit.audit_map_file, ignore_req_list=CONF.audit.ignore_req_list ) except (EnvironmentError, OSError, audit_middleware.PycadfAuditApiConfigError) as e: raise exception.InputFileError( file_name=CONF.audit.audit_map_file, reason=e ) if CONF.auth_strategy == "keystone": app = auth_token.AuthTokenMiddleware( app, {"oslo_config_config": cfg.CONF}, public_api_routes=pecan_config.app.acl_public_routes) if CONF.profiler.enabled: app = osprofiler_web.WsgiMiddleware(app) # NOTE(pas-ha) this registers oslo_middleware.enable_proxy_headers_parsing # option, when disabled (default) this is noop middleware app = http_proxy_to_wsgi.HTTPProxyToWSGI(app, CONF) # add in the healthcheck middleware if enabled # NOTE(jroll) this is after the auth token middleware as we don't want auth # in front of this, and WSGI works from the outside in. Requests to # /healthcheck will be handled and returned before the auth middleware # is reached. if CONF.healthcheck.enabled: app = healthcheck.Healthcheck(app, CONF) # Create a CORS wrapper, and attach ironic-specific defaults that must be # included in all CORS responses. app = IronicCORS(app, CONF) cors_middleware.set_defaults( allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'], expose_headers=[base.Version.max_string, base.Version.min_string, base.Version.string] ) app = json_ext.JsonExtensionMiddleware(app) return app class VersionSelectorApplication(object): def __init__(self): pc = get_pecan_config() self.v1 = setup_app(pecan_config=pc) def __call__(self, environ, start_response): return self.v1(environ, start_response) ironic-15.0.0/ironic/api/config.py0000664000175000017500000000251113652514273016764 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Server Specific Configurations # See https://pecan.readthedocs.org/en/latest/configuration.html#server-configuration # noqa server = { 'port': '6385', 'host': '0.0.0.0' } # Pecan Application Configurations # See https://pecan.readthedocs.org/en/latest/configuration.html#application-configuration # noqa app = { 'root': 'ironic.api.controllers.root.RootController', 'modules': ['ironic.api'], 'static_root': '%(confdir)s/public', 'debug': False, 'acl_public_routes': [ '/', '/v1', # IPA ramdisk methods '/v1/lookup', '/v1/heartbeat/[a-z0-9\\-]+', ], } # WSME Configurations # See https://wsme.readthedocs.org/en/latest/integrate.html#configuration wsme = { 'debug': False, } ironic-15.0.0/ironic/api/middleware/0000775000175000017500000000000013652514443017262 5ustar zuulzuul00000000000000ironic-15.0.0/ironic/api/middleware/json_ext.py0000664000175000017500000000272113652514273021470 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log from ironic.common import utils LOG = log.getLogger(__name__) class JsonExtensionMiddleware(object): """Simplified processing of .json extension. Previously Ironic API used the "guess_content_type_from_ext" feature. It was never needed, as we never allowed non-JSON content types anyway. Now that it is removed, this middleware strips .json extension for backward compatibility. """ def __init__(self, app): self.app = app def __call__(self, env, start_response): path = utils.safe_rstrip(env.get('PATH_INFO'), '/') if path and path.endswith('.json'): LOG.debug('Stripping .json prefix from %s for compatibility ' 'with pecan', path) env['PATH_INFO'] = path[:-5] env['HAS_JSON_SUFFIX'] = True else: env['HAS_JSON_SUFFIX'] = False return self.app(env, start_response) ironic-15.0.0/ironic/api/middleware/auth_token.py0000664000175000017500000000435313652514273022003 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from keystonemiddleware import auth_token from ironic.common import exception from ironic.common.i18n import _ from ironic.common import utils class AuthTokenMiddleware(auth_token.AuthProtocol): """A wrapper on Keystone auth_token middleware. Does not perform verification of authentication tokens for public routes in the API. 
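The suffix handling in `JsonExtensionMiddleware` above reduces to a small transformation on the WSGI environ. A sketch on a bare dict (`rstrip('/')` here approximates ironic's `utils.safe_rstrip`, and the WSGI plumbing is omitted):

```python
# Sketch of the ".json"-suffix stripping done by JsonExtensionMiddleware
# above, applied to a minimal WSGI environ dict.
def strip_json_suffix(env):
    path = (env.get('PATH_INFO') or '').rstrip('/')
    if path and path.endswith('.json'):
        env['PATH_INFO'] = path[:-len('.json')]
        env['HAS_JSON_SUFFIX'] = True
    else:
        env['HAS_JSON_SUFFIX'] = False
    return env
```

Downstream code can then rely on `HAS_JSON_SUFFIX` being present either way, just as the middleware guarantees.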
""" def __init__(self, app, conf, public_api_routes=None): api_routes = [] if public_api_routes is None else public_api_routes self._ironic_app = app # TODO(mrda): Remove .xml and ensure that doesn't result in a # 401 Authentication Required instead of 404 Not Found route_pattern_tpl = '%s(\\.json|\\.xml)?$' try: self.public_api_routes = [re.compile(route_pattern_tpl % route_tpl) for route_tpl in api_routes] except re.error as e: raise exception.ConfigInvalid( error_msg=_('Cannot compile public API routes: %s') % e) super(AuthTokenMiddleware, self).__init__(app, conf) def __call__(self, env, start_response): path = utils.safe_rstrip(env.get('PATH_INFO'), '/') # The information whether the API call is being performed against the # public API is required for some other components. Saving it to the # WSGI environment is reasonable thereby. env['is_public_api'] = any(map(lambda pattern: re.match(pattern, path), self.public_api_routes)) if env['is_public_api']: return self._ironic_app(env, start_response) return super(AuthTokenMiddleware, self).__call__(env, start_response) ironic-15.0.0/ironic/api/middleware/parsable_error.py0000664000175000017500000000614213652514273022642 0ustar zuulzuul00000000000000# -*- encoding: utf-8 -*- # # Copyright © 2012 New Dream Network, LLC (DreamHost) # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Middleware to replace the plain text message body of an error response with one formatted so the client can parse it. 
Based on pecan.middleware.errordocument """ import json from oslo_log import log from ironic.common.i18n import _ LOG = log.getLogger(__name__) class ParsableErrorMiddleware(object): """Replace error body with something the client can parse.""" def __init__(self, app): self.app = app def __call__(self, environ, start_response): # Request for this state, modified by replace_start_response() # and used when an error is being reported. state = {} def replacement_start_response(status, headers, exc_info=None): """Overrides the default response to make errors parsable.""" try: status_code = int(status.split(' ')[0]) state['status_code'] = status_code except (ValueError, TypeError): # pragma: nocover raise Exception(_( 'ErrorDocumentMiddleware received an invalid ' 'status %s') % status) else: if (state['status_code'] // 100) not in (2, 3): # Remove some headers so we can replace them later # when we have the full error message and can # compute the length. headers = [(h, v) for (h, v) in headers if h not in ('Content-Length', 'Content-Type') ] # Save the headers in case we need to modify them. state['headers'] = headers return start_response(status, headers, exc_info) # The default for ironic is application/json. However, Pecan will try # to output HTML errors if no Accept header is provided. 
        if 'HTTP_ACCEPT' not in environ or environ['HTTP_ACCEPT'] == '*/*':
            environ['HTTP_ACCEPT'] = 'application/json'

        app_iter = self.app(environ, replacement_start_response)

        if (state['status_code'] // 100) not in (2, 3):
            app_iter = [i.decode('utf-8') for i in app_iter]
            body = [json.dumps({'error_message': '\n'.join(app_iter)})]
            body = [item.encode('utf-8') for item in body]
            state['headers'].append(('Content-Type', 'application/json'))
            state['headers'].append(('Content-Length', str(len(body[0]))))
        else:
            body = app_iter
        return body
ironic-15.0.0/ironic/api/middleware/__init__.py0000664000175000017500000000175113652514273021400 0ustar zuulzuul00000000000000
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ironic.api.middleware import auth_token
from ironic.api.middleware import json_ext
from ironic.api.middleware import parsable_error


ParsableErrorMiddleware = parsable_error.ParsableErrorMiddleware
AuthTokenMiddleware = auth_token.AuthTokenMiddleware
JsonExtensionMiddleware = json_ext.JsonExtensionMiddleware

__all__ = ('ParsableErrorMiddleware',
           'AuthTokenMiddleware',
           'JsonExtensionMiddleware')
ironic-15.0.0/ironic/api/__init__.py0000664000175000017500000000115513652514273017261 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pecan

request = pecan.request
response = pecan.response

del pecan
ironic-15.0.0/ironic/cmd/0000775000175000017500000000000013652514443015137 5ustar zuulzuul00000000000000
ironic-15.0.0/ironic/cmd/api.py0000664000175000017500000000345413652514273016271 0ustar zuulzuul00000000000000
# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
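# Added commentary (hedged sketch, not part of the upstream file): this
# module is the entry point behind the ``ironic-api`` command; packaging
# typically wires the console script so that running it is roughly
# equivalent to:
#
#     import sys
#     from ironic.cmd import api
#     sys.exit(api.main())
#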
"""The Ironic Service API.""" import sys from oslo_config import cfg from oslo_log import log try: from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts except ImportError: gmr = None from ironic.common import profiler from ironic.common import service as ironic_service from ironic.common import wsgi_service from ironic import version CONF = cfg.CONF LOG = log.getLogger(__name__) def main(): # Parse config file and command line options, then start logging ironic_service.prepare_service(sys.argv) if gmr is not None: gmr_opts.set_defaults(CONF) gmr.TextGuruMeditation.setup_autorun(version) else: LOG.debug('Guru meditation reporting is disabled ' 'because oslo.reports is not installed') profiler.setup('ironic_api', CONF.host) # Build and start the WSGI app launcher = ironic_service.process_launcher() server = wsgi_service.WSGIService('ironic_api', CONF.api.enable_ssl_api) launcher.launch_service(server, workers=server.workers) launcher.wait() if __name__ == '__main__': sys.exit(main()) ironic-15.0.0/ironic/cmd/dbsync.py0000664000175000017500000003541213652514273017001 0ustar zuulzuul00000000000000# Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Run storage database migration. 
""" import sys from oslo_config import cfg from ironic.common import context from ironic.common import exception from ironic.common.i18n import _ from ironic.common import service from ironic.conf import CONF from ironic.db import api as db_api from ironic.db import migration from ironic import version dbapi = db_api.get_instance() # NOTE(rloo): This is a list of functions to perform online data migrations # (from previous releases) for this release, in batches. It may be empty. # The migration functions should be ordered by execution order; from earlier # to later releases. # # Each migration function takes two arguments -- the context and maximum # number of objects to migrate, and returns a 2-tuple -- the total number of # objects that need to be migrated at the beginning of the function, and the # number migrated. If the function determines that no migrations are needed, # it returns (0, 0). # # The last migration step should always remain the last one -- it migrates # all objects to their latest known versions. # # Example of a function docstring: # # def sample_data_migration(context, max_count): # """Sample method to migrate data to new format. # # :param context: an admin context # :param max_count: The maximum number of objects to migrate. Must be # >= 0. If zero, all the objects will be migrated. # :returns: A 2-tuple -- the total number of objects that need to be # migrated (at the beginning of this call) and the number # of migrated objects. # """ # NOTE(vdrok): Do not access objects' attributes, instead only provide object # and attribute name tuples, so that not to trigger the load of the whole # object, in case it is lazy loaded. The attribute will be accessed when needed # by doing getattr on the object ONLINE_MIGRATIONS = ( # NOTE(rloo): Don't remove this; it should always be last (dbapi, 'update_to_latest_versions'), ) # These are the models added in supported releases. 
We skip the version check # for them since the tables do not exist when it happens. NEW_MODELS = [ ] class DBCommand(object): def check_obj_versions(self, ignore_missing_tables=False): """Check the versions of objects. Check that the object versions are compatible with this release of ironic. It does this by comparing the objects' .version field in the database, with the expected versions of these objects. Returns None if compatible; a string describing the issue otherwise. """ if migration.version() is None: # no tables, nothing to check return if ignore_missing_tables: ignore_models = NEW_MODELS else: ignore_models = () msg = None try: if not dbapi.check_versions(ignore_models=ignore_models): msg = (_('The database is not compatible with this ' 'release of ironic (%s). Please run ' '"ironic-dbsync online_data_migrations" using ' 'the previous release.\n') % version.version_info.release_string()) except exception.DatabaseVersionTooOld: msg = (_('The database version is not compatible with this ' 'release of ironic (%s). This can happen if you are ' 'attempting to upgrade from a version older than ' 'the previous release (skip versions upgrade). ' 'This is an unsupported upgrade method. ' 'Please run "ironic-dbsync upgrade" using the previous ' 'releases for a fast-forward upgrade.\n') % version.version_info.release_string()) return msg def _check_versions(self, ignore_missing_tables=False): msg = self.check_obj_versions( ignore_missing_tables=ignore_missing_tables) if not msg: return else: sys.stderr.write(msg) # NOTE(rloo): We return 1 in online_data_migrations() to indicate # that there are more objects to migrate, so don't use 1 here. 
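            # Added commentary (hedged sketch, not taken from the upstream
            # file): the version checks above back the ``ironic-dbsync``
            # command-line tool, whose subcommands are the methods of this
            # class, invoked roughly as:
            #
            #     ironic-dbsync upgrade
            #     ironic-dbsync online_data_migrations --max-count 300
            #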
            sys.exit(2)

    def upgrade(self):
        self._check_versions(ignore_missing_tables=True)
        migration.upgrade(CONF.command.revision)

    def revision(self):
        migration.revision(CONF.command.message, CONF.command.autogenerate)

    def stamp(self):
        migration.stamp(CONF.command.revision)

    def version(self):
        print(migration.version())

    def create_schema(self):
        migration.create_schema()

    def online_data_migrations(self):
        self._check_versions()
        self._run_online_data_migrations(max_count=CONF.command.max_count,
                                         options=CONF.command.options)

    def _run_migration_functions(self, context, max_count, options):
        """Runs the migration functions.

        Runs the data migration functions in the ONLINE_MIGRATIONS list.
        It makes sure the total number of object migrations doesn't exceed
        the specified max_count. A migration of an object will typically
        migrate one row of data inside the database.

        :param context: an admin context
        :param max_count: the maximum number of objects (rows) to migrate;
            a value >= 1.
        :param options: migration options - dict mapping migration name
            to a dictionary of options for this migration.
        :raises: Exception from the migration function
        :returns: Boolean value indicating whether migrations are done.
            Returns False if max_count objects have been migrated (since at
            that point, it is unknown whether all migrations are done).
            Returns True if migrations are all done (i.e. fewer than
            max_count objects were migrated when the migrations are done).
""" total_migrated = 0 for migration_func_obj, migration_func_name in ONLINE_MIGRATIONS: migration_func = getattr(migration_func_obj, migration_func_name) migration_opts = options.get(migration_func_name, {}) num_to_migrate = max_count - total_migrated try: total_to_do, num_migrated = migration_func(context, num_to_migrate, **migration_opts) except Exception as e: print(_("Error while running %(migration)s: %(err)s.") % {'migration': migration_func.__name__, 'err': e}, file=sys.stderr) raise print(_('%(migration)s() migrated %(done)i of %(total)i objects.') % {'migration': migration_func.__name__, 'total': total_to_do, 'done': num_migrated}) total_migrated += num_migrated if total_migrated >= max_count: # NOTE(rloo). max_count objects have been migrated so we have # to stop. We return False because there is no look-ahead so # we don't know if the migrations have been all done. All we # know is that we've migrated max_count. It is possible that # the migrations are done and that there aren't any more to # migrate after this, but that would involve checking: # 1. num_migrated == total_to_do (easy enough), AND # 2. whether there are other migration functions and whether # they need to do any object migrations (not so easy to # check) return False return True def _run_online_data_migrations(self, max_count=None, options=None): """Perform online data migrations for the release. Online data migrations are done by running all the data migration functions in the ONLINE_MIGRATIONS list. If max_count is None, all the functions will be run in batches of 50 objects, until the migrations are done. Otherwise, this will run (some of) the functions until max_count objects have been migrated. :param max_count: the maximum number of individual object migrations or modified rows, a value >= 1. If None, migrations are run in a loop in batches of 50, until completion. :param options: options to pass to migrations. List of values in the form of .